Loss Layers
Layer | Description
---|---
CategoricalAccuracy | 0-1 loss function
CrossEntropy | Cross entropy between probability vectors
L1Norm | L1 vector norm
L2Norm2 | Square of L2 vector norm
MeanAbsoluteError | Mean absolute error
MeanSquaredError | Mean squared error
TopKCategoricalAccuracy | Top-k prediction scores
CategoricalAccuracy
The CategoricalAccuracy layer is a 0-1 loss function. It requires two inputs, which are respectively interpreted as prediction scores and as a one-hot label vector. The output is one if the top entries in both inputs are in the same position and is otherwise zero. Ties are broken in favor of entries with smaller indices.
This is primarily intended for use as a metric since it is not differentiable.
Arguments: None
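The behavior above can be sketched in plain Python (an illustrative reference computation, not the layer's actual implementation; the function name is made up for this example):

```python
def categorical_accuracy(scores, one_hot):
    """0-1 loss: 1 if the argmax of the scores matches the hot label index.

    list.index(max(...)) returns the first maximal entry, so ties are
    broken in favor of smaller indices, as described above.
    """
    pred = scores.index(max(scores))    # position of the top prediction score
    label = one_hot.index(max(one_hot)) # position of the 1 in the label vector
    return 1.0 if pred == label else 0.0

print(categorical_accuracy([0.1, 0.7, 0.2], [0, 1, 0]))  # 1.0
print(categorical_accuracy([0.5, 0.5, 0.0], [0, 1, 0]))  # 0.0 (tie goes to index 0)
```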
CrossEntropy
The CrossEntropy layer measures the cross entropy between two probability vectors. Given a predicted distribution \(y\) and ground truth distribution \(\hat{y}\), the cross entropy is

\[ CE(y,\hat{y}) = -\sum_i \hat{y}_i \log y_i \]
Arguments:
- use_labels (bool): Advanced option for distconv
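The formula above can be checked with a small reference computation (a sketch in plain Python, not the layer's actual implementation):

```python
import math

def cross_entropy(y_pred, y_true):
    """CE(y, y_hat) = -sum_i y_hat_i * log(y_i).

    Terms with a zero ground-truth weight are skipped, so a zero
    predicted probability is tolerated wherever y_hat_i = 0.
    """
    return -sum(t * math.log(p) for p, t in zip(y_pred, y_true) if t > 0)

print(cross_entropy([0.7, 0.2, 0.1], [1.0, 0.0, 0.0]))  # -log(0.7) ≈ 0.357
```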
L1Norm
The L1Norm layer computes the L1 norm of a vector, \(\lVert x \rVert_1 = \sum_i |x_i|\).
Arguments: None
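As a one-line reference computation (illustrative only):

```python
def l1_norm(x):
    # Sum of the absolute values of the entries.
    return sum(abs(v) for v in x)

print(l1_norm([1.0, -2.0, 3.0]))  # 6.0
```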
L2Norm2
The L2Norm2 layer computes the square of the L2 vector norm, \(\lVert x \rVert_2^2 = \sum_i x_i^2\).
Arguments: None
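As a one-line reference computation (illustrative only):

```python
def l2_norm2(x):
    # Sum of squared entries; no square root, hence "norm squared".
    return sum(v * v for v in x)

print(l2_norm2([3.0, 4.0]))  # 25.0
```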
MeanAbsoluteError
The MeanAbsoluteError layer computes, given a prediction \(y\) and ground truth \(\hat{y}\) of length \(n\),

\[ MAE(y,\hat{y}) = \frac{1}{n} \sum_{i=1}^{n} | y_i - \hat{y}_i | \]
Arguments: None
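A minimal reference computation of the formula above (illustrative, not the layer's actual implementation):

```python
def mean_absolute_error(y, y_hat):
    # MAE = (1/n) * sum_i |y_i - y_hat_i|
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

print(mean_absolute_error([1.0, 2.0, 3.0], [2.0, 2.0, 5.0]))  # 1.0
```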
MeanSquaredError
The MeanSquaredError layer computes, given a prediction \(y\) and ground truth \(\hat{y}\) of length \(n\),

\[ MSE(y,\hat{y}) = \frac{1}{n} \sum_{i=1}^{n} ( y_i - \hat{y}_i )^2 \]
Arguments: None
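A minimal reference computation of the formula above (illustrative, not the layer's actual implementation):

```python
def mean_squared_error(y, y_hat):
    # MSE = (1/n) * sum_i (y_i - y_hat_i)^2
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)

print(mean_squared_error([1.0, 2.0, 3.0], [2.0, 2.0, 5.0]))  # 5/3 ≈ 1.667
```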
TopKCategoricalAccuracy
The TopKCategoricalAccuracy
layer requires two inputs, which
are respectively interpreted as prediction scores and as a one-hot
label vector. The output is one if the corresponding label matches one
of the top-k prediction scores and is otherwise zero. Ties in the
top-k prediction scores are broken in favor of entries with smaller
indices.
Like CategoricalAccuracy, this is primarily intended for use as a metric since it is not differentiable.
Arguments:
- k (int64): Number of top prediction scores to consider
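The described behavior, including the tie-breaking rule, can be sketched in plain Python (an illustrative reference computation, not the layer's actual implementation):

```python
def top_k_categorical_accuracy(scores, one_hot, k):
    label = one_hot.index(max(one_hot))  # position of the 1 in the label vector
    # Stable sort by descending score: entries with equal scores keep their
    # original order, so ties are broken in favor of smaller indices.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return 1.0 if label in order[:k] else 0.0

print(top_k_categorical_accuracy([0.5, 0.3, 0.2], [0, 1, 0], k=2))  # 1.0
print(top_k_categorical_accuracy([0.5, 0.3, 0.2], [0, 0, 1], k=2))  # 0.0
```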