Activation Layers
Layer | Description
---|---
`Elu` | Exponential linear unit
`Identity` | Output the input tensor
`LeakyRelu` | Leaky relu
`LogSoftmax` | Logarithm of softmax function
`Relu` | Rectified linear unit
`Softmax` | Softmax
Elu
The `Elu` layer is similar to `Relu`, but it produces negative outputs for negative inputs, which shifts the mean of the activations toward 0: \(\operatorname{ELU}(x) = x\) if \(x > 0\), and \(\alpha(e^{x} - 1)\) otherwise. \(\alpha\) should be non-negative. See:
Djork-Arne Clevert, Thomas Unterthiner, and Sepp Hochreiter. “Fast and accurate deep network learning by exponential linear units (ELUs).” arXiv preprint arXiv:1511.07289 (2015).
Arguments:
- `alpha` (`double`, optional) Default: 1. Should be >= 0.
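An illustrative NumPy sketch of the ELU formula follows; it is not the library's implementation, only the math the layer computes.

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: identity for positive inputs; alpha * (exp(x) - 1) for
    # non-positive inputs, so outputs can dip below zero and the mean
    # of the activations shifts toward 0.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * np.expm1(x))

print(elu([-2.0, 0.0, 3.0]))
```

Note that for large negative inputs the output saturates at \(-\alpha\), which bounds how far the activations can drift below zero.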
Identity
The Identity
layer outputs the input tensor.
This layer is very cheap since it just involves setting up tensor views.
Arguments: None
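The "tensor views" point can be illustrated with NumPy (only an analogy for whatever view mechanism the library uses): an identity layer can return a view that aliases the input's storage, so no data is copied.

```python
import numpy as np

def identity(x):
    # An identity "layer" sketch: return a view of the input, no copy.
    return np.asarray(x).view()

x = np.arange(6).reshape(2, 3)
y = identity(x)
print(np.shares_memory(x, y))  # the output aliases the input's storage
```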
LeakyRelu
`LeakyRelu` modifies the `Relu` function to allow for a small, non-zero gradient when the unit is saturated and not active.
See:
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. “Rectifier nonlinearities improve neural network acoustic models.” In Proc. ICML, vol. 30, no. 1, p. 3. 2013.
Arguments:
- `negative_slope` (`double`, optional) Default: 0.01
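A minimal NumPy sketch of the leaky-ReLU formula (not the library's code), showing how `negative_slope` keeps the gradient non-zero for negative inputs:

```python
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    # Pass positive inputs through unchanged; scale negative inputs by a
    # small slope so the unit never has an exactly-zero gradient.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, negative_slope * x)

print(leaky_relu([-2.0, 5.0]))
```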
LogSoftmax
LogSoftmax
is the logarithm of the softmax function.
Arguments: None
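The function can be sketched in NumPy as follows (an illustration of the math, not the library's implementation); subtracting the maximum first is the standard trick that keeps the computation numerically stable for large inputs.

```python
import numpy as np

def log_softmax(x):
    # log(softmax(x)) = (x - max) - log(sum(exp(x - max))).
    # Shifting by the max avoids overflow in exp() for large inputs.
    x = np.asarray(x, dtype=float)
    shifted = x - x.max()
    return shifted - np.log(np.exp(shifted).sum())

print(log_softmax([1.0, 2.0, 3.0]))
```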
Relu
The `Relu` layer outputs the input directly if it is positive; otherwise, it outputs zero.
Arguments: None
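As a one-line NumPy sketch of the formula (again, not the library's code):

```python
import numpy as np

def relu(x):
    # Element-wise max(x, 0): positive inputs pass through, the rest are zeroed.
    return np.maximum(np.asarray(x, dtype=float), 0.0)

print(relu([-1.0, 0.0, 2.0]))
```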
Softmax
The `Softmax` layer turns a vector of K real values into a vector of K values in (0, 1) that sum to 1, so the output can be interpreted as a probability distribution.
Arguments:
- `softmax_mode` (`string`, optional) Options: instance (default), channel
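A NumPy sketch of the computation (not the library's implementation; the `axis` parameter below is only an analogy for what `softmax_mode` presumably controls, i.e. which dimension the normalization runs over):

```python
import numpy as np

def softmax(x, axis=-1):
    # Exponentiate with the max subtracted for numerical stability,
    # then normalize so the values along `axis` sum to 1.
    x = np.asarray(x, dtype=float)
    shifted = x - x.max(axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

print(softmax([1.0, 2.0, 3.0]))
```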