Transform Layers

Layer | Description
---|---
BatchwiseReduceSum | Sum of tensor entries over batch dimension
Bernoulli | Random tensor with Bernoulli distribution
Concatenation | Concatenate tensors along specified dimension
Constant | Output tensor filled with a single value
Crop | Extract crop from tensor at a position
Cross_Grid_Sum | Add tensors over multiple sub-grids
Cross_Grid_Sum_Slice | Add tensors over multiple sub-grids and slice
Dummy | Placeholder layer with no child layers
Evaluation | Interface with objective function and metrics
Gather | Gather values from specified tensor indices
Gaussian | Random tensor with Gaussian/normal distribution
Hadamard | Entry-wise tensor product
IdentityZero | Identity/zero function if layer is unfrozen/frozen
InTopK | One-hot vector indicating top-k entries
Pooling | Traverse the spatial dimensions of a data tensor with a sliding window and apply a reduction operation
Reduction | Reduce tensor to scalar
Reshape | Reinterpret tensor with new dimensions
Scatter | Scatter values to specified tensor indices
Slice | Slice tensor along specified dimension
Sort | Sort tensor entries
Split | Output the input tensor to multiple child layers
StopGradient | Block error signals during back propagation
Sum | Add multiple tensors
TensorPermute | Permute the indices of a tensor
Tessellate | Repeat a tensor until it matches specified dimensions
Uniform | Random tensor with uniform distribution
Unpooling | Transpose of pooling layer
WeightedSum | Add tensors with scaling factors
WeightsLayer | Output values from a weights tensor

Deprecated transform layers:

Layer | Description
---|---
CategoricalRandom | Deprecated
DiscreteRandom | Deprecated
BatchwiseReduceSum
The BatchwiseReduceSum layer computes the sum of tensor entries over the batch dimension. The output tensor has the same shape as the input tensor.
Arguments: None
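The semantics can be sketched in NumPy. This is an illustrative sketch, not LBANN's implementation; here the first axis plays the role of the mini-batch dimension, and the batch sum is broadcast back so every sample holds the same values:

```python
import numpy as np

def batchwise_reduce_sum(x):
    """Sum entries over the batch (first) axis, broadcast back to input shape."""
    # Each sample in the output holds the sum over the whole mini-batch.
    return np.broadcast_to(x.sum(axis=0, keepdims=True), x.shape)

x = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])  # mini-batch of 3 samples
y = batchwise_reduce_sum(x)  # every row is [9.0, 12.0]
```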
IdentityZero
The IdentityZero layer is the identity function if the layer is unfrozen and the zero function if it is frozen. This is useful for more complex training setups like GANs, where you want to reuse the computational graph but switch loss functions.

Arguments:

- num_neurons (string): Tensor dimensions. List of integers.
Bernoulli
The Bernoulli layer is a random tensor with a Bernoulli distribution. Randomness is only applied during training. The tensor is filled with zeros during evaluation.

Arguments:

- prob (double): Bernoulli distribution probability
- neuron_dims (string): Tensor dimensions. List of integers.
Concatenation
The Concatenation layer concatenates tensors along a specified dimension. All input tensors must have identical dimensions, except for the concatenation dimension.

Arguments:

- axis (int64): Tensor dimension to concatenate along
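This matches NumPy's concatenate along an axis (illustrative only):

```python
import numpy as np

a = np.ones((2, 3))
b = np.zeros((2, 5))
# All dimensions match except the concatenation dimension (axis=1).
c = np.concatenate([a, b], axis=1)  # shape (2, 8)
```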
Constant
The Constant layer is an output tensor filled with a single value.

Arguments:

- value (double): Value of tensor entries
- num_neurons (string): Tensor dimensions. List of integers.
Crop
The Crop layer extracts a crop from a tensor at a position. It expects two input tensors: an \(N\)-D data tensor and a 1D position vector with \(N\) entries. The position vector should be normalized so that values are in \([0,1]\). For images in CHW format, a position of (0,0,0) corresponds to the red-top-left corner and (1,1,1) to the blue-bottom-right corner.

Arguments:

- dims (string): Crop dimensions. List of integers.
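A NumPy sketch of the cropping semantics, assuming the normalized position linearly interpolates the crop offset between the valid extremes (so position 0 anchors the crop at the start of each dimension and position 1 at the end); illustrative only, not LBANN's implementation:

```python
import numpy as np

def crop(data, pos, dims):
    """Extract a crop of shape `dims` from `data` at normalized position `pos`."""
    # Offset 0 at pos=0.0, offset (n - d) at pos=1.0, linear in between.
    offsets = [int(round(p * (n - d))) for p, n, d in zip(pos, data.shape, dims)]
    slices = tuple(slice(o, o + d) for o, d in zip(offsets, dims))
    return data[slices]

x = np.arange(16).reshape(4, 4)
top_left = crop(x, [0.0, 0.0], (2, 2))      # rows/cols 0-1
bottom_right = crop(x, [1.0, 1.0], (2, 2))  # rows/cols 2-3
```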
Cross_Grid_Sum
The Cross_Grid_Sum
layer adds tensors over multiple
sub-grids. This is experimental functionality for use with sub-grid
parallelism.
Arguments: None
Cross_Grid_Sum_Slice
The Cross_Grid_Sum_Slice
layer adds tensors over multiple
sub-grids and slices. This is experimental functionality for use with
sub-grid parallelism.
Arguments: None
Dummy
The Dummy
layer is a placeholder layer with no child
layers. Rarely needed by users. This layer is used internally to
handle cases where a layer has no child layers.
Arguments: None
Evaluation
The Evaluation
layer is an interface with objective function
and metrics. Rarely needed by users. Evaluation layers are
automatically created when needed in the compute graph.
Arguments: None
Gather
The Gather layer gathers values from specified tensor indices. It expects two input tensors: an \(N\)-D data tensor \(x\) and a 1D index vector \(\text{ind}\). For 1D data,

\[ y[i] = x[\text{ind}[i]] \]

If an index is out-of-range, the corresponding output is set to zero. For higher-dimensional data, the layer performs a gather along one dimension. For example, with 2D data and axis=1,

\[ y[i,j] = x[i,\text{ind}[j]] \]

Currently, only 1D and 2D data is supported.

The size of the output tensor along the gather dimension is equal to the size of the index vector. The remaining dimensions of the output tensor are identical to the data tensor.
Todo
Support higher-dimensional data
Arguments:

- axis (google.protobuf.UInt64Value): Dimension to gather along
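The gather semantics described above can be sketched with NumPy (an illustrative sketch, not LBANN's implementation):

```python
import numpy as np

def gather_1d(x, ind):
    """y[i] = x[ind[i]]; out-of-range indices produce zeros."""
    y = np.zeros(len(ind), dtype=x.dtype)
    for i, idx in enumerate(ind):
        if 0 <= idx < len(x):
            y[i] = x[idx]
    return y

def gather_2d_axis1(x, ind):
    """y[i,j] = x[i, ind[j]], i.e. a gather along axis=1 of each row."""
    return np.stack([gather_1d(row, ind) for row in x])

x = np.array([1.0, 2.0, 3.0])
y = gather_1d(x, [2, 0, 7])  # index 7 is out-of-range -> [3., 1., 0.]
```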
Gaussian
The Gaussian layer is a random tensor with a Gaussian/normal distribution.

Arguments:

- mean (double): Distribution mean
- stdev (double): Distribution standard deviation
- neuron_dims (string): Tensor dimensions. List of integers.
- training_only (bool): Only generate random values during training. If true, the tensor is filled with the distribution mean during evaluation.
Hadamard
The Hadamard
layer is an entry-wise tensor product.
Arguments: None
InTopK
The InTopK layer is a one-hot vector indicating top-k entries. The output tensor has the same dimensions as the input tensor. Output entries corresponding to the top-k input entries are set to one and the rest to zero. Ties are broken in favor of entries with smaller indices.

Arguments:

- k (int64): Number of non-zeros in one-hot vector
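The top-k one-hot semantics, including the smaller-index tie-breaking, can be sketched in NumPy (illustrative only):

```python
import numpy as np

def in_top_k(x, k):
    """One-hot vector with ones at the top-k entries of x.

    Ties are broken in favor of smaller indices: sorting on
    (-value, index) ranks equal values by position.
    """
    order = sorted(range(len(x)), key=lambda i: (-x[i], i))
    y = np.zeros_like(x)
    y[order[:k]] = 1
    return y

# Three entries tie at 0.9; with k=2 the two with the smallest
# indices (0 and 2) win, and index 3 is excluded despite the tie.
y = in_top_k(np.array([0.9, 0.5, 0.9, 0.9]), 2)  # -> [1., 0., 1., 0.]
```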
Pooling
The Pooling
layer traverses the spatial dimensions of a data
tensor with a sliding window and applies a reduction operation.
Arguments:

- pool_mode (string, optional): Pooling operation. Options: max, average, average_no_pad
- num_dims (int64): Number of spatial dimensions. The first data dimension is treated as the channel dimension, and all others are treated as spatial dimensions (recall that the mini-batch dimension is implicit).
- has_vectors (bool): Whether to use vector-valued options. If true, then the pooling is configured with pool_dims, pool_pads, pool_strides. Otherwise, pool_dims_i, pool_pads_i, pool_strides_i.
- pool_dims (string): Pooling window dimensions (vector-valued). List of integers, one for each spatial dimension. Used when has_vectors is enabled.
- pool_pads (string): Pooling padding (vector-valued). List of integers, one for each spatial dimension. Used when has_vectors is enabled.
- pool_strides (string): Pooling strides (vector-valued). List of integers, one for each spatial dimension. Used when has_vectors is enabled.
- pool_dims_i (int64): Pooling window dimension (integer-valued). Used when has_vectors is disabled.
- pool_pads_i (int64): Pooling padding (integer-valued). Used when has_vectors is disabled.
- pool_strides_i (int64): Pooling stride (integer-valued). Used when has_vectors is disabled.
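The sliding-window reduction can be sketched in NumPy for a single 2D channel with max pooling, no padding, and a uniform stride (an illustrative sketch of the semantics, not LBANN's implementation):

```python
import numpy as np

def max_pool_2d(x, dim, stride):
    """Slide a dim x dim window over a 2D tensor and take the max in each window."""
    out_h = (x.shape[0] - dim) // stride + 1
    out_w = (x.shape[1] - dim) // stride + 1
    y = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride:i * stride + dim, j * stride:j * stride + dim]
            y[i, j] = window.max()
    return y

x = np.arange(16).reshape(4, 4)
y = max_pool_2d(x, dim=2, stride=2)  # -> [[5, 7], [13, 15]]
```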
Reduction
The Reduction layer reduces a tensor to a scalar.

Arguments:

- mode (string, optional): Reduction operation. Options: sum (default) or mean
Reshape
The Reshape layer reinterprets a tensor with new dimensions. The input and output tensors must have the same number of entries. This layer is very cheap since it just involves setting up tensor views.

Arguments:

- dims (string): Tensor dimensions. List of integers. A single dimension may be -1, in which case the dimension is inferred.

Deprecated and unused arguments:

- num_dims (int64)
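The -1 inference matches NumPy's reshape, where the missing dimension is chosen so the total number of entries is preserved (illustrative only):

```python
import numpy as np

x = np.arange(12)
y = x.reshape(3, -1)  # -1 is inferred as 4, since 3 * 4 = 12
# Reshaping only reinterprets the data; entries are not copied or reordered.
```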
Scatter
The Scatter layer scatters values to specified tensor indices. It expects two input tensors: an \(N\)-D data tensor \(x\) and a 1D index vector \(\text{ind}\). For 1D data,

\[ y[\text{ind}[i]] = x[i] \]

Out-of-range indices are ignored. For higher-dimensional data, the layer performs a scatter along one dimension. For example, with 2D data and axis=1,

\[ y[i,\text{ind}[j]] = x[i,j] \]

Currently, only 1D and 2D data is supported.

The size of the index vector must match the size of the data tensor along the scatter dimension.
Todo
Support higher-dimensional data
Arguments:

- dims (string): Output tensor dimensions. List of integers. Number of dimensions must match data tensor.
- axis (google.protobuf.UInt64Value): Dimension to scatter along
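The scatter semantics described above can be sketched with NumPy (an illustrative sketch, not LBANN's implementation):

```python
import numpy as np

def scatter_1d(x, ind, out_size):
    """y[ind[i]] = x[i]; out-of-range indices are ignored; other entries are zero."""
    y = np.zeros(out_size, dtype=x.dtype)
    for i, idx in enumerate(ind):
        if 0 <= idx < out_size:
            y[idx] = x[i]
    return y

x = np.array([1.0, 2.0, 3.0])
y = scatter_1d(x, [2, 0, 9], out_size=4)  # index 9 is ignored -> [2., 0., 1., 0.]
```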
Slice
The Slice layer slices a tensor along a specified dimension. The tensor is split along one dimension at user-specified points, and each child layer receives one piece.

Arguments:

- axis (int64): Tensor dimension to slice along
- slice_points (string): Positions at which to slice tensor. List of integers. Slice points must be in ascending order and the number of slice points must be one greater than the number of child layers.

Deprecated arguments:

- get_slice_points_from_reader (string): Do not use unless using the Jag dataset.
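The slice-point convention can be sketched in NumPy: with slice points [0, 2, 5], two child layers receive the pieces [0:2] and [2:5] along the given axis (illustrative only):

```python
import numpy as np

def slice_layer(x, axis, slice_points):
    """Split x along `axis` at the given points; one piece per child layer."""
    pieces = []
    for start, stop in zip(slice_points[:-1], slice_points[1:]):
        index = [slice(None)] * x.ndim
        index[axis] = slice(start, stop)
        pieces.append(x[tuple(index)])
    return pieces

x = np.arange(10).reshape(2, 5)
a, b = slice_layer(x, axis=1, slice_points=[0, 2, 5])  # shapes (2, 2) and (2, 3)
```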
Sort
The Sort layer sorts tensor entries.

Arguments:

- descending (bool): Sort entries in descending order
Split
The Split
layer outputs the input tensor to multiple child
layers.
Rarely needed by users. This layer is used internally to handle cases where a layer outputs the same tensor to multiple child layers. From a usage perspective, there is little difference from an identity layer.
This is not to be confused with the split operation in NumPy, PyTorch or TensorFlow. The name refers to splits in the compute graph.
Arguments: None
StopGradient
The StopGradient
layer blocks error signals during back
propagation.
The output is identical to the input, but the back propagation output (i.e. the error signal) is always zero. Compare with the stop_gradient operation in TensorFlow and Keras. Note that this means that computed gradients in preceding layers are not exact gradients of the objective function.
Arguments: None
Sum
The Sum layer calculates the element-wise sum of the input tensors.
Arguments: None
TensorPermute
The TensorPermute layer permutes the indices of a tensor, similar to a transposition. It expects one input tensor of order \(N\) and a length-\(N\) array containing a permutation of the indices [0..N-1], given with respect to the input tensor dimensions. Passing axes=[0,1,2] for a rank-3 tensor therefore invokes a plain copy.

At this time, only full permutations are supported: each index must be accounted for in the permuted array.

Arguments:

- axes (uint32): Permuted tensor dimensions. List of integers.
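The permutation corresponds to NumPy's transpose with an explicit axes argument (illustrative only):

```python
import numpy as np

x = np.zeros((2, 3, 4))
# Dimension i of the output is dimension axes[i] of the input.
y = np.transpose(x, axes=(2, 0, 1))  # shape (4, 2, 3)
# axes=(0, 1, 2) would be the identity permutation, i.e. a plain copy.
```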
Tessellate
The Tessellate layer repeats a tensor until it matches specified dimensions. The output tensor dimensions do not need to be integer multiples of the input dimensions. Compare with the NumPy tile function.

As an example, tessellating a \(2 \times 2\) matrix into a \(3 \times 4\) matrix looks like the following:

\[ \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 2 & 1 & 2 \\ 3 & 4 & 3 & 4 \\ 1 & 2 & 1 & 2 \end{pmatrix} \]
Arguments:

- dims (string): Output tensor dimensions. List of integers.
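The example above can be reproduced in NumPy by tiling to at least the output size and then cropping (a sketch of the semantics; the output dims need not be multiples of the input dims):

```python
import numpy as np

def tessellate(x, dims):
    """Repeat x until it covers `dims`, then crop to exactly `dims`."""
    reps = [-(-d // s) for d, s in zip(dims, x.shape)]  # ceiling division per dimension
    tiled = np.tile(x, reps)
    return tiled[tuple(slice(0, d) for d in dims)]

x = np.array([[1, 2],
              [3, 4]])
y = tessellate(x, (3, 4))  # -> [[1, 2, 1, 2], [3, 4, 3, 4], [1, 2, 1, 2]]
```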
Uniform
The Uniform layer is a random tensor with a uniform distribution.

Arguments:

- min (double): Distribution minimum
- max (double): Distribution maximum
- neuron_dims (string): Tensor dimensions. List of integers.
- training_only (bool): Only generate random values during training. If true, the tensor is filled with the distribution mean during evaluation.
Unpooling
The Unpooling
layer is the transpose of the pooling
layer. It is required that the pooling layer be set as the hint layer.
Warning
This has not been well maintained and is probably broken.
Todo
GPU support.
Arguments:

- num_dims (int64): Number of spatial dimensions. The first data dimension is treated as the channel dimension, and all others are treated as spatial dimensions (recall that the mini-batch dimension is implicit).
WeightedSum
The WeightedSum
layer adds tensors with scaling factors.
Arguments:

- scaling_factors (string): List of floating-point numbers, one for each input tensor.
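The operation is a plain linear combination of the input tensors (illustrative only):

```python
import numpy as np

def weighted_sum(tensors, scaling_factors):
    """Add tensors entry-wise, each scaled by its factor."""
    return sum(s * t for s, t in zip(scaling_factors, tensors))

a = np.array([1.0, 2.0])
b = np.array([10.0, 20.0])
y = weighted_sum([a, b], [0.5, 2.0])  # -> [20.5, 41.0]
```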
WeightsLayer
The WeightsLayer outputs values from a weights tensor. Interfaces with a weights object.

Arguments:

- dims (string): Weights tensor dimensions. List of integers.
CategoricalRandom (Deprecated)
The CategoricalRandom
layer is deprecated.
Arguments: None
DiscreteRandom (Deprecated)
The DiscreteRandom
layer is deprecated.
Arguments:

- values (string)
- dims (string)