Transform Layers

Layer                  Description
BatchwiseReduceSum     Sum of tensor entries over batch dimension
Bernoulli              Random tensor with Bernoulli distribution
Concatenation          Concatenate tensors along specified dimension
Constant               Output tensor filled with a single value
Crop                   Extract crop from tensor at a position
Cross_Grid_Sum         Add tensors over multiple sub-grids
Cross_Grid_Sum_Slice   Add tensors over multiple sub-grids and slice
Dummy                  Placeholder layer with no child layers
Evaluation             Interface with objective function and metrics
Gather                 Gather values from specified tensor indices
Gaussian               Random tensor with Gaussian/normal distribution
Hadamard               Entry-wise tensor product
IdentityZero           Identity function if unfrozen, zero function if frozen
InTopK                 One-hot vector indicating top-k entries
Pooling                Traverses the spatial dimensions of a data tensor with a sliding window and applies a reduction operation
Reduction              Reduce tensor to scalar
Reshape                Reinterpret tensor with new dimensions
Scatter                Scatter values to specified tensor indices
Slice                  Slice tensor along specified dimension
Sort                   Sort tensor entries
Split                  Output the input tensor to multiple child layers
StopGradient           Block error signals during back propagation
Sum                    Add multiple tensors
TensorPermute          Permute the indices of a tensor
Tessellate             Repeat a tensor until it matches specified dimensions
Uniform                Random tensor with uniform distribution
Unpooling              Transpose of pooling layer
WeightedSum            Add tensors with scaling factors
WeightsLayer           Output values from a weights tensor

Deprecated transform layers:

Layer                            Description
CategoricalRandom (Deprecated)   Deprecated
DiscreteRandom (Deprecated)      Deprecated


BatchwiseReduceSum

The BatchwiseReduceSum layer is the sum of tensor entries over the batch dimension. The output tensor has the same shape as the input tensor.
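The semantics can be sketched in NumPy (an illustrative reading of the description above, not the LBANN implementation):

    import numpy as np

    # Each output sample receives the sum over the (implicit) mini-batch
    # dimension, so the output shape matches the input shape.
    x = np.array([[1., 2.], [3., 4.], [5., 6.]])  # mini-batch of 3 samples
    y = np.broadcast_to(x.sum(axis=0), x.shape)
    print(y)  # every row is [ 9. 12.]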

Arguments: None

Back to Top


IdentityZero

The IdentityZero layer acts as the identity function while the layer is unfrozen and outputs a tensor of zeros while it is frozen. This is useful for more complex training setups like GANs, where you want to reuse the computational graph but switch loss functions.
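A minimal sketch of this behavior (the frozen flag is a hypothetical stand-in for the layer's frozen state, not the LBANN API):

    import numpy as np

    # Identity when unfrozen, zeros when frozen.
    def identity_zero(x, frozen):
        return np.zeros_like(x) if frozen else x

    x = np.ones(3)
    print(identity_zero(x, frozen=False))  # [1. 1. 1.]
    print(identity_zero(x, frozen=True))   # [0. 0. 0.]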

Arguments:

num_neurons

(string) Tensor dimensions

List of integers

Back to Top


Bernoulli

The Bernoulli layer is a random tensor with a Bernoulli distribution. Randomness is only applied during training. The tensor is filled with zeros during evaluation.

Arguments:

prob

(double) Bernoulli distribution probability

neuron_dims

(string) Tensor dimensions

List of integers

Back to Top


Concatenation

The Concatenation layer concatenates tensors along a specified dimension. All input tensors must have identical dimensions, except for the concatenation dimension.

Arguments:

axis

(int64) Tensor dimension to concatenate along

Back to Top


Constant

The Constant layer is an output tensor filled with a single value.

Arguments:

value

(double) Value of tensor entries

num_neurons

(string) Tensor dimensions

List of integers

Back to Top


Crop

The Crop layer extracts a crop from a tensor at a position. It expects two input tensors: an \(N\)-D data tensor and a 1D position vector with \(N\) entries. The position vector should be normalized so that values are in \([0,1]\). For images in CHW format, a position of (0,0,0) corresponds to the red-top-left corner and (1,1,1) to the blue-bottom-right corner.
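One plausible reading of the position semantics, sketched in 1D (the linear mapping from normalized position to window offset is an assumption, not the documented LBANN formula):

    import numpy as np

    # Assumed mapping: the normalized position is scaled onto the valid
    # range of window offsets, so pos=0 is the start and pos=1 the end.
    def crop_1d(x, pos, crop_size):
        offset = int(round(pos * (len(x) - crop_size)))
        return x[offset:offset + crop_size]

    x = np.arange(10.)
    print(crop_1d(x, pos=0.0, crop_size=4))  # [0. 1. 2. 3.]
    print(crop_1d(x, pos=1.0, crop_size=4))  # [6. 7. 8. 9.]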

Arguments:

dims

(string) Crop dimensions

List of integers

Back to Top


Cross_Grid_Sum

The Cross_Grid_Sum layer adds tensors over multiple sub-grids. This is experimental functionality for use with sub-grid parallelism.

Arguments: None

Back to Top


Cross_Grid_Sum_Slice

The Cross_Grid_Sum_Slice layer adds tensors over multiple sub-grids and slices. This is experimental functionality for use with sub-grid parallelism.

Arguments: None

Back to Top


Dummy

The Dummy layer is a placeholder layer with no child layers. Rarely needed by users. This layer is used internally to handle cases where a layer has no child layers.

Arguments: None

Back to Top


Evaluation

The Evaluation layer is an interface with the objective function and metrics. Rarely needed by users. Evaluation layers are automatically created when needed in the compute graph.

Arguments: None

Back to Top


Gather

The Gather layer gathers values from specified tensor indices. It expects two input tensors: an \(N\)-D data tensor and a 1D index vector. For 1D data:

\[y[i] = x[\text{ind}[i]]\]

If an index is out-of-range, the corresponding output is set to zero.

For higher-dimensional data, the layer performs a gather along one dimension. For example, with 2D data and axis=1,

\[y[i,j] = x[i,\text{ind}[j]]\]

Currently, only 1D and 2D data is supported.

The size of the output tensor along the gather dimension is equal to the size of the index vector. The remaining dimensions of the output tensor are identical to those of the data tensor.
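A short NumPy sketch of the 1D case (the gather_1d helper is hypothetical, not the LBANN API):

    import numpy as np

    # y[i] = x[ind[i]], with out-of-range indices producing zeros.
    def gather_1d(x, ind):
        y = np.zeros(len(ind), dtype=x.dtype)
        for i, idx in enumerate(ind):
            if 0 <= idx < len(x):
                y[i] = x[idx]
        return y

    x = np.array([10., 20., 30.])
    print(gather_1d(x, [2, 0, 5]))  # [30. 10.  0.] -- index 5 is out of range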

Todo

Support higher-dimensional data

Arguments:

axis

(google.protobuf.UInt64Value) Dimension to gather along

Back to Top


Gaussian

The Gaussian layer is a random tensor with Gaussian/normal distribution.

Arguments:

mean

(double) Distribution mean

stdev

(double) Distribution standard deviation

neuron_dims

(string) Tensor dimensions

List of integers

training_only

(bool) Only generate random values during training

If true, the tensor is filled with the distribution mean during evaluation.

Back to Top


Hadamard

The Hadamard layer is an entry-wise tensor product.

Arguments: None

Back to Top


InTopK

The InTopK layer is a one-hot vector indicating the top-k entries. The output tensor has the same dimensions as the input tensor. Entries corresponding to the top-k input entries are set to one and the rest to zero. Ties are broken in favor of entries with smaller indices.
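A minimal sketch of these semantics (the in_top_k helper is hypothetical, not the LBANN API):

    import numpy as np

    # A stable sort preserves the order of equal entries, so ties are
    # broken in favor of smaller indices.
    def in_top_k(x, k):
        order = np.argsort(-x, kind='stable')  # indices in descending order
        y = np.zeros_like(x)
        y[order[:k]] = 1
        return y

    print(in_top_k(np.array([3., 1., 3., 2.]), k=2))  # [1. 0. 1. 0.]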

Arguments:

k

(int64) Number of non-zeros in one-hot vector

Back to Top


Pooling

The Pooling layer traverses the spatial dimensions of a data tensor with a sliding window and applies a reduction operation.
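As a concrete illustration, here is a minimal 2D max-pooling sketch over a single channel (illustrative only, not the LBANN implementation; assumes no padding and a square window):

    import numpy as np

    # Slide a square window across the tensor and reduce each window
    # with max().
    def max_pool2d(x, window, stride):
        h = (x.shape[0] - window) // stride + 1
        w = (x.shape[1] - window) // stride + 1
        y = np.empty((h, w), dtype=x.dtype)
        for i in range(h):
            for j in range(w):
                y[i, j] = x[i*stride:i*stride+window,
                            j*stride:j*stride+window].max()
        return y

    x = np.arange(16.).reshape(4, 4)
    print(max_pool2d(x, window=2, stride=2))  # [[ 5.  7.] [13. 15.]]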

Arguments:

pool_mode

(string, optional) Pooling operation

Options: max, average, average_no_pad

num_dims

(int64) Number of spatial dimensions

The first data dimension is treated as the channel dimension, and all others are treated as spatial dimensions (recall that the mini-batch dimension is implicit).

has_vectors

(bool) Whether to use vector-valued options

If true, the pooling is configured with pool_dims, pool_pads, and pool_strides. Otherwise, pool_dims_i, pool_pads_i, and pool_strides_i are used.

pool_dims

(string) Pooling window dimensions (vector-valued)

List of integers, one for each spatial dimension. Used when has_vectors is enabled.

pool_pads

(string) Pooling padding (vector-valued)

List of integers, one for each spatial dimension. Used when has_vectors is enabled.

pool_strides

(string) Pooling strides (vector-valued)

List of integers, one for each spatial dimension. Used when has_vectors is enabled.

pool_dims_i

(int64) Pooling window dimension (integer-valued)

Used when has_vectors is disabled.

pool_pads_i

(int64) Pooling padding (integer-valued)

Used when has_vectors is disabled.

pool_strides_i

(int64) Pooling stride (integer-valued)

Used when has_vectors is disabled.

Back to Top


Reduction

The Reduction layer reduces a tensor to a scalar.

Arguments:

mode

(string, optional) Reduction operation

Options: sum (default) or mean

Back to Top


Reshape

The Reshape layer reinterprets a tensor with new dimensions.

The input and output tensors must have the same number of entries. This layer is very cheap since it just involves setting up tensor views.

Arguments:

dims

(string) Tensor dimensions

List of integers. A single dimension may be -1, in which case the dimension is inferred.
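The -1 convention matches NumPy's reshape, for example:

    import numpy as np

    # One dimension may be left as -1 and is inferred from the total
    # number of entries.
    x = np.arange(12.)
    print(x.reshape(3, -1).shape)  # (3, 4) -- the -1 is inferred as 4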

Deprecated and unused arguments:

num_dims

(int64)

Back to Top


Scatter

The Scatter layer scatters values to specified tensor indices. It expects two input tensors: an \(N\)-D data tensor and a 1D index vector. For 1D data:

\[y[\text{ind}[i]] = x[i]\]

Out-of-range indices are ignored.

For higher-dimensional data, the layer performs a scatter along one dimension. For example, with 2D data and axis=1,

\[y[i,\text{ind}[j]] = x[i,j]\]

Currently, only 1D and 2D data is supported.

The size of the index vector must match the size of the data tensor along the scatter dimension.
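A short NumPy sketch of the 1D case (the scatter_1d helper is hypothetical, not the LBANN API):

    import numpy as np

    # y[ind[i]] = x[i], with out-of-range indices ignored.
    def scatter_1d(x, ind, out_size):
        y = np.zeros(out_size, dtype=x.dtype)
        for i, idx in enumerate(ind):
            if 0 <= idx < out_size:
                y[idx] = x[i]
        return y

    x = np.array([10., 20., 30.])
    print(scatter_1d(x, [2, 0, 5], out_size=4))  # [20.  0. 10.  0.]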

Todo

Support higher-dimensional data

Arguments:

dims

(string) Output tensor dimensions

List of integers. Number of dimensions must match data tensor.

axis

(google.protobuf.UInt64Value) Dimension to scatter along

Back to Top


Slice

The Slice layer slices a tensor along a specified dimension. The tensor is split along one dimension at user-specified points, and each child layer receives one piece.

Arguments:

axis

(int64) Tensor dimension to slice along

slice_points

(string) Positions at which to slice tensor

List of integers. Slice points must be in ascending order and the number of slice points must be one greater than the number of child layers.
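For example, with slice points [0, 2, 5] and two child layers, the children receive the index ranges [0, 2) and [2, 5) along the slice axis. A minimal sketch (the slice_pieces helper is hypothetical, not the LBANN API):

    import numpy as np

    # Consecutive pairs of slice points delimit the piece sent to each
    # child layer.
    def slice_pieces(x, axis, slice_points):
        return [np.take(x, range(a, b), axis=axis)
                for a, b in zip(slice_points[:-1], slice_points[1:])]

    x = np.arange(10.).reshape(2, 5)
    for piece in slice_pieces(x, axis=1, slice_points=[0, 2, 5]):
        print(piece.shape)  # (2, 2) then (2, 3)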

Deprecated arguments:

get_slice_points_from_reader

(string) Do not use unless using the Jag dataset.

Back to Top


Sort

The Sort layer sorts tensor entries.

Arguments:

descending

(bool) Sort entries in descending order

Back to Top


Split

The Split layer outputs the input tensor to multiple child layers.

Rarely needed by users. This layer is used internally to handle cases where a layer outputs the same tensor to multiple child layers. From a usage perspective, there is little difference from an identity layer.

This is not to be confused with the split operation in NumPy, PyTorch or TensorFlow. The name refers to splits in the compute graph.

Arguments: None

Back to Top


StopGradient

The StopGradient layer blocks error signals during back propagation.

The output is identical to the input, but the back propagation output (i.e. the error signal) is always zero. Compare with the stop_gradient operation in TensorFlow and Keras. Note that this means that computed gradients in preceding layers are not exact gradients of the objective function.

Arguments: None

Back to Top


Sum

The Sum layer computes the element-wise sum of the input tensors.

Arguments: None

Back to Top


TensorPermute

The TensorPermute layer permutes the indices of a tensor, similar to a transposition.

It expects one input tensor of order \(N\) and a length-\(N\) array containing a permutation of the indices [0..N-1], interpreted with respect to the input tensor dimensions. Passing axes=[0,1,2] for a rank-3 tensor therefore performs a simple copy.

At this time, only full permutations are supported: every index must appear exactly once in the axes array.
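The semantics match NumPy's transpose with an explicit axes argument:

    import numpy as np

    x = np.zeros((2, 3, 4))
    print(np.transpose(x, axes=[1, 2, 0]).shape)  # (3, 4, 2)
    print(np.transpose(x, axes=[0, 1, 2]).shape)  # (2, 3, 4) -- identity permutation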

Arguments:

axes

(uint32) Permuted tensor dimensions

List of integers

Back to Top


Tessellate

The Tessellate layer repeats a tensor until it matches specified dimensions.

The output tensor dimensions do not need to be integer multiples of the input dimensions. Compare with the NumPy tile function.

As an example, tessellating a \(2 \times 2\) matrix into a \(3 \times 4\) matrix looks like the following:

\[\begin{split}\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 1 & 2 \\ 3 & 4 & 3 & 4 \\ 1 & 2 & 1 & 2 \end{bmatrix}\end{split}\]
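A minimal sketch of this operation (the tessellate helper is hypothetical, not the LBANN API): tile the input past the target shape, then crop.

    import numpy as np

    # Tile with ceiling-division repetition counts, then crop to the
    # requested dimensions (which need not be integer multiples).
    def tessellate(x, dims):
        reps = [-(-d // s) for d, s in zip(dims, x.shape)]
        tiled = np.tile(x, reps)
        return tiled[tuple(slice(0, d) for d in dims)]

    x = np.array([[1, 2], [3, 4]])
    print(tessellate(x, (3, 4)))
    # [[1 2 1 2]
    #  [3 4 3 4]
    #  [1 2 1 2]]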

Arguments:

dims

(string) Output tensor dimensions

List of integers

Back to Top


Uniform

The Uniform layer is a random tensor with a uniform distribution.

Arguments:

min

(double) Distribution minimum

max

(double) Distribution maximum

neuron_dims

(string) Tensor dimensions

List of integers

training_only

(bool) Only generate random values during training

If true, the tensor is filled with the distribution mean during evaluation.

Back to Top


Unpooling

The Unpooling layer is the transpose of the pooling layer. The corresponding pooling layer must be set as the hint layer.

Warning

This has not been well maintained and is probably broken.

Todo

GPU support.

Arguments:

num_dims

(int64) Number of spatial dimensions

The first data dimension is treated as the channel dimension, and all others are treated as spatial dimensions (recall that the mini-batch dimension is implicit).

Back to Top


WeightedSum

The WeightedSum layer adds tensors with scaling factors.

Arguments:

scaling_factors

(string) List of floating-point numbers, one for each input tensor.
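The operation is simply \(y = \sum_i c_i x_i\), for example:

    import numpy as np

    xs = [np.ones(3), 2 * np.ones(3)]   # input tensors
    scaling_factors = [0.5, -1.0]       # one factor per input
    y = sum(c * x for c, x in zip(scaling_factors, xs))
    print(y)  # [-1.5 -1.5 -1.5]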

Back to Top


WeightsLayer

The WeightsLayer outputs values from a weights tensor. It interfaces with a weights object.

Arguments:

dims

(string) Weights tensor dimensions

List of integers

Back to Top


CategoricalRandom (Deprecated)

The CategoricalRandom layer is deprecated.

Arguments: None

Back to Top


DiscreteRandom (Deprecated)

The DiscreteRandom layer is deprecated.

Arguments:

values

(string)

dims

(string)

Back to Top