symbol
Symbol API of MXNet.
Functions
Applies an activation function element-wise to the input.
Batch normalization.
Batch normalization.
Applies bilinear sampling to input feature map.
Stops gradient computation.
Connectionist Temporal Classification Loss.
Casts all elements of the input to a new type.
Joins input arrays along a given axis.
Compute N-D convolution on (N+2)-D input.
This operator is DEPRECATED.
Applies correlation to inputs.
Crop is deprecated. Use slice instead.
Apply a custom operator implemented in a frontend language (like Python).
Computes 1D or 2D transposed convolution (aka fractionally strided convolution) of the input tensor.
Applies dropout operation to input array.
Adds all input arguments element-wise.
Maps integer indices to vector representations (embeddings).
Flattens the input array into a 2-D array by collapsing the higher dimensions.
Applies a linear transformation: \(Y = XW^T + b\).
Generates 2D sampling grid for bilinear sampling.
Group normalization.
Applies a sparse regularization to the output of a sigmoid activation function.
Applies instance normalization to the n-dimensional input array.
Normalize the input array using the L2 norm.
Applies local response normalization to the input.
Layer normalization.
Applies Leaky rectified linear unit activation element-wise to the input.
Computes and optimizes for squared loss during backward propagation.
Applies a logistic function to the input.
Computes mean absolute error of the input.
Make your own loss function in network construction.
Pads an input array with a constant or edge values of the array.
Performs pooling on the input.
This operator is DEPRECATED.
Applies recurrent layers to input data.
Performs region of interest (ROI) pooling on the input array.
Reshapes the input array.
Computes support vector machine based transformation of the input.
Takes the last element of a sequence.
Sets all elements outside the sequence to a constant value.
Reverses the elements of each sequence.
Splits an array along a particular axis into multiple sub-arrays.
Computes the gradient of cross entropy loss with respect to softmax output.
Applies softmax activation to input.
Computes the gradient of cross entropy loss with respect to softmax output.
Applies a spatial transformer to input feature map.
Interchanges two axes of an array.
Upsamples the given input data.
Returns element-wise absolute value of the input.
Update function for Adam optimizer.
Adds all input arguments element-wise.
Check if all the float numbers in the array are finite (used for AMP).
Cast function between low precision float/FP32 used by AMP.
Cast function used by AMP, that casts its inputs to the common widest type.
Returns element-wise inverse cosine of the input array.
Returns the inverse hyperbolic cosine of the input array, computed element-wise.
Returns element-wise inverse sine of the input array.
Returns the inverse hyperbolic sine of the input array, computed element-wise.
Returns element-wise inverse tangent of the input array.
Returns the inverse hyperbolic tangent of the input array, computed element-wise.
Returns indices of the maximum values along an axis.
Returns argmax indices of each channel from the input array.
Returns indices of the minimum values along an axis.
Returns the indices that would sort an input array along the given axis.
Batchwise dot product.
Takes elements from a data batch.
Returns element-wise sum of the input arrays with broadcasting.
Broadcasts the input array over particular axes.
Broadcasts the input array over particular axes.
Returns element-wise division of the input arrays with broadcasting.
Returns the result of element-wise equal to (==) comparison operation with broadcasting.
Returns the result of element-wise greater than (>) comparison operation with broadcasting.
Returns the result of element-wise greater than or equal to (>=) comparison operation with broadcasting.
Returns the hypotenuse of a right angled triangle, given its “legs” with broadcasting.
Returns the result of element-wise lesser than (<) comparison operation with broadcasting.
Returns the result of element-wise lesser than or equal to (<=) comparison operation with broadcasting.
Broadcasts lhs to have the same shape as rhs.
Returns the result of element-wise logical and with broadcasting.
Returns the result of element-wise logical or with broadcasting.
Returns the result of element-wise logical xor with broadcasting.
Returns element-wise maximum of the input arrays with broadcasting.
Returns element-wise minimum of the input arrays with broadcasting.
Returns element-wise difference of the input arrays with broadcasting.
Returns element-wise modulo of the input arrays with broadcasting.
Returns element-wise product of the input arrays with broadcasting.
Returns the result of element-wise not equal to (!=) comparison operation with broadcasting.
Returns element-wise sum of the input arrays with broadcasting.
Returns result of first array elements raised to powers from second array, element-wise with broadcasting.
Returns element-wise difference of the input arrays with broadcasting.
Broadcasts the input array to a new shape.
Casts all elements of the input to a new type.
Casts tensor storage type to the new type.
Returns element-wise cube-root value of the input.
Returns element-wise ceiling of the input.
Picks elements from an input array according to the input indices along the given axis.
Clips (limits) the values in an array.
Combines the output column matrix of im2col back into an image array.
Joins input arrays along a given axis.
Computes the element-wise cosine of the input array.
Returns the hyperbolic cosine of the input array, computed element-wise.
Slices a region of the array.
Connectionist Temporal Classification Loss.
Return the cumulative sum of the elements along a given axis.
Converts each element of the input array from radians to degrees.
Rearranges (permutes) data from depth into blocks of spatial data.
Extracts a diagonal or constructs a diagonal array.
Dot product of two arrays.
Adds arguments element-wise.
Divides arguments element-wise.
Multiplies arguments element-wise.
Subtracts arguments element-wise.
Returns element-wise gauss error function of the input.
Returns element-wise inverse gauss error function of the input.
Returns element-wise exponential value of the input.
Inserts a new axis of size 1 into the array shape.
Returns exp(x) - 1 computed element-wise on the input.
Fills one element of each line (row for Python, column for R/Julia) in lhs according to index indicated by rhs and values indicated by mhs.
Returns element-wise rounded value to the nearest integer towards zero of the input.
Flattens the input array into a 2-D array by collapsing the higher dimensions.
Reverses the order of elements along given axis while preserving array shape.
Returns element-wise floor of the input.
The FTML optimizer described in FTML - Follow the Moving Leader in Deep Learning, available at http://proceedings.mlr.press/v70/zheng17a/zheng17a.pdf.
Update function for Ftrl optimizer.
Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array.
Returns element-wise log of the absolute value of the gamma function of the input.
Gathers elements or slices from data and stores them in a tensor whose shape is defined by indices.
Computes hard sigmoid of x element-wise.
Returns a copy of the input.
Extract sliding blocks from input array.
Computes the Khatri-Rao product of the input matrices.
Phase I of the LAMB update; it performs the following operations and returns g.
Phase II of the LAMB update; it performs the following operations and updates grad.
Compute the determinant of a matrix.
Extracts the diagonal entries of a square matrix.
Extracts a triangular sub-matrix from a square matrix.
LQ factorization for general matrix.
Performs general matrix multiplication and accumulation.
Performs general matrix multiplication.
Compute the inverse of a matrix.
Constructs a square matrix with the input as diagonal.
Constructs a square matrix with the input representing a specific triangular sub-matrix.
Performs Cholesky factorization of a symmetric positive-definite matrix.
Performs matrix inversion from a Cholesky factorization.
Compute the sign and log of the determinant of a matrix.
Computes the sum of the logarithms of the diagonal elements of a square matrix.
Multiplication of matrix with its transpose.
Performs multiplication with a lower triangular matrix.
Solves matrix equation involving a lower triangular matrix.
Returns element-wise Natural logarithmic value of the input.
Returns element-wise Base-10 logarithmic value of the input.
Returns element-wise log(1 + x) value of the input.
Returns element-wise Base-2 logarithmic value of the input.
Computes the log softmax of the input.
Returns the result of logical NOT (!) function.
Make your own loss function in network construction.
Computes the max of array elements over given axes.
Computes the max of array elements over given axes.
Computes the mean of array elements over given axes.
Computes the min of array elements over given axes.
Computes the min of array elements over given axes.
Calculate the mean and variance of data.
Mixed-precision version of Phase I of the LAMB update; it performs the following operations and returns g.
Mixed-precision version of Phase II of the LAMB update; it performs the following operations and updates grad.
Update function for multi-precision Nesterov Accelerated Gradient (NAG) optimizer.
Updater function for multi-precision sgd optimizer.
Updater function for multi-precision sgd optimizer.
Check if all the float numbers in all the arrays are finite (used for AMP).
Compute the LARS coefficients of multiple weights and grads from their sums of squares.
Momentum update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
Update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
Update function for Stochastic Gradient Descent (SGD) optimizer.
Compute the sums of squares of multiple arrays.
Update function for Nesterov Accelerated Gradient (NAG) optimizer.
Computes the product of array elements over given axes, treating Not a Numbers (NaN) as one.
Computes the sum of array elements over given axes, treating Not a Numbers (NaN) as zero.
Numerical negative of the argument, element-wise.
Computes the norm on an NDArray.
Draw random samples from a normal (Gaussian) distribution.
Returns a one-hot array.
Return an array of ones with the same shape and type as the input array.
Pads an input array with a constant or edge values of the array.
Picks elements from an input array according to the input indices along the given axis.
Momentum update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
Update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
Update function for Stochastic Gradient Descent (SGD) optimizer.
Computes the product of array elements over given axes.
Converts each element of the input array from degrees to radians.
Draw random samples from an exponential distribution.
Draw random samples from a gamma distribution.
Draw random samples from a generalized negative binomial distribution.
Draw random samples from a negative binomial distribution.
Draw random samples from a normal (Gaussian) distribution.
Computes the value of the PDF of sample of Dirichlet distributions with parameter alpha.
Computes the value of the PDF of sample of exponential distributions with parameters lam (rate).
Computes the value of the PDF of sample of gamma distributions with parameters alpha (shape) and beta (rate).
Computes the value of the PDF of sample of generalized negative binomial distributions with parameters mu (mean) and alpha (dispersion).
Computes the value of the PDF of samples of negative binomial distributions with parameters k (failure limit) and p (failure probability).
Computes the value of the PDF of sample of normal distributions with parameters mu (mean) and sigma (standard deviation).
Computes the value of the PDF of sample of Poisson distributions with parameters lam (rate).
Computes the value of the PDF of sample of uniform distributions on the intervals given by [low,high).
Draw random samples from a Poisson distribution.
Draw random samples from a discrete uniform distribution.
Draw random samples from a uniform distribution.
Converts a batch of index arrays into an array of flat indices.
Returns element-wise inverse cube-root value of the input.
Returns the reciprocal of the argument, element-wise.
Computes rectified linear activation.
Repeats elements of an array.
Sets multiple arrays to zero.
Reshapes the input array.
Reshape some or all dimensions of lhs to have the same shape as some or all dimensions of rhs.
Reverses the order of elements along given axis while preserving array shape.
Returns element-wise rounded value to the nearest integer of the input.
Update function for RMSProp optimizer.
Update function for RMSPropAlex optimizer.
Returns element-wise rounded value to the nearest integer of the input.
Returns element-wise inverse square-root value of the input.
Concurrent sampling from multiple exponential distributions with parameters lambda (rate).
Concurrent sampling from multiple gamma distributions with parameters alpha (shape) and beta (scale).
Concurrent sampling from multiple generalized negative binomial distributions with parameters mu (mean) and alpha (dispersion).
Concurrent sampling from multiple multinomial distributions.
Concurrent sampling from multiple negative binomial distributions with parameters k (failure limit) and p (failure probability).
Concurrent sampling from multiple normal distributions with parameters mu (mean) and sigma (standard deviation).
Concurrent sampling from multiple Poisson distributions with parameters lambda (rate).
Concurrent sampling from multiple uniform distributions on the intervals given by [low,high).
Scatters data into a new tensor according to indices.
Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
Update function for Stochastic Gradient Descent (SGD) optimizer.
Returns a 1D int64 array containing the shape of data.
Randomly shuffle the elements.
Computes sigmoid of x element-wise.
Returns element-wise sign of the input.
Update function for SignSGD optimizer.
SIGN momentUM (Signum) optimizer.
Computes the element-wise sine of the input array.
Returns the hyperbolic sine of the input array, computed element-wise.
Returns a 1D int64 array containing the size of data.
Slices a region of the array.
Slices along a given axis.
Slices a region of the array like the shape of another array.
Calculate Smooth L1 Loss(lhs, scalar) by summing.
Applies the softmax function.
Calculate cross entropy of softmax output and one-hot label.
Applies the softmin function.
Computes softsign of x element-wise.
Returns a sorted copy of an input array along the given axis.
Rearranges (permutes) blocks of spatial data into depth.
Splits an array along a particular axis into multiple sub-arrays.
Returns element-wise square-root value of the input.
Returns element-wise squared value of the input.
Remove single-dimensional entries from the shape of an array.
Join a sequence of arrays along a new axis.
Stops gradient computation.
Computes the sum of array elements over given axes.
Computes the sum of array elements over given axes.
Interchanges two axes of an array.
Takes elements from an input array along the given axis.
Computes the element-wise tangent of the input array.
Returns the hyperbolic tangent of the input array, computed element-wise.
Repeats the whole array multiple times.
Returns the indices of the top k elements in an input array along the given axis (by default).
Permutes the dimensions of an array.
Return the element-wise truncated value of the input.
Draw random samples from a uniform distribution.
Converts an array of flat indices into a batch of index arrays.
Return the elements, either from x or y, depending on the condition.
Return an array of zeros with the same shape, type and storage type as the input array.
Creates a symbolic variable with specified name.
Creates a symbolic variable with specified name.
Creates a symbol that contains a collection of other symbols, grouped together.
Loads symbol from a JSON file.
Loads symbol from a JSON string.
Returns element-wise result of base element raised to powers from exp element.
Returns element-wise result of base element raised to powers from exp element.
Returns element-wise maximum of the input elements.
Returns element-wise minimum of the input elements.
Given the “legs” of a right triangle, returns its hypotenuse.
Returns a new symbol of 2-D shape, filled with ones on the diagonal and zeros elsewhere.
Returns a new symbol of given shape and type, filled with zeros.
Returns a new symbol of given shape and type, filled with ones.
Returns a new array of given shape and type, filled with the given value val.
Returns evenly spaced values within a given interval.
Return evenly spaced numbers within a specified interval.
Compute the histogram of the input data.
Split an array into multiple sub-arrays.
Classes
Symbol is a symbolic graph of MXNet.
-
mxnet.symbol.Activation(data=None, act_type=_Null, name=None, attr=None, out=None, **kwargs)

Applies an activation function element-wise to the input.
The following activation functions are supported:
relu: Rectified Linear Unit, \(y = max(x, 0)\)
sigmoid: \(y = \frac{1}{1 + exp(-x)}\)
tanh: Hyperbolic tangent, \(y = \frac{exp(x) - exp(-x)}{exp(x) + exp(-x)}\)
softrelu: Soft ReLU, or SoftPlus, \(y = log(1 + exp(x))\)
softsign: \(y = \frac{x}{1 + abs(x)}\)
Defined in src/operator/nn/activation.cc:L164
- Parameters
data (Symbol) – The input array.
act_type ({'relu', 'sigmoid', 'softrelu', 'softsign', 'tanh'}, required) – Activation function to be applied.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
Examples
A one-hidden-layer MLP with ReLU activation:
>>> data = Variable('data')
>>> mlp = FullyConnected(data=data, num_hidden=128, name='proj')
>>> mlp = Activation(data=mlp, act_type='relu', name='activation')
>>> mlp = FullyConnected(data=mlp, num_hidden=10, name='mlp')
>>> mlp
<Symbol mlp>
ReLU activation
>>> test_suites = [
...     ('relu', lambda x: np.maximum(x, 0)),
...     ('sigmoid', lambda x: 1 / (1 + np.exp(-x))),
...     ('tanh', lambda x: np.tanh(x)),
...     ('softrelu', lambda x: np.log(1 + np.exp(x)))
... ]
>>> x = test_utils.random_arrays((2, 3, 4))
>>> for act_type, numpy_impl in test_suites:
...     op = Activation(act_type=act_type, name='act')
...     y = test_utils.simple_forward(op, act_data=x)
...     y_np = numpy_impl(x)
...     print('%s: %s' % (act_type, test_utils.almost_equal(y, y_np)))
relu: True
sigmoid: True
tanh: True
softrelu: True
-
mxnet.symbol.BatchNorm(data=None, gamma=None, beta=None, moving_mean=None, moving_var=None, eps=_Null, momentum=_Null, fix_gamma=_Null, use_global_stats=_Null, output_mean_var=_Null, axis=_Null, cudnn_off=_Null, min_calib_range=_Null, max_calib_range=_Null, name=None, attr=None, out=None, **kwargs)

Batch normalization.
Normalizes a data batch by mean and variance, and applies a scale gamma as well as offset beta.

Assume the input has more than one dimension and we normalize along axis 1. We first compute the mean and variance along this axis:

\[\begin{split}data\_mean[i] = mean(data[:,i,:,...]) \\ data\_var[i] = var(data[:,i,:,...])\end{split}\]

Then compute the normalized output, which has the same shape as the input, as follows:

\[out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]\]

Both mean and var return a scalar by treating the input as a vector.

Assume the input has size k on axis 1, then both gamma and beta have shape (k,). If output_mean_var is set to be true, then it outputs both data_mean and the inverse of data_var, which are needed for the backward pass. Note that the gradients of these two outputs are blocked.

Besides the inputs and the outputs, this operator accepts two auxiliary states, moving_mean and moving_var, which are k-length vectors. They are global statistics for the whole dataset, which are updated by:

moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
moving_var = moving_var * momentum + data_var * (1 - momentum)

If use_global_stats is set to be true, then moving_mean and moving_var are used instead of data_mean and data_var to compute the output. It is often used during inference.

The parameter axis specifies which axis of the input shape denotes the ‘channel’ (separately normalized groups). The default is 1. Specifying -1 sets the channel axis to be the last item in the input shape.

Both gamma and beta are learnable parameters. But if fix_gamma is true, then set gamma to 1 and its gradient to 0.

Note

When fix_gamma is set to True, no sparse support is provided. If fix_gamma is set to False, the sparse tensors will fall back to dense.

Defined in src/operator/nn/batch_norm.cc:L608
- Parameters
data (Symbol) – Input data to batch normalization
gamma (Symbol) – gamma array
beta (Symbol) – beta array
moving_mean (Symbol) – running mean of input
moving_var (Symbol) – running variance of input
eps (double, optional, default=0.0010000000474974513) – Epsilon to prevent div 0. Must be no less than CUDNN_BN_MIN_EPSILON defined in cudnn.h when using cudnn (usually 1e-5)
momentum (float, optional, default=0.899999976) – Momentum for moving average
fix_gamma (boolean, optional, default=1) – Fix gamma while training
use_global_stats (boolean, optional, default=0) – Whether use global moving statistics instead of local batch-norm. This will force change batch-norm into a scale shift operator.
output_mean_var (boolean, optional, default=0) – Output the mean and inverse std
axis (int, optional, default='1') – Specify which shape axis the channel is specified
cudnn_off (boolean, optional, default=0) – Do not select CUDNN operator, if available
min_calib_range (float or None, optional, default=None) – The minimum scalar value in the form of float32 obtained through calibration. If present, it will be used by the quantized batch norm op to calculate the primitive scale. Note: this calib_range is for calibrating the bn output.
max_calib_range (float or None, optional, default=None) – The maximum scalar value in the form of float32 obtained through calibration. If present, it will be used by the quantized batch norm op to calculate the primitive scale. Note: this calib_range is for calibrating the bn output.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
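For intuition, the computation above can be reproduced outside the operator. The following is a minimal NumPy sketch of the forward pass (illustrative only; it is not the operator's implementation):

import numpy as np

# Toy input: batch of 2 samples, 3 channels (axis 1), 4x4 spatial.
x = np.random.randn(2, 3, 4, 4).astype(np.float32)
gamma = np.ones(3, dtype=np.float32)   # scale, shape (k,)
beta = np.zeros(3, dtype=np.float32)   # offset, shape (k,)
eps = 1e-3

data_mean = x.mean(axis=(0, 2, 3))     # data_mean[i] = mean(data[:,i,:,:])
data_var = x.var(axis=(0, 2, 3))       # data_var[i]  = var(data[:,i,:,:])

# out[:,i,:,:] = (data[:,i,:,:] - data_mean[i]) / sqrt(data_var[i] + eps)
#                * gamma[i] + beta[i]
out = ((x - data_mean[None, :, None, None])
       / np.sqrt(data_var[None, :, None, None] + eps)
       * gamma[None, :, None, None]
       + beta[None, :, None, None])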
-
mxnet.symbol.BatchNorm_v1(data=None, gamma=None, beta=None, eps=_Null, momentum=_Null, fix_gamma=_Null, use_global_stats=_Null, output_mean_var=_Null, name=None, attr=None, out=None, **kwargs)

Batch normalization.
This operator is DEPRECATED. Perform BatchNorm on the input.
Normalizes a data batch by mean and variance, and applies a scale gamma as well as offset beta.

Assume the input has more than one dimension and we normalize along axis 1. We first compute the mean and variance along this axis:

\[\begin{split}data\_mean[i] = mean(data[:,i,:,...]) \\ data\_var[i] = var(data[:,i,:,...])\end{split}\]

Then compute the normalized output, which has the same shape as the input, as follows:

\[out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]\]

Both mean and var return a scalar by treating the input as a vector.

Assume the input has size k on axis 1, then both gamma and beta have shape (k,). If output_mean_var is set to be true, then it outputs both data_mean and data_var as well, which are needed for the backward pass.

Besides the inputs and the outputs, this operator accepts two auxiliary states, moving_mean and moving_var, which are k-length vectors. They are global statistics for the whole dataset, which are updated by:

moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
moving_var = moving_var * momentum + data_var * (1 - momentum)

If use_global_stats is set to be true, then moving_mean and moving_var are used instead of data_mean and data_var to compute the output. It is often used during inference.

Both gamma and beta are learnable parameters. But if fix_gamma is true, then set gamma to 1 and its gradient to 0.

There is no sparse support for this operator, and it will exhibit problematic behavior if used with sparse tensors.
Defined in src/operator/batch_norm_v1.cc:L94
- Parameters
data (Symbol) – Input data to batch normalization
gamma (Symbol) – gamma array
beta (Symbol) – beta array
eps (float, optional, default=0.00100000005) – Epsilon to prevent div 0
momentum (float, optional, default=0.899999976) – Momentum for moving average
fix_gamma (boolean, optional, default=1) – Fix gamma while training
use_global_stats (boolean, optional, default=0) – Whether use global moving statistics instead of local batch-norm. This will force change batch-norm into a scale shift operator.
output_mean_var (boolean, optional, default=0) – Output the mean and var
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.BilinearSampler(data=None, grid=None, cudnn_off=_Null, name=None, attr=None, out=None, **kwargs)

Applies bilinear sampling to input feature map.
Bilinear Sampling is the key of [NIPS2015] “Spatial Transformer Networks”. The usage of the operator is very similar to the remap function in OpenCV, except that the operator has the backward pass.

Given \(data\) and \(grid\), the output is computed by

\[\begin{split}x_{src} = grid[batch, 0, y_{dst}, x_{dst}] \\ y_{src} = grid[batch, 1, y_{dst}, x_{dst}] \\ output[batch, channel, y_{dst}, x_{dst}] = G(data[batch, channel, y_{src}, x_{src}])\end{split}\]

\(x_{dst}\), \(y_{dst}\) enumerate all spatial locations in \(output\), and \(G()\) denotes the bilinear interpolation kernel. The out-boundary points will be padded with zeros. The shape of the output will be (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3]).

The operator assumes that \(data\) has ‘NCHW’ layout and \(grid\) has been normalized to [-1, 1].

BilinearSampler often cooperates with GridGenerator, which generates sampling grids for BilinearSampler. GridGenerator supports two kinds of transformation: affine and warp. If users want to design a CustomOp to manipulate \(grid\), please firstly refer to the code of GridGenerator.

Example 1:
## Zoom out data two times
data = array([[[[1, 4, 3, 6],
                [1, 8, 8, 9],
                [0, 4, 1, 5],
                [1, 0, 1, 3]]]])

affine_matrix = array([[2, 0, 0],
                       [0, 2, 0]])

affine_matrix = reshape(affine_matrix, shape=(1, 6))

grid = GridGenerator(data=affine_matrix, transform_type='affine', target_shape=(4, 4))

out = BilinearSampler(data, grid)

out
[[[[ 0,    0,    0,   0],
   [ 0,    3.5,  6.5, 0],
   [ 0,    1.25, 2.5, 0],
   [ 0,    0,    0,   0]]]]
Example 2:
## Shift data horizontally by -1 pixel
data = array([[[[1, 4, 3, 6],
                [1, 8, 8, 9],
                [0, 4, 1, 5],
                [1, 0, 1, 3]]]])

warp_matrix = array([[[[1, 1, 1, 1],
                       [1, 1, 1, 1],
                       [1, 1, 1, 1],
                       [1, 1, 1, 1]],
                      [[0, 0, 0, 0],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0]]]])

grid = GridGenerator(data=warp_matrix, transform_type='warp')

out = BilinearSampler(data, grid)

out
[[[[ 4, 3, 6, 0],
   [ 8, 8, 9, 0],
   [ 4, 1, 5, 0],
   [ 0, 1, 3, 0]]]]
Defined in src/operator/bilinear_sampler.cc:L255
- Parameters
data (Symbol) – Input data to the BilinearSampler operator.
grid (Symbol) – Input grid to the BilinearSampler operator; grid has two channels: x_src and y_src.
cudnn_off (boolean or None, optional, default=None) – Whether to turn off cudnn for this layer.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.BlockGrad(data=None, name=None, attr=None, out=None, **kwargs)

Stops gradient computation.
Stops the accumulated gradient of the inputs from flowing through this operator in the backward direction. In other words, this operator prevents the contribution of its inputs to be taken into account for computing gradients.
Example:
v1 = [1, 2]
v2 = [0, 1]
a = Variable('a')
b = Variable('b')
b_stop_grad = stop_gradient(3 * b)
loss = MakeLoss(b_stop_grad + a)

executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
executor.forward(is_train=True, a=v1, b=v2)
executor.outputs
[ 1. 5.]

executor.backward()
executor.grad_arrays
[ 0. 0.]
[ 1. 1.]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L325
-
mxnet.symbol.CTCLoss(data=None, label=None, data_lengths=None, label_lengths=None, use_data_lengths=_Null, use_label_lengths=_Null, blank_label=_Null, name=None, attr=None, out=None, **kwargs)

Connectionist Temporal Classification Loss.
Note

The existing alias contrib_CTCLoss is deprecated.

The shapes of the inputs and outputs:
data: (sequence_length, batch_size, alphabet_size)
label: (batch_size, label_sequence_length)
out: (batch_size)
The data tensor consists of sequences of activation vectors (without applying softmax), with the i-th channel in the last dimension corresponding to the i-th label for i between 0 and alphabet_size-1 (i.e. always 0-indexed). Alphabet size should include one additional value reserved for blank label. When blank_label is "first", the 0-th channel is reserved for activation of blank label, or otherwise if it is "last", the (alphabet_size-1)-th channel should be reserved for blank label.

label is an index matrix of integers. When blank_label is "first", the value 0 is then reserved for blank label, and should not be passed in this matrix. Otherwise, when blank_label is "last", the value (alphabet_size-1) is reserved for blank label.

If a sequence of labels is shorter than label_sequence_length, use the special padding value at the end of the sequence to conform it to the correct length. The padding value is 0 when blank_label is "first", and -1 otherwise.

For example, suppose the vocabulary is [a, b, c], and in one batch we have three sequences ‘ba’, ‘cbb’, and ‘abac’. When blank_label is "first", we can index the labels as {‘a’: 1, ‘b’: 2, ‘c’: 3}, and we reserve the 0-th channel for blank label in the data tensor. The resulting label tensor should be padded to be:

[[2, 1, 0, 0], [3, 2, 2, 0], [1, 2, 1, 3]]

When blank_label is "last", we can index the labels as {‘a’: 0, ‘b’: 1, ‘c’: 2}, and we reserve the channel index 3 for blank label in the data tensor. The resulting label tensor should be padded to be:

[[1, 0, -1, -1], [2, 1, 1, -1], [0, 1, 0, 2]]

out is a list of CTC loss values, one per example in the batch.

See Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks, A. Graves et al. for more information on the definition and the algorithm.
Defined in src/operator/nn/ctc_loss.cc:L100
- Parameters
data (Symbol) – Input ndarray
label (Symbol) – Ground-truth labels for the loss.
data_lengths (Symbol) – Lengths of data for each of the samples. Only required when use_data_lengths is true.
label_lengths (Symbol) – Lengths of labels for each of the samples. Only required when use_label_lengths is true.
use_data_lengths (boolean, optional, default=0) – Whether the data lengths are decided by data_lengths. If false, the lengths are equal to the max sequence length.
use_label_lengths (boolean, optional, default=0) – Whether the label lengths are decided by label_lengths, or derived from padding_mask. If false, the lengths are derived from the first occurrence of the value of padding_mask. The value of padding_mask is 0 when the first CTC label is reserved for blank, and -1 when the last label is reserved for blank. See blank_label.
blank_label ({'first', 'last'}, optional, default='first') – Set the label that is reserved for blank label. If "first", the 0-th label is reserved, label values for tokens in the vocabulary are between 1 and alphabet_size-1, and the padding mask is -1. If "last", the last label value alphabet_size-1 is reserved for blank label instead, label values for tokens in the vocabulary are between 0 and alphabet_size-2, and the padding mask is 0.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
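A minimal wiring sketch for the example vocabulary above (variable names are illustrative; shapes follow the conventions listed earlier):

import mxnet as mx

# data: (sequence_length, batch_size, alphabet_size), alphabet_size = 4
# (3 tokens + 1 blank); with blank_label='first', tokens are indexed 1..3.
data = mx.sym.Variable('data')
# label: (batch_size, label_sequence_length), 0-padded, e.g.
# [[2, 1, 0, 0], [3, 2, 2, 0], [1, 2, 1, 3]] for 'ba', 'cbb', 'abac'.
label = mx.sym.Variable('label')
loss = mx.sym.CTCLoss(data=data, label=label, blank_label='first')
# The output has shape (batch_size,): one CTC loss value per example.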
-
mxnet.symbol.Cast(data=None, dtype=_Null, name=None, attr=None, out=None, **kwargs)

Casts all elements of the input to a new type.
Note

Cast is deprecated. Use cast instead.

Example:
cast([0.9, 1.3], dtype='int32') = [0, 1]
cast([1e20, 11.1], dtype='float16') = [inf, 11.09375]
cast([300, 11.1, 10.9, -1, -3], dtype='uint8') = [44, 11, 10, 255, 253]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L664
-
mxnet.symbol.Concat(*data, **kwargs)

Joins input arrays along a given axis.
Note

Concat is deprecated. Use concat instead.

The dimensions of the input arrays should be the same except the axis along which they will be concatenated. The dimension of the output array along the concatenated axis will be equal to the sum of the corresponding dimensions of the input arrays.

The storage type of concat output depends on storage types of inputs:

concat(csr, csr, …, csr, dim=0) = csr
otherwise, concat generates output with default storage
Example:
x = [[1,1],[2,2]]
y = [[3,3],[4,4],[5,5]]
z = [[6,6],[7,7],[8,8]]

concat(x,y,z,dim=0) = [[ 1., 1.],
                       [ 2., 2.],
                       [ 3., 3.],
                       [ 4., 4.],
                       [ 5., 5.],
                       [ 6., 6.],
                       [ 7., 7.],
                       [ 8., 8.]]

Note that you cannot concat x,y,z along dimension 1 since dimension 0 is not the same for all the input arrays.

concat(y,z,dim=1) = [[ 3., 3., 6., 6.],
                     [ 4., 4., 7., 7.],
                     [ 5., 5., 8., 8.]]
Defined in src/operator/nn/concat.cc:L384

This function supports variable length of positional input.
- Parameters
data (Symbol[]) – List of arrays to concatenate
dim (int, optional, default='1') – the dimension along which to concatenate.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
Examples
Concat two (or more) inputs along a specific dimension:
>>> a = Variable('a')
>>> b = Variable('b')
>>> c = Concat(a, b, dim=1, name='my-concat')
>>> c
<Symbol my-concat>
>>> SymbolDoc.get_output_shape(c, a=(128, 10, 3, 3), b=(128, 15, 3, 3))
{'my-concat_output': (128L, 25L, 3L, 3L)}
Note the shape should be the same except on the dimension that is being concatenated.
-
mxnet.symbol.Convolution(data=None, weight=None, bias=None, kernel=_Null, stride=_Null, dilate=_Null, pad=_Null, num_filter=_Null, num_group=_Null, workspace=_Null, no_bias=_Null, cudnn_tune=_Null, cudnn_off=_Null, layout=_Null, name=None, attr=None, out=None, **kwargs)

Compute N-D convolution on (N+2)-D input.
In the 2-D convolution, given input data with shape (batch_size, channel, height, width), the output is computed by
\[out[n,i,:,:] = bias[i] + \sum_{j=0}^{channel} data[n,j,:,:] \star weight[i,j,:,:]\]

where \(\star\) is the 2-D cross-correlation operator.
For general 2-D convolution, the shapes are
data: (batch_size, channel, height, width)
weight: (num_filter, channel, kernel[0], kernel[1])
bias: (num_filter,)
out: (batch_size, num_filter, out_height, out_width).
Define:
f(x,k,p,s,d) = floor((x+2*p-d*(k-1)-1)/s)+1
then we have:
out_height = f(height, kernel[0], pad[0], stride[0], dilate[0])
out_width = f(width, kernel[1], pad[1], stride[1], dilate[1])
If no_bias is set to be true, then the bias term is ignored.

The default data layout is NCHW, namely (batch_size, channel, height, width). We can choose other layouts such as NWC.

If num_group is larger than 1, denoted by g, then split the input data evenly into g parts along the channel axis, and also evenly split weight along the first dimension. Next compute the convolution on the i-th part of the data with the i-th weight part. The output is obtained by concatenating all the g results.

1-D convolution does not have height dimension but only width in space.
data: (batch_size, channel, width)
weight: (num_filter, channel, kernel[0])
bias: (num_filter,)
out: (batch_size, num_filter, out_width).
3-D convolution adds an additional depth dimension besides height and width. The shapes are
data: (batch_size, channel, depth, height, width)
weight: (num_filter, channel, kernel[0], kernel[1], kernel[2])
bias: (num_filter,)
out: (batch_size, num_filter, out_depth, out_height, out_width).
Both weight and bias are learnable parameters.

There are other options to tune the performance.

cudnn_tune: enabling this option leads to higher startup time but may give faster speed. Options are:

off: no tuning
limited_workspace: run test and pick the fastest algorithm that doesn’t exceed workspace limit.
fastest: pick the fastest algorithm and ignore workspace limit.
None (default): the behavior is determined by environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT. 0 for off, 1 for limited workspace (default), 2 for fastest.

workspace: A large number leads to more (GPU) memory usage but may improve the performance.
Defined in src/operator/nn/convolution.cc:L475
- Parameters
data (Symbol) – Input data to the ConvolutionOp.
weight (Symbol) – Weight matrix.
bias (Symbol) – Bias parameter.
kernel (Shape(tuple), required) – Convolution kernel size: (w,), (h, w) or (d, h, w)
stride (Shape(tuple), optional, default=[]) – Convolution stride: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
dilate (Shape(tuple), optional, default=[]) – Convolution dilate: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
pad (Shape(tuple), optional, default=[]) – Zero pad for convolution: (w,), (h, w) or (d, h, w). Defaults to no padding.
num_filter (int (non-negative), required) – Convolution filter(channel) number
num_group (int (non-negative), optional, default=1) – Number of group partitions.
workspace (long (non-negative), optional, default=1024) – Maximum temporary workspace allowed (MB) in convolution.This parameter has two usages. When CUDNN is not used, it determines the effective batch size of the convolution kernel. When CUDNN is used, it controls the maximum temporary storage used for tuning the best CUDNN kernel when limited_workspace strategy is used.
no_bias (boolean, optional, default=0) – Whether to disable bias parameter.
cudnn_tune ({None, 'fastest', 'limited_workspace', 'off'},optional, default='None') – Whether to pick convolution algo by running performance test.
cudnn_off (boolean, optional, default=0) – Turn off cudnn for this layer.
layout ({None, 'NCDHW', 'NCHW', 'NCW', 'NDHWC', 'NHWC'},optional, default='None') – Set layout for input, output and weight. Empty for default layout: NCW for 1d, NCHW for 2d and NCDHW for 3d.NHWC and NDHWC are only supported on GPU.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
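The output-shape formula above is easy to evaluate by hand. A small plain-Python sketch (the helper name is made up for illustration):

def conv_out_dim(x, k, p, s, d):
    # f(x,k,p,s,d) = floor((x + 2*p - d*(k-1) - 1)/s) + 1
    return (x + 2 * p - d * (k - 1) - 1) // s + 1

# A 224x224 input with a 3x3 kernel, pad 1, stride 2, dilation 1:
out_height = conv_out_dim(224, 3, 1, 2, 1)  # 112
out_width = conv_out_dim(224, 3, 1, 2, 1)   # 112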
-
mxnet.symbol.Convolution_v1(data=None, weight=None, bias=None, kernel=_Null, stride=_Null, dilate=_Null, pad=_Null, num_filter=_Null, num_group=_Null, workspace=_Null, no_bias=_Null, cudnn_tune=_Null, cudnn_off=_Null, layout=_Null, name=None, attr=None, out=None, **kwargs)

This operator is DEPRECATED. Apply convolution to input then add a bias.
- Parameters
data (Symbol) – Input data to the ConvolutionV1Op.
weight (Symbol) – Weight matrix.
bias (Symbol) – Bias parameter.
kernel (Shape(tuple), required) – convolution kernel size: (h, w) or (d, h, w)
stride (Shape(tuple), optional, default=[]) – convolution stride: (h, w) or (d, h, w)
dilate (Shape(tuple), optional, default=[]) – convolution dilate: (h, w) or (d, h, w)
pad (Shape(tuple), optional, default=[]) – pad for convolution: (h, w) or (d, h, w)
num_filter (int (non-negative), required) – convolution filter(channel) number
num_group (int (non-negative), optional, default=1) – Number of group partitions. Equivalent to slicing input into num_group partitions, apply convolution on each, then concatenate the results
workspace (long (non-negative), optional, default=1024) – Maximum temporary workspace allowed for convolution (MB).This parameter determines the effective batch size of the convolution kernel, which may be smaller than the given batch size. Also, the workspace will be automatically enlarged to make sure that we can run the kernel with batch_size=1
no_bias (boolean, optional, default=0) – Whether to disable bias parameter.
cudnn_tune ({None, 'fastest', 'limited_workspace', 'off'},optional, default='None') – Whether to pick convolution algo by running performance test. Leads to higher startup time but may give faster speed. Options are: ‘off’: no tuning ‘limited_workspace’: run test and pick the fastest algorithm that doesn’t exceed workspace limit. ‘fastest’: pick the fastest algorithm and ignore workspace limit. If set to None (default), behavior is determined by environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT: 0 for off, 1 for limited workspace (default), 2 for fastest.
cudnn_off (boolean, optional, default=0) – Turn off cudnn for this layer.
layout ({None, 'NCDHW', 'NCHW', 'NDHWC', 'NHWC'},optional, default='None') – Set layout for input, output and weight. Empty for default layout: NCHW for 2d and NCDHW for 3d.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.Correlation(data1=None, data2=None, kernel_size=_Null, max_displacement=_Null, stride1=_Null, stride2=_Null, pad_size=_Null, is_multiply=_Null, name=None, attr=None, out=None, **kwargs)

Applies correlation to inputs.
The correlation layer performs multiplicative patch comparisons between two feature maps.
Given two multi-channel feature maps \(f_{1}, f_{2}\), with \(w\), \(h\), and \(c\) being their width, height, and number of channels, the correlation layer lets the network compare each patch from \(f_{1}\) with each patch from \(f_{2}\).
For now we consider only a single comparison of two patches. The ‘correlation’ of two patches centered at \(x_{1}\) in the first map and \(x_{2}\) in the second map is then defined as:
\[c(x_{1}, x_{2}) = \sum_{o \in [-k,k] \times [-k,k]} <f_{1}(x_{1} + o), f_{2}(x_{2} + o)>\]

for a square patch of size \(K:=2k+1\).
Note that the equation above is identical to one step of a convolution in neural networks, but instead of convolving data with a filter, it convolves data with other data. For this reason, it has no training weights.
Computing \(c(x_{1}, x_{2})\) involves \(c * K^{2}\) multiplications. Comparing all patch combinations involves \(w^{2}*h^{2}\) such computations.
Given a maximum displacement \(d\), for each location \(x_{1}\) it computes correlations \(c(x_{1}, x_{2})\) only in a neighborhood of size \(D:=2d+1\), by limiting the range of \(x_{2}\). We use strides \(s_{1}, s_{2}\), to quantize \(x_{1}\) globally and to quantize \(x_{2}\) within the neighborhood centered around \(x_{1}\).
The final output is defined by the following expression:
\[out[n, q, i, j] = c(x_{i, j}, x_{q})\]

where \(i\) and \(j\) enumerate spatial locations in \(f_{1}\), and \(q\) denotes the \(q^{th}\) neighborhood of \(x_{i,j}\).
Defined in src/operator/correlation.cc:L197
- Parameters
data1 (Symbol) – Input data1 to the correlation.
data2 (Symbol) – Input data2 to the correlation.
kernel_size (int (non-negative), optional, default=1) – kernel size for Correlation must be an odd number
max_displacement (int (non-negative), optional, default=1) – Max displacement of Correlation
stride1 (int (non-negative), optional, default=1) – stride1 quantize data1 globally
stride2 (int (non-negative), optional, default=1) – stride2 quantize data2 within the neighborhood centered around data1
pad_size (int (non-negative), optional, default=0) – pad for Correlation
is_multiply (boolean, optional, default=1) – operation type is either multiplication or subtraction
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
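For a single comparison of two patches, the defining sum can be written directly in NumPy. This is an illustrative helper under the definition above, not part of the API:

import numpy as np

def patch_correlation(f1, f2, x1, x2, k=1):
    # c(x1, x2) = sum over offsets o in [-k,k] x [-k,k] of <f1(x1+o), f2(x2+o)>
    # f1, f2: feature maps of shape (c, h, w); x1, x2: (y, x) patch centers.
    (y1, cx1), (y2, cx2) = x1, x2
    p1 = f1[:, y1 - k:y1 + k + 1, cx1 - k:cx1 + k + 1]
    p2 = f2[:, y2 - k:y2 + k + 1, cx2 - k:cx2 + k + 1]
    return float((p1 * p2).sum())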
-
mxnet.symbol.Crop(*data, **kwargs)

Note

Crop is deprecated. Use slice instead.

Crop the 2nd and 3rd dimensions of the input data, with the corresponding size given by h_w or by the width and height of the second input symbol; i.e., with one input we need h_w to specify the crop height and width, otherwise the second input symbol’s size will be used.
Defined in src/operator/crop.cc:L49

This function supports variable length of positional input.
- Parameters
data (Symbol or Symbol[]) – Tensor or List of Tensors, the second input will be used as crop_like shape reference
offset (Shape(tuple), optional, default=[0,0]) – crop offset coordinate: (y, x)
h_w (Shape(tuple), optional, default=[0,0]) – crop height and width: (h, w)
center_crop (boolean, optional, default=0) – If set to true, it will use the center crop; otherwise it will crop using the shape of crop_like
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.Custom(*data, **kwargs)

Apply a custom operator implemented in a frontend language (like Python).
Custom operators should override required methods like forward and backward. The custom operator must be registered before it can be used. Please check the tutorial here: https://mxnet.incubator.apache.org/api/faq/new_op
Defined in src/operator/custom/custom.cc:L546
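A minimal sketch of the registration flow, assuming the standard mx.operator.CustomOp/CustomOpProp mechanism; the operator name clip01 and its clipping logic are made up for illustration:

import mxnet as mx

class Clip01(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        # Clamp the input into [0, 1].
        self.assign(out_data[0], req[0], mx.nd.clip(in_data[0], 0, 1))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        # Pass gradients only where the input was inside (0, 1).
        x = in_data[0]
        mask = (x > 0) * (x < 1)
        self.assign(in_grad[0], req[0], out_grad[0] * mask)

@mx.operator.register('clip01')
class Clip01Prop(mx.operator.CustomOpProp):
    def __init__(self):
        super(Clip01Prop, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['data']

    def infer_shape(self, in_shape):
        # Output shape equals input shape; no auxiliary states.
        return in_shape, [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return Clip01()

# Use the registered operator from the Symbol API via op_type:
data = mx.sym.Variable('data')
net = mx.symbol.Custom(data=data, op_type='clip01', name='clip01')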
-
mxnet.symbol.Deconvolution(data=None, weight=None, bias=None, kernel=_Null, stride=_Null, dilate=_Null, pad=_Null, adj=_Null, target_shape=_Null, num_filter=_Null, num_group=_Null, workspace=_Null, no_bias=_Null, cudnn_tune=_Null, cudnn_off=_Null, layout=_Null, name=None, attr=None, out=None, **kwargs)

Computes 1D or 2D transposed convolution (aka fractionally strided convolution) of the input tensor. This operation can be seen as the gradient of the Convolution operation with respect to its input. Convolution usually reduces the size of the input. Transposed convolution works the other way, going from a smaller input to a larger output while preserving the connectivity pattern.
- Parameters
data (Symbol) – Input tensor to the deconvolution operation.
weight (Symbol) – Weights representing the kernel.
bias (Symbol) – Bias added to the result after the deconvolution operation.
kernel (Shape(tuple), required) – Deconvolution kernel size: (w,), (h, w) or (d, h, w). This is same as the kernel size used for the corresponding convolution
stride (Shape(tuple), optional, default=[]) – The stride used for the corresponding convolution: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
dilate (Shape(tuple), optional, default=[]) – Dilation factor for each dimension of the input: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
pad (Shape(tuple), optional, default=[]) – The amount of implicit zero padding added during convolution for each dimension of the input: (w,), (h, w) or (d, h, w). (kernel-1)/2 is usually a good choice. If target_shape is set, pad will be ignored and a padding that will generate the target shape will be used. Defaults to no padding.
adj (Shape(tuple), optional, default=[]) – Adjustment for output shape: (w,), (h, w) or (d, h, w). If target_shape is set, adj will be ignored and computed accordingly.
target_shape (Shape(tuple), optional, default=[]) – Shape of the output tensor: (w,), (h, w) or (d, h, w).
num_filter (int (non-negative), required) – Number of output filters.
num_group (int (non-negative), optional, default=1) – Number of groups partition.
workspace (long (non-negative), optional, default=512) – Maximum temporary workspace allowed (MB) in deconvolution.This parameter has two usages. When CUDNN is not used, it determines the effective batch size of the deconvolution kernel. When CUDNN is used, it controls the maximum temporary storage used for tuning the best CUDNN kernel when limited_workspace strategy is used.
no_bias (boolean, optional, default=1) – Whether to disable bias parameter.
cudnn_tune ({None, 'fastest', 'limited_workspace', 'off'},optional, default='None') – Whether to pick convolution algorithm by running performance test.
cudnn_off (boolean, optional, default=0) – Turn off cudnn for this layer.
layout ({None, 'NCDHW', 'NCHW', 'NCW', 'NDHWC', 'NHWC'},optional, default='None') – Set layout for input, output and weight. Empty for default layout, NCW for 1d, NCHW for 2d and NCDHW for 3d.NHWC and NDHWC are only supported on GPU.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
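Assuming the usual transposed-convolution shape relation (the inverse of the convolution formula given earlier; a sketch for intuition only, and the target_shape/adj interactions follow the parameter descriptions above):

def deconv_out_dim(x, k, p, s, d=1, adj=0):
    # Goes from a smaller input x back to a larger output.
    return (x - 1) * s - 2 * p + d * (k - 1) + 1 + adj

# 112 -> 224 with a 4x4 kernel, stride 2, pad 1:
out = deconv_out_dim(112, 4, 1, 2)  # 224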
-
mxnet.symbol.Dropout(data=None, p=_Null, mode=_Null, axes=_Null, cudnn_off=_Null, name=None, attr=None, out=None, **kwargs)

Applies dropout operation to input array.
During training, each element of the input is set to zero with probability p. The whole array is rescaled by \(1/(1-p)\) to keep the expected sum of the input unchanged.
During testing, this operator does not change the input if mode is ‘training’. If mode is ‘always’, the same computation as during training will be applied.
Example:
random.seed(998)
input_array = array([[3., 0.5, -0.5, 2., 7.],
                     [2., -0.4, 7., 3., 0.2]])
a = symbol.Variable('a')
dropout = symbol.Dropout(a, p = 0.2)
executor = dropout.simple_bind(a = input_array.shape)

## If training
executor.forward(is_train = True, a = input_array)
executor.outputs
[[ 3.75   0.625 -0.     2.5    8.75 ]
 [ 2.5   -0.5    8.75   3.75   0.   ]]

## If testing
executor.forward(is_train = False, a = input_array)
executor.outputs
[[ 3.    0.5  -0.5   2.    7.  ]
 [ 2.   -0.4   7.    3.    0.2 ]]
Defined in src/operator/nn/dropout.cc:L95
- Parameters
data (Symbol) – Input array to which dropout will be applied.
p (float, optional, default=0.5) – Fraction of the input that gets dropped out during training time.
mode ({'always', 'training'},optional, default='training') – Whether to only turn on dropout during training or to also turn on for inference.
axes (Shape(tuple), optional, default=[]) – Axes for variational dropout kernel.
cudnn_off (boolean or None, optional, default=0) – Whether to turn off cudnn in dropout operator. This option is ignored if axes is specified.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
Examples
Apply dropout to corrupt input as zero with probability 0.2:
>>> data = Variable('data')
>>> data_dp = Dropout(data=data, p=0.2)

>>> shape = (100, 100)  # take larger shapes to be more statistically stable
>>> x = np.ones(shape)
>>> op = Dropout(p=0.5, name='dp')
>>> # dropout is identity during testing
>>> y = test_utils.simple_forward(op, dp_data=x, is_train=False)
>>> test_utils.almost_equal(x, y)
True
>>> y = test_utils.simple_forward(op, dp_data=x, is_train=True)
>>> # expectation is (approximately) unchanged
>>> np.abs(x.mean() - y.mean()) < 0.1
True
>>> set(np.unique(y)) == set([0, 2])
True
-
mxnet.symbol.ElementWiseSum(*args, **kwargs)

Adds all input arguments element-wise.
\[add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n\]

add_n is potentially more efficient than calling add n times.

The storage type of add_n output depends on storage types of inputs:

add_n(row_sparse, row_sparse, ..) = row_sparse
add_n(default, csr, default) = default
add_n(any input combinations longer than 4 (>4) with at least one default type) = default
otherwise, add_n falls all inputs back to default storage and generates default storage

Defined in src/operator/tensor/elemwise_sum.cc:L155

This function supports variable length of positional input.
-
mxnet.symbol.Embedding(data=None, weight=None, input_dim=_Null, output_dim=_Null, dtype=_Null, sparse_grad=_Null, name=None, attr=None, out=None, **kwargs)

Maps integer indices to vector representations (embeddings).
This operator maps words to real-valued vectors in a high-dimensional space, called word embeddings. These embeddings can capture semantic and syntactic properties of the words. For example, it has been noted that in the learned embedding spaces, similar words tend to be close to each other and dissimilar words far apart.
For an input array of shape (d1, …, dK), the shape of an output array is (d1, …, dK, output_dim). All the input values should be integers in the range [0, input_dim).
If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be (ip0, op0).
When “sparse_grad” is False, if any index mentioned is too large, it is replaced by the index that addresses the last vector in an embedding matrix. When “sparse_grad” is True, an error will be raised if invalid indices are found.
Examples:
input_dim = 4
output_dim = 5

// Each row in weight matrix y represents a word. So, y = (w0,w1,w2,w3)
y = [[  0.,  1.,  2.,  3.,  4.],
     [  5.,  6.,  7.,  8.,  9.],
     [ 10., 11., 12., 13., 14.],
     [ 15., 16., 17., 18., 19.]]

// Input array x represents n-grams (2-gram). So, x = [(w1,w3), (w0,w2)]
x = [[ 1., 3.],
     [ 0., 2.]]

// Mapped input x to its vector representation y.
Embedding(x, y, 4, 5) = [[[  5.,  6.,  7.,  8.,  9.],
                          [ 15., 16., 17., 18., 19.]],
                         [[  0.,  1.,  2.,  3.,  4.],
                          [ 10., 11., 12., 13., 14.]]]
The storage type of weight can be either row_sparse or default.
Note
If “sparse_grad” is set to True, the storage type of gradient w.r.t weights will be “row_sparse”. Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. Note that by default lazy updates are turned on, which may perform differently from standard updates. For more details, please check the Optimization API at: https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
Defined in src/operator/tensor/indexing_op.cc:L597
- Parameters
data (Symbol) – The input array to the embedding operator.
weight (Symbol) – The embedding weight matrix.
input_dim (int, required) – Vocabulary size of the input indices.
output_dim (int, required) – Dimension of the embedding vectors.
dtype ({'bfloat16', 'float16', 'float32', 'float64', 'int32', 'int64', 'int8', 'uint8'},optional, default='float32') – Data type of weight.
sparse_grad (boolean, optional, default=0) – Compute row sparse gradient in the backward calculation. If set to True, the grad’s storage type is row_sparse.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
Examples
Assume we want to map the 26 English alphabet letters to 16-dimensional vectorial representations.
>>> vocabulary_size = 26
>>> embed_dim = 16
>>> seq_len, batch_size = (10, 64)
>>> input = Variable('letters')
>>> op = Embedding(data=input, input_dim=vocabulary_size, output_dim=embed_dim,
...                name='embed')
>>> SymbolDoc.get_output_shape(op, letters=(seq_len, batch_size))
{'embed_output': (10L, 64L, 16L)}

>>> vocab_size, embed_dim = (26, 16)
>>> batch_size = 12
>>> word_vecs = test_utils.random_arrays((vocab_size, embed_dim))
>>> op = Embedding(name='embed', input_dim=vocab_size, output_dim=embed_dim)
>>> x = np.random.choice(vocab_size, batch_size)
>>> y = test_utils.simple_forward(op, embed_data=x, embed_weight=word_vecs)
>>> y_np = word_vecs[x]
>>> test_utils.almost_equal(y, y_np)
True
-
mxnet.symbol.Flatten(data=None, name=None, attr=None, out=None, **kwargs)

Flattens the input array into a 2-D array by collapsing the higher dimensions.

Note

Flatten is deprecated. Use flatten instead.

For an input array with shape (d1, d2, ..., dk), the flatten operation reshapes the input array into an output array of shape (d1, d2*...*dk). Note that the behavior of this function is different from numpy.ndarray.flatten, which behaves similarly to mxnet.ndarray.reshape((-1,)).

Example:

x = [[ [1,2,3], [4,5,6], [7,8,9] ],
     [ [1,2,3], [4,5,6], [7,8,9] ]]

flatten(x) = [[ 1., 2., 3., 4., 5., 6., 7., 8., 9.],
              [ 1., 2., 3., 4., 5., 6., 7., 8., 9.]]
Defined in src/operator/tensor/matrix_op.cc:L249
- Parameters
data (Symbol) – Input array.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
Examples
Flatten is usually applied before FullyConnected, to reshape the 4D tensor produced by convolutional layers to 2D matrix:
>>> data = Variable('data')  # say this is 4D from some conv/pool
>>> flatten = Flatten(data=data, name='flat')  # now this is 2D
>>> SymbolDoc.get_output_shape(flatten, data=(2, 3, 4, 5))
{'flat_output': (2L, 60L)}

>>> test_dims = [(2, 3, 4, 5), (2, 3), (2,)]
>>> op = Flatten(name='flat')
>>> for dims in test_dims:
...     x = test_utils.random_arrays(dims)
...     y = test_utils.simple_forward(op, flat_data=x)
...     y_np = x.reshape((dims[0], np.prod(dims[1:]).astype('int32')))
...     print('%s: %s' % (dims, test_utils.almost_equal(y, y_np)))
(2, 3, 4, 5): True
(2, 3): True
(2,): True
-
mxnet.symbol.
FullyConnected
(data=None, weight=None, bias=None, num_hidden=_Null, no_bias=_Null, flatten=_Null, name=None, attr=None, out=None, **kwargs)¶ Applies a linear transformation: \(Y = XW^T + b\).
If flatten is set to true, then the shapes are:
data: (batch_size, x1, x2, ..., xn)
weight: (num_hidden, x1 * x2 * ... * xn)
bias: (num_hidden,)
out: (batch_size, num_hidden)
If flatten is set to false, then the shapes are:
data: (x1, x2, ..., xn, input_dim)
weight: (num_hidden, input_dim)
bias: (num_hidden,)
out: (x1, x2, ..., xn, num_hidden)
The learnable parameters include both weight and bias.
If no_bias is set to true, then the bias term is ignored.
Note
The sparse support for FullyConnected is limited to forward evaluation with row_sparse weight and bias, where the length of weight.indices and bias.indices must be equal to num_hidden. This could be useful for model inference with row_sparse weights trained with importance sampling or noise contrastive estimation.
To compute linear transformation with ‘csr’ sparse data, sparse.dot is recommended instead of sparse.FullyConnected.
Defined in src/operator/nn/fully_connected.cc:L286
- Parameters
data (Symbol) – Input data.
weight (Symbol) – Weight matrix.
bias (Symbol) – Bias parameter.
num_hidden (int, required) – Number of hidden nodes of the output.
no_bias (boolean, optional, default=0) – Whether to disable bias parameter.
flatten (boolean, optional, default=1) – Whether to collapse all but the first axis of the input data tensor.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Examples
Construct a fully connected operator with target dimension 512.
>>> data = Variable('data')  # or some constructed NN
>>> op = FullyConnected(data=data,
...                     num_hidden=512,
...                     name='FC1')
>>> op
<Symbol FC1>
>>> SymbolDoc.get_output_shape(op, data=(128, 100))
{'FC1_output': (128L, 512L)}
A simple 3-layer MLP with ReLU activation:
>>> net = Variable('data')
>>> for i, dim in enumerate([128, 64]):
...     net = FullyConnected(data=net, num_hidden=dim, name='FC%d' % i)
...     net = Activation(data=net, act_type='relu', name='ReLU%d' % i)
>>> # 10-class predictor (e.g. MNIST)
>>> net = FullyConnected(data=net, num_hidden=10, name='pred')
>>> net
<Symbol pred>
>>> dim_in, dim_out = (3, 4)
>>> x, w, b = test_utils.random_arrays((10, dim_in), (dim_out, dim_in), (dim_out,))
>>> op = FullyConnected(num_hidden=dim_out, name='FC')
>>> out = test_utils.simple_forward(op, FC_data=x, FC_weight=w, FC_bias=b)
>>> # numpy implementation of FullyConnected
>>> out_np = np.dot(x, w.T) + b
>>> test_utils.almost_equal(out, out_np)
True
-
mxnet.symbol.
GridGenerator
(data=None, transform_type=_Null, target_shape=_Null, name=None, attr=None, out=None, **kwargs)¶ Generates 2D sampling grid for bilinear sampling.
- Parameters
data (Symbol) – Input data to the function.
transform_type ({'affine', 'warp'}, required) – The type of transformation. For affine, input data should be an affine matrix of size (batch, 6). For warp, input data should be an optical flow of size (batch, 2, h, w).
target_shape (Shape(tuple), optional, default=[0,0]) – Specifies the output shape (H, W). This is required if transformation type is affine. If transformation type is warp, this parameter is ignored.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
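Examples
A minimal sketch of the affine mode (the names affine and grid below are illustrative, not part of the API; an identity transform is assumed):
>>> import mxnet as mx
>>> affine = mx.sym.Variable('affine')  # (batch, 6) affine matrices
>>> grid = mx.sym.GridGenerator(data=affine, transform_type='affine',
...                             target_shape=(6, 6), name='grid')
>>> # Evaluate with the identity transform [1, 0, 0, 0, 1, 0]:
>>> out = grid.eval(ctx=mx.cpu(), affine=mx.nd.array([[1., 0., 0., 0., 1., 0.]]))
>>> # out[0] holds the sampling grid, of shape (1, 2, 6, 6).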
-
mxnet.symbol.
GroupNorm
(data=None, gamma=None, beta=None, num_groups=_Null, eps=_Null, output_mean_var=_Null, name=None, attr=None, out=None, **kwargs)¶ Group normalization.
The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. The mean and standard deviation are calculated separately over each group:
\[data = data.reshape((N, num_groups, C // num_groups, ...))\]
\[out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis) + \epsilon}} * gamma + beta\]
Both gamma and beta are learnable parameters.
Defined in src/operator/nn/group_norm.cc:L76
- Parameters
data (Symbol) – Input data
gamma (Symbol) – gamma array
beta (Symbol) – beta array
num_groups (int, optional, default='1') – Total number of groups.
eps (float, optional, default=9.99999975e-06) – An epsilon parameter to prevent division by 0.
output_mean_var (boolean, optional, default=0) – Output the mean and std calculated along the given axis.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
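Examples
As a sanity check on the formula above, a NumPy sketch of the normalization step (gamma = 1 and beta = 0 are assumed for brevity; the shapes are illustrative):
>>> import numpy as np
>>> N, C, H, W = 2, 4, 3, 3
>>> G = 2                                # num_groups
>>> x = np.random.randn(N, C, H, W)
>>> xg = x.reshape(N, G, C // G, H, W)   # split channels into groups
>>> mean = xg.mean(axis=(2, 3, 4), keepdims=True)
>>> var = xg.var(axis=(2, 3, 4), keepdims=True)
>>> out = ((xg - mean) / np.sqrt(var + 1e-5)).reshape(N, C, H, W)
>>> # gamma and beta would scale and shift the result, as in the formula.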
-
mxnet.symbol.
IdentityAttachKLSparseReg
(data=None, sparseness_target=_Null, penalty=_Null, momentum=_Null, name=None, attr=None, out=None, **kwargs)¶ Apply a sparse regularization to the output of a sigmoid activation function.
- Parameters
data (Symbol) – Input data.
sparseness_target (float, optional, default=0.100000001) – The sparseness target
penalty (float, optional, default=0.00100000005) – The tradeoff parameter for the sparseness penalty
momentum (float, optional, default=0.899999976) – The momentum for running average
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
InstanceNorm
(data=None, gamma=None, beta=None, eps=_Null, name=None, attr=None, out=None, **kwargs)¶ Applies instance normalization to the n-dimensional input array.
This operator takes an n-dimensional input array where (n>2) and normalizes the input using the following formula:
\[out = \frac{x - mean[data]}{ \sqrt{Var[data]} + \epsilon} * gamma + beta\]This layer is similar to batch normalization layer (BatchNorm) with two differences: first, the normalization is carried out per example (instance), not over a batch. Second, the same normalization is applied both at test and train time. This operation is also known as contrast normalization.
If the input data is of shape [batch, channel, spatial_dim1, spatial_dim2, …], gamma and beta parameters must be vectors of shape [channel].
This implementation is based on this paper 1
- 1
Instance Normalization: The Missing Ingredient for Fast Stylization, D. Ulyanov, A. Vedaldi, V. Lempitsky, 2016 (arXiv:1607.08022v2).
Examples:
// Input of shape (2,1,2)
x = [[[ 1.1, 2.2]],
     [[ 3.3, 4.4]]]

// gamma parameter of length 1
gamma = [1.5]

// beta parameter of length 1
beta = [0.5]

// Instance normalization is calculated with the above formula
InstanceNorm(x, gamma, beta) = [[[-0.997527  ,  1.99752665]],
                                [[-0.99752653,  1.99752724]]]
Defined in src/operator/instance_norm.cc:L94
- Parameters
data (Symbol) – An n-dimensional input array (n > 2) of the form [batch, channel, spatial_dim1, spatial_dim2, …].
gamma (Symbol) – A vector of length ‘channel’, which multiplies the normalized input.
beta (Symbol) – A vector of length ‘channel’, which is added to the product of the normalized input and the weight.
eps (float, optional, default=0.00100000005) – An epsilon parameter to prevent division by 0.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
L2Normalization
(data=None, eps=_Null, mode=_Null, name=None, attr=None, out=None, **kwargs)¶ Normalize the input array using the L2 norm.
For 1-D NDArray, it computes:
out = data / sqrt(sum(data ** 2) + eps)
For N-D NDArray, if the input array has shape (N, N, …, N),

with mode = instance, it normalizes each instance in the multidimensional array by its L2 norm:

for i in 0...N
  out[i,:,:,...,:] = data[i,:,:,...,:] / sqrt(sum(data[i,:,:,...,:] ** 2) + eps)

with mode = channel, it normalizes each channel in the array by its L2 norm:

for i in 0...N
  out[:,i,:,...,:] = data[:,i,:,...,:] / sqrt(sum(data[:,i,:,...,:] ** 2) + eps)

with mode = spatial, it normalizes the cross-channel norm for each position in the array by its L2 norm:

for dim in 2...N
  for i in 0...N
    out[.....,i,...] = take(out, indices=i, axis=dim) / sqrt(sum(take(out, indices=i, axis=dim) ** 2) + eps)
Example:
x = [[[1,2],
      [3,4]],
     [[2,2],
      [5,6]]]

L2Normalization(x, mode='instance')
= [[[ 0.18257418  0.36514837]
    [ 0.54772252  0.73029673]]
   [[ 0.24077171  0.24077171]
    [ 0.60192931  0.72231513]]]

L2Normalization(x, mode='channel')
= [[[ 0.31622776  0.44721359]
    [ 0.94868326  0.89442718]]
   [[ 0.37139067  0.31622776]
    [ 0.92847669  0.94868326]]]

L2Normalization(x, mode='spatial')
= [[[ 0.44721359  0.89442718]
    [ 0.60000002  0.80000001]]
   [[ 0.70710677  0.70710677]
    [ 0.6401844   0.76822126]]]
Defined in src/operator/l2_normalization.cc:L195
- Parameters
data (Symbol) – Input array to normalize.
eps (float, optional, default=1.00000001e-10) – A small constant for numerical stability.
mode ({'channel', 'instance', 'spatial'},optional, default='instance') – Specify the dimension along which to compute L2 norm.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
LRN
(data=None, alpha=_Null, beta=_Null, knorm=_Null, nsize=_Null, name=None, attr=None, out=None, **kwargs)¶ Applies local response normalization to the input.
The local response normalization layer performs “lateral inhibition” by normalizing over local input regions.
If \(a_{x,y}^{i}\) is the activity of a neuron computed by applying kernel \(i\) at position \((x, y)\) and then applying the ReLU nonlinearity, the response-normalized activity \(b_{x,y}^{i}\) is given by the expression:
\[b_{x,y}^{i} = \frac{a_{x,y}^{i}}{\Bigg(k + \frac{\alpha}{n} \sum_{j=max(0, i-\frac{n}{2})}^{min(N-1, i+\frac{n}{2})} (a_{x,y}^{j})^{2}\Bigg)^{\beta}}\]
where the sum runs over \(n\) “adjacent” kernel maps at the same spatial position, and \(N\) is the total number of kernels in the layer.
Defined in src/operator/nn/lrn.cc:L157
- Parameters
data (Symbol) – Input data to LRN
alpha (float, optional, default=9.99999975e-05) – The variance scaling parameter \(\alpha\) in the LRN expression.
beta (float, optional, default=0.75) – The power parameter \(\beta\) in the LRN expression.
knorm (float, optional, default=2) – The parameter \(k\) in the LRN expression.
nsize (int (non-negative), required) – normalization window width in elements.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
LayerNorm
(data=None, gamma=None, beta=None, axis=_Null, eps=_Null, output_mean_var=_Null, name=None, attr=None, out=None, **kwargs)¶ Layer normalization.
Normalizes the channels of the input tensor by mean and variance, and applies a scale gamma as well as an offset beta.
Assume the input has more than one dimension and we normalize along axis 1. We first compute the mean and variance along this axis and then compute the normalized output, which has the same shape as the input, as follows:
\[out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis) + \epsilon}} * gamma + beta\]
Both gamma and beta are learnable parameters.
Unlike BatchNorm and InstanceNorm, the mean and var are computed along the channel dimension.
Assume the input has size k on axis 1, then both gamma and beta have shape (k,). If output_mean_var is set to true, then the operator outputs both data_mean and data_std. Note that no gradient will be passed through these two outputs.
The parameter axis specifies which axis of the input shape denotes the ‘channel’ (separately normalized groups). The default is -1, which sets the channel axis to be the last item in the input shape.
Defined in src/operator/nn/layer_norm.cc:L201
- Parameters
data (Symbol) – Input data to layer normalization
gamma (Symbol) – gamma array
beta (Symbol) – beta array
axis (int, optional, default='-1') – The axis to perform layer normalization. Usually, this should be the axis of the channel dimension. Negative values mean indexing from right to left.
eps (float, optional, default=9.99999975e-06) – An epsilon parameter to prevent division by 0.
output_mean_var (boolean, optional, default=0) – Output the mean and std calculated along the given axis.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
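Examples
A NumPy sketch of the default axis=-1 behaviour (gamma and beta are initialized to ones and zeros here; shapes are illustrative):
>>> import numpy as np
>>> x = np.random.randn(2, 5)
>>> gamma, beta = np.ones(5), np.zeros(5)
>>> mean = x.mean(axis=-1, keepdims=True)
>>> var = x.var(axis=-1, keepdims=True)
>>> out = (x - mean) / np.sqrt(var + 1e-5) * gamma + beta
>>> # Each row of out now has (approximately) zero mean and unit variance.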
-
mxnet.symbol.
LeakyReLU
(data=None, gamma=None, act_type=_Null, slope=_Null, lower_bound=_Null, upper_bound=_Null, name=None, attr=None, out=None, **kwargs)¶ Applies Leaky rectified linear unit activation element-wise to the input.
Leaky ReLUs attempt to fix the “dying ReLU” problem by allowing a small, non-zero slope when the input is negative, while keeping a slope of one when the input is positive.
The following modified ReLU Activation functions are supported:
elu: Exponential Linear Unit. y = x > 0 ? x : slope * (exp(x)-1)
selu: Scaled Exponential Linear Unit. y = lambda * (x > 0 ? x : alpha * (exp(x) - 1)) where lambda = 1.0507009873554804934193349852946 and alpha = 1.6732632423543772848170429916717.
leaky: Leaky ReLU. y = x > 0 ? x : slope * x
prelu: Parametric ReLU. This is same as leaky except that slope is learnt during training.
rrelu: Randomized ReLU. Same as leaky, but the slope is uniformly and randomly chosen from [lower_bound, upper_bound) for training, while fixed to (lower_bound+upper_bound)/2 for inference.
Defined in src/operator/leaky_relu.cc:L162
- Parameters
data (Symbol) – Input data to activation function.
gamma (Symbol) – Input data to activation function.
act_type ({'elu', 'gelu', 'leaky', 'prelu', 'rrelu', 'selu'},optional, default='leaky') – Activation function to be applied.
slope (float, optional, default=0.25) – Init slope for the activation. (For leaky and elu only)
lower_bound (float, optional, default=0.125) – Lower bound of random slope. (For rrelu only)
upper_bound (float, optional, default=0.333999991) – Upper bound of random slope. (For rrelu only)
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
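Examples
A NumPy sketch of the leaky variant with the default slope of 0.25 (the input values are illustrative):
>>> import numpy as np
>>> x = np.array([-2., 0., 3.])
>>> slope = 0.25
>>> y = np.where(x > 0, x, slope * x)
>>> # y is [-0.5, 0., 3.]: negative inputs are scaled by the slope.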
-
mxnet.symbol.
LinearRegressionOutput
(data=None, label=None, grad_scale=_Null, name=None, attr=None, out=None, **kwargs)¶ Computes and optimizes for squared loss during backward propagation. Just outputs data during forward propagation.
If \(\hat{y}_i\) is the predicted value of the i-th sample, and \(y_i\) is the corresponding target value, then the squared loss estimated over \(n\) samples is defined as
\(\text{SquaredLoss}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_2\)
Note
Use the LinearRegressionOutput as the final output layer of a net.
The storage type of label can be default or csr
LinearRegressionOutput(default, default) = default
LinearRegressionOutput(default, csr) = default
By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.
Defined in src/operator/regression_output.cc:L92
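Example:
A minimal sketch of its typical placement as the final layer of a regression net (the names data, label, pred and lro are illustrative):
>>> import mxnet as mx
>>> data = mx.sym.Variable('data')
>>> label = mx.sym.Variable('label')
>>> pred = mx.sym.FullyConnected(data=data, num_hidden=1, name='pred')
>>> lro = mx.sym.LinearRegressionOutput(data=pred, label=label, name='lro')
>>> # The forward pass outputs pred unchanged; the squared-loss gradient
>>> # is used during backward propagation.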
-
mxnet.symbol.
LogisticRegressionOutput
(data=None, label=None, grad_scale=_Null, name=None, attr=None, out=None, **kwargs)¶ Applies a logistic function to the input.
The logistic function, also known as the sigmoid function, is computed as \(\frac{1}{1+exp(-\textbf{x})}\).
Commonly, the sigmoid is used to squash the real-valued output of a linear model \(w^T x + b\) into the [0,1] range so that it can be interpreted as a probability. It is suitable for binary classification or probability prediction tasks.
Note
Use the LogisticRegressionOutput as the final output layer of a net.
The storage type of label can be default or csr
LogisticRegressionOutput(default, default) = default
LogisticRegressionOutput(default, csr) = default
The loss function used is the Binary Cross Entropy Loss:
\(-{(y\log(p) + (1 - y)\log(1 - p))}\)
Where y is the ground truth probability of positive outcome for a given example, and p the probability predicted by the model. By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.
Defined in src/operator/regression_output.cc:L152
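Example:
A minimal sketch (the names x, y and out are illustrative): the forward pass only applies the sigmoid; the label enters the gradient during backward propagation.
>>> import mxnet as mx
>>> x = mx.sym.Variable('x')
>>> y = mx.sym.Variable('y')
>>> out = mx.sym.LogisticRegressionOutput(data=x, label=y, name='lr')
>>> ex = out.eval(ctx=mx.cpu(), x=mx.nd.array([[0.]]), y=mx.nd.array([[1.]]))
>>> # sigmoid(0) = 0.5, so ex[0] is [[0.5]].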
-
mxnet.symbol.
MAERegressionOutput
(data=None, label=None, grad_scale=_Null, name=None, attr=None, out=None, **kwargs)¶ Computes mean absolute error of the input.
MAE is a risk metric corresponding to the expected value of the absolute error.
If \(\hat{y}_i\) is the predicted value of the i-th sample, and \(y_i\) is the corresponding target value, then the mean absolute error (MAE) estimated over \(n\) samples is defined as
\(\text{MAE}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_1\)
Note
Use the MAERegressionOutput as the final output layer of a net.
The storage type of label can be default or csr
MAERegressionOutput(default, default) = default
MAERegressionOutput(default, csr) = default
By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.
Defined in src/operator/regression_output.cc:L120
-
mxnet.symbol.
MakeLoss
(data=None, grad_scale=_Null, valid_thresh=_Null, normalization=_Null, name=None, attr=None, out=None, **kwargs)¶ Make your own loss function in network construction.
This operator accepts a customized loss function symbol as a terminal loss and the symbol should be an operator with no backward dependency. The output of this function is the gradient of loss with respect to the input data.
For example, if you are making a cross entropy loss function, assume out is the predicted output and label is the true label; then the cross entropy can be defined as:
cross_entropy = -(label * log(out) + (1 - label) * log(1 - out))
loss = MakeLoss(cross_entropy)
We will need to use MakeLoss when we are creating our own loss function or when we want to combine multiple loss functions. Also, we may want to stop some variables’ gradients from backpropagation. See more detail in BlockGrad or stop_gradient.
In addition, we can give a scale to the loss by setting grad_scale, so that the gradient of the loss will be rescaled in the backpropagation.
Note
This operator should be used as a Symbol instead of NDArray.
Defined in src/operator/make_loss.cc:L70
- Parameters
data (Symbol) – Input array.
grad_scale (float, optional, default=1) – Gradient scale as a supplement to unary and binary operators
valid_thresh (float, optional, default=0) – Clip each element in the array to 0 when it is less than valid_thresh. This is used when normalization is set to 'valid'.
normalization ({'batch', 'null', 'valid'},optional, default='null') – If this is set to null, the output gradient will not be normalized. If this is set to batch, the output gradient will be divided by the batch size. If this is set to valid, the output gradient will be divided by the number of valid input elements.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
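Examples
A symbolic sketch of the cross entropy example above (out and label are illustrative placeholders for the prediction and ground-truth symbols):
>>> import mxnet as mx
>>> out = mx.sym.Variable('out')      # predicted probabilities in (0, 1)
>>> label = mx.sym.Variable('label')  # ground-truth labels
>>> ce = -(label * mx.sym.log(out) + (1 - label) * mx.sym.log(1 - out))
>>> loss = mx.sym.MakeLoss(ce, name='ce_loss')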
-
mxnet.symbol.
Pad
(data=None, mode=_Null, pad_width=_Null, constant_value=_Null, name=None, attr=None, out=None, **kwargs)¶ Pads an input array with a constant or edge values of the array.
Note
Pad is deprecated. Use pad instead.
Note
Current implementation only supports 4D and 5D input arrays with padding applied only on axes 1, 2 and 3. Expects axes 4 and 5 in pad_width to be zero.
This operation pads an input array with either a constant_value or edge values along each axis of the input array. The amount of padding is specified by pad_width.
pad_width is a tuple of integer padding widths for each axis of the format (before_1, after_1, ... , before_N, after_N). The pad_width should be of length 2*N where N is the number of dimensions of the array.
For dimension N of the input array, before_N and after_N indicate how many values to add before and after the elements of the array along dimension N. The widths of the higher two dimensions before_1, after_1, before_2, after_2 must be 0.
Example:
x = [[[[  1.   2.   3.]
       [  4.   5.   6.]]
      [[  7.   8.   9.]
       [ 10.  11.  12.]]]
     [[[ 11.  12.  13.]
       [ 14.  15.  16.]]
      [[ 17.  18.  19.]
       [ 20.  21.  22.]]]]

pad(x, mode="edge", pad_width=(0,0,0,0,1,1,1,1)) =
    [[[[  1.   1.   2.   3.   3.]
       [  1.   1.   2.   3.   3.]
       [  4.   4.   5.   6.   6.]
       [  4.   4.   5.   6.   6.]]
      [[  7.   7.   8.   9.   9.]
       [  7.   7.   8.   9.   9.]
       [ 10.  10.  11.  12.  12.]
       [ 10.  10.  11.  12.  12.]]]
     [[[ 11.  11.  12.  13.  13.]
       [ 11.  11.  12.  13.  13.]
       [ 14.  14.  15.  16.  16.]
       [ 14.  14.  15.  16.  16.]]
      [[ 17.  17.  18.  19.  19.]
       [ 17.  17.  18.  19.  19.]
       [ 20.  20.  21.  22.  22.]
       [ 20.  20.  21.  22.  22.]]]]

pad(x, mode="constant", constant_value=0, pad_width=(0,0,0,0,1,1,1,1)) =
    [[[[  0.   0.   0.   0.   0.]
       [  0.   1.   2.   3.   0.]
       [  0.   4.   5.   6.   0.]
       [  0.   0.   0.   0.   0.]]
      [[  0.   0.   0.   0.   0.]
       [  0.   7.   8.   9.   0.]
       [  0.  10.  11.  12.   0.]
       [  0.   0.   0.   0.   0.]]]
     [[[  0.   0.   0.   0.   0.]
       [  0.  11.  12.  13.   0.]
       [  0.  14.  15.  16.   0.]
       [  0.   0.   0.   0.   0.]]
      [[  0.   0.   0.   0.   0.]
       [  0.  17.  18.  19.   0.]
       [  0.  20.  21.  22.   0.]
       [  0.   0.   0.   0.   0.]]]]
Defined in src/operator/pad.cc:L765
- Parameters
data (Symbol) – An n-dimensional input array.
mode ({'constant', 'edge', 'reflect'}, required) – Padding type to use. “constant” pads with constant_value “edge” pads using the edge values of the input array “reflect” pads by reflecting values with respect to the edges.
pad_width (Shape(tuple), required) – Widths of the padding regions applied to the edges of each axis. It is a tuple of integer padding widths for each axis of the format (before_1, after_1, ... , before_N, after_N). It should be of length 2*N where N is the number of dimensions of the array. This is equivalent to pad_width in numpy.pad, but flattened.
constant_value (double, optional, default=0) – The value used for padding when mode is “constant”.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
Pooling
(data=None, kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, p_value=_Null, count_include_pad=_Null, layout=_Null, name=None, attr=None, out=None, **kwargs)¶ Performs pooling on the input.
The shapes for 1-D pooling are
data and out: (batch_size, channel, width) (NCW layout) or (batch_size, width, channel) (NWC layout),
The shapes for 2-D pooling are
data and out: (batch_size, channel, height, width) (NCHW layout) or (batch_size, height, width, channel) (NHWC layout),
out_height = f(height, kernel[0], pad[0], stride[0])
out_width = f(width, kernel[1], pad[1], stride[1])
The definition of f depends on pooling_convention, which has two options:
valid (default):
f(x, k, p, s) = floor((x+2*p-k)/s)+1
full, which is compatible with Caffe:
f(x, k, p, s) = ceil((x+2*p-k)/s)+1
When global_pool is set to true, global pooling is performed. It will reset kernel=(height, width) and set the appropriate padding to 0.
Four pooling options are supported by pool_type:
avg: average pooling
max: max pooling
sum: sum pooling
lp: Lp pooling
For 3-D pooling, an additional depth dimension is added before height. Namely the input data and output will have shape (batch_size, channel, depth, height, width) (NCDHW layout) or (batch_size, depth, height, width, channel) (NDHWC layout).
Notes on Lp pooling:
Lp pooling was first introduced by this paper: https://arxiv.org/pdf/1204.3968.pdf. L-1 pooling is simply sum pooling, while L-inf pooling is simply max pooling. Lp pooling stands between those two; in practice, the most common value for p is 2.
For each window X, the mathematical expression for Lp pooling is:
\(f(X) = \sqrt[p]{\sum_{x}^{X} x^p}\)
Defined in src/operator/nn/pooling.cc:L416
- Parameters
data (Symbol) – Input data to the pooling operator.
kernel (Shape(tuple), optional, default=[]) – Pooling kernel size: (y, x) or (d, y, x)
pool_type ({'avg', 'lp', 'max', 'sum'},optional, default='max') – Pooling type to be applied.
global_pool (boolean, optional, default=0) – Ignore kernel size, do global pooling based on current input feature map.
cudnn_off (boolean, optional, default=0) – Turn off cudnn pooling and use MXNet pooling operator.
pooling_convention ({'full', 'same', 'valid'},optional, default='valid') – Pooling convention to be applied.
stride (Shape(tuple), optional, default=[]) – Stride: for pooling (y, x) or (d, y, x). Defaults to 1 for each dimension.
pad (Shape(tuple), optional, default=[]) – Pad for pooling: (y, x) or (d, y, x). Defaults to no padding.
p_value (int or None, optional, default='None') – Value of p for Lp pooling, can be 1 or 2, required for Lp Pooling.
count_include_pad (boolean or None, optional, default=None) – Only used for AvgPool; specifies whether to count padding elements in the average calculation. For example, with a 5*5 kernel on a 3*3 corner of an image, the sum of the 9 valid elements will be divided by 25 if this is set to true, or by 9 if this is set to false. Defaults to true.
layout ({None, 'NCDHW', 'NCHW', 'NCW', 'NDHWC', 'NHWC', 'NWC'},optional, default='None') – Set layout for input and output. Empty for default layout: NCW for 1d, NCHW for 2d and NCDHW for 3d.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
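Examples
The valid-convention shape arithmetic can be checked symbolically; a minimal sketch (names are illustrative) for a 2x2 max pooling with stride 2 on a (1, 3, 6, 6) input, where f(6, 2, 0, 2) = floor((6-2)/2)+1 = 3:
>>> import mxnet as mx
>>> data = mx.sym.Variable('data')
>>> pool = mx.sym.Pooling(data=data, kernel=(2, 2), stride=(2, 2),
...                       pool_type='max', name='pool')
>>> _, out_shapes, _ = pool.infer_shape(data=(1, 3, 6, 6))
>>> # out_shapes is [(1, 3, 3, 3)].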
-
mxnet.symbol.
Pooling_v1
(data=None, kernel=_Null, pool_type=_Null, global_pool=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)¶ This operator is DEPRECATED. Perform pooling on the input.
The shapes for 2-D pooling are
data: (batch_size, channel, height, width)
out: (batch_size, num_filter, out_height, out_width), with:
out_height = f(height, kernel[0], pad[0], stride[0])
out_width = f(width, kernel[1], pad[1], stride[1])
The definition of f depends on pooling_convention, which has two options:
valid (default):
f(x, k, p, s) = floor((x+2*p-k)/s)+1
full, which is compatible with Caffe:
f(x, k, p, s) = ceil((x+2*p-k)/s)+1
When global_pool is set to true, global pooling is performed, namely resetting kernel=(height, width).
Three pooling options are supported by pool_type:
avg: average pooling
max: max pooling
sum: sum pooling
1-D pooling is a special case of 2-D pooling with width=1 and kernel[1]=1.
For 3-D pooling, an additional depth dimension is added before height. Namely the input data will have shape (batch_size, channel, depth, height, width).
Defined in src/operator/pooling_v1.cc:L103
- Parameters
data (Symbol) – Input data to the pooling operator.
kernel (Shape(tuple), optional, default=[]) – pooling kernel size: (y, x) or (d, y, x)
pool_type ({'avg', 'max', 'sum'},optional, default='max') – Pooling type to be applied.
global_pool (boolean, optional, default=0) – Ignore kernel size, do global pooling based on current input feature map.
pooling_convention ({'full', 'valid'},optional, default='valid') – Pooling convention to be applied.
stride (Shape(tuple), optional, default=[]) – stride: for pooling (y, x) or (d, y, x)
pad (Shape(tuple), optional, default=[]) – pad for pooling: (y, x) or (d, y, x)
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
RNN
(data=None, parameters=None, state=None, state_cell=None, sequence_length=None, state_size=_Null, num_layers=_Null, bidirectional=_Null, mode=_Null, p=_Null, state_outputs=_Null, projection_size=_Null, lstm_state_clip_min=_Null, lstm_state_clip_max=_Null, lstm_state_clip_nan=_Null, use_sequence_length=_Null, name=None, attr=None, out=None, **kwargs)¶ Applies recurrent layers to input data. Currently, vanilla RNN, LSTM and GRU are implemented, with both multi-layer and bidirectional support.
When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use pseudo-float16 precision (float32 math with float16 I/O) in order to use Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups.
Vanilla RNN
Applies a single-gate recurrent layer to input X. Two kinds of activation functions are supported: ReLU and Tanh.
With ReLU activation function:
\[h_t = relu(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})\]
With Tanh activation function:
\[h_t = \tanh(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})\]
Reference paper: Finding structure in time - Elman, 1988. https://crl.ucsd.edu/~elman/Papers/fsit.pdf
LSTM
Long Short-Term Memory - Hochreiter, 1997. http://www.bioinf.jku.at/publications/older/2604.pdf
\[\begin{split}\begin{array}{ll} i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\ f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hc} h_{(t-1)} + b_{hg}) \\ o_t = \mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\ c_t = f_t * c_{(t-1)} + i_t * g_t \\ h_t = o_t * \tanh(c_t) \end{array}\end{split}\]
When the projection size is set, LSTM can use the projection feature to reduce the parameter size and give some speedup without significant damage to the accuracy.
Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition - Sak et al. 2014. https://arxiv.org/abs/1402.1128
\[\begin{split}\begin{array}{ll} i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{ri} r_{(t-1)} + b_{ri}) \\ f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{rf} r_{(t-1)} + b_{rf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{rc} r_{(t-1)} + b_{rg}) \\ o_t = \mathrm{sigmoid}(W_{io} x_t + b_{o} + W_{ro} r_{(t-1)} + b_{ro}) \\ c_t = f_t * c_{(t-1)} + i_t * g_t \\ h_t = o_t * \tanh(c_t) \\ r_t = W_{hr} h_t \end{array}\end{split}\]
GRU
Gated Recurrent Unit - Cho et al. 2014. http://arxiv.org/abs/1406.1078
The definition of GRU here is slightly different from the paper, but compatible with cuDNN.
\[\begin{split}\begin{array}{ll} r_t = \mathrm{sigmoid}(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \mathrm{sigmoid}(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)} + b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \\ \end{array}\end{split}\]
Defined in src/operator/rnn.cc:L375
- Parameters
data (Symbol) – Input data to RNN
parameters (Symbol) – Vector of all RNN trainable parameters concatenated
state (Symbol) – initial hidden state of the RNN
state_cell (Symbol) – initial cell state for LSTM networks (only for LSTM)
sequence_length (Symbol) – Vector of valid sequence lengths for each element in batch. (Only used if use_sequence_length kwarg is True)
state_size (int (non-negative), required) – size of the state for each layer
num_layers (int (non-negative), required) – number of stacked layers
bidirectional (boolean, optional, default=0) – whether to use bidirectional recurrent layers
mode ({'gru', 'lstm', 'rnn_relu', 'rnn_tanh'}, required) – the type of RNN to compute
p (float, optional, default=0) – drop rate of the dropout on the outputs of each RNN layer, except the last layer.
state_outputs (boolean, optional, default=0) – Whether to have the states as symbol outputs.
projection_size (int or None, optional, default='None') – size of the LSTM projection
lstm_state_clip_min (double or None, optional, default=None) – Minimum clip value of LSTM states. This option must be used together with lstm_state_clip_max.
lstm_state_clip_max (double or None, optional, default=None) – Maximum clip value of LSTM states. This option must be used together with lstm_state_clip_min.
lstm_state_clip_nan (boolean, optional, default=0) – Whether to stop NaN from propagating in state by clipping it to min/max. If clipping range is not specified, this option is ignored.
use_sequence_length (boolean, optional, default=0) – If set to true, this layer takes in an extra input parameter sequence_length to specify variable length sequence
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
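Examples
A minimal sketch of constructing a single-layer unidirectional LSTM (all names and sizes are illustrative; the parameter vector shape is inferred by the operator):
>>> import mxnet as mx
>>> state_size, num_layers = 256, 1
>>> data = mx.sym.Variable('data')      # (seq_len, batch_size, input_size)
>>> params = mx.sym.Variable('params')  # all weights and biases, flattened
>>> h0 = mx.sym.Variable('h0')          # (num_layers, batch_size, state_size)
>>> c0 = mx.sym.Variable('c0')          # (num_layers, batch_size, state_size)
>>> rnn = mx.sym.RNN(data=data, parameters=params, state=h0, state_cell=c0,
...                  state_size=state_size, num_layers=num_layers,
...                  mode='lstm', name='lstm')
>>> # The output has shape (seq_len, batch_size, state_size).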
-
mxnet.symbol.
ROIPooling
(data=None, rois=None, pooled_size=_Null, spatial_scale=_Null, name=None, attr=None, out=None, **kwargs)¶ Performs region of interest(ROI) pooling on the input array.
ROI pooling is a variant of a max pooling layer, in which the output size is fixed and region of interest is a parameter. Its purpose is to perform max pooling on the inputs of non-uniform sizes to obtain fixed-size feature maps. ROI pooling is a neural-net layer mostly used in training a Fast R-CNN network for object detection.
This operator takes a 4D feature map as an input array and region proposals as rois, then it pools over sub-regions of input and produces a fixed-sized output array regardless of the ROI size.
To crop the feature map accordingly, you can resize the bounding box coordinates by changing the parameters rois and spatial_scale.
The cropped feature maps are pooled by standard max pooling operation to a fixed size output indicated by a pooled_size parameter. batch_size will change to the number of region bounding boxes after ROIPooling.
The size of each region of interest doesn’t have to be perfectly divisible by the number of pooling sections (pooled_size).
Example:
x = [[[[  0.,   1.,   2.,   3.,   4.,   5.],
       [  6.,   7.,   8.,   9.,  10.,  11.],
       [ 12.,  13.,  14.,  15.,  16.,  17.],
       [ 18.,  19.,  20.,  21.,  22.,  23.],
       [ 24.,  25.,  26.,  27.,  28.,  29.],
       [ 30.,  31.,  32.,  33.,  34.,  35.],
       [ 36.,  37.,  38.,  39.,  40.,  41.],
       [ 42.,  43.,  44.,  45.,  46.,  47.]]]]

// region of interest i.e. bounding box coordinates.
y = [[0,0,0,4,4]]

// returns array of shape (2,2) according to the given roi with max pooling.
ROIPooling(x, y, (2,2), 1.0) = [[[[ 14.,  16.],
                                  [ 26.,  28.]]]]

// region of interest is changed due to the change in `spatial_scale` parameter.
ROIPooling(x, y, (2,2), 0.7) = [[[[  7.,   9.],
                                  [ 19.,  21.]]]]
Defined in src/operator/roi_pooling.cc:L224
- Parameters
data (Symbol) – The input array to the pooling operator, a 4D Feature maps
rois (Symbol) – Bounding box coordinates, a 2D array of [[batch_index, x1, y1, x2, y2]], where (x1, y1) and (x2, y2) are top left and bottom right corners of designated region of interest. batch_index indicates the index of corresponding image in the input array
pooled_size (Shape(tuple), required) – ROI pooling output shape (h,w)
spatial_scale (float, required) – Ratio of input feature map height (or w) to raw image height (or w). Equals the reciprocal of total stride in convolutional layers
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
Reshape
(data=None, shape=_Null, reverse=_Null, target_shape=_Null, keep_highest=_Null, name=None, attr=None, out=None, **kwargs)¶ Reshapes the input array.
Note
Reshape is deprecated. Use reshape instead.
Given an array and a shape, this function returns a copy of the array in the new shape. The shape is a tuple of integers such as (2,3,4). The size of the new shape should be the same as the size of the input array.
Example:
reshape([1,2,3,4], shape=(2,2)) = [[1,2], [3,4]]
Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:
0 copies this dimension from the input to the output shape.
Example:
- input shape = (2,3,4), shape = (4,0,2), output shape = (4,3,2)
- input shape = (2,3,4), shape = (2,0,0), output shape = (2,3,4)
-1 infers the dimension of the output shape by using the remainder of the input dimensions, keeping the size of the new array the same as that of the input array. At most one dimension of shape can be -1.
Example:
- input shape = (2,3,4), shape = (6,1,-1), output shape = (6,1,4)
- input shape = (2,3,4), shape = (3,-1,8), output shape = (3,1,8)
- input shape = (2,3,4), shape = (-1,), output shape = (24,)
-2 copies all/remainder of the input dimensions to the output shape.
Example:
- input shape = (2,3,4), shape = (-2,), output shape = (2,3,4)
- input shape = (2,3,4), shape = (2,-2), output shape = (2,3,4)
- input shape = (2,3,4), shape = (-2,1,1), output shape = (2,3,4,1,1)
-3 uses the product of two consecutive dimensions of the input shape as the output dimension.
Example:
- input shape = (2,3,4), shape = (-3,4), output shape = (6,4)
- input shape = (2,3,4,5), shape = (-3,-3), output shape = (6,20)
- input shape = (2,3,4), shape = (0,-3), output shape = (2,12)
- input shape = (2,3,4), shape = (-3,-2), output shape = (6,4)
-4 splits one dimension of the input into the two dimensions passed subsequent to -4 in shape (which can contain -1).
Example:
- input shape = (2,3,4), shape = (-4,1,2,-2), output shape = (1,2,3,4)
- input shape = (2,3,4), shape = (2,-4,-1,3,-2), output shape = (2,1,3,4)
If the argument reverse is set to 1, then the special values are inferred from right to left.
Example:
- without reverse=1, for input shape = (10,5,4), shape = (-1,0), output shape would be (40,5)
- with reverse=1, output shape will be (50,4)
Defined in src/operator/tensor/matrix_op.cc:L174
- Parameters
data (Symbol) – Input data to reshape.
shape (Shape(tuple), optional, default=[]) – The target shape
reverse (boolean, optional, default=0) – If true then the special values are inferred from right to left
target_shape (Shape(tuple), optional, default=[]) – (Deprecated! Use shape instead.) Target new shape. One and only one dim can be 0, in which case it will be inferred from the rest of the dims.
keep_highest (boolean, optional, default=0) – (Deprecated! Use shape instead.) Whether to keep the highest dim unchanged. If set to true, then the first dim in target_shape is ignored, and always fixed as input.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
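Examples
The special values can be tried directly on an NDArray, which implements the same reshape semantics (the shapes are illustrative):
>>> import mxnet as mx
>>> x = mx.nd.zeros((2, 3, 4))
>>> x.reshape((-1,)).shape
(24,)
>>> x.reshape((0, -3)).shape
(2, 12)
>>> x.reshape((-4, 1, 2, -2)).shape
(1, 2, 3, 4)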
-
mxnet.symbol.
SVMOutput
(data=None, label=None, margin=_Null, regularization_coefficient=_Null, use_linear=_Null, name=None, attr=None, out=None, **kwargs)¶ Computes support vector machine based transformation of the input.
This tutorial demonstrates using SVM as output layer for classification instead of softmax: https://github.com/apache/mxnet/tree/v1.x/example/svm_mnist.
- Parameters
data (Symbol) – Input data for SVM transformation.
label (Symbol) – Class label for the input data.
margin (float, optional, default=1) – The loss function penalizes outputs that lie outside this margin. Default margin is 1.
regularization_coefficient (float, optional, default=1) – Regularization parameter for the SVM. This balances the tradeoff between coefficient size and error.
use_linear (boolean, optional, default=0) – Whether to use L1-SVM objective. L2-SVM objective is used by default.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
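Examples
A minimal sketch of using SVMOutput as the final layer of a 10-class classifier, in the spirit of the linked tutorial (names and sizes are illustrative):
>>> import mxnet as mx
>>> data = mx.sym.Variable('data')
>>> label = mx.sym.Variable('label')
>>> feat = mx.sym.FullyConnected(data=data, num_hidden=10, name='feat')
>>> svm = mx.sym.SVMOutput(data=feat, label=label, margin=1.0,
...                        regularization_coefficient=1.0, name='svm')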
-
mxnet.symbol.
SequenceLast
(data=None, sequence_length=None, use_sequence_length=_Null, axis=_Null, name=None, attr=None, out=None, **kwargs)¶ Takes the last element of a sequence.
This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] and returns a (n-1)-dimensional array of the form [batch_size, other_feature_dims].
Parameter sequence_length is used to handle variable-length sequences. sequence_length should be an input array of positive ints of dimension [batch_size]. To use this parameter, set use_sequence_length to True, otherwise each example in the batch is assumed to have the max sequence length.
Note
Alternatively, you can also use the take operator.
Example:
x = [[[  1.,   2.,   3.],
      [  4.,   5.,   6.],
      [  7.,   8.,   9.]],
     [[ 10.,  11.,  12.],
      [ 13.,  14.,  15.],
      [ 16.,  17.,  18.]],
     [[ 19.,  20.,  21.],
      [ 22.,  23.,  24.],
      [ 25.,  26.,  27.]]]

// returns last sequence when sequence_length parameter is not used
SequenceLast(x) = [[ 19.,  20.,  21.],
                   [ 22.,  23.,  24.],
                   [ 25.,  26.,  27.]]

// sequence_length is used
SequenceLast(x, sequence_length=[1,1,1], use_sequence_length=True)
         = [[  1.,   2.,   3.],
            [  4.,   5.,   6.],
            [  7.,   8.,   9.]]

// sequence_length is used
SequenceLast(x, sequence_length=[1,2,3], use_sequence_length=True)
         = [[  1.,   2.,   3.],
            [ 13.,  14.,  15.],
            [ 25.,  26.,  27.]]
Defined in src/operator/sequence_last.cc:L105
- Parameters
data (Symbol) – n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] where n>2
sequence_length (Symbol) – vector of sequence lengths of the form [batch_size]
use_sequence_length (boolean, optional, default=0) – If set to true, this layer takes in an extra input parameter sequence_length to specify variable length sequence
axis (int, optional, default='0') – The sequence axis. Only values of 0 and 1 are currently supported.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
SequenceMask
(data=None, sequence_length=None, use_sequence_length=_Null, value=_Null, axis=_Null, name=None, attr=None, out=None, **kwargs)¶ Sets all elements outside the sequence to a constant value.
This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape.
Parameter sequence_length is used to handle variable-length sequences. sequence_length should be an input array of positive ints of dimension [batch_size]. To use this parameter, set use_sequence_length to True, otherwise each example in the batch is assumed to have the max sequence length and this operator works as the identity operator.
Example:
x = [[[  1.,   2.,   3.],
      [  4.,   5.,   6.]],
     [[  7.,   8.,   9.],
      [ 10.,  11.,  12.]],
     [[ 13.,  14.,  15.],
      [ 16.,  17.,  18.]]]

// Batch 1
B1 = [[  1.,   2.,   3.],
      [  7.,   8.,   9.],
      [ 13.,  14.,  15.]]

// Batch 2
B2 = [[  4.,   5.,   6.],
      [ 10.,  11.,  12.],
      [ 16.,  17.,  18.]]

// works as identity operator when sequence_length parameter is not used
SequenceMask(x) = [[[  1.,   2.,   3.],
                    [  4.,   5.,   6.]],
                   [[  7.,   8.,   9.],
                    [ 10.,  11.,  12.]],
                   [[ 13.,  14.,  15.],
                    [ 16.,  17.,  18.]]]

// sequence_length [1,1] means 1 of each batch will be kept
// and other rows are masked with default mask value = 0
SequenceMask(x, sequence_length=[1,1], use_sequence_length=True) =
                  [[[  1.,   2.,   3.],
                    [  4.,   5.,   6.]],
                   [[  0.,   0.,   0.],
                    [  0.,   0.,   0.]],
                   [[  0.,   0.,   0.],
                    [  0.,   0.,   0.]]]

// sequence_length [2,3] means 2 of batch B1 and 3 of batch B2 will be kept
// and other rows are masked with value = 1
SequenceMask(x, sequence_length=[2,3], use_sequence_length=True, value=1) =
                  [[[  1.,   2.,   3.],
                    [  4.,   5.,   6.]],
                   [[  7.,   8.,   9.],
                    [ 10.,  11.,  12.]],
                   [[  1.,   1.,   1.],
                    [ 16.,  17.,  18.]]]
Defined in src/operator/sequence_mask.cc:L185
- Parameters
data (Symbol) – n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] where n>2
sequence_length (Symbol) – vector of sequence lengths of the form [batch_size]
use_sequence_length (boolean, optional, default=0) – If set to true, this layer takes in an extra input parameter sequence_length to specify variable length sequence
value (float, optional, default=0) – The value to be used as a mask.
axis (int, optional, default='0') – The sequence axis. Only values of 0 and 1 are currently supported.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
SequenceReverse
(data=None, sequence_length=None, use_sequence_length=_Null, axis=_Null, name=None, attr=None, out=None, **kwargs)¶ Reverses the elements of each sequence.
This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape.
Parameter sequence_length is used to handle variable-length sequences. sequence_length should be an input array of positive ints of dimension [batch_size]. To use this parameter, set use_sequence_length to True, otherwise each example in the batch is assumed to have the max sequence length.
Example:
x = [[[  1.,   2.,   3.],
      [  4.,   5.,   6.]],
     [[  7.,   8.,   9.],
      [ 10.,  11.,  12.]],
     [[ 13.,  14.,  15.],
      [ 16.,  17.,  18.]]]

// Batch 1
B1 = [[  1.,   2.,   3.],
      [  7.,   8.,   9.],
      [ 13.,  14.,  15.]]

// Batch 2
B2 = [[  4.,   5.,   6.],
      [ 10.,  11.,  12.],
      [ 16.,  17.,  18.]]

// returns reverse sequence when sequence_length parameter is not used
SequenceReverse(x) = [[[ 13.,  14.,  15.],
                       [ 16.,  17.,  18.]],
                      [[  7.,   8.,   9.],
                       [ 10.,  11.,  12.]],
                      [[  1.,   2.,   3.],
                       [  4.,   5.,   6.]]]

// sequence_length [2,2] means 2 rows of
// both batch B1 and B2 will be reversed.
SequenceReverse(x, sequence_length=[2,2], use_sequence_length=True) =
                     [[[  7.,   8.,   9.],
                       [ 10.,  11.,  12.]],
                      [[  1.,   2.,   3.],
                       [  4.,   5.,   6.]],
                      [[ 13.,  14.,  15.],
                       [ 16.,  17.,  18.]]]

// sequence_length [2,3] means 2 of batch B1 and 3 of batch B2
// will be reversed.
SequenceReverse(x, sequence_length=[2,3], use_sequence_length=True) =
                     [[[  7.,   8.,   9.],
                       [ 16.,  17.,  18.]],
                      [[  1.,   2.,   3.],
                       [ 10.,  11.,  12.]],
                      [[ 13.,  14.,  15.],
                       [  4.,   5.,   6.]]]
Defined in src/operator/sequence_reverse.cc:L121
- Parameters
data (Symbol) – n-dimensional input array of the form [max_sequence_length, batch_size, other dims] where n>2
sequence_length (Symbol) – vector of sequence lengths of the form [batch_size]
use_sequence_length (boolean, optional, default=0) – If set to true, this layer takes in an extra input parameter sequence_length to specify variable length sequence
axis (int, optional, default='0') – The sequence axis. Only 0 is currently supported.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
SliceChannel
(data=None, num_outputs=_Null, axis=_Null, squeeze_axis=_Null, name=None, attr=None, out=None, **kwargs)¶ Splits an array along a particular axis into multiple sub-arrays.
Note
SliceChannel is deprecated. Use split instead.
Note that num_outputs should evenly divide the length of the axis along which to split the array.
Example:
x = [[[ 1.]
      [ 2.]]
     [[ 3.]
      [ 4.]]
     [[ 5.]
      [ 6.]]]
x.shape = (3, 2, 1)

y = split(x, axis=1, num_outputs=2)  // a list of 2 arrays with shape (3, 1, 1)
y = [[[ 1.]]
     [[ 3.]]
     [[ 5.]]]

    [[[ 2.]]
     [[ 4.]]
     [[ 6.]]]

y[0].shape = (3, 1, 1)

z = split(x, axis=0, num_outputs=3)  // a list of 3 arrays with shape (1, 2, 1)
z = [[[ 1.]
      [ 2.]]]

    [[[ 3.]
      [ 4.]]]

    [[[ 5.]
      [ 6.]]]

z[0].shape = (1, 2, 1)
squeeze_axis=1 removes the axis with length 1 from the shapes of the output arrays. Note that setting squeeze_axis to 1 removes the axis with length 1 only along the axis along which the array is split. Also, squeeze_axis can be set to true only if input.shape[axis] == num_outputs.
Example:
z = split(x, axis=0, num_outputs=3, squeeze_axis=1)  // a list of 3 arrays with shape (2, 1)
z = [[ 1.]
     [ 2.]]

    [[ 3.]
     [ 4.]]

    [[ 5.]
     [ 6.]]

z[0].shape = (2, 1)
Defined in src/operator/slice_channel.cc:L106
- Parameters
data (Symbol) – The input
num_outputs (int, required) – Number of splits. Note that this should evenly divide the length of the axis.
axis (int, optional, default='1') – Axis along which to split.
squeeze_axis (boolean, optional, default=0) – If true, removes the axis with length 1 from the shapes of the output arrays. Note that setting squeeze_axis to true removes the axis with length 1 only along the axis along which the array is split. Also, squeeze_axis can be set to true only if input.shape[axis] == num_outputs.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
Softmax
(data=None, label=None, grad_scale=_Null, ignore_label=_Null, multi_output=_Null, use_ignore=_Null, preserve_shape=_Null, normalization=_Null, out_grad=_Null, smooth_alpha=_Null, name=None, attr=None, out=None, **kwargs)¶ Computes the gradient of cross entropy loss with respect to softmax output.
This operator computes the gradient in two steps. The cross entropy loss does not actually need to be computed.
Applies softmax function on the input array.
Computes and returns the gradient of cross entropy loss w.r.t. the softmax output.
The softmax function, cross entropy loss and gradient are given by:
Softmax Function:
\[\text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}\]
Cross Entropy Function:
\[\text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)\]
The gradient of cross entropy loss w.r.t softmax output:
\[\text{gradient} = \text{output} - \text{label}\]
During forward propagation, the softmax function is computed for each instance in the input array.
For a general N-D input array with shape \((d_1, d_2, ..., d_n)\), the size is \(s = d_1 \cdot d_2 \cdots d_n\). We can use the parameters preserve_shape and multi_output to specify the way to compute softmax:
By default, preserve_shape is false. This operator will reshape the input array into a 2-D array with shape \((d_1, \frac{s}{d_1})\), compute the softmax function for each row in the reshaped array, and afterwards reshape it back to the original shape \((d_1, d_2, ..., d_n)\).
If preserve_shape is true, the softmax function will be computed along the last axis (axis = -1).
If multi_output is true, the softmax function will be computed along the second axis (axis = 1).
During backward propagation, the gradient of cross-entropy loss w.r.t softmax output array is computed. The provided label can be a one-hot label array or a probability label array.
If the parameter use_ignore is true, ignore_label can specify input instances with a particular label to be ignored during backward propagation. This has no effect when the softmax output has the same shape as the label.
Example:
data = [[1,2,3,4],[2,2,2,2],[3,3,3,3],[4,4,4,4]]
label = [1,0,2,3]
ignore_label = 1

SoftmaxOutput(data=data, label=label, multi_output=true,
              use_ignore=true, ignore_label=ignore_label)

## forward softmax output
[[ 0.0320586   0.08714432  0.23688284  0.64391428]
 [ 0.25        0.25        0.25        0.25      ]
 [ 0.25        0.25        0.25        0.25      ]
 [ 0.25        0.25        0.25        0.25      ]]

## backward gradient output
[[ 0.    0.    0.    0.  ]
 [-0.75  0.25  0.25  0.25]
 [ 0.25  0.25 -0.75  0.25]
 [ 0.25  0.25  0.25 -0.75]]

## notice that the first row is all 0 because label[0] is 1, which is equal to ignore_label.
The parameter grad_scale can be used to rescale the gradient, which is often used to give each loss function different weights.
This operator also supports various ways to normalize the gradient via the normalization parameter. The normalization is applied if the softmax output has a different shape than the labels. The normalization mode can be set to one of the following:
'null': do nothing.
'batch': divide the gradient by the batch size.
'valid': divide the gradient by the number of instances which are not ignored.
Defined in src/operator/softmax_output.cc:L242
- Parameters
data (Symbol) – Input array.
label (Symbol) – Ground truth label.
grad_scale (float, optional, default=1) – Scales the gradient by a float factor.
ignore_label (float, optional, default=-1) – The instances whose labels == ignore_label will be ignored during backward, if use_ignore is set to true.
multi_output (boolean, optional, default=0) – If set to true, the softmax function will be computed along axis 1. This is applied when the shape of the input array differs from the shape of the label array.
use_ignore (boolean, optional, default=0) – If set to true, the ignore_label value will not contribute to the backward gradient.
preserve_shape (boolean, optional, default=0) – If set to true, the softmax function will be computed along the last axis (-1).
normalization ({'batch', 'null', 'valid'},optional, default='null') – Normalizes the gradient.
out_grad (boolean, optional, default=0) – Multiplies gradient with output gradient element-wise.
smooth_alpha (float, optional, default=0) – Constant for computing a label smoothed version of cross-entropy for the backwards pass. This constant gets subtracted from the one-hot encoding of the gold label and distributed uniformly to all other labels.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
SoftmaxActivation
(data=None, mode=_Null, name=None, attr=None, out=None, **kwargs)¶ Applies softmax activation to input. This is intended for internal layers.
Note
This operator has been deprecated, please use softmax.
If mode = instance, this operator will compute a softmax for each instance in the batch. This is the default mode.
If mode = channel, this operator will compute a k-class softmax at each position of each instance, where k = num_channel. This mode can only be used when the input array has at least 3 dimensions. This can be used for fully convolutional networks, image segmentation, etc.
Example:
>>> input_array = mx.nd.array([[3., 0.5, -0.5, 2., 7.],
...                            [2., -.4, 7., 3., 0.2]])
>>> softmax_act = mx.nd.SoftmaxActivation(input_array)
>>> print(softmax_act.asnumpy())
[[  1.78322066e-02   1.46375655e-03   5.38485940e-04   6.56010211e-03   9.73605454e-01]
 [  6.56221947e-03   5.95310994e-04   9.73919690e-01   1.78379621e-02   1.08472735e-03]]
Defined in src/operator/nn/softmax_activation.cc:L58
- Parameters
data (Symbol) – The input array.
mode ({'channel', 'instance'},optional, default='instance') – Specifies how to compute the softmax. If set to instance, it computes softmax for each instance. If set to channel, it computes cross channel softmax for each position of each instance.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
-
mxnet.symbol.
SoftmaxOutput
(data=None, label=None, grad_scale=_Null, ignore_label=_Null, multi_output=_Null, use_ignore=_Null, preserve_shape=_Null, normalization=_Null, out_grad=_Null, smooth_alpha=_Null, name=None, attr=None, out=None, **kwargs)¶ Computes the gradient of cross entropy loss with respect to softmax output.
This operator computes the gradient in two steps. The cross entropy loss does not actually need to be computed.
Applies softmax function on the input array.
Computes and returns the gradient of cross entropy loss w.r.t. the softmax output.
The softmax function, cross entropy loss and gradient are given by:
Softmax Function:
\[\text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}\]
Cross Entropy Function:
\[\text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)\]
The gradient of cross entropy loss w.r.t softmax output:
\[\text{gradient} = \text{output} - \text{label}\]
During forward propagation, the softmax function is computed for each instance in the input array.
For a general N-D input array with shape \((d_1, d_2, ..., d_n)\), the size is \(s = d_1 \cdot d_2 \cdots d_n\). We can use the parameters preserve_shape and multi_output to specify the way to compute softmax:
By default, preserve_shape is false. This operator will reshape the input array into a 2-D array with shape \((d_1, \frac{s}{d_1})\), compute the softmax function for each row in the reshaped array, and afterwards reshape it back to the original shape \((d_1, d_2, ..., d_n)\).
If preserve_shape is true, the softmax function will be computed along the last axis (axis = -1).
If multi_output is true, the softmax function will be computed along the second axis (axis = 1).
During backward propagation, the gradient of cross-entropy loss w.r.t softmax output array is computed. The provided label can be a one-hot label array or a probability label array.
If the parameter use_ignore is true, ignore_label can specify input instances with a particular label to be ignored during backward propagation. This has no effect when the softmax output has the same shape as the label.
Example:
data = [[1,2,3,4],[2,2,2,2],[3,3,3,3],[4,4,4,4]]
label = [1,0,2,3]
ignore_label = 1

SoftmaxOutput(data=data, label=label, multi_output=true,
              use_ignore=true, ignore_label=ignore_label)

## forward softmax output
[[ 0.0320586   0.08714432  0.23688284  0.64391428]
 [ 0.25        0.25        0.25        0.25      ]
 [ 0.25        0.25        0.25        0.25      ]
 [ 0.25        0.25        0.25        0.25      ]]

## backward gradient output
[[ 0.    0.    0.    0.  ]
 [-0.75  0.25  0.25  0.25]
 [ 0.25  0.25 -0.75  0.25]
 [ 0.25  0.25  0.25 -0.75]]

## notice that the first row is all 0 because label[0] is 1, which is equal to ignore_label.
The parameter grad_scale can be used to rescale the gradient, which is often used to give each loss function different weights.
This operator also supports various ways to normalize the gradient via the normalization parameter. The normalization is applied if the softmax output has a different shape than the labels. The normalization mode can be set to one of the following:
'null': do nothing.
'batch': divide the gradient by the batch size.
'valid': divide the gradient by the number of instances which are not ignored.
Defined in src/operator/softmax_output.cc:L242
- Parameters
data (Symbol) – Input array.
label (Symbol) – Ground truth label.
grad_scale (float, optional, default=1) – Scales the gradient by a float factor.
ignore_label (float, optional, default=-1) – The instances whose labels == ignore_label will be ignored during backward, if use_ignore is set to true.
multi_output (boolean, optional, default=0) – If set to true, the softmax function will be computed along axis 1. This is applied when the shape of the input array differs from the shape of the label array.
use_ignore (boolean, optional, default=0) – If set to true, the ignore_label value will not contribute to the backward gradient.
preserve_shape (boolean, optional, default=0) – If set to true, the softmax function will be computed along the last axis (-1).
normalization ({'batch', 'null', 'valid'},optional, default='null') – Normalizes the gradient.
out_grad (boolean, optional, default=0) – Multiplies gradient with output gradient element-wise.
smooth_alpha (float, optional, default=0) – Constant for computing a label smoothed version of cross-entropy for the backwards pass. This constant gets subtracted from the one-hot encoding of the gold label and distributed uniformly to all other labels.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.SpatialTransformer(data=None, loc=None, target_shape=_Null, transform_type=_Null, sampler_type=_Null, cudnn_off=_Null, name=None, attr=None, out=None, **kwargs)¶
Applies a spatial transformer to input feature map.
- Parameters
data (Symbol) – Input data to the SpatialTransformerOp.
loc (Symbol) – Localisation net; the output dim should be 6 when transform_type is affine. You should initialize the weight and bias with an identity transform.
target_shape (Shape(tuple), optional, default=[0,0]) – output shape(h, w) of spatial transformer: (y, x)
transform_type ({'affine'}, required) – transformation type
sampler_type ({'bilinear'}, required) – sampling type
cudnn_off (boolean or None, optional, default=None) – whether to turn cudnn off
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.SwapAxis(data=None, dim1=_Null, dim2=_Null, name=None, attr=None, out=None, **kwargs)¶
Interchanges two axes of an array.
Examples:
x = [[1, 2, 3]]
swapaxes(x, 0, 1) = [[ 1], [ 2], [ 3]]
x = [[[ 0, 1], [ 2, 3]], [[ 4, 5], [ 6, 7]]]  // (2,2,2) array
swapaxes(x, 0, 2) = [[[ 0, 4], [ 2, 6]], [[ 1, 5], [ 3, 7]]]
Defined in src/operator/swapaxis.cc:L69
- mxnet.symbol.UpSampling(*data, **kwargs)¶
Upsamples the given input data.
Two algorithms (sample_type) are available for upsampling:
Nearest Neighbor
Bilinear
Nearest Neighbor Upsampling
Input data is expected to be NCHW.
Example:
x = [[[[1. 1. 1.]
       [1. 1. 1.]
       [1. 1. 1.]]]]
UpSampling(x, scale=2, sample_type='nearest') = [[[[1. 1. 1. 1. 1. 1.]
                                                   [1. 1. 1. 1. 1. 1.]
                                                   [1. 1. 1. 1. 1. 1.]
                                                   [1. 1. 1. 1. 1. 1.]
                                                   [1. 1. 1. 1. 1. 1.]
                                                   [1. 1. 1. 1. 1. 1.]]]]
Bilinear Upsampling
Uses the deconvolution algorithm under the hood. You need to provide both the input data and the kernel.
Input data is expected to be NCHW.
num_filter is expected to be same as the number of channels.
Example:
x = [[[[1. 1. 1.]
       [1. 1. 1.]
       [1. 1. 1.]]]]
w = [[[[1. 1. 1. 1.]
       [1. 1. 1. 1.]
       [1. 1. 1. 1.]
       [1. 1. 1. 1.]]]]
UpSampling(x, w, scale=2, sample_type='bilinear', num_filter=1) = [[[[1. 2. 2. 2. 2. 1.]
                                                                    [2. 4. 4. 4. 4. 2.]
                                                                    [2. 4. 4. 4. 4. 2.]
                                                                    [2. 4. 4. 4. 4. 2.]
                                                                    [2. 4. 4. 4. 4. 2.]
                                                                    [1. 2. 2. 2. 2. 1.]]]]
Defined in src/operator/nn/upsampling.cc:L172
This function supports a variable number of positional inputs.
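For instance, a minimal sketch of nearest-neighbor upsampling through the symbolic API (the shapes are assumptions for the example):
import mxnet as mx

data = mx.sym.Variable('data')
up = mx.sym.UpSampling(data, scale=2, sample_type='nearest')
# A 1x1x3x3 map of ones is upsampled to 1x1x6x6, as in the example above.
out = up.eval(ctx=mx.cpu(), data=mx.nd.ones((1, 1, 3, 3)))
print(out[0].shape)  # (1, 1, 6, 6)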
- Parameters
data (Symbol[]) – Array of tensors to upsample. For bilinear upsampling, there should be 2 inputs - 1 data and 1 weight.
scale (int, required) – Up sampling scale
num_filter (int, optional, default='0') – Input filter. Only used by bilinear sample_type. Since bilinear upsampling uses deconvolution, num_filter is set to the number of channels.
sample_type ({'bilinear', 'nearest'}, required) – upsampling method
multi_input_mode ({'concat', 'sum'},optional, default='concat') – How to handle multiple input. concat means concatenate upsampled images along the channel dimension. sum means add all images together, only available for nearest neighbor upsampling.
workspace (long (non-negative), optional, default=512) – Tmp workspace for deconvolution (MB)
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.abs(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise absolute value of the input.
Example:
abs([-2, 0, 3]) = [2, 0, 3]
The storage type of abs output depends upon the input storage type:
abs(default) = default
abs(row_sparse) = row_sparse
abs(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L720
- mxnet.symbol.adam_update(weight=None, grad=None, mean=None, var=None, lr=_Null, beta1=_Null, beta2=_Null, epsilon=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, name=None, attr=None, out=None, **kwargs)¶
Update function for Adam optimizer. Adam is seen as a generalization of AdaGrad.
Adam update consists of the following steps, where g represents gradient and m, v are 1st and 2nd order moment estimates (mean and variance).
\[\begin{split}g_t = \nabla J(W_{t-1})\\ m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t\\ v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\ W_t = W_{t-1} - \alpha \frac{ m_t }{ \sqrt{ v_t } + \epsilon }\end{split}\]
It updates the weights using:
m = beta1*m + (1-beta1)*grad
v = beta2*v + (1-beta2)*(grad**2)
w += - learning_rate * m / (sqrt(v) + epsilon)
However, if grad's storage type is row_sparse, lazy_update is True, and the storage type of weight is the same as those of m and v, only the row slices whose indices appear in grad.indices are updated (for w, m and v):
for row in grad.indices:
    m[row] = beta1*m[row] + (1-beta1)*grad[row]
    v[row] = beta2*v[row] + (1-beta2)*(grad[row]**2)
    w[row] += - learning_rate * m[row] / (sqrt(v[row]) + epsilon)
Defined in src/operator/optimizer_op.cc:L687
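A minimal sketch of a single symbolic Adam step on toy state (the shapes and hyperparameter values here are assumptions):
import mxnet as mx

weight = mx.sym.Variable('weight')
grad = mx.sym.Variable('grad')
mean = mx.sym.Variable('mean')  # 1st moment estimate m
var = mx.sym.Variable('var')    # 2nd moment estimate v
step = mx.sym.adam_update(weight, grad, mean, var,
                          lr=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8)
out = step.eval(ctx=mx.cpu(),
                weight=mx.nd.ones((3,)),
                grad=mx.nd.ones((3,)) * 0.1,
                mean=mx.nd.zeros((3,)),
                var=mx.nd.zeros((3,)))
print(out[0].asnumpy())  # updated weights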
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mean (Symbol) – Moving mean
var (Symbol) – Moving variance
lr (float, required) – Learning rate
beta1 (float, optional, default=0.899999976) – The decay rate for the 1st moment estimates.
beta2 (float, optional, default=0.999000013) – The decay rate for the 2nd moment estimates.
epsilon (float, optional, default=9.99999994e-09) – A small constant for numerical stability.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse and all of w, m and v have the same stype
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.add_n(*args, **kwargs)¶
Adds all input arguments element-wise.
\[add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n\]
add_n is potentially more efficient than calling add n times.
The storage type of add_n output depends on storage types of inputs:
add_n(row_sparse, row_sparse, ..) = row_sparse
add_n(default, csr, default) = default
add_n(any input combinations longer than 4 (>4) with at least one default type) = default
otherwise, add_n falls back to default storage for all inputs and generates output with default storage
Defined in src/operator/tensor/elemwise_sum.cc:L155
This function supports a variable number of positional inputs.
- mxnet.symbol.all_finite(data=None, init_output=_Null, name=None, attr=None, out=None, **kwargs)¶
Check if all the float numbers in the array are finite (used for AMP)
Defined in src/operator/contrib/all_finite.cc:L100
- mxnet.symbol.amp_cast(data=None, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Cast function between low precision float/FP32 used by AMP.
It casts only between low precision float/FP32 and does not do anything for other types.
Defined in src/operator/tensor/amp_cast.cc:L125
- mxnet.symbol.amp_multicast(*data, **kwargs)¶
Cast function used by AMP, that casts its inputs to the common widest type.
It casts only between low precision float/FP32 and does not do anything for other types.
Defined in src/operator/tensor/amp_cast.cc:L169
- Parameters
data (Symbol[]) – Weights
num_outputs (int, required) – Number of input/output pairs to be cast to the widest type.
cast_narrow (boolean, optional, default=0) – Whether to cast to the narrowest type
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.arccos(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise inverse cosine of the input array.
The input should be in range [-1, 1]. The output is in the closed interval \([0, \pi]\).
\[arccos([-1, -.707, 0, .707, 1]) = [\pi, 3\pi/4, \pi/2, \pi/4, 0]\]
The storage type of arccos output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L233
- mxnet.symbol.arccosh(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns the inverse hyperbolic cosine of the input array, computed element-wise.
The storage type of arccosh output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L535
- mxnet.symbol.arcsin(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise inverse sine of the input array.
The input should be in the range [-1, 1]. The output is in the closed interval of [\(-\pi/2\), \(\pi/2\)].
\[arcsin([-1, -.707, 0, .707, 1]) = [-\pi/2, -\pi/4, 0, \pi/4, \pi/2]\]
The storage type of arcsin output depends upon the input storage type:
arcsin(default) = default
arcsin(row_sparse) = row_sparse
arcsin(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L187
- mxnet.symbol.arcsinh(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns the inverse hyperbolic sine of the input array, computed element-wise.
The storage type of arcsinh output depends upon the input storage type:
arcsinh(default) = default
arcsinh(row_sparse) = row_sparse
arcsinh(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L494
- mxnet.symbol.arctan(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise inverse tangent of the input array.
The output is in the closed interval \([-\pi/2, \pi/2]\).
\[arctan([-1, 0, 1]) = [-\pi/4, 0, \pi/4]\]
The storage type of arctan output depends upon the input storage type:
arctan(default) = default
arctan(row_sparse) = row_sparse
arctan(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L282
- mxnet.symbol.arctanh(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns the inverse hyperbolic tangent of the input array, computed element-wise.
The storage type of arctanh output depends upon the input storage type:
arctanh(default) = default
arctanh(row_sparse) = row_sparse
arctanh(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L579
- mxnet.symbol.argmax(data=None, axis=_Null, keepdims=_Null, name=None, attr=None, out=None, **kwargs)¶
Returns indices of the maximum values along an axis.
In the case of multiple occurrences of maximum values, the indices corresponding to the first occurrence are returned.
Examples:
x = [[ 0., 1., 2.],
     [ 3., 4., 5.]]
// argmax along axis 0
argmax(x, axis=0) = [ 1., 1., 1.]
// argmax along axis 1
argmax(x, axis=1) = [ 2., 2.]
// argmax along axis 1 keeping same dims as an input array
argmax(x, axis=1, keepdims=True) = [[ 2.], [ 2.]]
Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L51
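The same example through the symbolic API might look like this (a minimal sketch; evaluating via Symbol.eval is an illustrative choice):
import mxnet as mx

x = mx.sym.Variable('x')
idx = mx.sym.argmax(x, axis=1)
out = idx.eval(ctx=mx.cpu(), x=mx.nd.array([[0., 1., 2.], [3., 4., 5.]]))
print(out[0].asnumpy())  # [2. 2.]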
- Parameters
data (Symbol) – The input
axis (int or None, optional, default='None') – The axis along which to perform the reduction. Negative values means indexing from right to left.
Requires axis to be set as int, because global reduction is not supported yet.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axis is left in the result as dimension with size one.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.argmax_channel(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns argmax indices of each channel from the input array.
The result will be an NDArray of shape (num_channel,).
In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.
Examples:
x = [[ 0., 1., 2.],
     [ 3., 4., 5.]]
argmax_channel(x) = [ 2., 2.]
Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L96
- mxnet.symbol.argmin(data=None, axis=_Null, keepdims=_Null, name=None, attr=None, out=None, **kwargs)¶
Returns indices of the minimum values along an axis.
In the case of multiple occurrences of minimum values, the indices corresponding to the first occurrence are returned.
Examples:
x = [[ 0., 1., 2.],
     [ 3., 4., 5.]]
// argmin along axis 0
argmin(x, axis=0) = [ 0., 0., 0.]
// argmin along axis 1
argmin(x, axis=1) = [ 0., 0.]
// argmin along axis 1 keeping same dims as an input array
argmin(x, axis=1, keepdims=True) = [[ 0.], [ 0.]]
Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L76
- Parameters
data (Symbol) – The input
axis (int or None, optional, default='None') – The axis along which to perform the reduction. Negative values means indexing from right to left.
Requires axis to be set as int, because global reduction is not supported yet.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axis is left in the result as dimension with size one.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.argsort(data=None, axis=_Null, is_ascend=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Returns the indices that would sort an input array along the given axis.
This function performs sorting along the given axis and returns an array of indices having the same shape as the input array that index data in sorted order.
Examples:
x = [[ 0.3, 0.2, 0.4],
     [ 0.1, 0.3, 0.2]]
// sort along axis -1
argsort(x) = [[ 1., 0., 2.],
              [ 0., 2., 1.]]
// sort along axis 0
argsort(x, axis=0) = [[ 1., 0., 1.],
                      [ 0., 1., 0.]]
// flatten and then sort
argsort(x, axis=None) = [ 3., 1., 5., 0., 4., 2.]
Defined in src/operator/tensor/ordering_op.cc:L184
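A minimal symbolic sketch of the first example above (the variable name is a placeholder):
import mxnet as mx

x = mx.sym.Variable('x')
idx = mx.sym.argsort(x, axis=-1, is_ascend=True)
out = idx.eval(ctx=mx.cpu(),
               x=mx.nd.array([[0.3, 0.2, 0.4], [0.1, 0.3, 0.2]]))
print(out[0].asnumpy())  # [[1. 0. 2.] [0. 2. 1.]]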
- Parameters
data (Symbol) – The input array
axis (int or None, optional, default='-1') – Axis along which to sort the input tensor. If not given, the flattened array is used. Default is -1.
is_ascend (boolean, optional, default=1) – Whether to sort in ascending or descending order.
dtype ({'float16', 'float32', 'float64', 'int32', 'int64', 'uint8'},optional, default='float32') – DType of the output indices. It is only valid when ret_typ is “indices” or “both”. An error will be raised if the selected data type cannot precisely represent the indices.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.batch_dot(lhs=None, rhs=None, transpose_a=_Null, transpose_b=_Null, forward_stype=_Null, name=None, attr=None, out=None, **kwargs)¶
Batchwise dot product.
batch_dot is used to compute the dot product of x and y when x and y are data in batch, namely N-D (N >= 3) arrays in shape of (B0, …, B_i, :, :).
For example, given x with shape (B_0, …, B_i, N, M) and y with shape (B_0, …, B_i, M, K), the result array will have shape (B_0, …, B_i, N, K), which is computed by:
batch_dot(x,y)[b_0, ..., b_i, :, :] = dot(x[b_0, ..., b_i, :, :], y[b_0, ..., b_i, :, :])
Defined in src/operator/tensor/dot.cc:L127
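For example, a minimal sketch with a batch of independent matrix products (the shapes are assumptions):
import mxnet as mx

x = mx.sym.Variable('x')
y = mx.sym.Variable('y')
z = mx.sym.batch_dot(x, y)
# Batch of 4 independent (2x3) @ (3x5) products gives shape (4, 2, 5).
out = z.eval(ctx=mx.cpu(), x=mx.nd.ones((4, 2, 3)), y=mx.nd.ones((4, 3, 5)))
print(out[0].shape)  # (4, 2, 5)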
- Parameters
lhs (Symbol) – The first input
rhs (Symbol) – The second input
transpose_a (boolean, optional, default=0) – If true then transpose the first input before dot.
transpose_b (boolean, optional, default=0) – If true then transpose the second input before dot.
forward_stype ({None, 'csr', 'default', 'row_sparse'}, optional, default='None') – The desired storage type of the forward output given by the user. If the combination of input storage types and this hint does not match any implemented ones, the dot operator will perform a fallback operation and still produce an output of the desired storage type.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.batch_take(a=None, indices=None, name=None, attr=None, out=None, **kwargs)¶
Takes elements from a data batch.
Note
batch_take is deprecated. Use pick instead.
Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be an output array of shape (i0,) with:
output[i] = input[i, indices[i]]
Examples:
x = [[ 1., 2.],
     [ 3., 4.],
     [ 5., 6.]]
// takes elements with specified indices
batch_take(x, [0,1,0]) = [ 1. 4. 5.]
Defined in src/operator/tensor/indexing_op.cc:L835
- mxnet.symbol.broadcast_add(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise sum of the input arrays with broadcasting.
broadcast_plus is an alias to the function broadcast_add.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_add(x, y) = [[ 1., 1., 1.], [ 2., 2., 2.]]
broadcast_plus(x, y) = [[ 1., 1., 1.], [ 2., 2., 2.]]
Supported sparse operations:
broadcast_add(csr, dense(1D)) = dense
broadcast_add(dense(1D), csr) = dense
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L57
- mxnet.symbol.broadcast_axes(data=None, axis=_Null, size=_Null, name=None, attr=None, out=None, **kwargs)¶
Broadcasts the input array over particular axes.
Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to (2,8,3,9). Elements will be duplicated on the broadcasted axes.
broadcast_axes is an alias to the function broadcast_axis.
Example:
// given x of shape (1,2,1)
x = [[[ 1.], [ 2.]]]
// broadcast x on axis 2
broadcast_axis(x, axis=2, size=3) = [[[ 1., 1., 1.], [ 2., 2., 2.]]]
// broadcast x on axes 0 and 2
broadcast_axis(x, axis=(0,2), size=(2,3)) = [[[ 1., 1., 1.], [ 2., 2., 2.]], [[ 1., 1., 1.], [ 2., 2., 2.]]]
Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L92
- Parameters
data (Symbol) – The input
axis (Shape(tuple), optional, default=[]) – The axes to perform the broadcasting.
size (Shape(tuple), optional, default=[]) – Target sizes of the broadcasting axes.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.broadcast_axis(data=None, axis=_Null, size=_Null, name=None, attr=None, out=None, **kwargs)¶
Broadcasts the input array over particular axes.
Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to (2,8,3,9). Elements will be duplicated on the broadcasted axes.
broadcast_axes is an alias to the function broadcast_axis.
Example:
// given x of shape (1,2,1)
x = [[[ 1.], [ 2.]]]
// broadcast x on axis 2
broadcast_axis(x, axis=2, size=3) = [[[ 1., 1., 1.], [ 2., 2., 2.]]]
// broadcast x on axes 0 and 2
broadcast_axis(x, axis=(0,2), size=(2,3)) = [[[ 1., 1., 1.], [ 2., 2., 2.]], [[ 1., 1., 1.], [ 2., 2., 2.]]]
Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L92
- Parameters
data (Symbol) – The input
axis (Shape(tuple), optional, default=[]) – The axes to perform the broadcasting.
size (Shape(tuple), optional, default=[]) – Target sizes of the broadcasting axes.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.broadcast_div(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise division of the input arrays with broadcasting.
Example:
x = [[ 6., 6., 6.], [ 6., 6., 6.]]
y = [[ 2.], [ 3.]]
broadcast_div(x, y) = [[ 3., 3., 3.], [ 2., 2., 2.]]
Supported sparse operations:
broadcast_div(csr, dense(1D)) = csr
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L186
- mxnet.symbol.broadcast_equal(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise equal to (==) comparison operation with broadcasting.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_equal(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L45
- mxnet.symbol.broadcast_greater(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise greater than (>) comparison operation with broadcasting.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_greater(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L81
- mxnet.symbol.broadcast_greater_equal(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise greater than or equal to (>=) comparison operation with broadcasting.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_greater_equal(x, y) = [[ 1., 1., 1.], [ 1., 1., 1.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L99
- mxnet.symbol.broadcast_hypot(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the hypotenuse of a right angled triangle, given its "legs" with broadcasting.
It is equivalent to doing \(sqrt(x_1^2 + x_2^2)\).
Example:
x = [[ 3., 3., 3.]]
y = [[ 4.], [ 4.]]
broadcast_hypot(x, y) = [[ 5., 5., 5.], [ 5., 5., 5.]]
z = [[ 0.], [ 4.]]
broadcast_hypot(x, z) = [[ 3., 3., 3.], [ 5., 5., 5.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L157
- mxnet.symbol.broadcast_lesser(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise lesser than (<) comparison operation with broadcasting.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_lesser(x, y) = [[ 0., 0., 0.], [ 0., 0., 0.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L117
- mxnet.symbol.broadcast_lesser_equal(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise lesser than or equal to (<=) comparison operation with broadcasting.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_lesser_equal(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L135
- mxnet.symbol.broadcast_like(lhs=None, rhs=None, lhs_axes=_Null, rhs_axes=_Null, name=None, attr=None, out=None, **kwargs)¶
Broadcasts lhs to have the same shape as rhs.
Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations with arrays of different shapes efficiently without creating multiple copies of arrays. Also see Broadcasting for more explanation.
Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to (2,8,3,9). Elements will be duplicated on the broadcasted axes.
For example:
broadcast_like([[1,2,3]], [[5,6,7],[7,8,9]]) = [[ 1., 2., 3.], [ 1., 2., 3.]]
broadcast_like([9], [1,2,3,4,5], lhs_axes=(0,), rhs_axes=(-1,)) = [9,9,9,9,9]
Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L178
- Parameters
lhs (Symbol) – First input.
rhs (Symbol) – Second input.
lhs_axes (Shape or None, optional, default=None) – Axes to perform broadcast on in the first input array
rhs_axes (Shape or None, optional, default=None) – Axes to copy from the second input array
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.broadcast_logical_and(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise logical and with broadcasting.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_logical_and(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L153
- mxnet.symbol.broadcast_logical_or(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise logical or with broadcasting.
Example:
x = [[ 1., 1., 0.], [ 1., 1., 0.]]
y = [[ 1.], [ 0.]]
broadcast_logical_or(x, y) = [[ 1., 1., 1.], [ 1., 1., 0.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L171
- mxnet.symbol.broadcast_logical_xor(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise logical xor with broadcasting.
Example:
x = [[ 1., 1., 0.], [ 1., 1., 0.]]
y = [[ 1.], [ 0.]]
broadcast_logical_xor(x, y) = [[ 0., 0., 1.], [ 1., 1., 0.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L189
- mxnet.symbol.broadcast_maximum(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise maximum of the input arrays with broadcasting.
This function compares two input arrays and returns a new array having the element-wise maxima.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_maximum(x, y) = [[ 1., 1., 1.], [ 1., 1., 1.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L80
- mxnet.symbol.broadcast_minimum(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise minimum of the input arrays with broadcasting.
This function compares two input arrays and returns a new array having the element-wise minima.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_minimum(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L116
- mxnet.symbol.broadcast_minus(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise difference of the input arrays with broadcasting.
broadcast_minus is an alias to the function broadcast_sub.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_sub(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]
broadcast_minus(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]
Supported sparse operations:
broadcast_sub/minus(csr, dense(1D)) = dense
broadcast_sub/minus(dense(1D), csr) = dense
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L105
- mxnet.symbol.broadcast_mod(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise modulo of the input arrays with broadcasting.
Example:
x = [[ 8., 8., 8.], [ 8., 8., 8.]]
y = [[ 2.], [ 3.]]
broadcast_mod(x, y) = [[ 0., 0., 0.], [ 2., 2., 2.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L221
- mxnet.symbol.broadcast_mul(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise product of the input arrays with broadcasting.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_mul(x, y) = [[ 0., 0., 0.], [ 1., 1., 1.]]
Supported sparse operations:
broadcast_mul(csr, dense(1D)) = csr
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L145
- mxnet.symbol.broadcast_not_equal(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns the result of element-wise not equal to (!=) comparison operation with broadcasting.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_not_equal(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L63
- mxnet.symbol.broadcast_plus(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise sum of the input arrays with broadcasting.
broadcast_plus is an alias to the function broadcast_add.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_add(x, y) = [[ 1., 1., 1.], [ 2., 2., 2.]]
broadcast_plus(x, y) = [[ 1., 1., 1.], [ 2., 2., 2.]]
Supported sparse operations:
broadcast_add(csr, dense(1D)) = dense
broadcast_add(dense(1D), csr) = dense
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L57
- mxnet.symbol.broadcast_power(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns result of first array elements raised to powers from second array, element-wise with broadcasting.
Example:
x = [[ 2., 2., 2.], [ 2., 2., 2.]]
y = [[ 1.], [ 2.]]
broadcast_power(x, y) = [[ 2., 2., 2.], [ 4., 4., 4.]]
Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L44
- mxnet.symbol.broadcast_sub(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise difference of the input arrays with broadcasting.
broadcast_minus is an alias to the function broadcast_sub.
Example:
x = [[ 1., 1., 1.], [ 1., 1., 1.]]
y = [[ 0.], [ 1.]]
broadcast_sub(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]
broadcast_minus(x, y) = [[ 1., 1., 1.], [ 0., 0., 0.]]
Supported sparse operations:
broadcast_sub/minus(csr, dense(1D)) = dense
broadcast_sub/minus(dense(1D), csr) = dense
Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L105
- mxnet.symbol.broadcast_to(data=None, shape=_Null, name=None, attr=None, out=None, **kwargs)¶
Broadcasts the input array to a new shape.
Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations with arrays of different shapes efficiently without creating multiple copies of arrays. Also see Broadcasting for more explanation.
Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to (2,8,3,9). Elements will be duplicated on the broadcasted axes.
For example:
broadcast_to([[1,2,3]], shape=(2,3)) = [[ 1., 2., 3.], [ 1., 2., 3.]]
The dimension which you do not want to change can also be kept as 0 which means copy the original value. So with shape=(2,0), we will obtain the same result as in the above example.
Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L116
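A minimal symbolic sketch of the example above (the variable name is a placeholder):
import mxnet as mx

a = mx.sym.Variable('a')
b = mx.sym.broadcast_to(a, shape=(2, 3))
out = b.eval(ctx=mx.cpu(), a=mx.nd.array([[1., 2., 3.]]))
print(out[0].asnumpy())  # [[1. 2. 3.] [1. 2. 3.]]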
- Parameters
data (Symbol) – The input
shape (Shape(tuple), optional, default=[]) – The shape of the desired array. We can set the dim to zero if it’s same as the original. E.g A = broadcast_to(B, shape=(10, 0, 0)) has the same meaning as A = broadcast_axis(B, axis=0, size=10).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.cast(data=None, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Casts all elements of the input to a new type.
Note
Cast is deprecated. Use cast instead.
Example:
cast([0.9, 1.3], dtype='int32') = [0, 1]
cast([1e20, 11.1], dtype='float16') = [inf, 11.09375]
cast([300, 11.1, 10.9, -1, -3], dtype='uint8') = [44, 11, 10, 255, 253]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L664
- mxnet.symbol.cast_storage(data=None, stype=_Null, name=None, attr=None, out=None, **kwargs)¶
Casts tensor storage type to the new type.
When an NDArray with default storage type is cast to csr or row_sparse storage, the result is compact, which means:
for csr, zero values will not be retained
for row_sparse, row slices of all zeros will not be retained
The storage type of cast_storage output depends on stype parameter:
cast_storage(csr, 'default') = default
cast_storage(row_sparse, 'default') = default
cast_storage(default, 'csr') = csr
cast_storage(default, 'row_sparse') = row_sparse
cast_storage(csr, 'csr') = csr
cast_storage(row_sparse, 'row_sparse') = row_sparse
Example:
dense = [[ 0., 1., 0.],
         [ 2., 0., 3.],
         [ 0., 0., 0.],
         [ 0., 0., 0.]]
# cast to row_sparse storage type
rsp = cast_storage(dense, 'row_sparse')
rsp.indices = [0, 1]
rsp.values = [[ 0., 1., 0.], [ 2., 0., 3.]]
# cast to csr storage type
csr = cast_storage(dense, 'csr')
csr.indices = [1, 0, 2]
csr.values = [ 1., 2., 3.]
csr.indptr = [0, 1, 3, 3, 3]
Defined in src/operator/tensor/cast_storage.cc:L71
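A minimal sketch of casting a dense input to csr storage through the symbolic API (evaluating via Symbol.eval is an illustrative assumption):
import mxnet as mx

data = mx.sym.Variable('data')
to_csr = mx.sym.cast_storage(data, stype='csr')
dense = mx.nd.array([[0., 1., 0.], [2., 0., 3.], [0., 0., 0.], [0., 0., 0.]])
out = to_csr.eval(ctx=mx.cpu(), data=dense)
print(out[0].stype)  # 'csr'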
- mxnet.symbol.cbrt(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise cube-root value of the input.
\[cbrt(x) = \sqrt[3]{x}\]
Example:
cbrt([1, 8, -125]) = [1, 2, -5]
The storage type of cbrt output depends upon the input storage type:
cbrt(default) = default
cbrt(row_sparse) = row_sparse
cbrt(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L270
- mxnet.symbol.ceil(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise ceiling of the input.
The ceil of the scalar x is the smallest integer i, such that i >= x.
Example:
ceil([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1., 2., 2., 3.]
The storage type of ceil output depends upon the input storage type:
ceil(default) = default
ceil(row_sparse) = row_sparse
ceil(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L817
- mxnet.symbol.choose_element_0index(data=None, index=None, axis=_Null, keepdims=_Null, mode=_Null, name=None, attr=None, out=None, **kwargs)¶
Picks elements from an input array according to the input indices along the given axis.
Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be an output array of shape (i0,) with:
output[i] = input[i, indices[i]]
By default, if any index mentioned is too large, it is replaced by the index that addresses the last element along an axis (the clip mode).
This function supports n-dimensional input and (n-1)-dimensional indices arrays.
Examples:
x = [[ 1., 2.],
     [ 3., 4.],
     [ 5., 6.]]
// picks elements with specified indices along axis 0
pick(x, y=[0,1], 0) = [ 1., 4.]
// picks elements with specified indices along axis 1
pick(x, y=[0,1,0], 1) = [ 1., 4., 5.]
// picks elements with specified indices along axis 1 using 'wrap' mode
// to place indices that would normally be out of bounds
pick(x, y=[2,-1,-2], 1, mode='wrap') = [ 1., 4., 5.]
y = [[ 1.], [ 0.], [ 2.]]
// picks elements with specified indices along axis 1 and dims are maintained
pick(x, y, 1, keepdims=True) = [[ 2.], [ 3.], [ 6.]]
Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L150
- Parameters
data (Symbol) – The input array
index (Symbol) – The index array
axis (int or None, optional, default='-1') – int or None. The axis to picking the elements. Negative values means indexing from right to left. If is None, the elements in the index w.r.t the flattened input will be picked.
keepdims (boolean, optional, default=0) – If true, the axis where we pick the elements is left in the result as dimension with size one.
mode ({'clip', 'wrap'},optional, default='clip') – Specify how out-of-bound indices behave. Default is “clip”. “clip” means clip to the range. So, if all indices mentioned are too large, they are replaced by the index that addresses the last element along an axis. “wrap” means to wrap around.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.clip(data=None, a_min=_Null, a_max=_Null, name=None, attr=None, out=None, **kwargs)¶
Clips (limits) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. Clipping x between a_min and a_max would be:
\[clip(x, a\_min, a\_max) = \max(\min(x, a\_max), a\_min)\]
Example:
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
clip(x,1,8) = [ 1., 1., 2., 3., 4., 5., 6., 7., 8., 8.]
The storage type of clip output depends on storage types of inputs and the a_min, a_max parameter values:
clip(default) = default
clip(row_sparse, a_min <= 0, a_max >= 0) = row_sparse
clip(csr, a_min <= 0, a_max >= 0) = csr
clip(row_sparse, a_min < 0, a_max < 0) = default
clip(row_sparse, a_min > 0, a_max > 0) = default
clip(csr, a_min < 0, a_max < 0) = csr
clip(csr, a_min > 0, a_max > 0) = csr
Defined in src/operator/tensor/matrix_op.cc:L676
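A minimal symbolic sketch of the example above:
import mxnet as mx

x = mx.sym.Variable('x')
y = mx.sym.clip(data=x, a_min=1.0, a_max=8.0)
out = y.eval(ctx=mx.cpu(), x=mx.nd.arange(10))
print(out[0].asnumpy())  # [1. 1. 2. 3. 4. 5. 6. 7. 8. 8.]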
- mxnet.symbol.col2im(data=None, output_size=_Null, kernel=_Null, stride=_Null, dilate=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)¶
Combines the output column matrix of im2col back into an image array.
Like im2col, this operator is also used in the vanilla convolution implementation. Despite the name, col2im is not the reverse operation of im2col. Since there may be overlaps between neighbouring sliding blocks, the column elements cannot be directly put back into the image. Instead, they are accumulated (i.e., summed) in the input image just like the gradient computation, so col2im is the gradient of im2col and vice versa.
Using the notation in im2col, given an input column array of shape \((N, C \times \prod(\text{kernel}), W)\), this operator accumulates the column elements into an output array of shape \((N, C, \text{output_size}[0], \text{output_size}[1], \dots)\). Only 1-D, 2-D and 3-D spatial dimensions are supported in this operator.
Defined in src/operator/nn/im2col.cc:L181
- Parameters
data (Symbol) – Input array to combine sliding blocks.
output_size (Shape(tuple), required) – The spatial dimension of image array: (w,), (h, w) or (d, h, w).
kernel (Shape(tuple), required) – Sliding kernel size: (w,), (h, w) or (d, h, w).
stride (Shape(tuple), optional, default=[]) – The stride between adjacent sliding blocks in spatial dimension: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
dilate (Shape(tuple), optional, default=[]) – The spacing between adjacent kernel points: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
pad (Shape(tuple), optional, default=[]) – The zero-value padding size on both sides of spatial dimension: (w,), (h, w) or (d, h, w). Defaults to no padding.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.concat(*data, **kwargs)¶
Joins input arrays along a given axis.
Note
Concat is deprecated. Use concat instead.
The dimensions of the input arrays should be the same except the axis along which they will be concatenated. The dimension of the output array along the concatenated axis will be equal to the sum of the corresponding dimensions of the input arrays.
The storage type of concat output depends on storage types of inputs:
concat(csr, csr, …, csr, dim=0) = csr
otherwise, concat generates output with default storage
Example:
x = [[1,1],[2,2]]
y = [[3,3],[4,4],[5,5]]
z = [[6,6],[7,7],[8,8]]
concat(x,y,z,dim=0) = [[ 1., 1.],
                       [ 2., 2.],
                       [ 3., 3.],
                       [ 4., 4.],
                       [ 5., 5.],
                       [ 6., 6.],
                       [ 7., 7.],
                       [ 8., 8.]]
Note that you cannot concat x,y,z along dimension 1 since dimension 0 is not the same for all the input arrays.
concat(y,z,dim=1) = [[ 3., 3., 6., 6.],
                     [ 4., 4., 7., 7.],
                     [ 5., 5., 8., 8.]]
Defined in src/operator/nn/concat.cc:L384
This function supports a variable number of positional inputs.
- mxnet.symbol.cos(data=None, name=None, attr=None, out=None, **kwargs)¶
Computes the element-wise cosine of the input array.
The input should be in radians (\(2\pi\) rad equals 360 degrees).
\[cos([0, \pi/4, \pi/2]) = [1, 0.707, 0]\]
The storage type of cos output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L90
- mxnet.symbol.cosh(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns the hyperbolic cosine of the input array, computed element-wise.
\[cosh(x) = 0.5\times(exp(x) + exp(-x))\]
The storage type of cosh output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L409
- mxnet.symbol.crop(data=None, begin=_Null, end=_Null, step=_Null, name=None, attr=None, out=None, **kwargs)¶
Slices a region of the array.
Note
crop is deprecated. Use slice instead.
This function returns a sliced array between the indices given by begin and end with the corresponding step. For an input array of shape=(d_0, d_1, ..., d_n-1), a slice operation with begin=(b_0, b_1, ..., b_m-1), end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1), where m <= n, results in an array with the shape (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).
The resulting array's k-th dimension contains elements from the k-th dimension of the input array starting from index b_k (inclusive) with step s_k until reaching e_k (exclusive).
If the k-th elements are None in the sequence of begin, end, and step, the following rule is used to set default values: if s_k is None, set s_k=1. If s_k > 0, set b_k=0, e_k=d_k; else, set b_k=d_k-1, e_k=-1.
The storage type of slice output depends on storage types of inputs:
slice(csr) = csr
otherwise, slice generates output with default storage
Note
When input data storage type is csr, it only supports step=(), step=(None,), or step=(1,) to generate a csr output. For other step parameter values, it falls back to slicing a dense tensor.
Example:
x = [[ 1., 2., 3., 4.],
     [ 5., 6., 7., 8.],
     [ 9., 10., 11., 12.]]
slice(x, begin=(0,1), end=(2,4)) = [[ 2., 3., 4.],
                                    [ 6., 7., 8.]]
slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[9., 11.],
                                                          [5., 7.],
                                                          [1., 3.]]
Defined in src/operator/tensor/matrix_op.cc:L481
- Parameters
data (Symbol) – Source input
begin (Shape(tuple), required) – starting indices for the slice operation, supports negative indices.
end (Shape(tuple), required) – ending indices for the slice operation, supports negative indices.
step (Shape(tuple), optional, default=[]) – step for the slice operation, supports negative values.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.ctc_loss(data=None, label=None, data_lengths=None, label_lengths=None, use_data_lengths=_Null, use_label_lengths=_Null, blank_label=_Null, name=None, attr=None, out=None, **kwargs)¶
Connectionist Temporal Classification Loss.
Note
The existing alias contrib_CTCLoss is deprecated.
The shapes of the inputs and outputs:
data: (sequence_length, batch_size, alphabet_size)
label: (batch_size, label_sequence_length)
out: (batch_size)
The data tensor consists of sequences of activation vectors (without applying softmax), with the i-th channel in the last dimension corresponding to the i-th label for i between 0 and alphabet_size-1 (i.e. always 0-indexed). The alphabet size should include one additional value reserved for the blank label. When blank_label is "first", the 0-th channel is reserved for activation of the blank label; otherwise, if it is "last", the (alphabet_size-1)-th channel should be reserved for the blank label.
label is an index matrix of integers. When blank_label is "first", the value 0 is reserved for the blank label and should not be passed in this matrix. Otherwise, when blank_label is "last", the value (alphabet_size-1) is reserved for the blank label.
If a sequence of labels is shorter than label_sequence_length, use the special padding value at the end of the sequence to conform it to the correct length. The padding value is 0 when blank_label is "first", and -1 otherwise.
For example, suppose the vocabulary is [a, b, c], and in one batch we have three sequences 'ba', 'cbb', and 'abac'. When blank_label is "first", we can index the labels as {'a': 1, 'b': 2, 'c': 3}, and we reserve the 0-th channel for the blank label in the data tensor. The resulting label tensor should be padded to be:
[[2, 1, 0, 0], [3, 2, 2, 0], [1, 2, 1, 3]]
When blank_label is "last", we can index the labels as {'a': 0, 'b': 1, 'c': 2}, and we reserve the channel index 3 for the blank label in the data tensor. The resulting label tensor should be padded to be:
[[1, 0, -1, -1], [2, 1, 1, -1], [0, 1, 0, 2]]
out is a list of CTC loss values, one per example in the batch.
See Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks, A. Graves et al. for more information on the definition and the algorithm.
Defined in src/operator/nn/ctc_loss.cc:L100
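A minimal sketch of computing the loss on random activations (the shapes and label values are assumptions chosen to match the blank_label="first" example above):
import mxnet as mx

data = mx.sym.Variable('data')    # (sequence_length, batch_size, alphabet_size)
label = mx.sym.Variable('label')  # (batch_size, label_sequence_length)
loss = mx.sym.ctc_loss(data, label)
# 10 time steps, 2 examples, alphabet of 4 with index 0 reserved for blank.
out = loss.eval(ctx=mx.cpu(),
                data=mx.nd.random.uniform(shape=(10, 2, 4)),
                label=mx.nd.array([[2, 1, 0, 0], [3, 2, 2, 0]]))
print(out[0].shape)  # (2,), one CTC loss value per example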
- Parameters
data (Symbol) – Input ndarray
label (Symbol) – Ground-truth labels for the loss.
data_lengths (Symbol) – Lengths of data for each of the samples. Only required when use_data_lengths is true.
label_lengths (Symbol) – Lengths of labels for each of the samples. Only required when use_label_lengths is true.
use_data_lengths (boolean, optional, default=0) – Whether the data lengths are decided by data_lengths. If false, the lengths are equal to the max sequence length.
use_label_lengths (boolean, optional, default=0) – Whether the label lengths are decided by label_lengths, or derived from padding_mask. If false, the lengths are derived from the first occurrence of the value of padding_mask. The value of padding_mask is 0 when the first CTC label is reserved for blank, and -1 when the last label is reserved for blank. See blank_label.
blank_label ({'first', 'last'}, optional, default='first') – Set the label that is reserved for blank label. If "first", the 0-th label is reserved, label values for tokens in the vocabulary are between 1 and alphabet_size-1, and the padding mask is -1. If "last", the last label value alphabet_size-1 is reserved for blank label instead, label values for tokens in the vocabulary are between 0 and alphabet_size-2, and the padding mask is 0.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.cumsum(a=None, axis=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Return the cumulative sum of the elements along a given axis.
Defined in src/operator/numpy/np_cumsum.cc:L70
- Parameters
a (Symbol) – Input ndarray
axis (int or None, optional, default='None') – Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.
dtype ({None, 'float16', 'float32', 'float64', 'int32', 'int64', 'int8'},optional, default='None') – Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the dtype of a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.degrees(data=None, name=None, attr=None, out=None, **kwargs)¶
Converts each element of the input array from radians to degrees.
\[degrees([0, \pi/2, \pi, 3\pi/2, 2\pi]) = [0, 90, 180, 270, 360]\]
The storage type of degrees output depends upon the input storage type:
degrees(default) = default
degrees(row_sparse) = row_sparse
degrees(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L332
- mxnet.symbol.depth_to_space(data=None, block_size=_Null, name=None, attr=None, out=None, **kwargs)¶
Rearranges (permutes) data from depth into blocks of spatial data. Similar to the ONNX DepthToSpace operator: https://github.com/onnx/onnx/blob/master/docs/Operators.md#DepthToSpace. The output is a new tensor where the values from the depth dimension are moved in spatial blocks to the height and width dimensions. The reverse of this operation is space_to_depth.
\[\begin{gather*} x' = reshape(x, [N, block\_size, block\_size, C / (block\_size ^ 2), H, W]) \\ x'' = transpose(x', [0, 3, 4, 1, 5, 2]) \\ y = reshape(x'', [N, C / (block\_size ^ 2), H * block\_size, W * block\_size]) \end{gather*}\]
where \(x\) is an input tensor with default layout \([N, C, H, W]\): [batch, channels, height, width], and \(y\) is the output tensor with layout \([N, C / (block\_size ^ 2), H * block\_size, W * block\_size]\).
Example:
x = [[[[0, 1, 2], [3, 4, 5]],
      [[6, 7, 8], [9, 10, 11]],
      [[12, 13, 14], [15, 16, 17]],
      [[18, 19, 20], [21, 22, 23]]]]
depth_to_space(x, 2) = [[[[0, 6, 1, 7, 2, 8],
                          [12, 18, 13, 19, 14, 20],
                          [3, 9, 4, 10, 5, 11],
                          [15, 21, 16, 22, 17, 23]]]]
Defined in src/operator/tensor/matrix_op.cc:L971
- mxnet.symbol.diag(data=None, k=_Null, axis1=_Null, axis2=_Null, name=None, attr=None, out=None, **kwargs)¶
Extracts a diagonal or constructs a diagonal array.
diag's behavior depends on the input array dimensions:
1-D arrays: constructs a 2-D array with the input as its diagonal, all other elements are zero.
N-D arrays: extracts the diagonals of the sub-arrays with axes specified by axis1 and axis2. The output shape is decided by removing the axes numbered axis1 and axis2 from the input shape and appending to the result a new axis with the size of the diagonals in question.
For example, when the input shape is (2, 3, 4, 5), axis1 and axis2 are 0 and 2 respectively, and k is 0, the resulting shape would be (3, 5, 2).
Examples:
x = [[1, 2, 3], [4, 5, 6]]
diag(x) = [1, 5]
diag(x, k=1) = [2, 6]
diag(x, k=-1) = [4]
x = [1, 2, 3]
diag(x) = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
diag(x, k=1) = [[0, 1, 0], [0, 0, 2], [0, 0, 0]]
diag(x, k=-1) = [[0, 0, 0], [1, 0, 0], [0, 2, 0]]
x = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
diag(x) = [[1, 7], [2, 8]]
diag(x, k=1) = [[3], [4]]
diag(x, axis1=-2, axis2=-1) = [[1, 4], [5, 8]]
Defined in src/operator/tensor/diag_op.cc:L86
- Parameters
data (Symbol) – Input ndarray
k (int, optional, default='0') – Diagonal in question. The default is 0. Use k>0 for diagonals above the main diagonal, and k<0 for diagonals below the main diagonal. If input has shape (S0 S1) k must be between -S0 and S1
axis1 (int, optional, default='0') – The first axis of the sub-arrays of interest. Ignored when the input is a 1-D array.
axis2 (int, optional, default='1') – The second axis of the sub-arrays of interest. Ignored when the input is a 1-D array.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.dot(lhs=None, rhs=None, transpose_a=_Null, transpose_b=_Null, forward_stype=_Null, name=None, attr=None, out=None, **kwargs)¶
Dot product of two arrays.
dot's behavior depends on the input array dimensions:
1-D arrays: inner product of vectors
2-D arrays: matrix multiplication
N-D arrays: a sum product over the last axis of the first input and the first axis of the second input
For example, given 3-D x with shape (n,m,k) and y with shape (k,r,s), the result array will have shape (n,m,r,s). It is computed by:
dot(x,y)[i,j,a,b] = sum(x[i,j,:]*y[:,a,b])
Example:
x = reshape([0,1,2,3,4,5,6,7], shape=(2,2,2))
y = reshape([7,6,5,4,3,2,1,0], shape=(2,2,2))
dot(x,y)[0,0,1,1] = 0
sum(x[0,0,:]*y[:,1,1]) = 0
The storage type of dot output depends on storage types of inputs, the transpose option, and the forward_stype option for output storage type. Implemented sparse operations include:
dot(default, default, transpose_a=True/False, transpose_b=True/False) = default
dot(csr, default, transpose_a=True) = default
dot(csr, default, transpose_a=True) = row_sparse
dot(csr, default) = default
dot(csr, row_sparse) = default
dot(default, csr) = csr (CPU only)
dot(default, csr, forward_stype='default') = default
dot(default, csr, transpose_b=True, forward_stype='default') = default
If the combination of input storage types and forward_stype does not match any of the above patterns, dot will fall back and generate output with default storage.
Note
If the storage type of the lhs is “csr”, the storage type of gradient w.r.t rhs will be “row_sparse”. Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. Note that by default lazy updates is turned on, which may perform differently from standard updates. For more details, please check the Optimization API at: https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
Defined in src/operator/tensor/dot.cc:L77
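A minimal symbolic sketch of a 2-D matrix multiplication (variable names are placeholders):
import mxnet as mx

x = mx.sym.Variable('x')
y = mx.sym.Variable('y')
z = mx.sym.dot(x, y)
out = z.eval(ctx=mx.cpu(),
             x=mx.nd.array([[1., 2.], [3., 4.]]),
             y=mx.nd.array([[5., 6.], [7., 8.]]))
print(out[0].asnumpy())  # [[19. 22.] [43. 50.]]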
- Parameters
lhs (Symbol) – The first input
rhs (Symbol) – The second input
transpose_a (boolean, optional, default=0) – If true then transpose the first input before dot.
transpose_b (boolean, optional, default=0) – If true then transpose the second input before dot.
forward_stype ({None, 'csr', 'default', 'row_sparse'}, optional, default='None') – The desired storage type of the forward output given by the user. If the combination of input storage types and this hint does not match any implemented ones, the dot operator will perform a fallback operation and still produce an output of the desired storage type.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.elemwise_add(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Adds arguments element-wise.
The storage type of elemwise_add output depends on storage types of inputs:
elemwise_add(row_sparse, row_sparse) = row_sparse
elemwise_add(csr, csr) = csr
elemwise_add(default, csr) = default
elemwise_add(csr, default) = default
elemwise_add(default, rsp) = default
elemwise_add(rsp, default) = default
otherwise, elemwise_add generates output with default storage
- mxnet.symbol.elemwise_div(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Divides arguments element-wise.
The storage type of elemwise_div output is always dense.
- mxnet.symbol.elemwise_mul(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Multiplies arguments element-wise.
The storage type of elemwise_mul output depends on storage types of inputs:
elemwise_mul(default, default) = default
elemwise_mul(row_sparse, row_sparse) = row_sparse
elemwise_mul(default, row_sparse) = row_sparse
elemwise_mul(row_sparse, default) = row_sparse
elemwise_mul(csr, csr) = csr
otherwise, elemwise_mul generates output with default storage
- mxnet.symbol.elemwise_sub(lhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Subtracts arguments element-wise.
The storage type of elemwise_sub output depends on storage types of inputs:
elemwise_sub(row_sparse, row_sparse) = row_sparse
elemwise_sub(csr, csr) = csr
elemwise_sub(default, csr) = default
elemwise_sub(csr, default) = default
elemwise_sub(default, rsp) = default
elemwise_sub(rsp, default) = default
otherwise, elemwise_sub generates output with default storage
- mxnet.symbol.erf(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise gauss error function of the input.
Example:
erf([0, -1., 10.]) = [0., -0.8427, 1.]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L886
- mxnet.symbol.erfinv(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise inverse gauss error function of the input.
Example:
erfinv([0., 0.5, -1.]) = [0., 0.4769, -inf]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L908
- mxnet.symbol.exp(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise exponential value of the input.
\[exp(x) = e^x \approx 2.718^x\]
Example:
exp([0, 1, 2]) = [1., 2.71828175, 7.38905621]
The storage type of exp output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L64
- mxnet.symbol.expand_dims(data=None, axis=_Null, name=None, attr=None, out=None, **kwargs)¶
Inserts a new axis of size 1 into the array shape.
For example, given x with shape (2,3,4), expand_dims(x, axis=1) will return a new array with shape (2,1,3,4).
Defined in src/operator/tensor/matrix_op.cc:L394
- Parameters
data (Symbol) – Source input
axis (int, required) – Position where new axis is to be inserted. Suppose that the input NDArray’s dimension is ndim, the range of the inserted axis is [-ndim, ndim]
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.expm1(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns exp(x) - 1 computed element-wise on the input.
This function provides greater precision than exp(x) - 1 for small values of x.
The storage type of expm1 output depends upon the input storage type:
expm1(default) = default
expm1(row_sparse) = row_sparse
expm1(csr) = csr
- mxnet.symbol.fill_element_0index(lhs=None, mhs=None, rhs=None, name=None, attr=None, out=None, **kwargs)¶
Fill one element of each line (row for Python, column for R/Julia) in lhs according to the index indicated by rhs and the values indicated by mhs. This function assumes rhs uses a 0-based index.
- mxnet.symbol.fix(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise rounded value to the nearest integer towards zero of the input.
Example:
fix([-2.1, -1.9, 1.9, 2.1]) = [-2., -1., 1., 2.]
The storage type of fix output depends upon the input storage type:
fix(default) = default
fix(row_sparse) = row_sparse
fix(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L874
- mxnet.symbol.flatten(data=None, name=None, attr=None, out=None, **kwargs)¶
Flattens the input array into a 2-D array by collapsing the higher dimensions.
Note
Flatten is deprecated. Use flatten instead.
For an input array with shape (d1, d2, ..., dk), the flatten operation reshapes the input array into an output array of shape (d1, d2*...*dk). Note that the behavior of this function is different from numpy.ndarray.flatten, which behaves similar to mxnet.ndarray.reshape((-1,)).
Example:
x = [[ [1,2,3], [4,5,6], [7,8,9] ], [ [1,2,3], [4,5,6], [7,8,9] ]]
flatten(x) = [[ 1., 2., 3., 4., 5., 6., 7., 8., 9.],
              [ 1., 2., 3., 4., 5., 6., 7., 8., 9.]]
Defined in src/operator/tensor/matrix_op.cc:L249
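A minimal sketch using shape inference to confirm the collapse (the variable name is a placeholder):
import mxnet as mx

x = mx.sym.Variable('x')
y = mx.sym.flatten(x)
# Shape inference confirms the collapse: (2,3,3) -> (2,9).
_, out_shapes, _ = y.infer_shape(x=(2, 3, 3))
print(out_shapes)  # [(2, 9)]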
- mxnet.symbol.flip(data=None, axis=_Null, name=None, attr=None, out=None, **kwargs)¶
Reverses the order of elements along the given axis while preserving array shape.
Note: reverse and flip are equivalent. We use reverse in the following examples.
Examples:
x = [[ 0., 1., 2., 3., 4.],
     [ 5., 6., 7., 8., 9.]]
reverse(x, axis=0) = [[ 5., 6., 7., 8., 9.],
                      [ 0., 1., 2., 3., 4.]]
reverse(x, axis=1) = [[ 4., 3., 2., 1., 0.],
                      [ 9., 8., 7., 6., 5.]]
Defined in src/operator/tensor/matrix_op.cc:L831
- mxnet.symbol.floor(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise floor of the input.
The floor of the scalar x is the largest integer i, such that i <= x.
Example:
floor([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-3., -2., 1., 1., 2.]
The storage type of floor output depends upon the input storage type:
floor(default) = default
floor(row_sparse) = row_sparse
floor(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L836
- mxnet.symbol.ftml_update(weight=None, grad=None, d=None, v=None, z=None, lr=_Null, beta1=_Null, beta2=_Null, epsilon=_Null, t=_Null, wd=_Null, rescale_grad=_Null, clip_grad=_Null, name=None, attr=None, out=None, **kwargs)¶
The FTML optimizer described in FTML - Follow the Moving Leader in Deep Learning, available at http://proceedings.mlr.press/v70/zheng17a/zheng17a.pdf.
\[\begin{split}g_t = \nabla J(W_{t-1})\\ v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\ d_t = \frac{ 1 - \beta_1^t }{ \eta_t } \left(\sqrt{ \frac{ v_t }{ 1 - \beta_2^t } } + \epsilon\right)\\ \sigma_t = d_t - \beta_1 d_{t-1}\\ z_t = \beta_1 z_{t-1} + (1 - \beta_1^t) g_t - \sigma_t W_{t-1}\\ W_t = - \frac{ z_t }{ d_t }\end{split}\]
Defined in src/operator/optimizer_op.cc:L639
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
d (Symbol) – Internal state d_t
v (Symbol) – Internal state v_t
z (Symbol) – Internal state z_t
lr (float, required) – Learning rate.
beta1 (float, optional, default=0.600000024) – Generally close to 0.5.
beta2 (float, optional, default=0.999000013) – Generally close to 1.
epsilon (double, optional, default=9.9999999392252903e-09) – Epsilon to prevent div 0.
t (int, required) – Number of update.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_grad (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
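A minimal sketch of a single FTML step through the ndarray twin mx.nd.ftml_update (the shapes, learning rate and zero-initialized states below are illustrative; out=weight applies the update in place):
>>> import mxnet as mx
>>> weight = mx.nd.ones((2, 2))
>>> grad = mx.nd.ones((2, 2)) * 0.1
>>> d, v, z = (mx.nd.zeros((2, 2)) for _ in range(3))  # internal states d_t, v_t, z_t
>>> mx.nd.ftml_update(weight, grad, d, v, z, lr=0.01, t=1, out=weight)
>>> weight.asnumpy()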
- mxnet.symbol.ftrl_update(weight=None, grad=None, z=None, n=None, lr=_Null, lamda1=_Null, beta=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, name=None, attr=None, out=None, **kwargs)
Update function for the Ftrl optimizer. Referenced from Ad Click Prediction: a View from the Trenches, available at http://dl.acm.org/citation.cfm?id=2488200.
It updates the weights using:
rescaled_grad = clip(grad * rescale_grad, clip_gradient)
z += rescaled_grad - (sqrt(n + rescaled_grad**2) - sqrt(n)) * weight / learning_rate
n += rescaled_grad**2
w = (sign(z) * lamda1 - z) / ((beta + sqrt(n)) / learning_rate + wd) * (abs(z) > lamda1)
If w, z and n are all of row_sparse storage type, only the row slices whose indices appear in grad.indices are updated (for w, z and n):
for row in grad.indices:
    rescaled_grad[row] = clip(grad[row] * rescale_grad, clip_gradient)
    z[row] += rescaled_grad[row] - (sqrt(n[row] + rescaled_grad[row]**2) - sqrt(n[row])) * weight[row] / learning_rate
    n[row] += rescaled_grad[row]**2
    w[row] = (sign(z[row]) * lamda1 - z[row]) / ((beta + sqrt(n[row])) / learning_rate + wd) * (abs(z[row]) > lamda1)
Defined in src/operator/optimizer_op.cc:L875
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
z (Symbol) – z
n (Symbol) – Square of grad
lr (float, required) – Learning rate
lamda1 (float, optional, default=0.00999999978) – The L1 regularization coefficient.
beta (float, optional, default=1) – Per-Coordinate Learning Rate beta.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.gamma(data=None, name=None, attr=None, out=None, **kwargs)
Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array.
The storage type of gamma output is always dense.
- mxnet.symbol.gammaln(data=None, name=None, attr=None, out=None, **kwargs)
Returns the element-wise log of the absolute value of the gamma function of the input.
The storage type of gammaln output is always dense.
- mxnet.symbol.gather_nd(data=None, indices=None, name=None, attr=None, out=None, **kwargs)
Gather elements or slices from data and store them in a tensor whose shape is defined by indices.
Given data with shape (X_0, X_1, ..., X_{N-1}) and indices with shape (M, Y_0, ..., Y_{K-1}), the output will have shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}), where M <= N. If M == N, the output shape will simply be (Y_0, ..., Y_{K-1}).
The elements in the output are defined as follows:
output[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}] = data[indices[0, y_0, ..., y_{K-1}], ..., indices[M-1, y_0, ..., y_{K-1}], x_M, ..., x_{N-1}]
Examples:
data = [[0, 1], [2, 3]]
indices = [[1, 1, 0], [0, 1, 0]]
gather_nd(data, indices) = [2, 3, 0]

data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
indices = [[0, 1], [1, 0]]
gather_nd(data, indices) = [[3, 4], [5, 6]]
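The first example, reproduced with ndarrays (the leading axis of indices, of length M, indexes the first M axes of data; each of its columns is one (row, column) pair here):
>>> import mxnet as mx
>>> data = mx.nd.array([[0, 1], [2, 3]])
>>> indices = mx.nd.array([[1, 1, 0], [0, 1, 0]])
>>> mx.nd.gather_nd(data, indices).asnumpy()
array([2., 3., 0.], dtype=float32)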
- mxnet.symbol.hard_sigmoid(data=None, alpha=_Null, beta=_Null, name=None, attr=None, out=None, **kwargs)
Computes the hard sigmoid of x element-wise.
\[y = max(0, min(1, alpha * x + beta))\]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L161
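A short sketch of the piecewise-linear shape (the alpha and beta values below are arbitrary choices, not the operator defaults):
>>> import mxnet as mx
>>> x = mx.nd.array([-4., -1., 0., 1., 4.])
>>> mx.nd.hard_sigmoid(x, alpha=0.25, beta=0.5).asnumpy()  # saturates at 0 and 1
array([0.  , 0.25, 0.5 , 0.75, 1.  ], dtype=float32)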
- mxnet.symbol.identity(data=None, name=None, attr=None, out=None, **kwargs)
Returns a copy of the input.
From: src/operator/tensor/elemwise_unary_op_basic.cc:244
- mxnet.symbol.im2col(data=None, kernel=_Null, stride=_Null, dilate=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)
Extracts sliding blocks from the input array.
This operator is used in the vanilla convolution implementation to transform the sliding blocks on the image into a column matrix, so that the convolution operation can be computed by matrix multiplication between the column matrix and the convolution weight. Due to the close relation between im2col and convolution, the concepts of kernel, stride, dilate and pad in this operator are inherited from the convolution operation.
Given input data of shape \((N, C, *)\), where \(N\) is the batch size, \(C\) is the channel size, and \(*\) is the arbitrary spatial dimension, the output column array always has shape \((N, C \times \prod(\text{kernel}), W)\), where \(C \times \prod(\text{kernel})\) is the block size, and \(W\) is the block number, which is the spatial size of the convolution output with the same input parameters. Only 1-D, 2-D and 3-D spatial dimensions are supported in this operator.
Defined in src/operator/nn/im2col.cc:L99
- Parameters
data (Symbol) – Input array to extract sliding blocks.
kernel (Shape(tuple), required) – Sliding kernel size: (w,), (h, w) or (d, h, w).
stride (Shape(tuple), optional, default=[]) – The stride between adjacent sliding blocks in spatial dimension: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
dilate (Shape(tuple), optional, default=[]) – The spacing between adjacent kernel points: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
pad (Shape(tuple), optional, default=[]) – The zero-value padding size on both sides of spatial dimension: (w,), (h, w) or (d, h, w). Defaults to no padding.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
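To make the output shape concrete, a small ndarray sketch (one 3-channel 5x5 image, 3x3 kernel, stride 1, no padding; the shapes are illustrative):
>>> import mxnet as mx
>>> data = mx.nd.uniform(shape=(1, 3, 5, 5))
>>> cols = mx.nd.im2col(data, kernel=(3, 3), stride=(1, 1))
>>> # block size C*prod(kernel) = 3*9 = 27; block number W = 3*3 = 9 positions
>>> cols.shape
(1, 27, 9)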
- mxnet.symbol.khatri_rao(*args, **kwargs)
Computes the Khatri-Rao product of the input matrices.
Given a collection of \(n\) input matrices,
\[A_1 \in \mathbb{R}^{M_1 \times N}, \ldots, A_n \in \mathbb{R}^{M_n \times N},\]
the (column-wise) Khatri-Rao product is defined as the matrix,
\[X = A_1 \otimes \cdots \otimes A_n \in \mathbb{R}^{(M_1 \cdots M_n) \times N},\]
where the \(k\)-th column is equal to the column-wise outer product \({A_1}_k \otimes \cdots \otimes {A_n}_k\), where \({A_i}_k\) is the k-th column of the i-th matrix.
Example:
>>> A = mx.nd.array([[1, -1],
>>>                  [2, -3]])
>>> B = mx.nd.array([[1, 4],
>>>                  [2, 5],
>>>                  [3, 6]])
>>> C = mx.nd.khatri_rao(A, B)
>>> print(C.asnumpy())
[[  1.  -4.]
 [  2.  -5.]
 [  3.  -6.]
 [  2. -12.]
 [  4. -15.]
 [  6. -18.]]
Defined in src/operator/contrib/krprod.cc:L108
This function supports a variable number of positional inputs.
- mxnet.symbol.lamb_update_phase1(weight=None, grad=None, mean=None, var=None, beta1=_Null, beta2=_Null, epsilon=_Null, t=_Null, bias_correction=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, name=None, attr=None, out=None, **kwargs)
Phase I of the LAMB update; it performs the following operations and returns g.
Link to paper: https://arxiv.org/pdf/1904.00962.pdf
grad = grad * rescale_grad
if (grad < -clip_gradient) then grad = -clip_gradient
if (grad > clip_gradient) then grad = clip_gradient

mean = beta1 * mean + (1 - beta1) * grad
variance = beta2 * variance + (1 - beta2) * grad^2

if (bias_correction) then
    mean_hat = mean / (1 - beta1^t)
    var_hat = var / (1 - beta2^t)
    g = mean_hat / (var_hat^(1/2) + epsilon) + wd * weight
else
    g = mean / (var_data^(1/2) + epsilon) + wd * weight
Defined in src/operator/optimizer_op.cc:L952
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mean (Symbol) – Moving mean
var (Symbol) – Moving variance
beta1 (float, optional, default=0.899999976) – The decay rate for the 1st moment estimates.
beta2 (float, optional, default=0.999000013) – The decay rate for the 2nd moment estimates.
epsilon (float, optional, default=9.99999997e-07) – A small constant for numerical stability.
t (int, required) – Index update count.
bias_correction (boolean, optional, default=1) – Whether to use bias correction.
wd (float, required) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.lamb_update_phase2(weight=None, g=None, r1=None, r2=None, lr=_Null, lower_bound=_Null, upper_bound=_Null, name=None, attr=None, out=None, **kwargs)
Phase II of the LAMB update; it performs the following operations and updates the weight.
Link to paper: https://arxiv.org/pdf/1904.00962.pdf
if (lower_bound >= 0) then r1 = max(r1, lower_bound)
if (upper_bound >= 0) then r1 = max(r1, upper_bound)

if (r1 == 0 or r2 == 0) then lr = lr
else lr = lr * (r1/r2)
weight = weight - lr * g
Defined in src/operator/optimizer_op.cc:L991
- Parameters
weight (Symbol) – Weight
g (Symbol) – Output of lamb_update_phase 1
r1 (Symbol) – r1
r2 (Symbol) – r2
lr (float, required) – Learning rate
lower_bound (float, optional, default=-1) – Lower limit of norm of weight. If lower_bound <= 0, Lower limit is not set
upper_bound (float, optional, default=-1) – Upper limit of norm of weight. If upper_bound <= 0, Upper limit is not set
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
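How the two phases are meant to compose, as a sketch with the ndarray twins of these operators (per the paper, r1 is the norm of the weight and r2 the norm of phase I's output; all values below are illustrative):
>>> import mxnet as mx
>>> weight = mx.nd.ones((4,))
>>> grad = mx.nd.ones((4,)) * 0.1
>>> mean, var = mx.nd.zeros((4,)), mx.nd.zeros((4,))
>>> g = mx.nd.lamb_update_phase1(weight, grad, mean, var, t=1, wd=0.01)
>>> r1, r2 = weight.norm(), g.norm()
>>> mx.nd.lamb_update_phase2(weight, g, r1, r2, lr=0.01, out=weight)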
- mxnet.symbol.linalg_det(A=None, name=None, attr=None, out=None, **kwargs)
Compute the determinant of a matrix. Input is a tensor A of dimension n >= 2.
If n=2, A is a square matrix. We compute:
out = det(A)
If n>2, det is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Note
No gradient is backpropagated when A is non-invertible (which is equivalent to det(A) = 0), because zero is rarely hit in floating point computation and Jacobi's formula for the determinant gradient is not computationally efficient when A is non-invertible.
Examples:
Single matrix determinant
A = [[1., 4.], [2., 3.]]
det(A) = [-5.]
Batch matrix determinant
A = [[[1., 4.], [2., 3.]],
     [[2., 3.], [1., 4.]]]
det(A) = [-5., 5.]
Defined in src/operator/tensor/la_op.cc:L974
- mxnet.symbol.linalg_extractdiag(A=None, offset=_Null, name=None, attr=None, out=None, **kwargs)
Extracts the diagonal entries of a square matrix. Input is a tensor A of dimension n >= 2.
If n=2, then A represents a single square matrix whose diagonal elements get extracted as a 1-dimensional tensor.
If n>2, then A represents a batch of square matrices on the trailing two dimensions. The extracted diagonals are returned as an n-1-dimensional tensor.
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix diagonal extraction
A = [[1.0, 2.0], [3.0, 4.0]]
extractdiag(A) = [1.0, 4.0]
extractdiag(A, 1) = [2.0]
Batch matrix diagonal extraction
A = [[[1.0, 2.0], [3.0, 4.0]],
     [[5.0, 6.0], [7.0, 8.0]]]
extractdiag(A) = [[1.0, 4.0], [5.0, 8.0]]
Defined in src/operator/tensor/la_op.cc:L494
- Parameters
A (Symbol) – Tensor of square matrices
offset (int, optional, default='0') – Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.linalg_extracttrian(A=None, offset=_Null, lower=_Null, name=None, attr=None, out=None, **kwargs)
Extracts a triangular sub-matrix from a square matrix. Input is a tensor A of dimension n >= 2.
If n=2, then A represents a single square matrix from which a triangular sub-matrix is extracted as a 1-dimensional tensor.
If n>2, then A represents a batch of square matrices on the trailing two dimensions. The extracted triangular sub-matrices are returned as an n-1-dimensional tensor.
The offset and lower parameters determine the triangle to be extracted:
When offset = 0 either the lower or upper triangle with respect to the main diagonal is extracted depending on the value of parameter lower.
When offset = k > 0 the upper triangle with respect to the k-th diagonal above the main diagonal is extracted.
When offset = k < 0 the lower triangle with respect to the k-th diagonal below the main diagonal is extracted.
Note
The operator supports float32 and float64 data types only.
Examples:
Single triangular extraction
A = [[1.0, 2.0], [3.0, 4.0]]
extracttrian(A) = [1.0, 3.0, 4.0]
extracttrian(A, lower=False) = [1.0, 2.0, 4.0]
extracttrian(A, 1) = [2.0]
extracttrian(A, -1) = [3.0]
Batch triangular extraction
A = [[[1.0, 2.0], [3.0, 4.0]],
     [[5.0, 6.0], [7.0, 8.0]]]
extracttrian(A) = [[1.0, 3.0, 4.0], [5.0, 7.0, 8.0]]
Defined in src/operator/tensor/la_op.cc:L604
- Parameters
A (Symbol) – Tensor of square matrices
offset (int, optional, default='0') – Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal.
lower (boolean, optional, default=1) – Refer to the lower triangular matrix if lower=true, refer to the upper otherwise. Only relevant when offset=0
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.linalg_gelqf(A=None, name=None, attr=None, out=None, **kwargs)
LQ factorization for a general matrix. Input is a tensor A of dimension n >= 2.
If n=2, we compute the LQ factorization (LAPACK gelqf, followed by orglq). A must have shape (x, y) with x <= y, and must have full rank = x. The LQ factorization consists of L with shape (x, x) and Q with shape (x, y), so that:
A = L * Q
Here, L is lower triangular (upper triangle equal to zero) with nonzero diagonal, and Q is row-orthonormal, meaning that Q * Q^T is equal to the identity matrix of shape (x, x).
If n>2, gelqf is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Examples:
Single LQ factorization
A = [[1., 2., 3.], [4., 5., 6.]]
Q, L = gelqf(A)
Q = [[-0.26726124, -0.53452248, -0.80178373],
     [0.87287156, 0.21821789, -0.43643578]]
L = [[-3.74165739, 0.],
     [-8.55235974, 1.96396101]]
Batch LQ factorization
A = [[[1., 2., 3.], [4., 5., 6.]],
     [[7., 8., 9.], [10., 11., 12.]]]
Q, L = gelqf(A)
Q = [[[-0.26726124, -0.53452248, -0.80178373],
      [0.87287156, 0.21821789, -0.43643578]],
     [[-0.50257071, -0.57436653, -0.64616234],
      [0.7620735, 0.05862104, -0.64483142]]]
L = [[[-3.74165739, 0.],
      [-8.55235974, 1.96396101]],
     [[-13.92838828, 0.],
      [-19.09768702, 0.52758934]]]
Defined in src/operator/tensor/la_op.cc:L797
- mxnet.symbol.linalg_gemm(A=None, B=None, C=None, transpose_a=_Null, transpose_b=_Null, alpha=_Null, beta=_Null, axis=_Null, name=None, attr=None, out=None, **kwargs)
Performs general matrix multiplication and accumulation. Inputs are tensors A, B, C, each of dimension n >= 2 and having the same shape on the leading n-2 dimensions.
If n=2, the BLAS3 function gemm is performed:
out = alpha * op(A) * op(B) + beta * C
Here, alpha and beta are scalar parameters, and op() is either the identity or matrix transposition (depending on transpose_a, transpose_b).
If n>2, gemm is performed separately for a batch of matrices. The column indices of the matrices are given by the last dimensions of the tensors, the row indices by the axis specified with the axis parameter. By default, the trailing two dimensions are used for matrix encoding.
For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes calls. For example, let A, B, C be 5-dimensional tensors. Then gemm(A, B, C, axis=1) is equivalent to the following without the overhead of the additional swapaxes operations:
A1 = swapaxes(A, dim1=1, dim2=3)
B1 = swapaxes(B, dim1=1, dim2=3)
C = swapaxes(C, dim1=1, dim2=3)
C = gemm(A1, B1, C)
C = swapaxes(C, dim1=1, dim2=3)
When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use pseudo-float16 precision (float32 math with float16 I/O) in order to use Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups.
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix multiply-add
A = [[1.0, 1.0], [1.0, 1.0]]
B = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
C = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0) = [[14.0, 14.0, 14.0], [14.0, 14.0, 14.0]]
Batch matrix multiply-add
A = [[[1.0, 1.0]], [[0.1, 0.1]]]
B = [[[1.0, 1.0]], [[0.1, 0.1]]]
C = [[[10.0]], [[0.01]]]
gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0) = [[[104.0]], [[0.14]]]
Defined in src/operator/tensor/la_op.cc:L88
- Parameters
A (Symbol) – Tensor of input matrices
B (Symbol) – Tensor of input matrices
C (Symbol) – Tensor of input matrices
transpose_a (boolean, optional, default=0) – Multiply with the transpose of the first input (A).
transpose_b (boolean, optional, default=0) – Multiply with the transpose of the second input (B).
alpha (double, optional, default=1) – Scalar factor multiplied with A*B.
beta (double, optional, default=1) – Scalar factor multiplied with C.
axis (int, optional, default='-2') – Axis corresponding to the matrix rows.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
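A symbol-level sketch of wiring gemm into a graph and checking the inferred output shape (the input shapes match the single matrix multiply-add example above):
>>> import mxnet as mx
>>> A, B, C = (mx.sym.Variable(n) for n in 'ABC')
>>> out = mx.sym.linalg_gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0)
>>> # (2x2) * (3x2)^T + (2x3) -> (2x3)
>>> _, out_shapes, _ = out.infer_shape(A=(2, 2), B=(3, 2), C=(2, 3))
>>> print(out_shapes)
[(2, 3)]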
- mxnet.symbol.linalg_gemm2(A=None, B=None, transpose_a=_Null, transpose_b=_Null, alpha=_Null, axis=_Null, name=None, attr=None, out=None, **kwargs)
Performs general matrix multiplication. Inputs are tensors A, B, each of dimension n >= 2 and having the same shape on the leading n-2 dimensions.
If n=2, the BLAS3 function gemm is performed:
out = alpha * op(A) * op(B)
Here, alpha is a scalar parameter and op() is either the identity or the matrix transposition (depending on transpose_a, transpose_b).
If n>2, gemm is performed separately for a batch of matrices. The column indices of the matrices are given by the last dimensions of the tensors, the row indices by the axis specified with the axis parameter. By default, the trailing two dimensions are used for matrix encoding.
For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes calls. For example, let A, B be 5-dimensional tensors. Then gemm(A, B, axis=1) is equivalent to the following without the overhead of the additional swapaxes operations:
A1 = swapaxes(A, dim1=1, dim2=3)
B1 = swapaxes(B, dim1=1, dim2=3)
C = gemm2(A1, B1)
C = swapaxes(C, dim1=1, dim2=3)
When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use pseudo-float16 precision (float32 math with float16 I/O) in order to use Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups.
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix multiply
A = [[1.0, 1.0], [1.0, 1.0]]
B = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
gemm2(A, B, transpose_b=True, alpha=2.0) = [[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]
Batch matrix multiply
A = [[[1.0, 1.0]], [[0.1, 0.1]]]
B = [[[1.0, 1.0]], [[0.1, 0.1]]]
gemm2(A, B, transpose_b=True, alpha=2.0) = [[[4.0]], [[0.04]]]
Defined in src/operator/tensor/la_op.cc:L162
- Parameters
A (Symbol) – Tensor of input matrices
B (Symbol) – Tensor of input matrices
transpose_a (boolean, optional, default=0) – Multiply with the transpose of the first input (A).
transpose_b (boolean, optional, default=0) – Multiply with the transpose of the second input (B).
alpha (double, optional, default=1) – Scalar factor multiplied with A*B.
axis (int, optional, default='-2') – Axis corresponding to the matrix row indices.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.linalg_inverse(A=None, name=None, attr=None, out=None, **kwargs)
Compute the inverse of a matrix. Input is a tensor A of dimension n >= 2.
If n=2, A is a square matrix. We compute:
out = A^-1
If n>2, inverse is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix inverse
A = [[1., 4.], [2., 3.]]
inverse(A) = [[-0.6, 0.8], [0.4, -0.2]]
Batch matrix inverse
A = [[[1., 4.], [2., 3.]],
     [[1., 3.], [2., 4.]]]
inverse(A) = [[[-0.6, 0.8], [0.4, -0.2]],
              [[-2., 1.5], [1., -0.5]]]
Defined in src/operator/tensor/la_op.cc:L919
- mxnet.symbol.linalg_makediag(A=None, offset=_Null, name=None, attr=None, out=None, **kwargs)
Constructs a square matrix with the input as diagonal. Input is a tensor A of dimension n >= 1.
If n=1, then A represents the diagonal entries of a single square matrix. This matrix will be returned as a 2-dimensional tensor.
If n>1, then A represents a batch of diagonals of square matrices. The batch of diagonal matrices will be returned as an n+1-dimensional tensor.
Note
The operator supports float32 and float64 data types only.
Examples:
Single diagonal matrix construction
A = [1.0, 2.0]
makediag(A) = [[1.0, 0.0], [0.0, 2.0]]
makediag(A, 1) = [[0.0, 1.0, 0.0], [0.0, 0.0, 2.0], [0.0, 0.0, 0.0]]
Batch diagonal matrix construction
A = [[1.0, 2.0], [3.0, 4.0]]
makediag(A) = [[[1.0, 0.0], [0.0, 2.0]],
               [[3.0, 0.0], [0.0, 4.0]]]
Defined in src/operator/tensor/la_op.cc:L546
- Parameters
A (Symbol) – Tensor of diagonal entries
offset (int, optional, default='0') – Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.linalg_maketrian(A=None, offset=_Null, lower=_Null, name=None, attr=None, out=None, **kwargs)
Constructs a square matrix with the input representing a specific triangular sub-matrix. This is basically the inverse of linalg.extracttrian. Input is a tensor A of dimension n >= 1.
If n=1, then A represents the entries of a triangular matrix, which is lower triangular if offset<0 or offset=0, lower=true. The resulting matrix is derived by first constructing the square matrix with the entries outside the triangle set to zero and then adding offset-times an additional diagonal with zero entries to the square matrix.
If n>1, then A represents a batch of triangular sub-matrices. The batch of corresponding square matrices is returned as an n+1-dimensional tensor.
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix construction
A = [1.0, 2.0, 3.0]
maketrian(A) = [[1.0, 0.0], [2.0, 3.0]]
maketrian(A, lower=false) = [[1.0, 2.0], [0.0, 3.0]]
maketrian(A, offset=1) = [[0.0, 1.0, 2.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]]
maketrian(A, offset=-1) = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 3.0, 0.0]]
Batch matrix construction
A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
maketrian(A) = [[[1.0, 0.0], [2.0, 3.0]],
                [[4.0, 0.0], [5.0, 6.0]]]
maketrian(A, offset=1) = [[[0.0, 1.0, 2.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]],
                          [[0.0, 4.0, 5.0], [0.0, 0.0, 6.0], [0.0, 0.0, 0.0]]]
Defined in src/operator/tensor/la_op.cc:L672
- Parameters
A (Symbol) – Tensor of triangular matrices stored as vectors
offset (int, optional, default='0') – Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal.
lower (boolean, optional, default=1) – Refer to the lower triangular matrix if lower=true, refer to the upper otherwise. Only relevant when offset=0
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.linalg_potrf(A=None, name=None, attr=None, out=None, **kwargs)
Performs Cholesky factorization of a symmetric positive-definite matrix. Input is a tensor A of dimension n >= 2.
If n=2, the Cholesky factor B of the symmetric, positive definite matrix A is computed. B is triangular (entries of the upper or lower triangle are all zero), has positive diagonal entries, and:
A = B * B^T if lower = true
A = B^T * B if lower = false
If n>2, potrf is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix factorization
A = [[4.0, 1.0], [1.0, 4.25]]
potrf(A) = [[2.0, 0], [0.5, 2.0]]
Batch matrix factorization
A = [[[4.0, 1.0], [1.0, 4.25]],
     [[16.0, 4.0], [4.0, 17.0]]]
potrf(A) = [[[2.0, 0], [0.5, 2.0]],
            [[4.0, 0], [1.0, 4.0]]]
Defined in src/operator/tensor/la_op.cc:L213
- mxnet.symbol.linalg_potri(A=None, name=None, attr=None, out=None, **kwargs)
Performs matrix inversion from a Cholesky factorization. Input is a tensor A of dimension n >= 2.
If n=2, A is a triangular matrix (entries of the upper or lower triangle are all zero) with positive diagonal. We compute:
out = A^-T * A^-1 if lower = true
out = A^-1 * A^-T if lower = false
In other words, if A is the Cholesky factor of a symmetric positive definite matrix B (obtained by potrf), then
out = B^-1
If n>2, potri is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Note
Use this operator only if you are certain you need the inverse of B, and cannot use the Cholesky factor A (potrf), together with backsubstitution (trsm). The latter is numerically much safer, and also cheaper.
Examples:
Single matrix inverse
A = [[2.0, 0], [0.5, 2.0]]
potri(A) = [[0.26563, -0.0625], [-0.0625, 0.25]]
Batch matrix inverse
A = [[[2.0, 0], [0.5, 2.0]],
     [[4.0, 0], [1.0, 4.0]]]
potri(A) = [[[0.26563, -0.0625], [-0.0625, 0.25]],
            [[0.06641, -0.01562], [-0.01562, 0.0625]]]
Defined in src/operator/tensor/la_op.cc:L274
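Following the note above, a sketch of solving B * x = y through the Cholesky factor and two triangular solves rather than forming B^-1 explicitly (shown with the ndarray twins of these operators; B and y are illustrative):
>>> import mxnet as mx
>>> B = mx.nd.array([[4.0, 1.0], [1.0, 4.25]])  # symmetric positive definite
>>> y = mx.nd.array([[1.0], [2.0]])
>>> A = mx.nd.linalg_potrf(B)                   # B = A * A^T, A lower triangular
>>> z = mx.nd.linalg_trsm(A, y)                 # solve A * z = y
>>> x = mx.nd.linalg_trsm(A, z, transpose=True) # solve A^T * x = z, so x = B^-1 * y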
- mxnet.symbol.linalg_slogdet(A=None, name=None, attr=None, out=None, **kwargs)
Compute the sign and log of the determinant of a matrix. Input is a tensor A of dimension n >= 2.
If n=2, A is a square matrix. We compute:
sign = sign(det(A))
logabsdet = log(abs(det(A)))
If n>2, slogdet is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Note
The gradient is not properly defined for sign, so its gradient is not backpropagated.
Note
No gradient is backpropagated when A is non-invertible. See the docs of the det operator for details.
Examples:
Single matrix signed log determinant
A = [[2., 3.], [1., 4.]]
sign, logabsdet = slogdet(A)
sign = [1.]
logabsdet = [1.609438]
Batch matrix signed log determinant
A = [[[2., 3.], [1., 4.]],
     [[1., 2.], [2., 4.]],
     [[1., 2.], [4., 3.]]]
sign, logabsdet = slogdet(A)
sign = [1., 0., -1.]
logabsdet = [1.609438, -inf, 1.609438]
Defined in src/operator/tensor/la_op.cc:L1033
- mxnet.symbol.linalg_sumlogdiag(A=None, name=None, attr=None, out=None, **kwargs)
Computes the sum of the logarithms of the diagonal elements of a square matrix. Input is a tensor A of dimension n >= 2.
If n=2, A must be square with positive diagonal entries. We sum the natural logarithms of the diagonal elements; the result has shape (1,).
If n>2, sumlogdiag is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix reduction
A = [[1.0, 1.0], [1.0, 7.0]]
sumlogdiag(A) = [1.9459]
Batch matrix reduction
A = [[[1.0, 1.0], [1.0, 7.0]],
     [[3.0, 0], [0, 17.0]]]
sumlogdiag(A) = [1.9459, 3.9318]
Defined in src/operator/tensor/la_op.cc:L444
- mxnet.symbol.linalg_syrk(A=None, transpose=_Null, alpha=_Null, name=None, attr=None, out=None, **kwargs)
Multiplication of a matrix with its transpose. Input is a tensor A of dimension n >= 2.
If n=2, the operator performs the BLAS3 function syrk:
out = alpha * A * A^T if transpose=False, or
out = alpha * A^T * A if transpose=True.
If n>2, syrk is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix multiply
A = [[1., 2., 3.], [4., 5., 6.]]
syrk(A, alpha=1., transpose=False) = [[14., 32.], [32., 77.]]
syrk(A, alpha=1., transpose=True) = [[17., 22., 27.],
                                     [22., 29., 36.],
                                     [27., 36., 45.]]
Batch matrix multiply
A = [[[1., 1.]], [[0.1, 0.1]]]
syrk(A, alpha=2., transpose=False) = [[[4.]], [[0.04]]]
Defined in src/operator/tensor/la_op.cc:L729
- Parameters
A (Symbol) – Tensor of input matrices
transpose (boolean, optional, default=0) – Use transpose of input matrix.
alpha (double, optional, default=1) – Scalar factor to be applied to the result.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.linalg_trmm(A=None, B=None, transpose=_Null, rightside=_Null, lower=_Null, alpha=_Null, name=None, attr=None, out=None, **kwargs)
Performs multiplication with a lower triangular matrix. Inputs are tensors A, B, each of dimension n >= 2 and having the same shape on the leading n-2 dimensions.
If n=2, A must be triangular. The operator performs the BLAS3 function trmm:
out = alpha * op(A) * B if rightside=False, or
out = alpha * B * op(A) if rightside=True.
Here, alpha is a scalar parameter, and op() is either the identity or the matrix transposition (depending on transpose).
If n>2, trmm is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Examples:
Single triangular matrix multiply
A = [[1.0, 0], [1.0, 1.0]]
B = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
trmm(A, B, alpha=2.0) = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]
Batch triangular matrix multiply
A = [[[1.0, 0], [1.0, 1.0]],
     [[1.0, 0], [1.0, 1.0]]]
B = [[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
     [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]]
trmm(A, B, alpha=2.0) = [[[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]],
                         [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]
Defined in src/operator/tensor/la_op.cc:L332
- Parameters
A (Symbol) – Tensor of lower triangular matrices
B (Symbol) – Tensor of matrices
transpose (boolean, optional, default=0) – Use the transpose of the triangular matrix.
rightside (boolean, optional, default=0) – Multiply the triangular matrix from the right onto the non-triangular one.
lower (boolean, optional, default=1) – True if the triangular matrix is lower triangular, false if it is upper triangular.
alpha (double, optional, default=1) – Scalar factor to be applied to the result.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.linalg_trsm(A=None, B=None, transpose=_Null, rightside=_Null, lower=_Null, alpha=_Null, name=None, attr=None, out=None, **kwargs)
Solves a matrix equation involving a lower triangular matrix. Inputs are tensors A, B, each of dimension n >= 2 and having the same shape on the leading n-2 dimensions.
If n=2, A must be triangular. The operator performs the BLAS3 function trsm, solving for out in:
op(A) * out = alpha * B if rightside=False, or
out * op(A) = alpha * B if rightside=True.
Here, alpha is a scalar parameter, and op() is either the identity or the matrix transposition (depending on transpose).
If n>2, trsm is performed separately on the trailing two dimensions for all inputs (batch mode).
Note
The operator supports float32 and float64 data types only.
Examples:
Single matrix solve
A = [[1.0, 0], [1.0, 1.0]]
B = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]
trsm(A, B, alpha=0.5) = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
Batch matrix solve
A = [[[1.0, 0], [1.0, 1.0]],
     [[1.0, 0], [1.0, 1.0]]]
B = [[[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]],
     [[4.0, 4.0, 4.0], [8.0, 8.0, 8.0]]]
trsm(A, B, alpha=0.5) = [[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
                         [[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]
Defined in src/operator/tensor/la_op.cc:L395
- Parameters
A (Symbol) – Tensor of lower triangular matrices
B (Symbol) – Tensor of matrices
transpose (boolean, optional, default=0) – Use the transpose of the triangular matrix.
rightside (boolean, optional, default=0) – Multiply the triangular matrix from the right onto the non-triangular one.
lower (boolean, optional, default=1) – True if the triangular matrix is lower triangular, false if it is upper triangular.
alpha (double, optional, default=1) – Scalar factor to be applied to the result.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.log(data=None, name=None, attr=None, out=None, **kwargs)
Returns the element-wise natural logarithm of the input.
The natural logarithm is the logarithm in base e, so that log(exp(x)) = x.
The storage type of log output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L77
- mxnet.symbol.log10(data=None, name=None, attr=None, out=None, **kwargs)
Returns the element-wise base-10 logarithm of the input: 10**log10(x) = x.
The storage type of log10 output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L94
- mxnet.symbol.log1p(data=None, name=None, attr=None, out=None, **kwargs)
Returns the element-wise log(1 + x) value of the input. This function is more accurate than log(1 + x) for small x, where \(1+x\approx 1\).
The storage type of log1p output depends upon the input storage type:
log1p(default) = default
log1p(row_sparse) = row_sparse
log1p(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L199
- mxnet.symbol.log2(data=None, name=None, attr=None, out=None, **kwargs)
Returns the element-wise base-2 logarithm of the input: 2**log2(x) = x.
The storage type of log2 output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L106
- mxnet.symbol.log_softmax(data=None, axis=_Null, temperature=_Null, dtype=_Null, use_length=_Null, name=None, attr=None, out=None, **kwargs)
Computes the log softmax of the input. This is equivalent to computing softmax followed by log.
Examples:
>>> x = mx.nd.array([1, 2, .1])
>>> mx.nd.log_softmax(x).asnumpy()
array([-1.41702998, -0.41702995, -2.31702995], dtype=float32)
>>> x = mx.nd.array([[1, 2, .1], [.1, 2, 1]])
>>> mx.nd.log_softmax(x, axis=0).asnumpy()
array([[-0.34115392, -0.69314718, -1.24115396],
       [-1.24115396, -0.69314718, -0.34115392]], dtype=float32)
- Parameters
data (Symbol) – The input array.
axis (int, optional, default='-1') – The axis along which to compute softmax.
temperature (double or None, optional, default=None) – Temperature parameter in softmax
dtype ({None, 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to the same as input’s dtype if not defined (dtype=None).
use_length (boolean or None, optional, default=0) – Whether to use the length input as a mask over the data input.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.logical_not(data=None, name=None, attr=None, out=None, **kwargs)
Returns the result of the logical NOT (!) function.
Example:
logical_not([-2., 0., 1.]) = [0., 1., 0.]
- mxnet.symbol.make_loss(data=None, name=None, attr=None, out=None, **kwargs)
Make your own loss function in network construction.
This operator accepts a customized loss function symbol as a terminal loss, and the symbol should be an operator with no backward dependency. The output of this function is the gradient of loss with respect to the input data.
For example, if you are making a cross entropy loss function, assume out is the predicted output and label is the true label; then the cross entropy can be defined as:
cross_entropy = label * log(out) + (1 - label) * log(1 - out)
loss = make_loss(cross_entropy)
We need to use make_loss when we are creating our own loss function or when we want to combine multiple loss functions. Also, we may want to stop some variables' gradients from backpropagation. See more details in BlockGrad or stop_gradient.
The storage type of make_loss output depends upon the input storage type:
make_loss(default) = default
make_loss(row_sparse) = row_sparse
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L358
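The cross entropy example above, written out as symbols (a sketch; in practice out would be the output of a sigmoid layer and label a data variable):
>>> import mxnet as mx
>>> out = mx.sym.Variable('out')      # predicted probabilities in (0, 1)
>>> label = mx.sym.Variable('label')  # true labels in {0, 1}
>>> cross_entropy = label * mx.sym.log(out) + (1 - label) * mx.sym.log(1 - out)
>>> loss = mx.sym.make_loss(cross_entropy)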
- mxnet.symbol.max(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)
Computes the max of array elements over given axes. A usage illustration of the shared reduction semantics follows the max_axis entry below.
Defined in src/operator/tensor/./broadcast_reduce_op.h:L31
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
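A quick ndarray illustration of the axis, keepdims and exclude semantics shared by these reduction operators:
>>> import mxnet as mx
>>> x = mx.nd.array([[1, 2, 3], [4, 5, 6]])
>>> mx.nd.max(x, axis=1).asnumpy()
array([3., 6.], dtype=float32)
>>> mx.nd.max(x, axis=1, keepdims=True).shape
(2, 1)
>>> mx.nd.max(x, axis=0, exclude=True).asnumpy()  # reduces every axis NOT listed, i.e. axis 1
array([3., 6.], dtype=float32)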
- mxnet.symbol.max_axis(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)
Computes the max of array elements over given axes.
Defined in src/operator/tensor/./broadcast_reduce_op.h:L31
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.mean(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)
Computes the mean of array elements over given axes.
Defined in src/operator/tensor/./broadcast_reduce_op.h:L83
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.min(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)
Computes the min of array elements over given axes.
Defined in src/operator/tensor/./broadcast_reduce_op.h:L46
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.min_axis(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)
Computes the min of array elements over given axes.
Defined in src/operator/tensor/./broadcast_reduce_op.h:L46
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.moments(data=None, axes=_Null, keepdims=_Null, name=None, attr=None, out=None, **kwargs)
Calculate the mean and variance of data.
The mean and variance are calculated by aggregating the contents of data across axes. If x is 1-D and axes = [0], this is just the mean and variance of a vector.
Example:
x = [[1, 2, 3], [4, 5, 6]]
mean, var = moments(data=x, axes=[0])
mean = [2.5, 3.5, 4.5]
var = [2.25, 2.25, 2.25]
mean, var = moments(data=x, axes=[1])
mean = [2.0, 5.0]
var = [0.66666667, 0.66666667]
mean, var = moments(data=x, axes=[0, 1])
mean = [3.5]
var = [2.9166667]
Defined in src/operator/nn/moments.cc:L53
- Parameters
data (Symbol) – Input ndarray
axes (Shape or None, optional, default=None) – Array of ints. Axes along which to compute mean and variance.
keepdims (boolean, optional, default=0) – produce moments with the same dimensionality as the input.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.mp_lamb_update_phase1(weight=None, grad=None, mean=None, var=None, weight32=None, beta1=_Null, beta2=_Null, epsilon=_Null, t=_Null, bias_correction=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, name=None, attr=None, out=None, **kwargs)
Mixed precision version of Phase I of the LAMB update; it performs the following operations and returns g.
Link to paper: https://arxiv.org/pdf/1904.00962.pdf
grad32 = grad(float16) * rescale_grad
if (grad < -clip_gradient) then grad = -clip_gradient
if (grad > clip_gradient) then grad = clip_gradient

mean = beta1 * mean + (1 - beta1) * grad
variance = beta2 * variance + (1 - beta2) * grad^2

if (bias_correction) then
    mean_hat = mean / (1 - beta1^t)
    var_hat = var / (1 - beta2^t)
    g = mean_hat / (var_hat^(1/2) + epsilon) + wd * weight32
else
    g = mean / (var_data^(1/2) + epsilon) + wd * weight32
Defined in src/operator/optimizer_op.cc:L1032
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mean (Symbol) – Moving mean
var (Symbol) – Moving variance
weight32 (Symbol) – Weight32
beta1 (float, optional, default=0.899999976) – The decay rate for the 1st moment estimates.
beta2 (float, optional, default=0.999000013) – The decay rate for the 2nd moment estimates.
epsilon (float, optional, default=9.99999997e-07) – A small constant for numerical stability.
t (int, required) – Index update count.
bias_correction (boolean, optional, default=1) – Whether to use bias correction.
wd (float, required) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.mp_lamb_update_phase2(weight=None, g=None, r1=None, r2=None, weight32=None, lr=_Null, lower_bound=_Null, upper_bound=_Null, name=None, attr=None, out=None, **kwargs)
Mixed precision version of Phase II of the LAMB update; it performs the following operations and updates the weight.
Link to paper: https://arxiv.org/pdf/1904.00962.pdf
if (lower_bound >= 0) then r1 = max(r1, lower_bound)
if (upper_bound >= 0) then r1 = max(r1, upper_bound)

if (r1 == 0 or r2 == 0) then lr = lr
else lr = lr * (r1/r2)
weight32 = weight32 - lr * g
weight(float16) = weight32
Defined in src/operator/optimizer_op.cc:L1074
- Parameters
weight (Symbol) – Weight
g (Symbol) – Output of mp_lamb_update_phase 1
r1 (Symbol) – r1
r2 (Symbol) – r2
weight32 (Symbol) – Weight32
lr (float, required) – Learning rate
lower_bound (float, optional, default=-1) – Lower limit of norm of weight. If lower_bound <= 0, Lower limit is not set
upper_bound (float, optional, default=-1) – Upper limit of norm of weight. If upper_bound <= 0, Upper limit is not set
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.mp_nag_mom_update(weight=None, grad=None, mom=None, weight32=None, lr=_Null, momentum=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, name=None, attr=None, out=None, **kwargs)
Update function for the multi-precision Nesterov Accelerated Gradient (NAG) optimizer.
Defined in src/operator/optimizer_op.cc:L744
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mom (Symbol) – Momentum
weight32 (Symbol) – Weight32
lr (float, required) – Learning rate
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.mp_sgd_mom_update(weight=None, grad=None, mom=None, weight32=None, lr=_Null, momentum=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, name=None, attr=None, out=None, **kwargs)
Updater function for the multi-precision SGD optimizer.
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mom (Symbol) – Momentum
weight32 (Symbol) – Weight32
lr (float, required) – Learning rate
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse and both weight and momentum have the same stype
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.mp_sgd_update(weight=None, grad=None, weight32=None, lr=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, name=None, attr=None, out=None, **kwargs)
Updater function for the multi-precision SGD optimizer.
- Parameters
weight (Symbol) – Weight
grad (Symbol) – gradient
weight32 (Symbol) – Weight32
lr (float, required) – Learning rate
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.multi_all_finite(*data, **kwargs)
Check if all the float numbers in all the arrays are finite (used for AMP).
Defined in src/operator/contrib/all_finite.cc:L132
- mxnet.symbol.multi_lars(lrs=None, weights_sum_sq=None, grads_sum_sq=None, wds=None, eta=_Null, eps=_Null, rescale_grad=_Null, name=None, attr=None, out=None, **kwargs)
Compute the LARS coefficients of multiple weights and grads from their sums of squares.
Defined in src/operator/contrib/multi_lars.cc:L36
- Parameters
lrs (Symbol) – Learning rates to scale by LARS coefficient
weights_sum_sq (Symbol) – Sums of squares of the weight arrays
grads_sum_sq (Symbol) – Sums of squares of the gradient arrays
wds (Symbol) – Weight decays
eta (float, required) – LARS eta
eps (float, required) – LARS eps
rescale_grad (float, optional, default=1) – Gradient rescaling factor
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.multi_mp_sgd_mom_update(*data, **kwargs)
Momentum update function for the multi-precision Stochastic Gradient Descent (SGD) optimizer.
Momentum update has better convergence rates on neural networks. Mathematically it looks like below:
\[\begin{split}v_1 = \alpha * \nabla J(W_0)\\
v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
W_t = W_{t-1} + v_t\end{split}\]
It updates the weights using:
v = momentum * v - learning_rate * gradient
weight += v
where the parameter momentum is the decay rate of momentum estimates at each epoch.
Defined in src/operator/optimizer_op.cc:L471
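The plain-Python equivalent of one momentum step, useful for checking the sign convention (a sketch only; the operator fuses this update across many weights and keeps float32 master copies of float16 weights):
>>> import numpy as np
>>> momentum, lr = 0.9, 0.01
>>> weight, grad, v = np.ones(3), np.full(3, 0.1), np.zeros(3)
>>> v = momentum * v - lr * grad   # v = momentum * v - learning_rate * gradient
>>> weight += v                    # weight += v
>>> weight
array([0.999, 0.999, 0.999])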
- Parameters
data (Symbol[]) – Weights
lrs (tuple of <float>, required) – Learning rates.
wds (tuple of <float>, required) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
num_weights (int, optional, default='1') – Number of updated weights.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.multi_mp_sgd_update(*data, **kwargs)
Update function for the multi-precision Stochastic Gradient Descent (SGD) optimizer.
It updates the weights using:
weight = weight - learning_rate * (gradient + wd * weight)
Defined in src/operator/optimizer_op.cc:L416
- Parameters
data (Symbol[]) – Weights
lrs (tuple of <float>, required) – Learning rates.
wds (tuple of <float>, required) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
num_weights (int, optional, default='1') – Number of updated weights.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.multi_sgd_mom_update(*data, **kwargs)
Momentum update function for the Stochastic Gradient Descent (SGD) optimizer.
Momentum update has better convergence rates on neural networks. Mathematically it looks like below:
\[\begin{split}v_1 = \alpha * \nabla J(W_0)\\
v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
W_t = W_{t-1} + v_t\end{split}\]
It updates the weights using:
v = momentum * v - learning_rate * gradient
weight += v
where the parameter momentum is the decay rate of momentum estimates at each epoch.
Defined in src/operator/optimizer_op.cc:L373
- Parameters
data (Symbol[]) – Weights, gradients and momentum
lrs (tuple of <float>, required) – Learning rates.
wds (tuple of <float>, required) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
num_weights (int, optional, default='1') – Number of updated weights.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.multi_sgd_update(*data, **kwargs)
Update function for the Stochastic Gradient Descent (SGD) optimizer.
It updates the weights using:
weight = weight - learning_rate * (gradient + wd * weight)
Defined in src/operator/optimizer_op.cc:L328
- Parameters
data (Symbol[]) – Weights
lrs (tuple of <float>, required) – Learning rates.
wds (tuple of <float>, required) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
num_weights (int, optional, default='1') – Number of updated weights.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
- mxnet.symbol.multi_sum_sq(*data, **kwargs)
Compute the sums of squares of multiple arrays.
Defined in src/operator/contrib/multi_sum_sq.cc:L35
mxnet.symbol.nag_mom_update(weight=None, grad=None, mom=None, lr=_Null, momentum=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, name=None, attr=None, out=None, **kwargs)¶
Update function for Nesterov Accelerated Gradient (NAG) optimizer. It updates the weights using the following formula:
\[\begin{split}v_t = \gamma v_{t-1} + \eta * \nabla J(W_{t-1} - \gamma v_{t-1})\\ W_t = W_{t-1} - v_t\end{split}\]
Where \(\eta\) is the learning rate of the optimizer, \(\gamma\) is the decay rate of the momentum estimate, \(v_t\) is the update vector at time step t, and \(W_t\) is the weight vector at time step t.
Defined in src/operator/optimizer_op.cc:L725
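A NumPy sketch of the documented formula (our own illustration; note that \(\nabla J\) is evaluated at the look-ahead point \(W_{t-1} - \gamma v_{t-1}\), and the operator receives that gradient as its grad input):
import numpy as np

def nag_mom_step(weight, grad, mom, lr, momentum=0.9):
    # v_t = gamma * v_{t-1} + eta * grad J(W_{t-1} - gamma * v_{t-1})
    mom = momentum * mom + lr * grad
    # W_t = W_{t-1} - v_t
    return weight - mom, mom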
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mom (Symbol) – Momentum
lr (float, required) – Learning rate
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.nanprod(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the product of array elements over given axes, treating Not a Numbers (NaN) as one.
Defined in src/operator/tensor/broadcast_reduce_prod_value.cc:L46
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values means indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.nansum(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the sum of array elements over given axes, treating Not a Numbers (NaN) as zero.
Defined in src/operator/tensor/broadcast_reduce_sum_value.cc:L101
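A minimal usage sketch through the Symbol API (variable names are ours; assumes MXNet 1.x, where Symbol.eval binds the given inputs and runs one forward pass):
import numpy as np
import mxnet as mx

x = mx.sym.Variable('x')
out = mx.sym.nansum(x, axis=0)
res = out.eval(ctx=mx.cpu(),
               x=mx.nd.array([[1.0, np.nan], [2.0, 3.0]]))[0]
print(res.asnumpy())  # [3. 3.] -- the NaN is treated as zero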
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values means indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.negative(data=None, name=None, attr=None, out=None, **kwargs)¶
Numerical negative of the argument, element-wise.
The storage type of negative output depends upon the input storage type:
negative(default) = default
negative(row_sparse) = row_sparse
negative(csr) = csr
mxnet.symbol.norm(data=None, ord=_Null, axis=_Null, out_dtype=_Null, keepdims=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the norm on an NDArray.
This operator computes the norm on an NDArray with the specified axis, depending on the value of the ord parameter. By default, it computes the L2 norm on the entire array. Currently only ord=2 supports sparse ndarrays.
Examples:
x = [[[1, 2],
      [3, 4]],
     [[2, 2],
      [5, 6]]]
norm(x, ord=2, axis=1) = [[3.1622777 4.472136 ]
                          [5.3851647 6.3245554]]
norm(x, ord=1, axis=1) = [[4., 6.],
                          [7., 8.]]
rsp = x.cast_storage('row_sparse')
norm(rsp) = [5.47722578]
csr = x.cast_storage('csr')
norm(csr) = [5.47722578]
Defined in src/operator/tensor/broadcast_reduce_norm_value.cc:L88
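A minimal usage sketch (names are ours; assumes MXNet 1.x):
import mxnet as mx

x = mx.sym.Variable('x')
l2 = mx.sym.norm(x, ord=2, axis=1)
res = l2.eval(ctx=mx.cpu(),
              x=mx.nd.array([[3.0, 4.0], [0.0, 12.0]]))[0]
print(res.asnumpy())  # [ 5. 12.]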
- Parameters
data (Symbol) – The input
ord (int, optional, default='2') – Order of the norm. Currently ord=1 and ord=2 is supported.
axis (Shape or None, optional, default=None) – The axis or axes along which to perform the reduction. The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed.
out_dtype ({None, 'float16', 'float32', 'float64', 'int32', 'int64', 'int8'},optional, default='None') – The data type of the output.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axis is left in the result as dimension with size one.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.normal(loc=_Null, scale=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a normal (Gaussian) distribution.
Note
The existing alias normal is deprecated.
Samples are distributed according to a normal distribution parametrized by loc (mean) and scale (standard deviation).
Example:
normal(loc=0, scale=1, shape=(2,2)) = [[ 1.89171135, -1.16881478], [-1.23474145, 1.55807114]]
Defined in src/operator/random/sample_op.cc:L112
- Parameters
loc (float, optional, default=0) – Mean of the distribution.
scale (float, optional, default=1) – Standard deviation of the distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.one_hot(indices=None, depth=_Null, on_value=_Null, off_value=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Returns a one-hot array.
The locations represented by indices take value on_value, while all other locations take value off_value.
A one_hot operation with indices of shape (i0, i1) and depth of d would result in an output array of shape (i0, i1, d) with:
output[i,j,:] = off_value
output[i,j,indices[i,j]] = on_value
Examples:
one_hot([1,0,2,0], 3) = [[ 0. 1. 0.]
                         [ 1. 0. 0.]
                         [ 0. 0. 1.]
                         [ 1. 0. 0.]]
one_hot([1,0,2,0], 3, on_value=8, off_value=1, dtype='int32') = [[1 8 1]
                                                                 [8 1 1]
                                                                 [1 1 8]
                                                                 [8 1 1]]
one_hot([[1,0],[1,0],[2,0]], 3) = [[[ 0. 1. 0.]
                                    [ 1. 0. 0.]]
                                   [[ 0. 1. 0.]
                                    [ 1. 0. 0.]]
                                   [[ 0. 0. 1.]
                                    [ 1. 0. 0.]]]
Defined in src/operator/tensor/indexing_op.cc:L882
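A minimal usage sketch (names are ours; assumes MXNet 1.x):
import mxnet as mx

idx = mx.sym.Variable('idx')
oh = mx.sym.one_hot(idx, depth=3)
print(oh.eval(ctx=mx.cpu(), idx=mx.nd.array([1, 0, 2, 0]))[0].asnumpy())
# [[0. 1. 0.]
#  [1. 0. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]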
- Parameters
indices (Symbol) – array of locations where to set on_value
depth (int, required) – Depth of the one hot dimension.
on_value (double, optional, default=1) – The value assigned to the locations represented by indices.
off_value (double, optional, default=0) – The value assigned to the locations not represented by indices.
dtype ({'bfloat16', 'float16', 'float32', 'float64', 'int32', 'int64', 'int8', 'uint8'},optional, default='float32') – DType of the output
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.ones_like(data=None, name=None, attr=None, out=None, **kwargs)¶
Return an array of ones with the same shape and type as the input array.
Examples:
x = [[ 0., 0., 0.],
     [ 0., 0., 0.]]
ones_like(x) = [[ 1., 1., 1.],
                [ 1., 1., 1.]]
mxnet.symbol.pad(data=None, mode=_Null, pad_width=_Null, constant_value=_Null, name=None, attr=None, out=None, **kwargs)¶
Pads an input array with a constant or edge values of the array.
Note
Pad is deprecated. Use pad instead.
Note
Current implementation only supports 4D and 5D input arrays with padding applied only on axes 1, 2 and 3. Expects axes 4 and 5 in pad_width to be zero.
This operation pads an input array with either a constant_value or edge values along each axis of the input array. The amount of padding is specified by pad_width.
pad_width is a tuple of integer padding widths for each axis of the format (before_1, after_1, ... , before_N, after_N). The pad_width should be of length 2*N where N is the number of dimensions of the array.
For dimension N of the input array, before_N and after_N indicate how many values to add before and after the elements of the array along dimension N. The widths of the higher two dimensions before_1, after_1, before_2, after_2 must be 0.
Example:
x = [[[[ 1. 2. 3.]
       [ 4. 5. 6.]]
      [[ 7. 8. 9.]
       [ 10. 11. 12.]]]
     [[[ 11. 12. 13.]
       [ 14. 15. 16.]]
      [[ 17. 18. 19.]
       [ 20. 21. 22.]]]]

pad(x, mode="edge", pad_width=(0,0,0,0,1,1,1,1)) =
    [[[[ 1. 1. 2. 3. 3.]
       [ 1. 1. 2. 3. 3.]
       [ 4. 4. 5. 6. 6.]
       [ 4. 4. 5. 6. 6.]]
      [[ 7. 7. 8. 9. 9.]
       [ 7. 7. 8. 9. 9.]
       [ 10. 10. 11. 12. 12.]
       [ 10. 10. 11. 12. 12.]]]
     [[[ 11. 11. 12. 13. 13.]
       [ 11. 11. 12. 13. 13.]
       [ 14. 14. 15. 16. 16.]
       [ 14. 14. 15. 16. 16.]]
      [[ 17. 17. 18. 19. 19.]
       [ 17. 17. 18. 19. 19.]
       [ 20. 20. 21. 22. 22.]
       [ 20. 20. 21. 22. 22.]]]]

pad(x, mode="constant", constant_value=0, pad_width=(0,0,0,0,1,1,1,1)) =
    [[[[ 0. 0. 0. 0. 0.]
       [ 0. 1. 2. 3. 0.]
       [ 0. 4. 5. 6. 0.]
       [ 0. 0. 0. 0. 0.]]
      [[ 0. 0. 0. 0. 0.]
       [ 0. 7. 8. 9. 0.]
       [ 0. 10. 11. 12. 0.]
       [ 0. 0. 0. 0. 0.]]]
     [[[ 0. 0. 0. 0. 0.]
       [ 0. 11. 12. 13. 0.]
       [ 0. 14. 15. 16. 0.]
       [ 0. 0. 0. 0. 0.]]
      [[ 0. 0. 0. 0. 0.]
       [ 0. 17. 18. 19. 0.]
       [ 0. 20. 21. 22. 0.]
       [ 0. 0. 0. 0. 0.]]]]
Defined in src/operator/pad.cc:L765
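A minimal usage sketch on a 4D input (names are ours; assumes MXNet 1.x):
import mxnet as mx

x = mx.sym.Variable('x')
# Zero-pad one element on each side of the two trailing (spatial) axes;
# the first four entries of pad_width cover the batch and channel axes.
y = mx.sym.pad(x, mode='constant', constant_value=0,
               pad_width=(0, 0, 0, 0, 1, 1, 1, 1))
data = mx.nd.arange(1, 5).reshape((1, 1, 2, 2))
print(y.eval(ctx=mx.cpu(), x=data)[0].asnumpy())
# [[[[0. 0. 0. 0.]
#    [0. 1. 2. 0.]
#    [0. 3. 4. 0.]
#    [0. 0. 0. 0.]]]]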
- Parameters
data (Symbol) – An n-dimensional input array.
mode ({'constant', 'edge', 'reflect'}, required) – Padding type to use. “constant” pads with constant_value “edge” pads using the edge values of the input array “reflect” pads by reflecting values with respect to the edges.
pad_width (Shape(tuple), required) – Widths of the padding regions applied to the edges of each axis. It is a tuple of integer padding widths for each axis of the format (before_1, after_1, ... , before_N, after_N). It should be of length 2*N where N is the number of dimensions of the array. This is equivalent to pad_width in numpy.pad, but flattened.
constant_value (double, optional, default=0) – The value used for padding when mode is “constant”.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.pick(data=None, index=None, axis=_Null, keepdims=_Null, mode=_Null, name=None, attr=None, out=None, **kwargs)¶
Picks elements from an input array according to the input indices along the given axis.
Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be an output array of shape (i0,) with:
output[i] = input[i, indices[i]]
By default, if any index mentioned is too large, it is replaced by the index that addresses the last element along an axis (the clip mode).
This function supports n-dimensional input and (n-1)-dimensional indices arrays.
Examples:
x = [[ 1., 2.],
     [ 3., 4.],
     [ 5., 6.]]
// picks elements with specified indices along axis 0
pick(x, y=[0,1], 0) = [ 1., 4.]
// picks elements with specified indices along axis 1
pick(x, y=[0,1,0], 1) = [ 1., 4., 5.]
// picks elements with specified indices along axis 1 using 'wrap' mode
// to place indices that would normally be out of bounds
pick(x, y=[2,-1,-2], 1, mode='wrap') = [ 1., 4., 5.]
y = [[ 1.],
     [ 0.],
     [ 2.]]
// picks elements with specified indices along axis 1 and dims are maintained
pick(x, y, 1, keepdims=True) = [[ 2.],
                                [ 3.],
                                [ 6.]]
Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L150
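A minimal usage sketch (names are ours; assumes MXNet 1.x):
import mxnet as mx

x = mx.sym.Variable('x')
idx = mx.sym.Variable('idx')
picked = mx.sym.pick(x, idx, axis=1)
res = picked.eval(ctx=mx.cpu(),
                  x=mx.nd.array([[1., 2.], [3., 4.], [5., 6.]]),
                  idx=mx.nd.array([0, 1, 0]))[0]
print(res.asnumpy())  # [1. 4. 5.]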
- Parameters
data (Symbol) – The input array
index (Symbol) – The index array
axis (int or None, optional, default='-1') – The axis along which to pick the elements. Negative values mean indexing from right to left. If None, elements in the index are picked with respect to the flattened input.
keepdims (boolean, optional, default=0) – If true, the axis where we pick the elements is left in the result as dimension with size one.
mode ({'clip', 'wrap'},optional, default='clip') – Specify how out-of-bound indices behave. Default is “clip”. “clip” means clip to the range. So, if all indices mentioned are too large, they are replaced by the index that addresses the last element along an axis. “wrap” means to wrap around.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.preloaded_multi_mp_sgd_mom_update(*data, **kwargs)¶
Momentum update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
Momentum update has better convergence rates on neural networks. Mathematically it looks like below:
\[\begin{split}v_1 = \alpha * \nabla J(W_0)\\ v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\ W_t = W_{t-1} + v_t\end{split}\]
It updates the weights using:
v = momentum * v - learning_rate * gradient
weight += v
Where the parameter momentum is the decay rate of momentum estimates at each epoch.
Defined in src/operator/contrib/preloaded_multi_sgd.cc:L199
- Parameters
data (Symbol[]) – Weights, gradients, momentums, learning rates and weight decays
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
num_weights (int, optional, default='1') – Number of updated weights.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.preloaded_multi_mp_sgd_update(*data, **kwargs)¶
Update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
It updates the weights using:
weight = weight - learning_rate * (gradient + wd * weight)
Defined in src/operator/contrib/preloaded_multi_sgd.cc:L139
- Parameters
data (Symbol[]) – Weights, gradients, learning rates and weight decays
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
num_weights (int, optional, default='1') – Number of updated weights.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.preloaded_multi_sgd_mom_update(*data, **kwargs)¶
Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
Momentum update has better convergence rates on neural networks. Mathematically it looks like below:
\[\begin{split}v_1 = \alpha * \nabla J(W_0)\\ v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\ W_t = W_{t-1} + v_t\end{split}\]
It updates the weights using:
v = momentum * v - learning_rate * gradient
weight += v
Where the parameter momentum is the decay rate of momentum estimates at each epoch.
Defined in src/operator/contrib/preloaded_multi_sgd.cc:L90
- Parameters
data (Symbol[]) – Weights, gradients, momentum, learning rates and weight decays
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
num_weights (int, optional, default='1') – Number of updated weights.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.preloaded_multi_sgd_update(*data, **kwargs)¶
Update function for Stochastic Gradient Descent (SGD) optimizer.
It updates the weights using:
weight = weight - learning_rate * (gradient + wd * weight)
Defined in src/operator/contrib/preloaded_multi_sgd.cc:L41
- Parameters
data (Symbol[]) – Weights, gradients, learning rates and weight decays
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
num_weights (int, optional, default='1') – Number of updated weights.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.prod(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the product of array elements over given axes.
Defined in src/operator/tensor/./broadcast_reduce_op.h:L30
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values means indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.radians(data=None, name=None, attr=None, out=None, **kwargs)¶
Converts each element of the input array from degrees to radians.
\[radians([0, 90, 180, 270, 360]) = [0, \pi/2, \pi, 3\pi/2, 2\pi]\]
The storage type of radians output depends upon the input storage type:
radians(default) = default
radians(row_sparse) = row_sparse
radians(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L351
mxnet.symbol.random_exponential(lam=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from an exponential distribution.
Samples are distributed according to an exponential distribution parametrized by lambda (rate).
Example:
exponential(lam=4, shape=(2,2)) = [[ 0.0097189 , 0.08999364], [ 0.04146638, 0.31715935]]
Defined in src/operator/random/sample_op.cc:L136
- Parameters
lam (float, optional, default=1) – Lambda parameter (rate) of the exponential distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_gamma(alpha=_Null, beta=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a gamma distribution.
Samples are distributed according to a gamma distribution parametrized by alpha (shape) and beta (scale).
Example:
gamma(alpha=9, beta=0.5, shape=(2,2)) = [[ 7.10486984, 3.37695289], [ 3.91697288, 3.65933681]]
Defined in src/operator/random/sample_op.cc:L124
- Parameters
alpha (float, optional, default=1) – Alpha parameter (shape) of the gamma distribution.
beta (float, optional, default=1) – Beta parameter (scale) of the gamma distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_generalized_negative_binomial(mu=_Null, alpha=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a generalized negative binomial distribution.
Samples are distributed according to a generalized negative binomial distribution parametrized by mu (mean) and alpha (dispersion). alpha is defined as 1/k where k is the failure limit of the number of unsuccessful experiments (generalized to real numbers). Samples will always be returned as a floating point data type.
Example:
generalized_negative_binomial(mu=2.0, alpha=0.3, shape=(2,2)) = [[ 2., 1.], [ 6., 4.]]
Defined in src/operator/random/sample_op.cc:L178
- Parameters
mu (float, optional, default=1) – Mean of the negative binomial distribution.
alpha (float, optional, default=1) – Alpha (dispersion) parameter of the negative binomial distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_negative_binomial(k=_Null, p=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a negative binomial distribution.
Samples are distributed according to a negative binomial distribution parametrized by k (limit of unsuccessful experiments) and p (failure probability in each experiment). Samples will always be returned as a floating point data type.
Example:
negative_binomial(k=3, p=0.4, shape=(2,2)) = [[ 4., 7.], [ 2., 5.]]
Defined in src/operator/random/sample_op.cc:L163
- Parameters
k (int, optional, default='1') – Limit of unsuccessful experiments.
p (float, optional, default=1) – Failure probability in each experiment.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_normal(loc=_Null, scale=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a normal (Gaussian) distribution.
Note
The existing alias normal is deprecated.
Samples are distributed according to a normal distribution parametrized by loc (mean) and scale (standard deviation).
Example:
normal(loc=0, scale=1, shape=(2,2)) = [[ 1.89171135, -1.16881478], [-1.23474145, 1.55807114]]
Defined in src/operator/random/sample_op.cc:L112
- Parameters
loc (float, optional, default=0) – Mean of the distribution.
scale (float, optional, default=1) – Standard deviation of the distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_pdf_dirichlet(sample=None, alpha=None, is_log=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the value of the PDF of sample of Dirichlet distributions with parameter alpha.
The shape of alpha must match the leftmost subshape of sample. That is, sample can have the same shape as alpha, in which case the output contains one density per distribution, or sample can be a tensor of tensors with that shape, in which case the output is a tensor of densities such that the densities at index i in the output are given by the samples at index i in sample parameterized by the value of alpha at index i.
Examples:
random_pdf_dirichlet(sample=[[1,2],[2,3],[3,4]], alpha=[2.5, 2.5]) =
    [38.413498, 199.60245, 564.56085]
sample = [[[1, 2, 3], [10, 20, 30], [100, 200, 300]],
          [[0.1, 0.2, 0.3], [0.01, 0.02, 0.03], [0.001, 0.002, 0.003]]]
random_pdf_dirichlet(sample=sample, alpha=[0.1, 0.4, 0.9]) =
    [[2.3257459e-02, 5.8420084e-04, 1.4674458e-05],
     [9.2589635e-01, 3.6860607e+01, 1.4674468e+03]]
Defined in src/operator/random/pdf_op.cc:L315
- Parameters
sample (Symbol) – Samples from the distributions.
alpha (Symbol) – Concentration parameters of the distributions.
is_log (boolean, optional, default=0) – If set, compute the density of the log-probability instead of the probability.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_pdf_exponential(sample=None, lam=None, is_log=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the value of the PDF of sample of exponential distributions with parameters lam (rate).
The shape of lam must match the leftmost subshape of sample. That is, sample can have the same shape as lam, in which case the output contains one density per distribution, or sample can be a tensor of tensors with that shape, in which case the output is a tensor of densities such that the densities at index i in the output are given by the samples at index i in sample parameterized by the value of lam at index i.
Examples:
random_pdf_exponential(sample=[[1, 2, 3]], lam=[1]) =
    [[0.36787945, 0.13533528, 0.04978707]]
sample = [[1,2,3], [1,2,3], [1,2,3]]
random_pdf_exponential(sample=sample, lam=[1,0.5,0.25]) =
    [[0.36787945, 0.13533528, 0.04978707],
     [0.30326533, 0.18393973, 0.11156508],
     [0.1947002, 0.15163267, 0.11809164]]
Defined in src/operator/random/pdf_op.cc:L304
- Parameters
sample (Symbol) – Samples from the distributions.
lam (Symbol) – Lambda (rate) parameters of the distributions.
is_log (boolean, optional, default=0) – If set, compute the density of the log-probability instead of the probability.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_pdf_gamma(sample=None, alpha=None, beta=None, is_log=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the value of the PDF of sample of gamma distributions with parameters alpha (shape) and beta (rate).
alpha and beta must have the same shape, which must match the leftmost subshape of sample. That is, sample can have the same shape as alpha and beta, in which case the output contains one density per distribution, or sample can be a tensor of tensors with that shape, in which case the output is a tensor of densities such that the densities at index i in the output are given by the samples at index i in sample parameterized by the values of alpha and beta at index i.
Examples:
random_pdf_gamma(sample=[[1,2,3,4,5]], alpha=[5], beta=[1]) =
    [[0.01532831, 0.09022352, 0.16803136, 0.19536681, 0.17546739]]
sample = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]]
random_pdf_gamma(sample=sample, alpha=[5,6,7], beta=[1,1,1]) =
    [[0.01532831, 0.09022352, 0.16803136, 0.19536681, 0.17546739],
     [0.03608941, 0.10081882, 0.15629345, 0.17546739, 0.16062315],
     [0.05040941, 0.10419563, 0.14622283, 0.16062315, 0.14900276]]
Defined in src/operator/random/pdf_op.cc:L302
- Parameters
sample (Symbol) – Samples from the distributions.
alpha (Symbol) – Alpha (shape) parameters of the distributions.
is_log (boolean, optional, default=0) – If set, compute the density of the log-probability instead of the probability.
beta (Symbol) – Beta (scale) parameters of the distributions.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_pdf_generalized_negative_binomial(sample=None, mu=None, alpha=None, is_log=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the value of the PDF of sample of generalized negative binomial distributions with parameters mu (mean) and alpha (dispersion). This can be understood as a reparameterization of the negative binomial, where k = 1 / alpha and p = 1 / (mu * alpha + 1).
mu and alpha must have the same shape, which must match the leftmost subshape of sample. That is, sample can have the same shape as mu and alpha, in which case the output contains one density per distribution, or sample can be a tensor of tensors with that shape, in which case the output is a tensor of densities such that the densities at index i in the output are given by the samples at index i in sample parameterized by the values of mu and alpha at index i.
Examples:
random_pdf_generalized_negative_binomial(sample=[[1, 2, 3, 4]], alpha=[1], mu=[1]) =
    [[0.25, 0.125, 0.0625, 0.03125]]
sample = [[1,2,3,4], [1,2,3,4]]
random_pdf_generalized_negative_binomial(sample=sample, alpha=[1, 0.6666], mu=[1, 1.5]) =
    [[0.25, 0.125, 0.0625, 0.03125],
     [0.26517063, 0.16573331, 0.09667706, 0.05437994]]
Defined in src/operator/random/pdf_op.cc:L313
- Parameters
sample (Symbol) – Samples from the distributions.
mu (Symbol) – Means of the distributions.
is_log (boolean, optional, default=0) – If set, compute the density of the log-probability instead of the probability.
alpha (Symbol) – Alpha (dispersion) parameters of the distributions.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_pdf_negative_binomial(sample=None, k=None, p=None, is_log=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the value of the PDF of samples of negative binomial distributions with parameters k (failure limit) and p (failure probability).
k and p must have the same shape, which must match the leftmost subshape of sample. That is, sample can have the same shape as k and p, in which case the output contains one density per distribution, or sample can be a tensor of tensors with that shape, in which case the output is a tensor of densities such that the densities at index i in the output are given by the samples at index i in sample parameterized by the values of k and p at index i.
Examples:
random_pdf_negative_binomial(sample=[[1,2,3,4]], k=[1], p=[0.5]) =
    [[0.25, 0.125, 0.0625, 0.03125]]
// Note that k may be real-valued
sample = [[1,2,3,4], [1,2,3,4]]
random_pdf_negative_binomial(sample=sample, k=[1, 1.5], p=[0.5, 0.5]) =
    [[0.25, 0.125, 0.0625, 0.03125],
     [0.26516506, 0.16572815, 0.09667476, 0.05437956]]
Defined in src/operator/random/pdf_op.cc:L309
- Parameters
sample (Symbol) – Samples from the distributions.
k (Symbol) – Limits of unsuccessful experiments.
is_log (boolean, optional, default=0) – If set, compute the density of the log-probability instead of the probability.
p (Symbol) – Failure probabilities in each experiment.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_pdf_normal(sample=None, mu=None, sigma=None, is_log=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the value of the PDF of sample of normal distributions with parameters mu (mean) and sigma (standard deviation).
mu and sigma must have the same shape, which must match the leftmost subshape of sample. That is, sample can have the same shape as mu and sigma, in which case the output contains one density per distribution, or sample can be a tensor of tensors with that shape, in which case the output is a tensor of densities such that the densities at index i in the output are given by the samples at index i in sample parameterized by the values of mu and sigma at index i.
Examples:
sample = [[-2, -1, 0, 1, 2]]
random_pdf_normal(sample=sample, mu=[0], sigma=[1]) =
    [[0.05399097, 0.24197073, 0.3989423, 0.24197073, 0.05399097]]
random_pdf_normal(sample=sample*2, mu=[0,0], sigma=[1,2]) =
    [[0.05399097, 0.24197073, 0.3989423, 0.24197073, 0.05399097],
     [0.12098537, 0.17603266, 0.19947115, 0.17603266, 0.12098537]]
Defined in src/operator/random/pdf_op.cc:L299
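The densities above are just the Gaussian PDF evaluated per element. A quick pure-Python check of the first row against the formula \(f(x) = e^{-(x-\mu)^2/(2\sigma^2)} / \sqrt{2\pi\sigma^2}\) (no MXNet call involved):
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

print([round(normal_pdf(v), 8) for v in [-2, -1, 0, 1, 2]])
# [0.05399097, 0.24197072, 0.39894228, 0.24197072, 0.05399097]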
- Parameters
sample (Symbol) – Samples from the distributions.
mu (Symbol) – Means of the distributions.
is_log (boolean, optional, default=0) – If set, compute the density of the log-probability instead of the probability.
sigma (Symbol) – Standard deviations of the distributions.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_pdf_poisson(sample=None, lam=None, is_log=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the value of the PDF of sample of Poisson distributions with parameters lam (rate).
The shape of lam must match the leftmost subshape of sample. That is, sample can have the same shape as lam, in which case the output contains one density per distribution, or sample can be a tensor of tensors with that shape, in which case the output is a tensor of densities such that the densities at index i in the output are given by the samples at index i in sample parameterized by the value of lam at index i.
Examples:
random_pdf_poisson(sample=[[0,1,2,3]], lam=[1]) =
    [[0.36787945, 0.36787945, 0.18393973, 0.06131324]]
sample = [[0,1,2,3], [0,1,2,3], [0,1,2,3]]
random_pdf_poisson(sample=sample, lam=[1,2,3]) =
    [[0.36787945, 0.36787945, 0.18393973, 0.06131324],
     [0.13533528, 0.27067056, 0.27067056, 0.18044704],
     [0.04978707, 0.14936121, 0.22404182, 0.22404182]]
Defined in src/operator/random/pdf_op.cc:L306
- Parameters
sample (Symbol) – Samples from the distributions.
lam (Symbol) – Lambda (rate) parameters of the distributions.
is_log (boolean, optional, default=0) – If set, compute the density of the log-probability instead of the probability.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_pdf_uniform(sample=None, low=None, high=None, is_log=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the value of the PDF of sample of uniform distributions on the intervals given by [low, high).
low and high must have the same shape, which must match the leftmost subshape of sample. That is, sample can have the same shape as low and high, in which case the output contains one density per distribution, or sample can be a tensor of tensors with that shape, in which case the output is a tensor of densities such that the densities at index i in the output are given by the samples at index i in sample parameterized by the values of low and high at index i.
Examples:
random_pdf_uniform(sample=[[1,2,3,4]], low=[0], high=[10]) = [0.1, 0.1, 0.1, 0.1]
sample = [[[1, 2, 3],
           [1, 2, 3]],
          [[1, 2, 3],
           [1, 2, 3]]]
low  = [[0, 0],
        [0, 0]]
high = [[ 5, 10],
        [15, 20]]
random_pdf_uniform(sample=sample, low=low, high=high) =
    [[[0.2,     0.2,     0.2    ],
      [0.1,     0.1,     0.1    ]],
     [[0.06667, 0.06667, 0.06667],
      [0.05,    0.05,    0.05   ]]]
Defined in src/operator/random/pdf_op.cc:L297
- Parameters
sample (Symbol) – Samples from the distributions.
low (Symbol) – Lower bounds of the distributions.
is_log (boolean, optional, default=0) – If set, compute the density of the log-probability instead of the probability.
high (Symbol) – Upper bounds of the distributions.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_poisson(lam=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a Poisson distribution.
Samples are distributed according to a Poisson distribution parametrized by lambda (rate). Samples will always be returned as a floating point data type.
Example:
poisson(lam=4, shape=(2,2)) = [[ 5., 2.], [ 4., 6.]]
Defined in src/operator/random/sample_op.cc:L149
- Parameters
lam (float, optional, default=1) – Lambda parameter (rate) of the Poisson distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_randint(low=_Null, high=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a discrete uniform distribution.
Samples are uniformly distributed over the half-open interval [low, high) (includes low, but excludes high).
Example:
randint(low=0, high=5, shape=(2,2)) = [[ 0, 2], [ 3, 1]]
Defined in src/operator/random/sample_op.cc:L193
- Parameters
low (long, required) – Lower bound of the distribution.
high (long, required) – Upper bound of the distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'int32', 'int64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to int32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.random_uniform(low=_Null, high=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a uniform distribution.
Note
The existing alias uniform is deprecated.
Samples are uniformly distributed over the half-open interval [low, high) (includes low, but excludes high).
Example:
uniform(low=0, high=1, shape=(2,2)) = [[ 0.60276335, 0.85794562], [ 0.54488319, 0.84725171]]
Defined in src/operator/random/sample_op.cc:L95
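A minimal usage sketch (assumes MXNet 1.x; since symbols are declarative, the draw happens when the graph is evaluated, and the values differ per seed):
import mxnet as mx

s = mx.sym.random_uniform(low=0, high=1, shape=(2, 2))
print(s.eval(ctx=mx.cpu())[0].asnumpy())  # e.g. [[0.60 0.86], [0.54 0.85]]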
- Parameters
low (float, optional, default=0) – Lower bound of the distribution.
high (float, optional, default=1) – Upper bound of the distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.ravel_multi_index(data=None, shape=_Null, name=None, attr=None, out=None, **kwargs)¶
Converts a batch of index arrays into an array of flat indices. The operator follows numpy conventions, so a single multi-index is given by a column of the input matrix. The leading dimension may be left unspecified by using -1 as a placeholder.
Examples:
A = [[3,6,6],[4,5,1]]
ravel(A, shape=(7,6)) = [22,41,37]
ravel(A, shape=(-1,6)) = [22,41,37]
Defined in src/operator/tensor/ravel.cc:L41
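Because the operator follows NumPy conventions, it can be cross-checked against numpy.ravel_multi_index; the sketch below uses the imperative NDArray twin of this symbol op (assumes MXNet 1.x):
import numpy as np
import mxnet as mx

A = mx.nd.array([[3, 6, 6], [4, 5, 1]])
print(mx.nd.ravel_multi_index(A, shape=(7, 6)).asnumpy())         # [22. 41. 37.]
print(np.ravel_multi_index([[3, 6, 6], [4, 5, 1]], dims=(7, 6)))  # [22 41 37]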
mxnet.symbol.rcbrt(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise inverse cube-root value of the input.
\[rcbrt(x) = 1/\sqrt[3]{x}\]
Example:
rcbrt([1,8,-125]) = [1.0, 0.5, -0.2]
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L323
mxnet.symbol.reciprocal(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns the reciprocal of the argument, element-wise.
Calculates 1/x.
Example:
reciprocal([-2, 1, 3, 1.6, 0.2]) = [-0.5, 1.0, 0.33333334, 0.625, 5.0]
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L43
mxnet.symbol.relu(data=None, name=None, attr=None, out=None, **kwargs)¶
Computes rectified linear activation.
\[max(features, 0)\]
The storage type of relu output depends upon the input storage type:
relu(default) = default
relu(row_sparse) = row_sparse
relu(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L85
mxnet.symbol.repeat(data=None, repeats=_Null, axis=_Null, name=None, attr=None, out=None, **kwargs)¶
Repeats elements of an array. By default, repeat flattens the input array into 1-D and then repeats the elements:
x = [[ 1, 2],
     [ 3, 4]]
repeat(x, repeats=2) = [ 1., 1., 2., 2., 3., 3., 4., 4.]
The parameter axis specifies the axis along which to perform repeat:
repeat(x, repeats=2, axis=1) = [[ 1., 1., 2., 2.],
                                [ 3., 3., 4., 4.]]
repeat(x, repeats=2, axis=0) = [[ 1., 2.],
                                [ 1., 2.],
                                [ 3., 4.],
                                [ 3., 4.]]
repeat(x, repeats=2, axis=-1) = [[ 1., 1., 2., 2.],
                                 [ 3., 3., 4., 4.]]
Defined in src/operator/tensor/matrix_op.cc:L743
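A minimal usage sketch (names are ours; assumes MXNet 1.x):
import mxnet as mx

x = mx.sym.Variable('x')
r = mx.sym.repeat(x, repeats=2, axis=0)
print(r.eval(ctx=mx.cpu(), x=mx.nd.array([[1, 2], [3, 4]]))[0].asnumpy())
# [[1. 2.]
#  [1. 2.]
#  [3. 4.]
#  [3. 4.]]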
- Parameters
data (Symbol) – Input data array
repeats (int, required) – The number of repetitions for each element.
axis (int or None, optional, default='None') – The axis along which to repeat values. The negative numbers are interpreted counting from the backward. By default, use the flattened input array, and return a flat output array.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.reset_arrays(*data, **kwargs)¶
Set multiple arrays to zero.
Defined in src/operator/contrib/reset_arrays.cc:L35
mxnet.symbol.reshape(data=None, shape=_Null, reverse=_Null, target_shape=_Null, keep_highest=_Null, name=None, attr=None, out=None, **kwargs)¶
Reshapes the input array.
Note
Reshape is deprecated, use reshape
Given an array and a shape, this function returns a copy of the array in the new shape. The shape is a tuple of integers such as (2,3,4). The size of the new shape should be the same as the size of the input array.
Example:
reshape([1,2,3,4], shape=(2,2)) = [[1,2],
                                   [3,4]]
Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:
0 copies this dimension from the input to the output shape.
Example:
- input shape = (2,3,4), shape = (4,0,2), output shape = (4,3,2)
- input shape = (2,3,4), shape = (2,0,0), output shape = (2,3,4)
-1 infers the dimension of the output shape by using the remainder of the input dimensions, keeping the size of the new array the same as that of the input array. At most one dimension of shape can be -1.
Example:
- input shape = (2,3,4), shape = (6,1,-1), output shape = (6,1,4)
- input shape = (2,3,4), shape = (3,-1,8), output shape = (3,1,8)
- input shape = (2,3,4), shape = (-1,), output shape = (24,)
-2 copies all/remainder of the input dimensions to the output shape.
Example:
- input shape = (2,3,4), shape = (-2,), output shape = (2,3,4)
- input shape = (2,3,4), shape = (2,-2), output shape = (2,3,4)
- input shape = (2,3,4), shape = (-2,1,1), output shape = (2,3,4,1,1)
-3 uses the product of two consecutive dimensions of the input shape as the output dimension.
Example:
- input shape = (2,3,4), shape = (-3,4), output shape = (6,4)
- input shape = (2,3,4,5), shape = (-3,-3), output shape = (6,20)
- input shape = (2,3,4), shape = (0,-3), output shape = (2,12)
- input shape = (2,3,4), shape = (-3,-2), output shape = (6,4)
-4 splits one dimension of the input into the two dimensions passed subsequent to -4 in shape (which can contain -1).
Example:
- input shape = (2,3,4), shape = (-4,1,2,-2), output shape = (1,2,3,4)
- input shape = (2,3,4), shape = (2,-4,-1,3,-2), output shape = (2,1,3,4)
If the argument reverse is set to 1, then the special values are inferred from right to left.
Example:
- without reverse=1, for input shape = (10,5,4), shape = (-1,0), output shape would be (40,5)
- with reverse=1, output shape will be (50,4)
Defined in src/operator/tensor/matrix_op.cc:L174
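A minimal usage sketch combining two of the special values (names are ours; assumes MXNet 1.x):
import mxnet as mx

x = mx.sym.Variable('x')
# 0 copies the first input dimension; -1 infers the remaining one.
y = mx.sym.reshape(x, shape=(0, -1))
print(y.eval(ctx=mx.cpu(), x=mx.nd.zeros((2, 3, 4)))[0].shape)  # (2, 12)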
- Parameters
data (Symbol) – Input data to reshape.
shape (Shape(tuple), optional, default=[]) – The target shape
reverse (boolean, optional, default=0) – If true then the special values are inferred from right to left
target_shape (Shape(tuple), optional, default=[]) – (Deprecated! Use shape instead.) Target new shape. One and only one dim can be 0, in which case it will be inferred from the rest of dims.
keep_highest (boolean, optional, default=0) – (Deprecated! Use shape instead.) Whether to keep the highest dim unchanged. If set to true, then the first dim in target_shape is ignored, and always fixed as input.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.reshape_like(lhs=None, rhs=None, lhs_begin=_Null, lhs_end=_Null, rhs_begin=_Null, rhs_end=_Null, name=None, attr=None, out=None, **kwargs)¶
Reshape some or all dimensions of lhs to have the same shape as some or all dimensions of rhs.
Returns a view of the lhs array with a new shape without altering any data.
Example:
x = [1, 2, 3, 4, 5, 6]
y = [[0, -4], [3, 2], [2, 2]]
reshape_like(x, y) = [[1, 2], [3, 4], [5, 6]]
More precise control over how dimensions are inherited is achieved by specifying slices over the lhs and rhs array dimensions. Only the sliced lhs dimensions are reshaped to the rhs sliced dimensions, with the non-sliced lhs dimensions staying the same.
Examples:
- lhs shape = (30,7), rhs shape = (15,2,4), lhs_begin=0, lhs_end=1, rhs_begin=0, rhs_end=2, output shape = (15,2,7)
- lhs shape = (3, 5), rhs shape = (1,15,4), lhs_begin=0, lhs_end=2, rhs_begin=1, rhs_end=2, output shape = (15)
Negative indices are supported, and None can be used for either lhs_end or rhs_end to indicate the end of the range.
Example:
- lhs shape = (30, 12), rhs shape = (4, 2, 2, 3), lhs_begin=-1, lhs_end=None, rhs_begin=1, rhs_end=None, output shape = (30, 2, 2, 3)
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L511
- Parameters
lhs (Symbol) – First input.
rhs (Symbol) – Second input.
lhs_begin (int or None, optional, default='None') – Defaults to 0. The beginning index along which the lhs dimensions are to be reshaped. Supports negative indices.
lhs_end (int or None, optional, default='None') – Defaults to None. The ending index along which the lhs dimensions are to be used for reshaping. Supports negative indices.
rhs_begin (int or None, optional, default='None') – Defaults to 0. The beginning index along which the rhs dimensions are to be used for reshaping. Supports negative indices.
rhs_end (int or None, optional, default='None') – Defaults to None. The ending index along which the rhs dimensions are to be used for reshaping. Supports negative indices.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.reverse(data=None, axis=_Null, name=None, attr=None, out=None, **kwargs)¶
Reverses the order of elements along given axis while preserving array shape. Note: reverse and flip are equivalent. We use reverse in the following examples.
Examples:
x = [[ 0., 1., 2., 3., 4.],
     [ 5., 6., 7., 8., 9.]]
reverse(x, axis=0) = [[ 5., 6., 7., 8., 9.],
                      [ 0., 1., 2., 3., 4.]]
reverse(x, axis=1) = [[ 4., 3., 2., 1., 0.],
                      [ 9., 8., 7., 6., 5.]]
Defined in src/operator/tensor/matrix_op.cc:L831
mxnet.symbol.rint(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise rounded value to the nearest integer of the input.
Note
For input n.5, rint returns n while round returns n+1.
For input -n.5, both rint and round return -n-1.
Example:
rint([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 1., -2., 2., 2.]
The storage type of rint output depends upon the input storage type:
rint(default) = default
rint(row_sparse) = row_sparse
rint(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L798
mxnet.symbol.rmsprop_update(weight=None, grad=None, n=None, lr=_Null, gamma1=_Null, epsilon=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, clip_weights=_Null, name=None, attr=None, out=None, **kwargs)¶
Update function for RMSProp optimizer.
RMSProp is a variant of stochastic gradient descent where the gradients are divided by a cache which grows with the sum of squares of recent gradients.
RMSProp is similar to AdaGrad, a popular variant of SGD which adaptively tunes the learning rate of each parameter. AdaGrad lowers the learning rate for each parameter monotonically over the course of training. While this is analytically motivated for convex optimizations, it may not be ideal for non-convex problems. RMSProp deals with this heuristically by allowing the learning rates to rebound as the denominator decays over time.
Define the Root Mean Square (RMS) error criterion of the gradient as \(RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}\), where \(g\) represents gradient and \(E[g^2]_t\) is the decaying average over past squared gradient.
The \(E[g^2]_t\) is given by:
\[E[g^2]_t = \gamma * E[g^2]_{t-1} + (1-\gamma) * g_t^2\]
The update step is
\[\theta_{t+1} = \theta_t - \frac{\eta}{RMS[g]_t} g_t\]
The RMSProp code follows the version in http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf Tieleman & Hinton, 2012.
Hinton suggests the momentum term \(\gamma\) to be 0.9 and the learning rate \(\eta\) to be 0.001.
Defined in src/operator/optimizer_op.cc:L796
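A NumPy sketch of these two equations (our own illustration; n is the running \(E[g^2]\) state the operator maintains, and folding wd into the gradient is our assumption):
import numpy as np

def rmsprop_step(weight, grad, n, lr, gamma1=0.95, epsilon=1e-8, wd=0.0):
    grad = grad + wd * weight
    # E[g^2]_t = gamma * E[g^2]_{t-1} + (1 - gamma) * g_t^2
    n = gamma1 * n + (1.0 - gamma1) * grad ** 2
    # theta_{t+1} = theta_t - eta / RMS[g]_t * g_t, RMS[g]_t = sqrt(E[g^2]_t + eps)
    weight = weight - lr * grad / np.sqrt(n + epsilon)
    return weight, n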
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
n (Symbol) – n
lr (float, required) – Learning rate
gamma1 (float, optional, default=0.949999988) – The decay rate of momentum estimates.
epsilon (float, optional, default=9.99999994e-09) – A small constant for numerical stability.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
clip_weights (float, optional, default=-1) – Clip weights to the range of [-clip_weights, clip_weights] If clip_weights <= 0, weight clipping is turned off. weights = max(min(weights, clip_weights), -clip_weights).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.rmspropalex_update(weight=None, grad=None, n=None, g=None, delta=None, lr=_Null, gamma1=_Null, gamma2=_Null, epsilon=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, clip_weights=_Null, name=None, attr=None, out=None, **kwargs)¶
Update function for RMSPropAlex optimizer.
RMSPropAlex is a non-centered version of RMSProp.
Let \(E[g^2]_t\) be the decaying average over past squared gradient and \(E[g]_t\) be the decaying average over past gradient.
\[\begin{split}E[g^2]_t = \gamma_1 * E[g^2]_{t-1} + (1 - \gamma_1) * g_t^2\\ E[g]_t = \gamma_1 * E[g]_{t-1} + (1 - \gamma_1) * g_t\\ \Delta_t = \gamma_2 * \Delta_{t-1} - \frac{\eta}{\sqrt{E[g^2]_t - E[g]_t^2 + \epsilon}} g_t\end{split}\]
The update step is
\[\theta_{t+1} = \theta_t + \Delta_t\]
The RMSPropAlex code follows the version in http://arxiv.org/pdf/1308.0850v5.pdf Eq(38) - Eq(45) by Alex Graves, 2013.
Graves suggests the momentum term \(\gamma_1\) to be 0.95, \(\gamma_2\) to be 0.9 and the learning rate \(\eta\) to be 0.0001.
Defined in src/operator/optimizer_op.cc:L835
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
n (Symbol) – n
g (Symbol) – g
delta (Symbol) – delta
lr (float, required) – Learning rate
gamma1 (float, optional, default=0.949999988) – Decay rate.
gamma2 (float, optional, default=0.899999976) – Decay rate.
epsilon (float, optional, default=9.99999994e-09) – A small constant for numerical stability.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
clip_weights (float, optional, default=-1) – Clip weights to the range of [-clip_weights, clip_weights] If clip_weights <= 0, weight clipping is turned off. weights = max(min(weights, clip_weights), -clip_weights).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.round(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise rounded value to the nearest integer of the input.
Example:
round([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 2., -2., 2., 2.]
The storage type of round output depends upon the input storage type:
round(default) = default
round(row_sparse) = row_sparse
round(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L777
mxnet.symbol.rsqrt(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise inverse square-root value of the input.
\[rsqrt(x) = 1/\sqrt{x}\]
Example:
rsqrt([4,9,16]) = [0.5, 0.33333334, 0.25]
The storage type of rsqrt output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L221
mxnet.symbol.sample_exponential(lam=None, shape=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Concurrent sampling from multiple exponential distributions with parameters lambda (rate).
The parameters of the distributions are provided as an input array. Let [s] be the shape of the input array, n be the dimension of [s], [t] be the shape specified as the parameter of the operator, and m be the dimension of [t]. Then the output will be a (n+m)-dimensional array with shape [s]x[t].
For any valid n-dimensional index i with respect to the input array, output[i] will be an m-dimensional array that holds randomly drawn samples from the distribution which is parameterized by the input value at index i. If the shape parameter of the operator is not set, then one sample will be drawn per distribution and the output array has the same shape as the input array.
Examples:
lam = [ 1.0, 8.5 ]
// Draw a single sample for each distribution
sample_exponential(lam) = [ 0.51837951, 0.09994757]
// Draw a vector containing two samples for each distribution
sample_exponential(lam, shape=(2)) = [[ 0.51837951, 0.19866663],
                                      [ 0.09994757, 0.50447971]]
Defined in src/operator/random/multisample_op.cc:L283
- Parameters
lam (Symbol) – Lambda (rate) parameters of the distributions.
shape (Shape(tuple), optional, default=[]) – Shape to be sampled from each random distribution.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
mxnet.symbol.sample_gamma(alpha=None, beta=None, shape=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Concurrent sampling from multiple gamma distributions with parameters alpha (shape) and beta (scale).
The parameters of the distributions are provided as input arrays. Let [s] be the shape of the input arrays, n be the dimension of [s], [t] be the shape specified as the parameter of the operator, and m be the dimension of [t]. Then the output will be a (n+m)-dimensional array with shape [s]x[t].
For any valid n-dimensional index i with respect to the input arrays, output[i] will be an m-dimensional array that holds randomly drawn samples from the distribution which is parameterized by the input values at index i. If the shape parameter of the operator is not set, then one sample will be drawn per distribution and the output array has the same shape as the input arrays.
Examples:
alpha = [ 0.0, 2.5 ]
beta = [ 1.0, 0.7 ]
// Draw a single sample for each distribution
sample_gamma(alpha, beta) = [ 0. , 2.25797319]
// Draw a vector containing two samples for each distribution
sample_gamma(alpha, beta, shape=(2)) = [[ 0. , 0. ],
                                        [ 2.25797319, 1.70734084]]
Defined in src/operator/random/multisample_op.cc:L281
- Parameters
alpha (Symbol) – Alpha (shape) parameters of the distributions.
shape (Shape(tuple), optional, default=[]) – Shape to be sampled from each random distribution.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
beta (Symbol) – Beta (scale) parameters of the distributions.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.sample_generalized_negative_binomial(mu=None, alpha=None, shape=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Concurrent sampling from multiple generalized negative binomial distributions with parameters mu (mean) and alpha (dispersion).
The parameters of the distributions are provided as input arrays. Let [s] be the shape of the input arrays, n be the dimension of [s], [t] be the shape specified as the parameter of the operator, and m be the dimension of [t]. Then the output will be a (n+m)-dimensional array with shape [s]x[t].
For any valid n-dimensional index i with respect to the input arrays, output[i] will be an m-dimensional array that holds randomly drawn samples from the distribution which is parameterized by the input values at index i. If the shape parameter of the operator is not set, then one sample will be drawn per distribution and the output array has the same shape as the input arrays.
Samples will always be returned as a floating point data type.
Examples:
mu = [ 2.0, 2.5 ]
alpha = [ 1.0, 0.1 ]

// Draw a single sample for each distribution
sample_generalized_negative_binomial(mu, alpha) = [ 0., 3.]

// Draw a vector containing two samples for each distribution
sample_generalized_negative_binomial(mu, alpha, shape=(2)) = [[ 0., 3.],
                                                              [ 3., 1.]]
Defined in src/operator/random/multisample_op.cc:L292
- Parameters
mu (Symbol) – Means of the distributions.
shape (Shape(tuple), optional, default=[]) – Shape to be sampled from each random distribution.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
alpha (Symbol) – Alpha (dispersion) parameters of the distributions.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.sample_multinomial(data=None, shape=_Null, get_prob=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Concurrent sampling from multiple multinomial distributions.
data is an n-dimensional array whose last dimension has length k, where k is the number of possible outcomes of each multinomial distribution. This operator will draw shape samples from each distribution. If shape is empty, one sample will be drawn from each distribution.
If get_prob is true, a second array containing the log likelihood of the drawn samples will also be returned. This is usually used in reinforcement learning, where you can provide the reward as the head gradient for this array to estimate gradients.
Note that the input distribution must be normalized, i.e. data must sum to 1 along its last axis.
Examples:
probs = [[0, 0.1, 0.2, 0.3, 0.4],
         [0.4, 0.3, 0.2, 0.1, 0]]

// Draw a single sample for each distribution
sample_multinomial(probs) = [3, 0]

// Draw a vector containing two samples for each distribution
sample_multinomial(probs, shape=(2)) = [[4, 2],
                                        [0, 0]]

// requests log likelihood
sample_multinomial(probs, get_prob=True) = [2, 1], [0.2, 0.3]
- Parameters
data (Symbol) – Distribution probabilities. Must sum to one on the last axis.
shape (Shape(tuple), optional, default=[]) – Shape to be sampled from each random distribution.
get_prob (boolean, optional, default=0) – Whether to also return the log probability of sampled result. This is usually used for differentiating through stochastic variables, e.g. in reinforcement learning.
dtype ({'float16', 'float32', 'float64', 'int32', 'uint8'},optional, default='int32') – DType of the output in case this can’t be inferred.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
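As a hedged sketch of the two-output case (assuming a CPU context; the variable names are arbitrary), with get_prob=True the resulting symbol exposes both the samples and their log likelihoods:

import mxnet as mx

probs = mx.sym.Variable('probs')
s = mx.sym.sample_multinomial(probs, get_prob=True)  # a symbol with two outputs
sample, log_p = s.eval(ctx=mx.cpu(),
                       probs=mx.nd.array([[0.0, 0.1, 0.2, 0.3, 0.4],
                                          [0.4, 0.3, 0.2, 0.1, 0.0]]))
print(sample.shape, log_p.shape)  # (2,) (2,): one draw and one log likelihood per row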
-
mxnet.symbol.sample_negative_binomial(k=None, p=None, shape=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Concurrent sampling from multiple negative binomial distributions with parameters k (failure limit) and p (failure probability).
The parameters of the distributions are provided as input arrays. Let [s] be the shape of the input arrays, n be the dimension of [s], [t] be the shape specified as the parameter of the operator, and m be the dimension of [t]. Then the output will be a (n+m)-dimensional array with shape [s]x[t].
For any valid n-dimensional index i with respect to the input arrays, output[i] will be an m-dimensional array that holds randomly drawn samples from the distribution which is parameterized by the input values at index i. If the shape parameter of the operator is not set, then one sample will be drawn per distribution and the output array has the same shape as the input arrays.
Samples will always be returned as a floating point data type.
Examples:
k = [ 20, 49 ]
p = [ 0.4, 0.77 ]

// Draw a single sample for each distribution
sample_negative_binomial(k, p) = [ 15., 16.]

// Draw a vector containing two samples for each distribution
sample_negative_binomial(k, p, shape=(2)) = [[ 15., 50.],
                                             [ 16., 12.]]
Defined in src/operator/random/multisample_op.cc:L288
- Parameters
k (Symbol) – Limits of unsuccessful experiments.
shape (Shape(tuple), optional, default=[]) – Shape to be sampled from each random distribution.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
p (Symbol) – Failure probabilities in each experiment.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.sample_normal(mu=None, sigma=None, shape=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Concurrent sampling from multiple normal distributions with parameters mu (mean) and sigma (standard deviation).
The parameters of the distributions are provided as input arrays. Let [s] be the shape of the input arrays, n be the dimension of [s], [t] be the shape specified as the parameter of the operator, and m be the dimension of [t]. Then the output will be a (n+m)-dimensional array with shape [s]x[t].
For any valid n-dimensional index i with respect to the input arrays, output[i] will be an m-dimensional array that holds randomly drawn samples from the distribution which is parameterized by the input values at index i. If the shape parameter of the operator is not set, then one sample will be drawn per distribution and the output array has the same shape as the input arrays.
Examples:
mu = [ 0.0, 2.5 ]
sigma = [ 1.0, 3.7 ]

// Draw a single sample for each distribution
sample_normal(mu, sigma) = [-0.56410581, 0.95934606]

// Draw a vector containing two samples for each distribution
sample_normal(mu, sigma, shape=(2)) = [[-0.56410581, 0.2928229 ],
                                       [ 0.95934606, 4.48287058]]
Defined in src/operator/random/multisample_op.cc:L278
- Parameters
mu (Symbol) – Means of the distributions.
shape (Shape(tuple), optional, default=[]) – Shape to be sampled from each random distribution.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
sigma (Symbol) – Standard deviations of the distributions.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.sample_poisson(lam=None, shape=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Concurrent sampling from multiple Poisson distributions with parameters lambda (rate).
The parameters of the distributions are provided as an input array. Let [s] be the shape of the input array, n be the dimension of [s], [t] be the shape specified as the parameter of the operator, and m be the dimension of [t]. Then the output will be a (n+m)-dimensional array with shape [s]x[t].
For any valid n-dimensional index i with respect to the input array, output[i] will be an m-dimensional array that holds randomly drawn samples from the distribution which is parameterized by the input value at index i. If the shape parameter of the operator is not set, then one sample will be drawn per distribution and the output array has the same shape as the input array.
Samples will always be returned as a floating point data type.
Examples:
lam = [ 1.0, 8.5 ]

// Draw a single sample for each distribution
sample_poisson(lam) = [ 0., 13.]

// Draw a vector containing two samples for each distribution
sample_poisson(lam, shape=(2)) = [[ 0., 4.],
                                  [ 13., 8.]]
Defined in src/operator/random/multisample_op.cc:L285
- Parameters
lam (Symbol) – Lambda (rate) parameters of the distributions.
shape (Shape(tuple), optional, default=[]) – Shape to be sampled from each random distribution.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.sample_uniform(low=None, high=None, shape=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Concurrent sampling from multiple uniform distributions on the intervals given by [low, high).
The parameters of the distributions are provided as input arrays. Let [s] be the shape of the input arrays, n be the dimension of [s], [t] be the shape specified as the parameter of the operator, and m be the dimension of [t]. Then the output will be a (n+m)-dimensional array with shape [s]x[t].
For any valid n-dimensional index i with respect to the input arrays, output[i] will be an m-dimensional array that holds randomly drawn samples from the distribution which is parameterized by the input values at index i. If the shape parameter of the operator is not set, then one sample will be drawn per distribution and the output array has the same shape as the input arrays.
Examples:
low = [ 0.0, 2.5 ]
high = [ 1.0, 3.7 ]

// Draw a single sample for each distribution
sample_uniform(low, high) = [ 0.40451524, 3.18687344]

// Draw a vector containing two samples for each distribution
sample_uniform(low, high, shape=(2)) = [[ 0.40451524, 0.18017688],
                                        [ 3.18687344, 3.68352246]]
Defined in src/operator/random/multisample_op.cc:L276
- Parameters
low (Symbol) – Lower bounds of the distributions.
shape (Shape(tuple), optional, default=[]) – Shape to be sampled from each random distribution.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
high (Symbol) – Upper bounds of the distributions.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.scatter_nd(data=None, indices=None, shape=_Null, name=None, attr=None, out=None, **kwargs)¶
Scatters data into a new tensor according to indices.
Given data with shape (Y_0, …, Y_{K-1}, X_M, …, X_{N-1}) and indices with shape (M, Y_0, …, Y_{K-1}), the output will have shape (X_0, X_1, …, X_{N-1}), where M <= N. If M == N, data shape should simply be (Y_0, …, Y_{K-1}).
The elements in output are defined as follows:
output[indices[0, y_0, ..., y_{K-1}], ..., indices[M-1, y_0, ..., y_{K-1}], x_M, ..., x_{N-1}] = data[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}]
All other entries in output are 0.
Warning
If the indices have duplicates, the result will be non-deterministic and the gradient of scatter_nd will not be correct!!
Examples:
data = [2, 3, 0]
indices = [[1, 1, 0], [0, 1, 0]]
shape = (2, 2)
scatter_nd(data, indices, shape) = [[0, 0], [2, 3]]

data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
indices = [[0, 1], [1, 1]]
shape = (2, 2, 2, 2)
scatter_nd(data, indices, shape) = [[[[0, 0], [0, 0]],
                                     [[1, 2], [3, 4]]],
                                    [[[0, 0], [0, 0]],
                                     [[5, 6], [7, 8]]]]
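As a minimal sketch (assuming a CPU context; variable names are arbitrary), the first example above can be reproduced symbolically like this:

import mxnet as mx

data = mx.sym.Variable('data')
indices = mx.sym.Variable('indices')
s = mx.sym.scatter_nd(data=data, indices=indices, shape=(2, 2))
out = s.eval(ctx=mx.cpu(),
             data=mx.nd.array([2, 3, 0]),
             indices=mx.nd.array([[1, 1, 0], [0, 1, 0]]))[0]
print(out.asnumpy())  # [[0. 0.] [2. 3.]], matching the first example above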
-
mxnet.symbol.sgd_mom_update(weight=None, grad=None, mom=None, lr=_Null, momentum=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, name=None, attr=None, out=None, **kwargs)¶
Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
Momentum update has better convergence rates on neural networks. Mathematically it looks like below:
\[\begin{split}v_1 = -\alpha * \nabla J(W_0)\\ v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\ W_t = W_{t-1} + v_t\end{split}\]
It updates the weights using:
v = momentum * v - learning_rate * gradient
weight += v
Where the parameter momentum is the decay rate of momentum estimates at each epoch.
However, if grad's storage type is row_sparse, lazy_update is True, and weight's storage type is the same as momentum's storage type, only the row slices whose indices appear in grad.indices are updated (for both weight and momentum):

for row in gradient.indices:
    v[row] = momentum[row] * v[row] - learning_rate * gradient[row]
    weight[row] += v[row]
Defined in src/operator/optimizer_op.cc:L564
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mom (Symbol) – Momentum
lr (float, required) – Learning rate
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse and both weight and momentum have the same stype
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
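The update rule itself is easy to sanity-check in plain Python (a sketch of the math above, not the MXNet kernel):

momentum, lr = 0.9, 0.1
v, weight, grad = 0.0, 1.0, 0.5
for _ in range(3):
    v = momentum * v - lr * grad   # decay the velocity, then take a gradient step
    weight += v                    # apply the velocity to the weight
print(weight)  # weight after three momentum-SGD steps on a constant gradient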
-
mxnet.symbol.sgd_update(weight=None, grad=None, lr=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, name=None, attr=None, out=None, **kwargs)¶
Update function for Stochastic Gradient Descent (SGD) optimizer.
It updates the weights using:
weight = weight - learning_rate * (gradient + wd * weight)
However, if gradient is of row_sparse storage type and lazy_update is True, only the row slices whose indices appear in grad.indices are updated:

for row in gradient.indices:
    weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])
Defined in src/operator/optimizer_op.cc:L523
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
lr (float, required) – Learning rate
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
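A one-line plain-Python sketch of the dense rule above (weight decay folded into the gradient; the numbers are arbitrary):

lr, wd = 0.1, 0.01
weight, grad = 1.0, 0.5
weight = weight - lr * (grad + wd * weight)
print(weight)  # 0.949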
-
mxnet.symbol.shape_array(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns a 1D int64 array containing the shape of data.
Example:
shape_array([[1,2,3,4], [5,6,7,8]]) = [2,4]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L573
-
mxnet.symbol.shuffle(data=None, name=None, attr=None, out=None, **kwargs)¶
Randomly shuffles the elements.
This shuffles the array along the first axis. The order of the elements in each subarray does not change. For example, if a 2D array is given, the order of the rows randomly changes, but the order of the elements in each row does not change.
-
mxnet.symbol.sigmoid(data=None, name=None, attr=None, out=None, **kwargs)¶
Computes sigmoid of x element-wise.
\[y = 1 / (1 + exp(-x))\]
The storage type of sigmoid output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L119
-
mxnet.symbol.sign(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise sign of the input.
Example:
sign([-2, 0, 3]) = [-1, 0, 1]
The storage type of sign output depends upon the input storage type:
sign(default) = default
sign(row_sparse) = row_sparse
sign(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L758
-
mxnet.symbol.signsgd_update(weight=None, grad=None, lr=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, name=None, attr=None, out=None, **kwargs)¶
Update function for SignSGD optimizer.
\[\begin{split}g_t = \nabla J(W_{t-1})\\ W_t = W_{t-1} - \eta_t \text{sign}(g_t)\end{split}\]
It updates the weights using:
weight = weight - learning_rate * sign(gradient)
Note
sparse ndarray not supported for this optimizer yet.
Defined in src/operator/optimizer_op.cc:L62
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
lr (float, required) – Learning rate
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.signum_update(weight=None, grad=None, mom=None, lr=_Null, momentum=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, wd_lh=_Null, name=None, attr=None, out=None, **kwargs)¶
SIGN momentUM (Signum) optimizer.
\[\begin{split}g_t = \nabla J(W_{t-1})\\ m_t = \beta m_{t-1} + (1 - \beta) g_t\\ W_t = W_{t-1} - \eta_t \text{sign}(m_t)\end{split}\]
It updates the weights using:

state = momentum * state + (1-momentum) * gradient
weight = weight - learning_rate * sign(state)

Where the parameter momentum is the decay rate of momentum estimates at each epoch.
Note
sparse ndarray not supported for this optimizer yet.
Defined in src/operator/optimizer_op.cc:L91
- Parameters
weight (Symbol) – Weight
grad (Symbol) – Gradient
mom (Symbol) – Momentum
lr (float, required) – Learning rate
momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
wd_lh (float, optional, default=0) – The amount of weight decay that does not go into the gradient/momentum calculations; otherwise, weight decay is applied algorithmically only.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.sin(data=None, name=None, attr=None, out=None, **kwargs)¶
Computes the element-wise sine of the input array.
The input should be in radians (\(2\pi\) rad equals 360 degrees).
\[sin([0, \pi/4, \pi/2]) = [0, 0.707, 1]\]
The storage type of sin output depends upon the input storage type:
sin(default) = default
sin(row_sparse) = row_sparse
sin(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L47
-
mxnet.symbol.sinh(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns the hyperbolic sine of the input array, computed element-wise.
\[sinh(x) = 0.5\times(exp(x) - exp(-x))\]
The storage type of sinh output depends upon the input storage type:
sinh(default) = default
sinh(row_sparse) = row_sparse
sinh(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L371
-
mxnet.symbol.size_array(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns a 1D int64 array containing the size of data.
Example:
size_array([[1,2,3,4], [5,6,7,8]]) = [8]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L624
-
mxnet.symbol.slice(data=None, begin=_Null, end=_Null, step=_Null, name=None, attr=None, out=None, **kwargs)¶
Slices a region of the array.
Note
crop is deprecated. Use slice instead.
This function returns a sliced array between the indices given by begin and end with the corresponding step. For an input array of shape=(d_0, d_1, ..., d_n-1), a slice operation with begin=(b_0, b_1, ..., b_m-1), end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1), where m <= n, results in an array with the shape (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).
The resulting array's k-th dimension contains elements from the k-th dimension of the input array starting from index b_k (inclusive) with step s_k until reaching e_k (exclusive). If the k-th element is None in the sequence of begin, end, and step, the following rule is used to set default values: if s_k is None, set s_k=1; if s_k > 0, set b_k=0, e_k=d_k; else, set b_k=d_k-1, e_k=-1.
The storage type of slice output depends on the storage types of inputs:
slice(csr) = csr
otherwise, slice generates output with default storage
Note
When the input data storage type is csr, it only supports step=(), step=(None,), or step=(1,) to generate a csr output. For other step parameter values, it falls back to slicing a dense tensor.
Example:

x = [[ 1.,  2.,  3.,  4.],
     [ 5.,  6.,  7.,  8.],
     [ 9., 10., 11., 12.]]

slice(x, begin=(0,1), end=(2,4)) = [[ 2.,  3.,  4.],
                                    [ 6.,  7.,  8.]]
slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[9., 11.],
                                                          [5.,  7.],
                                                          [1.,  3.]]
Defined in src/operator/tensor/matrix_op.cc:L481
- Parameters
data (Symbol) – Source input
begin (Shape(tuple), required) – starting indices for the slice operation, supports negative indices.
end (Shape(tuple), required) – ending indices for the slice operation, supports negative indices.
step (Shape(tuple), optional, default=[]) – step for the slice operation, supports negative values.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
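For illustration, a minimal sketch of the first example above through the symbolic API (assuming a CPU context; names are arbitrary):

import mxnet as mx

x = mx.sym.Variable('x')
s = mx.sym.slice(x, begin=(0, 1), end=(2, 4))
out = s.eval(ctx=mx.cpu(),
             x=mx.nd.array([[1, 2, 3, 4],
                            [5, 6, 7, 8],
                            [9, 10, 11, 12]]))[0]
print(out.asnumpy())  # [[2. 3. 4.] [6. 7. 8.]]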
-
mxnet.symbol.slice_axis(data=None, axis=_Null, begin=_Null, end=_Null, name=None, attr=None, out=None, **kwargs)¶
Slices along a given axis. Returns an array slice along a given axis starting from the begin index to the end index.
Examples:
x = [[ 1.,  2.,  3.,  4.],
     [ 5.,  6.,  7.,  8.],
     [ 9., 10., 11., 12.]]

slice_axis(x, axis=0, begin=1, end=3) = [[ 5.,  6.,  7.,  8.],
                                         [ 9., 10., 11., 12.]]
slice_axis(x, axis=1, begin=0, end=2) = [[ 1.,  2.],
                                         [ 5.,  6.],
                                         [ 9., 10.]]
slice_axis(x, axis=1, begin=-3, end=-1) = [[ 2.,  3.],
                                           [ 6.,  7.],
                                           [10., 11.]]
Defined in src/operator/tensor/matrix_op.cc:L570
- Parameters
data (Symbol) – Source input
axis (int, required) – Axis along which to be sliced, supports negative indexes.
begin (int, required) – The beginning index along the axis to be sliced, supports negative indexes.
end (int or None, required) – The ending index along the axis to be sliced, supports negative indexes.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.slice_like(data=None, shape_like=None, axes=_Null, name=None, attr=None, out=None, **kwargs)¶
Slices a region of the array like the shape of another array.
This function is similar to slice; however, the begin values are always 0 and the end values of specific axes are inferred from the second input shape_like.
Given the second shape_like input of shape=(d_0, d_1, ..., d_n-1), a slice_like operator with default empty axes performs the following operation:
out = slice(input, begin=(0, 0, ..., 0), end=(d_0, d_1, ..., d_n-1))
When axes is not empty, it is used to specify which axes are being sliced. Given a 4-d input data, a slice_like operator with axes=(0, 2, -1) performs the following operation:
out = slice(input, begin=(0, 0, 0, 0), end=(d_0, None, d_2, d_3))
Note that the first and second inputs are allowed to have different dimensions; however, you have to make sure the axes are specified and do not exceed the dimension limits. For example, given input_1 with shape=(2,3,4,5) and input_2 with shape=(1,2,3), it is not allowed to use
out = slice_like(a, b)
because the ndim of input_1 is 4 and the ndim of input_2 is 3. The following is allowed in this situation:
out = slice_like(a, b, axes=(0, 2))
Example:

x = [[ 1.,  2.,  3.,  4.],
     [ 5.,  6.,  7.,  8.],
     [ 9., 10., 11., 12.]]
y = [[ 0.,  0.,  0.],
     [ 0.,  0.,  0.]]

slice_like(x, y) = [[ 1.,  2.,  3.],
                    [ 5.,  6.,  7.]]
slice_like(x, y, axes=(0, 1)) = [[ 1.,  2.,  3.],
                                 [ 5.,  6.,  7.]]
slice_like(x, y, axes=(0)) = [[ 1.,  2.,  3.,  4.],
                              [ 5.,  6.,  7.,  8.]]
slice_like(x, y, axes=(-1)) = [[ 1.,  2.,  3.],
                               [ 5.,  6.,  7.],
                               [ 9., 10., 11.]]
Defined in src/operator/tensor/matrix_op.cc:L624
- Parameters
data (Symbol) – Source input
shape_like (Symbol) – Shape like input
axes (Shape(tuple), optional, default=[]) – List of axes on which input data will be sliced according to the corresponding size of the second input. By default will slice on all axes. Negative axes are supported.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.smooth_l1(data=None, scalar=_Null, name=None, attr=None, out=None, **kwargs)¶
Calculates Smooth L1 Loss(lhs, scalar) by summing
\[\begin{split}f(x) = \begin{cases} (\sigma x)^2/2,& \text{if }|x| < 1/\sigma^2\\ |x|-0.5/\sigma^2,& \text{otherwise} \end{cases}\end{split}\]
where \(x\) is an element of the tensor lhs and \(\sigma\) is the scalar.
Example:
smooth_l1([1, 2, 3, 4]) = [0.5, 1.5, 2.5, 3.5]
smooth_l1([1, 2, 3, 4], scalar=1) = [0.5, 1.5, 2.5, 3.5]
Defined in src/operator/tensor/elemwise_binary_scalar_op_extended.cc:L108
-
mxnet.symbol.softmax(data=None, length=None, axis=_Null, temperature=_Null, dtype=_Null, use_length=_Null, name=None, attr=None, out=None, **kwargs)¶
Applies the softmax function.
The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.
\[softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}\]
for \(j = 1, ..., K\)
t is the temperature parameter in the softmax function. By default, t equals 1.0.
Example:
x = [[ 1.  1.  1.]
     [ 1.  1.  1.]]

softmax(x, axis=0) = [[ 0.5  0.5  0.5]
                      [ 0.5  0.5  0.5]]
softmax(x, axis=1) = [[ 0.33333334,  0.33333334,  0.33333334],
                      [ 0.33333334,  0.33333334,  0.33333334]]
Defined in src/operator/nn/softmax.cc:L135
- Parameters
data (Symbol) – The input array.
length (Symbol) – The length array.
axis (int, optional, default='-1') – The axis along which to compute softmax.
temperature (double or None, optional, default=None) – Temperature parameter in softmax
dtype ({None, 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to the same as input’s dtype if not defined (dtype=None).
use_length (boolean or None, optional, default=0) – Whether to use the length input as a mask over the data input.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
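The effect of the temperature parameter is easy to see in a plain-numpy restatement of the formula above (a sketch, not the MXNet kernel):

import numpy as np

def softmax_t(z, t=1.0):
    e = np.exp(z / t - np.max(z / t))  # subtract the max for numerical stability
    return e / e.sum()

print(softmax_t(np.array([1.0, 2.0, 3.0])))          # t=1.0: the usual softmax
print(softmax_t(np.array([1.0, 2.0, 3.0]), t=10.0))  # larger t flattens the distribution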
-
mxnet.symbol.softmax_cross_entropy(data=None, label=None, name=None, attr=None, out=None, **kwargs)¶
Calculate cross entropy of softmax output and one-hot label.
This operator computes the cross entropy in two steps:
- Applies the softmax function on the input array.
- Computes and returns the cross entropy loss between the softmax output and the labels.
The softmax function and cross entropy loss are given by:
Softmax Function:
\[\text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}\]
Cross Entropy Function:
\[\text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)\]
Example:
x = [[1, 2, 3],
     [11, 7, 5]]
label = [2, 0]

softmax(x) = [[0.09003057, 0.24472848, 0.66524094],
              [0.97962922, 0.01794253, 0.00242826]]

softmax_cross_entropy(data, label) = - log(0.66524094) - log(0.97962922) = 0.4281871
Defined in src/operator/loss_binary_op.cc:L58
-
mxnet.symbol.softmin(data=None, axis=_Null, temperature=_Null, dtype=_Null, use_length=_Null, name=None, attr=None, out=None, **kwargs)¶
Applies the softmin function.
The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.
\[softmin(\mathbf{z/t})_j = \frac{e^{-z_j/t}}{\sum_{k=1}^K e^{-z_k/t}}\]
for \(j = 1, ..., K\)
t is the temperature parameter in the softmin function. By default, t equals 1.0.
Example:
x = [[ 1.  2.  3.]
     [ 3.  2.  1.]]

softmin(x, axis=0) = [[ 0.88079703,  0.5,  0.11920292],
                      [ 0.11920292,  0.5,  0.88079703]]
softmin(x, axis=1) = [[ 0.66524094,  0.24472848,  0.09003057],
                      [ 0.09003057,  0.24472848,  0.66524094]]
Defined in src/operator/nn/softmin.cc:L56
- Parameters
data (Symbol) – The input array.
axis (int, optional, default='-1') – The axis along which to compute softmin.
temperature (double or None, optional, default=None) – Temperature parameter in softmin.
dtype ({None, 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to the same as input’s dtype if not defined (dtype=None).
use_length (boolean or None, optional, default=0) – Whether to use the length input as a mask over the data input.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.softsign(data=None, name=None, attr=None, out=None, **kwargs)¶
Computes softsign of x element-wise.
\[y = x / (1 + abs(x))\]
The storage type of softsign output is always dense.
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L191
-
mxnet.symbol.sort(data=None, axis=_Null, is_ascend=_Null, name=None, attr=None, out=None, **kwargs)¶
Returns a sorted copy of an input array along the given axis.
Examples:
x = [[ 1, 4],
     [ 3, 1]]

// sorts along the last axis
sort(x) = [[ 1.,  4.],
           [ 1.,  3.]]

// flattens and then sorts
sort(x, axis=None) = [ 1.,  1.,  3.,  4.]

// sorts along the first axis
sort(x, axis=0) = [[ 1.,  1.],
                   [ 3.,  4.]]

// in descending order
sort(x, is_ascend=0) = [[ 4.,  1.],
                        [ 3.,  1.]]
Defined in src/operator/tensor/ordering_op.cc:L132
- Parameters
data (Symbol) – The input array
axis (int or None, optional, default='-1') – Axis along which to sort the input tensor. If not given, the flattened array is used. Default is -1.
is_ascend (boolean, optional, default=1) – Whether to sort in ascending or descending order.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.space_to_depth(data=None, block_size=_Null, name=None, attr=None, out=None, **kwargs)¶
Rearranges (permutes) blocks of spatial data into depth. Similar to the ONNX SpaceToDepth operator: https://github.com/onnx/onnx/blob/master/docs/Operators.md#SpaceToDepth
The output is a new tensor where the values from the height and width dimensions are moved to the depth dimension. The reverse of this operation is depth_to_space.
\[\begin{gather*} x' = reshape(x, [N, C, H / block\_size, block\_size, W / block\_size, block\_size]) \\ x'' = transpose(x', [0, 3, 5, 1, 2, 4]) \\ y = reshape(x'', [N, C * (block\_size ^ 2), H / block\_size, W / block\_size]) \end{gather*}\]
where \(x\) is an input tensor with default layout \([N, C, H, W]\): [batch, channels, height, width] and \(y\) is the output tensor of layout \([N, C * (block\_size ^ 2), H / block\_size, W / block\_size]\).
Example:
x = [[[[ 0,  6,  1,  7,  2,  8],
       [12, 18, 13, 19, 14, 20],
       [ 3,  9,  4, 10,  5, 11],
       [15, 21, 16, 22, 17, 23]]]]

space_to_depth(x, 2) = [[[[ 0,  1,  2],
                          [ 3,  4,  5]],
                         [[ 6,  7,  8],
                          [ 9, 10, 11]],
                         [[12, 13, 14],
                          [15, 16, 17]],
                         [[18, 19, 20],
                          [21, 22, 23]]]]
Defined in src/operator/tensor/matrix_op.cc:L1018
-
mxnet.symbol.split(data=None, num_outputs=_Null, axis=_Null, squeeze_axis=_Null, name=None, attr=None, out=None, **kwargs)¶
Splits an array along a particular axis into multiple sub-arrays.
Note
SliceChannel is deprecated. Use split instead.
Note that num_outputs should evenly divide the length of the axis along which to split the array.
Example:
x = [[[ 1.]
      [ 2.]]
     [[ 3.]
      [ 4.]]
     [[ 5.]
      [ 6.]]]
x.shape = (3, 2, 1)

y = split(x, axis=1, num_outputs=2) // a list of 2 arrays with shape (3, 1, 1)
y = [[[ 1.]]
     [[ 3.]]
     [[ 5.]]]

    [[[ 2.]]
     [[ 4.]]
     [[ 6.]]]
y[0].shape = (3, 1, 1)

z = split(x, axis=0, num_outputs=3) // a list of 3 arrays with shape (1, 2, 1)
z = [[[ 1.]
      [ 2.]]]

    [[[ 3.]
      [ 4.]]]

    [[[ 5.]
      [ 6.]]]
z[0].shape = (1, 2, 1)
squeeze_axis=1 removes the axis with length 1 from the shapes of the output arrays. Note that setting squeeze_axis to 1 removes the axis with length 1 only along the axis on which the array is split. Also, squeeze_axis can be set to true only if input.shape[axis] == num_outputs.
.Example:
z = split(x, axis=0, num_outputs=3, squeeze_axis=1) // a list of 3 arrays with shape (2, 1)
z = [[ 1.]
     [ 2.]]

    [[ 3.]
     [ 4.]]

    [[ 5.]
     [ 6.]]
z[0].shape = (2, 1)
Defined in src/operator/slice_channel.cc:L106
- Parameters
data (Symbol) – The input
num_outputs (int, required) – Number of splits. Note that this should evenly divide the length of the axis.
axis (int, optional, default='1') – Axis along which to split.
squeeze_axis (boolean, optional, default=0) – If true, removes the axis with length 1 from the shapes of the output arrays. Note that setting squeeze_axis to true removes the axis with length 1 only along the axis on which the array is split. Also, squeeze_axis can be set to true only if input.shape[axis] == num_outputs.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
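A minimal sketch of the multi-output behavior (assuming a CPU context; names are arbitrary); the resulting symbol has num_outputs outputs, and Symbol.eval returns them all:

import mxnet as mx

x = mx.sym.Variable('x')
y = mx.sym.split(x, axis=1, num_outputs=2)  # a symbol with two outputs
outs = y.eval(ctx=mx.cpu(), x=mx.nd.array([[1., 2.], [3., 4.], [5., 6.]]))
print(outs[0].shape, outs[1].shape)  # (3, 1) (3, 1)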
-
mxnet.symbol.sqrt(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise square-root value of the input.
\[\textrm{sqrt}(x) = \sqrt{x}\]
Example:
sqrt([4, 9, 16]) = [2, 3, 4]
The storage type of sqrt output depends upon the input storage type:
sqrt(default) = default
sqrt(row_sparse) = row_sparse
sqrt(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L170
-
mxnet.symbol.square(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns element-wise squared value of the input.
\[square(x) = x^2\]
Example:
square([2, 3, 4]) = [4, 9, 16]
The storage type of square output depends upon the input storage type:
square(default) = default
square(row_sparse) = row_sparse
square(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L119
-
mxnet.symbol.squeeze(data=None, axis=_Null, name=None, attr=None, out=None, **kwargs)¶
Removes single-dimensional entries from the shape of an array. The output tensor shape matches numpy.squeeze in most cases; see the following note for the exception.
Examples:
data = [[[0], [1], [2]]]

squeeze(data) = [0, 1, 2]
squeeze(data, axis=0) = [[0], [1], [2]]
squeeze(data, axis=2) = [[0, 1, 2]]
squeeze(data, axis=(0, 2)) = [0, 1, 2]
Note
The output of this operator will keep at least one dimension not removed. For example, squeeze([[[4]]]) = [4], while in numpy.squeeze, the output will become a scalar.
- Parameters
data (Symbol) – data to squeeze
axis (Shape or None, optional, default=None) – Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.stack(*data, **kwargs)¶
Join a sequence of arrays along a new axis.
The axis parameter specifies the index of the new axis in the dimensions of the result. For example, if axis=0 it will be the first dimension and if axis=-1 it will be the last dimension.
Examples:
x = [1, 2]
y = [3, 4]

stack(x, y) = [[1, 2],
               [3, 4]]
stack(x, y, axis=1) = [[1, 3],
                       [2, 4]]
This function supports a variable number of positional inputs.
-
mxnet.symbol.stop_gradient(data=None, name=None, attr=None, out=None, **kwargs)¶
Stops gradient computation.
Stops the accumulated gradient of the inputs from flowing through this operator in the backward direction. In other words, this operator prevents the contribution of its inputs from being taken into account for computing gradients.
Example:
v1 = [1, 2]
v2 = [0, 1]
a = Variable('a')
b = Variable('b')
b_stop_grad = stop_gradient(3 * b)
loss = MakeLoss(b_stop_grad + a)

executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
executor.forward(is_train=True, a=v1, b=v2)
executor.outputs
[ 1.  5.]

executor.backward()
executor.grad_arrays
[ 0.  0.]
[ 1.  1.]
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L325
-
mxnet.symbol.sum(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the sum of array elements over given axes.
Note
sum and sum_axis are equivalent. For ndarray of csr storage type summation along axis 0 and axis 1 is supported. Setting keepdims or exclude to True will cause a fallback to dense operator.
Example:
data = [[[1, 2], [2, 3], [1, 3]],
        [[1, 4], [4, 3], [5, 2]],
        [[7, 1], [7, 2], [7, 3]]]

sum(data, axis=1)
[[  4.   8.]
 [ 10.   9.]
 [ 21.   6.]]

sum(data, axis=[1,2])
[ 12.  19.  27.]

data = [[1, 2, 0],
        [3, 0, 1],
        [4, 1, 0]]
csr = cast_storage(data, 'csr')

sum(csr, axis=0)
[ 8.  3.  1.]

sum(csr, axis=1)
[ 3.  4.  5.]
Defined in src/operator/tensor/broadcast_reduce_sum_value.cc:L66
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
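A minimal sketch of an axis reduction with keepdims (assuming a CPU context; names are arbitrary):

import mxnet as mx

x = mx.sym.Variable('x')
s = mx.sym.sum(x, axis=1, keepdims=True)
out = s.eval(ctx=mx.cpu(), x=mx.nd.array([[1, 2], [3, 4]]))[0]
print(out.asnumpy())  # [[3.] [7.]]: axis 1 is reduced but kept as size one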
-
mxnet.symbol.sum_axis(data=None, axis=_Null, keepdims=_Null, exclude=_Null, name=None, attr=None, out=None, **kwargs)¶
Computes the sum of array elements over given axes.
Note
sum and sum_axis are equivalent. For ndarray of csr storage type summation along axis 0 and axis 1 is supported. Setting keepdims or exclude to True will cause a fallback to dense operator.
Example:
data = [[[1, 2], [2, 3], [1, 3]],
        [[1, 4], [4, 3], [5, 2]],
        [[7, 1], [7, 2], [7, 3]]]

sum(data, axis=1)
[[  4.   8.]
 [ 10.   9.]
 [ 21.   6.]]

sum(data, axis=[1,2])
[ 12.  19.  27.]

data = [[1, 2, 0],
        [3, 0, 1],
        [4, 1, 0]]
csr = cast_storage(data, 'csr')

sum(csr, axis=0)
[ 8.  3.  1.]

sum(csr, axis=1)
[ 3.  4.  5.]
Defined in src/operator/tensor/broadcast_reduce_sum_value.cc:L66
- Parameters
data (Symbol) – The input
axis (Shape or None, optional, default=None) –
The axis or axes along which to perform the reduction.
The default, axis=(), will compute over all elements into a scalar array with shape (1,).
If axis is int, a reduction is performed on a particular axis.
If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.
If exclude is true, reduction will be performed on the axes that are NOT in axis instead.
Negative values mean indexing from right to left.
keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.swapaxes(data=None, dim1=_Null, dim2=_Null, name=None, attr=None, out=None, **kwargs)¶
Interchanges two axes of an array.
Examples:
x = [[1, 2, 3]]
swapaxes(x, 0, 1) = [[ 1],
                     [ 2],
                     [ 3]]

x = [[[ 0, 1],
      [ 2, 3]],
     [[ 4, 5],
      [ 6, 7]]]  // (2,2,2) array

swapaxes(x, 0, 2) = [[[ 0, 4],
                      [ 2, 6]],
                     [[ 1, 5],
                      [ 3, 7]]]
Defined in src/operator/swapaxis.cc:L69
-
mxnet.symbol.take(a=None, indices=None, axis=_Null, mode=_Null, name=None, attr=None, out=None, **kwargs)¶
Takes elements from an input array along the given axis.
This function slices the input array along a particular axis with the provided indices.
Given a data tensor of rank r >= 1 and an indices tensor of rank q, this operator gathers entries of the axis dimension of data (by default the outer-most one, axis=0) indexed by indices, and concatenates them in an output tensor of rank q + (r - 1).
Examples:
x = [4.  5.  6.]

// Trivial case: take the second element along the first axis.
take(x, [1]) = [ 5. ]

// The other trivial case: axis=-1, take the third element along the first axis.
take(x, [3], axis=-1, mode='clip') = [ 6. ]

x = [[ 1.,  2.],
     [ 3.,  4.],
     [ 5.,  6.]]

// In this case we will get rows 0 and 1, then 1 and 2. Along axis 0.
take(x, [[0,1],[1,2]]) = [[[ 1.,  2.],
                           [ 3.,  4.]],
                          [[ 3.,  4.],
                           [ 5.,  6.]]]

// In this case we will get rows 0 and 1, then 1 and 2 (calculated by wrapping around).
// Along axis 1.
take(x, [[0, 3], [-1, -2]], axis=1, mode='wrap') = [[[ 1.,  2.],
                                                     [ 2.,  1.]],
                                                    [[ 3.,  4.],
                                                     [ 4.,  3.]],
                                                    [[ 5.,  6.],
                                                     [ 6.,  5.]]]
The storage type of take output depends upon the input storage type:
take(default, default) = default
take(csr, default, axis=0) = csr
Defined in src/operator/tensor/indexing_op.cc:L776
- Parameters
a (Symbol) – The input array.
indices (Symbol) – The indices of the values to be extracted.
axis (int, optional, default='0') – The axis of the input array to be taken. For an input tensor of rank r, it could be in the range of [-r, r-1].
mode ({'clip', 'raise', 'wrap'},optional, default='clip') – Specify how out-of-bound indices behave. Default is "clip". "clip" means clip to the range; so, if all indices mentioned are too large, they are replaced by the index that addresses the last element along an axis. "wrap" means to wrap around. "raise" means to raise an error when an index is out of range.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
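A minimal sketch of the rank arithmetic above (assuming a CPU context; names are arbitrary):

import mxnet as mx

x = mx.sym.Variable('x')
i = mx.sym.Variable('i')
t = mx.sym.take(x, i)  # gather rows of x (axis=0 by default)
out = t.eval(ctx=mx.cpu(),
             x=mx.nd.array([[1., 2.], [3., 4.], [5., 6.]]),
             i=mx.nd.array([[0, 1], [1, 2]]))[0]
print(out.shape)  # (2, 2, 2): rank q + (r - 1) = 2 + (2 - 1)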
-
mxnet.symbol.tan(data=None, name=None, attr=None, out=None, **kwargs)¶
Computes the element-wise tangent of the input array.
The input should be in radians (\(2\pi\) rad equals 360 degrees).
\[tan([0, \pi/4, \pi/2]) = [0, 1, -inf]\]
The storage type of tan output depends upon the input storage type:
tan(default) = default
tan(row_sparse) = row_sparse
tan(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L140
-
mxnet.symbol.tanh(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns the hyperbolic tangent of the input array, computed element-wise.
\[tanh(x) = sinh(x) / cosh(x)\]
The storage type of tanh output depends upon the input storage type:
tanh(default) = default
tanh(row_sparse) = row_sparse
tanh(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L451
-
mxnet.symbol.tile(data=None, reps=_Null, name=None, attr=None, out=None, **kwargs)¶
Repeats the whole array multiple times. If reps has length d and the input array has dimension n, there are three cases:

n=d. Repeat the i-th dimension of the input reps[i] times:

x = [[1, 2],
     [3, 4]]

tile(x, reps=(2,3)) = [[ 1.,  2.,  1.,  2.,  1.,  2.],
                       [ 3.,  4.,  3.,  4.,  3.,  4.],
                       [ 1.,  2.,  1.,  2.,  1.,  2.],
                       [ 3.,  4.,  3.,  4.,  3.,  4.]]
n>d. reps is promoted to length n by pre-pending 1's to it. Thus for an input of shape (2,3), reps=(2,) is treated as (1,2):

tile(x, reps=(2,)) = [[ 1.,  2.,  1.,  2.],
                      [ 3.,  4.,  3.,  4.]]
n<d. The input is promoted to be d-dimensional by prepending new axes. So a shape (2,2) array is promoted to (1,2,2) for 3-D replication:

tile(x, reps=(2,2,3)) = [[[ 1.,  2.,  1.,  2.,  1.,  2.],
                          [ 3.,  4.,  3.,  4.,  3.,  4.],
                          [ 1.,  2.,  1.,  2.,  1.,  2.],
                          [ 3.,  4.,  3.,  4.,  3.,  4.]],
                         [[ 1.,  2.,  1.,  2.,  1.,  2.],
                          [ 3.,  4.,  3.,  4.,  3.,  4.],
                          [ 1.,  2.,  1.,  2.,  1.,  2.],
                          [ 3.,  4.,  3.,  4.,  3.,  4.]]]
Defined in src/operator/tensor/matrix_op.cc:L795
- Parameters
data (Symbol) – Input data array
reps (Shape(tuple), required) – The number of times for repeating the tensor a. Each dim size of reps must be a positive integer. If reps has length d, the result will have dimension of max(d, a.ndim); If a.ndim < d, a is promoted to be d-dimensional by prepending new axes. If a.ndim > d, reps is promoted to a.ndim by pre-pending 1’s to it.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
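A minimal sketch of the n=d case (assuming a CPU context; names are arbitrary):

import mxnet as mx

x = mx.sym.Variable('x')
t = mx.sym.tile(x, reps=(2, 3))
out = t.eval(ctx=mx.cpu(), x=mx.nd.array([[1, 2], [3, 4]]))[0]
print(out.shape)  # (4, 6): each dimension repeated by the matching reps entry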
-
mxnet.symbol.topk(data=None, axis=_Null, k=_Null, ret_typ=_Null, is_ascend=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Returns the indices of the top k elements in an input array along the given axis (by default). If ret_typ is set to 'value', the values of the top k elements are returned instead of indices. If ret_typ = 'both', both values and indices are returned. The returned elements are sorted.
Examples:
x = [[ 0.3,  0.2,  0.4],
     [ 0.1,  0.3,  0.2]]

// returns an index of the largest element on last axis
topk(x) = [[ 2.],
           [ 1.]]

// returns the value of top-2 largest elements on last axis
topk(x, ret_typ='value', k=2) = [[ 0.4,  0.3],
                                 [ 0.3,  0.2]]

// returns the value of top-2 smallest elements on last axis
topk(x, ret_typ='value', k=2, is_ascend=1) = [[ 0.2,  0.3],
                                              [ 0.1,  0.2]]

// returns the value of top-2 largest elements on axis 0
topk(x, axis=0, ret_typ='value', k=2) = [[ 0.3,  0.3,  0.4],
                                         [ 0.1,  0.2,  0.2]]

// flattens and then returns list of both values and indices
topk(x, ret_typ='both', k=2) = [[[ 0.4,  0.3],
                                 [ 0.3,  0.2]],
                                [[ 2.,  0.],
                                 [ 1.,  2.]]]
Defined in src/operator/tensor/ordering_op.cc:L67
- Parameters
data (Symbol) – The input array
axis (int or None, optional, default='-1') – Axis along which to choose the top k indices. If not given, the flattened array is used. Default is -1.
k (int, optional, default='1') – Number of top elements to select; it should always be smaller than or equal to the number of elements along the given axis. A global sort is performed if k < 1 is set.
ret_typ ({'both', 'indices', 'mask', 'value'},optional, default='indices') – The return type. “value” means to return the top k values, “indices” means to return the indices of the top k values, “mask” means to return a mask array containing 0 and 1. 1 means the top k values. “both” means to return a list of both values and indices of top k elements.
is_ascend (boolean, optional, default=0) – Whether to choose k largest or k smallest elements. Top K largest elements will be chosen if set to false.
dtype ({'float16', 'float32', 'float64', 'int32', 'int64', 'uint8'},optional, default='float32') – DType of the output indices when ret_typ is “indices” or “both”. An error will be raised if the selected data type cannot precisely represent the indices.
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
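A minimal sketch of the ret_typ='value' case (assuming a CPU context; names are arbitrary):

import mxnet as mx

x = mx.sym.Variable('x')
t = mx.sym.topk(x, k=2, ret_typ='value')
out = t.eval(ctx=mx.cpu(), x=mx.nd.array([[0.3, 0.2, 0.4],
                                          [0.1, 0.3, 0.2]]))[0]
print(out.asnumpy())  # [[0.4 0.3] [0.3 0.2]]: the two largest values per row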
-
mxnet.symbol.transpose(data=None, axes=_Null, name=None, attr=None, out=None, **kwargs)¶
Permutes the dimensions of an array.
Examples:
x = [[ 1, 2],
     [ 3, 4]]

transpose(x) = [[ 1.,  3.],
                [ 2.,  4.]]

x = [[[ 1.,  2.],
      [ 3.,  4.]],
     [[ 5.,  6.],
      [ 7.,  8.]]]

transpose(x) = [[[ 1.,  5.],
                 [ 3.,  7.]],
                [[ 2.,  6.],
                 [ 4.,  8.]]]

transpose(x, axes=(1,0,2)) = [[[ 1.,  2.],
                               [ 5.,  6.]],
                              [[ 3.,  4.],
                               [ 7.,  8.]]]
Defined in src/operator/tensor/matrix_op.cc:L327
-
mxnet.symbol.trunc(data=None, name=None, attr=None, out=None, **kwargs)¶
Returns the element-wise truncated value of the input.
The truncated value of the scalar x is the nearest integer i which is closer to zero than x is. In short, the fractional part of the signed number x is discarded.
Example:
trunc([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1., 1., 1., 2.]
The storage type of trunc output depends upon the input storage type:
trunc(default) = default
trunc(row_sparse) = row_sparse
trunc(csr) = csr
Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L856
-
mxnet.symbol.uniform(low=_Null, high=_Null, shape=_Null, ctx=_Null, dtype=_Null, name=None, attr=None, out=None, **kwargs)¶
Draw random samples from a uniform distribution.
Note
The existing alias uniform is deprecated.
Samples are uniformly distributed over the half-open interval [low, high) (includes low, but excludes high).
Example:
uniform(low=0, high=1, shape=(2,2)) = [[ 0.60276335,  0.85794562],
                                       [ 0.54488319,  0.84725171]]
Defined in src/operator/random/sample_op.cc:L95
- Parameters
low (float, optional, default=0) – Lower bound of the distribution.
high (float, optional, default=1) – Upper bound of the distribution.
shape (Shape(tuple), optional, default=None) – Shape of the output.
ctx (string, optional, default='') – Context of output, in format [cpu|gpu|cpu_pinned](n). Only used for imperative calls.
dtype ({'None', 'float16', 'float32', 'float64'},optional, default='None') – DType of the output in case this can’t be inferred. Defaults to float32 if not defined (dtype=None).
name (string, optional.) – Name of the resulting symbol.
- Returns
The result symbol.
- Return type
Symbol
-
mxnet.symbol.unravel_index(data=None, shape=_Null, name=None, attr=None, out=None, **kwargs)¶
Converts an array of flat indices into a batch of index arrays. The operator follows numpy conventions, so a single multi-index is given by a column of the output matrix. The leading dimension may be left unspecified by using -1 as a placeholder.
Examples:
A = [22, 41, 37]
unravel_index(A, shape=(7,6)) = [[3, 6, 6],
                                 [4, 5, 1]]
unravel_index(A, shape=(-1,6)) = [[3, 6, 6],
                                  [4, 5, 1]]
Defined in src/operator/tensor/ravel.cc:L67
-
mxnet.symbol.where(condition=None, x=None, y=None, name=None, attr=None, out=None, **kwargs)¶
Return the elements, either from x or y, depending on the condition.
Given three ndarrays, condition, x, and y, return an ndarray with the elements from x or y, depending on whether the corresponding elements from condition are true or false. x and y must have the same shape. If condition has the same shape as x, each element in the output array is from x if the corresponding element in condition is true, and from y if false.
If condition does not have the same shape as x, it must be a 1D array whose size is the same as x's first dimension size. Each row of the output array is from x's row if the corresponding element from condition is true, and from y's row if false.
Note that all non-zero values are interpreted as True in condition.
Examples:
x = [[1, 2], [3, 4]]
y = [[5, 6], [7, 8]]
cond = [[0, 1], [-1, 0]]

where(cond, x, y) = [[5, 2], [3, 8]]

csr_cond = cast_storage(cond, 'csr')
where(csr_cond, x, y) = [[5, 2], [3, 8]]
Defined in src/operator/tensor/control_flow_op.cc:L56
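A minimal sketch of the example above (assuming a CPU context; names are arbitrary):

import mxnet as mx

cond = mx.sym.Variable('cond')
x = mx.sym.Variable('x')
y = mx.sym.Variable('y')
z = mx.sym.where(cond, x, y)
out = z.eval(ctx=mx.cpu(),
             cond=mx.nd.array([[0, 1], [-1, 0]]),
             x=mx.nd.array([[1, 2], [3, 4]]),
             y=mx.nd.array([[5, 6], [7, 8]]))[0]
print(out.asnumpy())  # [[5. 2.] [3. 8.]]: nonzero condition entries pick from x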
-
mxnet.symbol.zeros_like(data=None, name=None, attr=None, out=None, **kwargs)¶
Return an array of zeros with the same shape, type and storage type as the input array.
The storage type of zeros_like output depends on the storage type of the input:
zeros_like(row_sparse) = row_sparse
zeros_like(csr) = csr
zeros_like(default) = default
Examples:
x = [[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]]

zeros_like(x) = [[ 0.,  0.,  0.],
                 [ 0.,  0.,  0.]]
-
class mxnet.symbol.Symbol(handle)[source]¶
Bases: mxnet._ctypes.symbol.SymbolBase
Symbol is the symbolic graph abstraction of MXNet.
Methods
abs(*args, **kwargs): Convenience fluent method for abs().
arccos(*args, **kwargs): Convenience fluent method for arccos().
arccosh(*args, **kwargs): Convenience fluent method for arccosh().
arcsin(*args, **kwargs): Convenience fluent method for arcsin().
arcsinh(*args, **kwargs): Convenience fluent method for arcsinh().
arctan(*args, **kwargs): Convenience fluent method for arctan().
arctanh(*args, **kwargs): Convenience fluent method for arctanh().
argmax(*args, **kwargs): Convenience fluent method for argmax().
argmax_channel(*args, **kwargs): Convenience fluent method for argmax_channel().
argmin(*args, **kwargs): Convenience fluent method for argmin().
argsort(*args, **kwargs): Convenience fluent method for argsort().
as_nd_ndarray(): Returns self.
as_np_ndarray(): Convert mx.sym.Symbol to mx.sym.np._Symbol.
astype(*args, **kwargs): Convenience fluent method for cast().
attr(key): Returns the attribute string for corresponding input key from the symbol.
attr_dict(): Recursively gets all attributes from the symbol and its children.
bind(ctx, args[, args_grad, grad_req, …]): Binds the current symbol to an executor and returns it.
broadcast_axes(*args, **kwargs): Convenience fluent method for broadcast_axes().
broadcast_like(*args, **kwargs): Convenience fluent method for broadcast_like().
broadcast_to(*args, **kwargs): Convenience fluent method for broadcast_to().
cbrt(*args, **kwargs): Convenience fluent method for cbrt().
ceil(*args, **kwargs): Convenience fluent method for ceil().
clip(*args, **kwargs): Convenience fluent method for clip().
cos(*args, **kwargs): Convenience fluent method for cos().
cosh(*args, **kwargs): Convenience fluent method for cosh().
debug_str(): Gets a debug string of symbol.
degrees(*args, **kwargs): Convenience fluent method for degrees().
depth_to_space(*args, **kwargs): Convenience fluent method for depth_to_space().
diag([k]): Convenience fluent method for diag().
eval([ctx]): Evaluates a symbol given arguments.
exp(*args, **kwargs): Convenience fluent method for exp().
expand_dims(axis[, inplace]): Convenience fluent method for expand_dims().
expm1(*args, **kwargs): Convenience fluent method for expm1().
fix(*args, **kwargs): Convenience fluent method for fix().
flatten([inplace]): Convenience fluent method for flatten().
flip(*args, **kwargs): Convenience fluent method for flip().
floor(*args, **kwargs): Convenience fluent method for floor().
get_backend_symbol(backend): Return symbol for target backend.
get_children(): Gets a new grouped symbol whose output contains inputs to output nodes of the original symbol.
get_internals(): Gets a new grouped symbol sgroup.
gradient(wrt): Gets the autodiff of current symbol.
infer_shape(*args, **kwargs): Infers the shapes of all arguments and all outputs given the known shapes of some arguments.
infer_shape_partial(*args, **kwargs): Infers the shape partially.
infer_type(*args, **kwargs): Infers the type of all arguments and all outputs, given the known types for some arguments.
infer_type_partial(*args, **kwargs): Infers the type partially.
list_arguments(): Lists all the arguments in the symbol.
list_attr([recursive]): Gets all attributes from the symbol.
list_auxiliary_states(): Lists all the auxiliary states in the symbol.
list_inputs(): Lists all arguments and auxiliary states of this Symbol.
list_outputs(): Lists all the outputs in the symbol.
log(*args, **kwargs): Convenience fluent method for log().
log10(*args, **kwargs): Convenience fluent method for log10().
log1p(*args, **kwargs): Convenience fluent method for log1p().
log2(*args, **kwargs): Convenience fluent method for log2().
log_softmax(*args, **kwargs): Convenience fluent method for log_softmax().
max(*args, **kwargs): Convenience fluent method for max().
mean(*args, **kwargs): Convenience fluent method for mean().
min(*args, **kwargs): Convenience fluent method for min().
nanprod(*args, **kwargs): Convenience fluent method for nanprod().
nansum(*args, **kwargs): Convenience fluent method for nansum().
norm(*args, **kwargs): Convenience fluent method for norm().
one_hot(*args, **kwargs): Convenience fluent method for one_hot().
ones_like(*args, **kwargs): Convenience fluent method for ones_like().
optimize_for(backend[, args, aux, ctx, …]): Partitions current symbol and optimizes it for a given backend, returns new partitioned symbol.
pad(*args, **kwargs): Convenience fluent method for pad().
pick(*args, **kwargs): Convenience fluent method for pick().
prod(*args, **kwargs): Convenience fluent method for prod().
radians(*args, **kwargs): Convenience fluent method for radians().
rcbrt(*args, **kwargs): Convenience fluent method for rcbrt().
reciprocal(*args, **kwargs): Convenience fluent method for reciprocal().
relu(*args, **kwargs): Convenience fluent method for relu().
repeat(*args, **kwargs): Convenience fluent method for repeat().
reshape(*args, **kwargs): Convenience fluent method for reshape().
reshape_like(*args, **kwargs): Convenience fluent method for reshape_like().
rint(*args, **kwargs): Convenience fluent method for rint().
round(*args, **kwargs): Convenience fluent method for round().
rsqrt(*args, **kwargs): Convenience fluent method for rsqrt().
save(fname[, remove_amp_cast]): Saves symbol to a file.
shape_array(*args, **kwargs): Convenience fluent method for shape_array().
sigmoid(*args, **kwargs): Convenience fluent method for sigmoid().
sign(*args, **kwargs): Convenience fluent method for sign().
simple_bind(ctx[, grad_req, type_dict, …]): Bind current symbol to get an executor, allocate all the arguments needed.
sin(*args, **kwargs): Convenience fluent method for sin().
sinh(*args, **kwargs): Convenience fluent method for sinh().
size_array(*args, **kwargs): Convenience fluent method for size_array().
slice(*args, **kwargs): Convenience fluent method for slice().
slice_axis(*args, **kwargs): Convenience fluent method for slice_axis().
slice_like(*args, **kwargs): Convenience fluent method for slice_like().
softmax(*args, **kwargs): Convenience fluent method for softmax().
softmin(*args, **kwargs): Convenience fluent method for softmin().
sort(*args, **kwargs): Convenience fluent method for sort().
space_to_depth(*args, **kwargs): Convenience fluent method for space_to_depth().
split(*args, **kwargs): Convenience fluent method for split().
split_v2(*args, **kwargs): Convenience fluent method for split_v2().
sqrt(*args, **kwargs): Convenience fluent method for sqrt().
square(*args, **kwargs): Convenience fluent method for square().
squeeze([axis, inplace]): Convenience fluent method for squeeze().
sum(*args, **kwargs): Convenience fluent method for sum().
swapaxes(*args, **kwargs): Convenience fluent method for swapaxes().
take(*args, **kwargs): Convenience fluent method for take().
tan(*args, **kwargs): Convenience fluent method for tan().
tanh(*args, **kwargs): Convenience fluent method for tanh().
tile(*args, **kwargs): Convenience fluent method for tile().
tojson([remove_amp_cast]): Saves symbol to a JSON string.
topk(*args, **kwargs): Convenience fluent method for topk().
transpose(*args, **kwargs): Convenience fluent method for transpose().
trunc(*args, **kwargs): Convenience fluent method for trunc().
zeros_like(*args, **kwargs): Convenience fluent method for zeros_like().
.Attributes
Gets name string from the symbol, this function only works for non-grouped symbol.
-
abs(*args, **kwargs)
Convenience fluent method for abs(). The arguments are the same as for abs(), with this array as data.
-
arccos(*args, **kwargs)
Convenience fluent method for arccos(). The arguments are the same as for arccos(), with this array as data.
-
arccosh(*args, **kwargs)
Convenience fluent method for arccosh(). The arguments are the same as for arccosh(), with this array as data.
-
arcsin(*args, **kwargs)
Convenience fluent method for arcsin(). The arguments are the same as for arcsin(), with this array as data.
-
arcsinh(*args, **kwargs)
Convenience fluent method for arcsinh(). The arguments are the same as for arcsinh(), with this array as data.
-
arctan(*args, **kwargs)
Convenience fluent method for arctan(). The arguments are the same as for arctan(), with this array as data.
-
arctanh(*args, **kwargs)
Convenience fluent method for arctanh(). The arguments are the same as for arctanh(), with this array as data.
-
argmax(*args, **kwargs)
Convenience fluent method for argmax(). The arguments are the same as for argmax(), with this array as data.
-
argmax_channel(*args, **kwargs)
Convenience fluent method for argmax_channel(). The arguments are the same as for argmax_channel(), with this array as data.
-
argmin(*args, **kwargs)
Convenience fluent method for argmin(). The arguments are the same as for argmin(), with this array as data.
-
argsort(*args, **kwargs)
Convenience fluent method for argsort(). The arguments are the same as for argsort(), with this array as data.
-
as_nd_ndarray()
Returns self. For the convenience of conversion between legacy and np symbols.
-
astype(*args, **kwargs)
Convenience fluent method for cast(). The arguments are the same as for cast(), with this array as data.
-
attr(key)
Returns the attribute string for the corresponding input key from the symbol.
This function only works for non-grouped symbols.
Example
>>> data = mx.sym.Variable('data', attr={'mood': 'angry'}) >>> data.attr('mood') 'angry'
- Parameters
key (str) – The key corresponding to the desired attribute.
- Returns
value – The desired attribute value; returns None if the attribute does not exist.
- Return type
str
-
attr_dict()
Recursively gets all attributes from the symbol and its children.
Example
>>> a = mx.sym.Variable('a', attr={'a1':'a2'}) >>> b = mx.sym.Variable('b', attr={'b1':'b2'}) >>> c = a+b >>> c.attr_dict() {'a': {'a1': 'a2'}, 'b': {'b1': 'b2'}}
- Returns
ret – There is a key in the returned dict for every child with a non-empty attribute set. For each symbol, the name of the symbol is its key in the dict and the corresponding value is that symbol's attribute list (itself a dictionary).
- Return type
Dict of str to dict
-
bind(ctx, args, args_grad=None, grad_req='write', aux_states=None, group2ctx=None, shared_exec=None)
Binds the current symbol to an executor and returns it.
We first declare the computation and then bind it to the data to run. This function returns an executor which provides a forward() method for evaluation and an outputs attribute to get all the results.
Example
>>> a = mx.sym.Variable('a') >>> b = mx.sym.Variable('b') >>> c = a + b <Symbol _plus1> >>> ex = c.bind(ctx=mx.cpu(), args={'a' : mx.nd.ones([2,3]), 'b' : mx.nd.ones([2,3])}) >>> ex.forward() [<NDArray 2x3 @cpu(0)>] >>> ex.outputs[0].asnumpy() [[ 2. 2. 2.] [ 2. 2. 2.]]
- Parameters
ctx (Context) – The device context the generated executor to run on.
args (list of NDArray or dict of str to NDArray) –
Input arguments to the symbol.
If the input type is a list of NDArray, the order should be the same as the order of list_arguments().
If the input type is a dict of str to NDArray, then it maps the name of arguments to the corresponding NDArray.
In either case, all the arguments must be provided.
args_grad (list of NDArray or dict of str to NDArray, optional) –
When specified, args_grad provides NDArrays to hold the result of gradient value in backward.
If the input type is a list of NDArray, the order should be the same as the order of list_arguments().
If the input type is a dict of str to NDArray, then it maps the name of arguments to the corresponding NDArray.
When the type is a dict of str to NDArray, one only needs to provide the entries for the required argument gradients. Only the specified argument gradients will be calculated.
grad_req ({'write', 'add', 'null'}, or list of str or dict of str to str, optional) –
Specifies how the gradient is updated into args_grad:
'write' means the gradient is written to the specified args_grad NDArray every time.
'add' means the gradient is added to the specified NDArray every time.
'null' means no action is taken; the gradient may not be calculated.
aux_states (list of NDArray, or dict of str to NDArray, optional) –
Input auxiliary states to the symbol, only needed when the output of list_auxiliary_states() is not empty.
If the input type is a list of NDArray, the order should be the same as the order of list_auxiliary_states().
If the input type is a dict of str to NDArray, then it maps the names of auxiliary states to the corresponding NDArrays.
In either case, all the auxiliary states need to be provided.
group2ctx (Dict of string to mx.Context) – The dict mapping the ctx_group attribute to the context assignment.
shared_exec (mx.executor.Executor) – Executor to share memory with. This is intended for runtime reshaping, variable length sequences, etc. The returned executor shares state with shared_exec, and should not be used in parallel with it.
- Returns
executor – The generated executor.
- Return type
Executor
Notes
Auxiliary states are special states of symbols that do not correspond to an argument and do not have gradients, but are still useful for specific operations. Common examples of auxiliary states include the moving_mean and moving_variance states in BatchNorm. Most operators do not have auxiliary states, and in those cases this parameter can be safely ignored.
One can skip some gradients by passing a dict for args_grad that specifies only the gradients of interest; only those argument gradients will be calculated.
-
broadcast_axes(*args, **kwargs)
Convenience fluent method for broadcast_axes(). The arguments are the same as for broadcast_axes(), with this array as data.
-
broadcast_like(*args, **kwargs)
Convenience fluent method for broadcast_like(). The arguments are the same as for broadcast_like(), with this array as data.
-
broadcast_to(*args, **kwargs)
Convenience fluent method for broadcast_to(). The arguments are the same as for broadcast_to(), with this array as data.
-
cbrt(*args, **kwargs)
Convenience fluent method for cbrt(). The arguments are the same as for cbrt(), with this array as data.
-
ceil(*args, **kwargs)
Convenience fluent method for ceil(). The arguments are the same as for ceil(), with this array as data.
-
clip(*args, **kwargs)
Convenience fluent method for clip(). The arguments are the same as for clip(), with this array as data.
-
cos(*args, **kwargs)
Convenience fluent method for cos(). The arguments are the same as for cos(), with this array as data.
-
cosh(*args, **kwargs)
Convenience fluent method for cosh(). The arguments are the same as for cosh(), with this array as data.
-
debug_str()
Gets a debug string of the symbol.
It contains the symbol outputs, variables, and operators in the computation graph, with their inputs and attributes.
- Returns
Debug string of the symbol.
- Return type
string
Examples
>>> a = mx.sym.Variable('a') >>> b = mx.sym.sin(a) >>> c = 2 * a + b >>> d = mx.sym.FullyConnected(data=c, num_hidden=10) >>> d.debug_str() >>> print d.debug_str() Symbol Outputs: output[0]=fullyconnected0(0) Variable:a -------------------- Op:_mul_scalar, Name=_mulscalar0 Inputs: arg[0]=a(0) version=0 Attrs: scalar=2 -------------------- Op:sin, Name=sin0 Inputs: arg[0]=a(0) version=0 -------------------- Op:elemwise_add, Name=_plus0 Inputs: arg[0]=_mulscalar0(0) arg[1]=sin0(0) Variable:fullyconnected0_weight Variable:fullyconnected0_bias -------------------- Op:FullyConnected, Name=fullyconnected0 Inputs: arg[0]=_plus0(0) arg[1]=fullyconnected0_weight(0) version=0 arg[2]=fullyconnected0_bias(0) version=0 Attrs: num_hidden=10
-
degrees(*args, **kwargs)
Convenience fluent method for degrees(). The arguments are the same as for degrees(), with this array as data.
-
depth_to_space(*args, **kwargs)
Convenience fluent method for depth_to_space(). The arguments are the same as for depth_to_space(), with this array as data.
-
diag(k=0, **kwargs)
Convenience fluent method for diag(). The arguments are the same as for diag(), with this array as data.
-
eval(ctx=None, **kwargs)
Evaluates a symbol given arguments.
The eval method combines a call to bind (which returns an executor) with a call to forward (an executor method). For the common use case, where you repeatedly evaluate with the same arguments, eval is slow; in that case, call bind once and then call forward repeatedly. This function allows simpler syntax for less cumbersome introspection.
Example
>>> a = mx.sym.Variable('a') >>> b = mx.sym.Variable('b') >>> c = a + b >>> ex = c.eval(ctx = mx.cpu(), a = mx.nd.ones([2,3]), b = mx.nd.ones([2,3])) >>> ex [<NDArray 2x3 @cpu(0)>] >>> ex[0].asnumpy() array([[ 2., 2., 2.], [ 2., 2., 2.]], dtype=float32)
- Parameters
ctx (Context) – The device context the generated executor to run on.
kwargs (Keyword arguments of type NDArray) – Input arguments to the symbol. All the arguments must be provided.
- Returns
result – A list of NDArrays corresponding to the values taken by each symbol when evaluated on the given args. When called on a single symbol (not a group), the result will be a list with one element.
-
exp(*args, **kwargs)
Convenience fluent method for exp(). The arguments are the same as for exp(), with this array as data.
-
expand_dims(axis, inplace=False, **kwargs)
Convenience fluent method for expand_dims(). The arguments are the same as for expand_dims(), with this array as data.
-
expm1(*args, **kwargs)
Convenience fluent method for expm1(). The arguments are the same as for expm1(), with this array as data.
-
fix(*args, **kwargs)
Convenience fluent method for fix(). The arguments are the same as for fix(), with this array as data.
-
flatten(inplace=False, **kwargs)
Convenience fluent method for flatten(). The arguments are the same as for flatten(), with this array as data.
-
flip(*args, **kwargs)
Convenience fluent method for flip(). The arguments are the same as for flip(), with this array as data.
-
floor(*args, **kwargs)
Convenience fluent method for floor(). The arguments are the same as for floor(), with this array as data.
-
get_backend_symbol(backend)
Returns the symbol for the target backend.
- Parameters
backend (str) – The backend name.
- Returns
out – The created Symbol for the target backend.
- Return type
Symbol
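Example (a minimal sketch; the backend must be registered in the current MXNet build, and 'MKLDNN' here is only illustrative):
>>> data = mx.sym.Variable('data')
>>> net = mx.sym.FullyConnected(data=data, num_hidden=10)
>>> net_opt = net.get_backend_symbol('MKLDNN')  # illustrative backend name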
-
get_children()
Gets a new grouped symbol whose output contains the inputs to the output nodes of the original symbol.
Example
>>> x = mx.sym.Variable('x') >>> y = mx.sym.Variable('y') >>> z = mx.sym.Variable('z') >>> a = y+z >>> b = x+a >>> b.get_children() <Symbol Grouped> >>> b.get_children().list_outputs() ['x', '_plus10_output'] >>> b.get_children().get_children().list_outputs() ['y', 'z']
- Returns
sgroup – The children of the head node. If the symbol has no inputs, None will be returned.
- Return type
Symbol or None
-
get_internals()
Gets a new grouped symbol sgroup. The output of sgroup is a list of the outputs of all of the internal nodes.
Example
>>> a = mx.sym.var('a') >>> b = mx.sym.var('b') >>> c = a + b >>> d = c.get_internals() >>> d <Symbol Grouped> >>> d.list_outputs() ['a', 'b', '_plus4_output']
- Returns
sgroup – A symbol group containing all internal and leaf nodes of the computation graph used to compute the symbol.
- Return type
Symbol
-
gradient(wrt)
Gets the autodiff of the current symbol.
This function can only be used if the current symbol is a loss function.
Note
This function is currently not implemented.
- Parameters
wrt (Array of String) – Keyword arguments of the symbol with respect to which the gradients are taken.
- Returns
grad – A gradient Symbol whose outputs are the corresponding gradients.
- Return type
Symbol
-
infer_shape(*args, **kwargs)
Infers the shapes of all arguments and all outputs given the known shapes of some arguments.
This function takes the known shapes of some arguments, either positionally or as keyword arguments, as input. It returns a tuple of None values if there is not enough information to deduce the missing shapes.
Example
>>> a = mx.sym.var('a') >>> b = mx.sym.var('b') >>> c = a + b >>> arg_shapes, out_shapes, aux_shapes = c.infer_shape(a=(3,3)) >>> arg_shapes [(3L, 3L), (3L, 3L)] >>> out_shapes [(3L, 3L)] >>> aux_shapes [] >>> c.infer_shape(a=(0,3)) # 0s in shape means unknown dimensions. So, returns None. (None, None, None)
Inconsistencies in the known shapes will cause an error to be raised. See the following example:
>>> data = mx.sym.Variable('data') >>> out = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=1000) >>> out = mx.sym.Activation(data=out, act_type='relu') >>> out = mx.sym.FullyConnected(data=out, name='fc2', num_hidden=10) >>> weight_shape= (1, 100) >>> data_shape = (100, 100) >>> out.infer_shape(data=data_shape, fc1_weight=weight_shape) Error in operator fc1: Shape inconsistent, Provided=(1,100), inferred shape=(1000,100)
- Parameters
*args – Shape of arguments in a positional way. Unknown shape can be marked as None.
**kwargs – Keyword arguments of the known shapes.
- Returns
arg_shapes (list of tuple or None) – List of argument shapes. The order is the same as the order of list_arguments().
out_shapes (list of tuple or None) – List of output shapes. The order is the same as the order of list_outputs().
aux_shapes (list of tuple or None) – List of auxiliary state shapes. The order is the same as the order of list_auxiliary_states().
-
infer_shape_partial(*args, **kwargs)
Infers the shape partially.
This function works the same way as infer_shape, except that it can return partial results.
In the following example, information about fc2 is not available, so infer_shape will return a tuple of None values, but infer_shape_partial will return partial values.
Example
>>> data = mx.sym.Variable('data') >>> prev = mx.sym.Variable('prev') >>> fc1 = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=128) >>> fc2 = mx.sym.FullyConnected(data=prev, name='fc2', num_hidden=128) >>> out = mx.sym.Activation(data=mx.sym.elemwise_add(fc1, fc2), act_type='relu') >>> out.list_arguments() ['data', 'fc1_weight', 'fc1_bias', 'prev', 'fc2_weight', 'fc2_bias'] >>> out.infer_shape(data=(10,64)) (None, None, None) >>> out.infer_shape_partial(data=(10,64)) ([(10L, 64L), (128L, 64L), (128L,), (), (), ()], [(10L, 128L)], []) >>> # infers shape if you give information about fc2 >>> out.infer_shape(data=(10,64), prev=(10,128)) ([(10L, 64L), (128L, 64L), (128L,), (10L, 128L), (128L, 128L), (128L,)], [(10L, 128L)], [])
- Parameters
*args – Shape of arguments in a positional way. Unknown shape can be marked as None
**kwargs – Keyword arguments of known shapes.
- Returns
arg_shapes (list of tuple or None) – List of argument shapes. The order is the same as the order of list_arguments().
out_shapes (list of tuple or None) – List of output shapes. The order is the same as the order of list_outputs().
aux_shapes (list of tuple or None) – List of auxiliary state shapes. The order is the same as the order of list_auxiliary_states().
-
infer_type(*args, **kwargs)
Infers the type of all arguments and all outputs, given the known types for some arguments.
This function takes the known types of some arguments, either positionally or as keyword arguments, as input. It returns a tuple of None values if there is not enough information to deduce the missing types.
Inconsistencies in the known types will cause an error to be raised.
Example
>>> a = mx.sym.var('a') >>> b = mx.sym.var('b') >>> c = a + b >>> arg_types, out_types, aux_types = c.infer_type(a='float32') >>> arg_types [<type 'numpy.float32'>, <type 'numpy.float32'>] >>> out_types [<type 'numpy.float32'>] >>> aux_types []
- Parameters
*args – Type of known arguments in a positional way. Unknown type can be marked as None.
**kwargs – Keyword arguments of known types.
- Returns
arg_types (list of numpy.dtype or None) – List of argument types. The order is the same as the order of list_arguments().
out_types (list of numpy.dtype or None) – List of output types. The order is the same as the order of list_outputs().
aux_types (list of numpy.dtype or None) – List of auxiliary state types. The order is the same as the order of list_auxiliary_states().
-
infer_type_partial(*args, **kwargs)
Infers the type partially.
This function works the same way as infer_type, except that it can return partial results.
In the following example, information about prev is not available, so infer_type will return a tuple of None values, but infer_type_partial will return partial values.
Example
>>> data = mx.sym.Variable('data') >>> prev = mx.sym.Variable('prev') >>> casted_prev = mx.sym.cast(prev, dtype='float32') >>> out = mx.sym.Activation(data=mx.sym.elemwise_add(data, casted_prev), act_type='relu') >>> out.list_arguments() ['data', 'prev'] >>> out.infer_type(data='float32') (None, None, None) >>> out.infer_type_partial(data='float32') ([numpy.float32, None], [numpy.float32], []) >>> # infers type if you give information about prev >>> out.infer_type(data='float32', prev='float16') ([numpy.float32, numpy.float16], [numpy.float32], [])
- Parameters
*args – Type of known arguments in a positional way. Unknown type can be marked as None.
**kwargs – Keyword arguments of known types.
- Returns
arg_types (list of numpy.dtype or None) – List of argument types. The order is the same as the order of list_arguments().
out_types (list of numpy.dtype or None) – List of output types. The order is the same as the order of list_outputs().
aux_types (list of numpy.dtype or None) – List of auxiliary state types. The order is the same as the order of list_auxiliary_states().
-
list_arguments()
Lists all the arguments in the symbol.
Example
>>> a = mx.sym.var('a') >>> b = mx.sym.var('b') >>> c = a + b >>> c.list_arguments() ['a', 'b']
- Returns
args – List containing the names of all the arguments required to compute the symbol.
- Return type
list of string
-
list_attr(recursive=False)
Gets all attributes from the symbol.
Example
>>> data = mx.sym.Variable('data', attr={'mood': 'angry'}) >>> data.list_attr() {'mood': 'angry'}
- Returns
ret – A dictionary mapping attribute keys to values.
- Return type
Dict of str to str
-
list_auxiliary_states()
Lists all the auxiliary states in the symbol.
Example
>>> a = mx.sym.var('a') >>> b = mx.sym.var('b') >>> c = a + b >>> c.list_auxiliary_states() []
Example of auxiliary states in BatchNorm.
>>> data = mx.symbol.Variable('data') >>> weight = mx.sym.Variable(name='fc1_weight') >>> fc1 = mx.symbol.FullyConnected(data = data, weight=weight, name='fc1', num_hidden=128) >>> fc2 = mx.symbol.BatchNorm(fc1, name='batchnorm0') >>> fc2.list_auxiliary_states() ['batchnorm0_moving_mean', 'batchnorm0_moving_var']
- Returns
aux_states – List of the auxiliary states in the input symbol.
- Return type
list of str
Notes
Auxiliary states are special states of symbols that do not correspond to an argument, and are not updated by gradient descent. Common examples of auxiliary states include the moving_mean and moving_variance in BatchNorm. Most operators do not have auxiliary states.
-
list_inputs()
Lists all arguments and auxiliary states of this Symbol.
- Returns
inputs – List of all inputs.
- Return type
list of str
Examples
>>> bn = mx.sym.BatchNorm(name='bn') >>> bn.list_arguments() ['bn_data', 'bn_gamma', 'bn_beta'] >>> bn.list_auxiliary_states() ['bn_moving_mean', 'bn_moving_var'] >>> bn.list_inputs() ['bn_data', 'bn_gamma', 'bn_beta', 'bn_moving_mean', 'bn_moving_var']
-
list_outputs()
Lists all the outputs in the symbol.
Example
>>> a = mx.sym.var('a') >>> b = mx.sym.var('b') >>> c = a + b >>> c.list_outputs() ['_plus12_output']
- Returns
List of all the outputs. For most symbols, this list contains only the name of this symbol. For symbol groups, this is a list with the names of all symbols in the group.
- Return type
list of str
-
log(*args, **kwargs)
Convenience fluent method for log(). The arguments are the same as for log(), with this array as data.
-
log10(*args, **kwargs)
Convenience fluent method for log10(). The arguments are the same as for log10(), with this array as data.
-
log1p(*args, **kwargs)
Convenience fluent method for log1p(). The arguments are the same as for log1p(), with this array as data.
-
log2(*args, **kwargs)
Convenience fluent method for log2(). The arguments are the same as for log2(), with this array as data.
-
log_softmax(*args, **kwargs)
Convenience fluent method for log_softmax(). The arguments are the same as for log_softmax(), with this array as data.
-
max(*args, **kwargs)
Convenience fluent method for max(). The arguments are the same as for max(), with this array as data.
-
mean(*args, **kwargs)
Convenience fluent method for mean(). The arguments are the same as for mean(), with this array as data.
-
min(*args, **kwargs)
Convenience fluent method for min(). The arguments are the same as for min(), with this array as data.
-
property name
Gets the name string from the symbol. This function only works for a non-grouped symbol.
- Returns
value – The name of this symbol; returns None for a grouped symbol.
- Return type
str
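Example (a minimal illustration):
>>> a = mx.sym.Variable('a')
>>> a.name
'a'
>>> grouped = mx.sym.Group([a, mx.sym.Variable('b')])
>>> grouped.name is None
True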
-
nanprod(*args, **kwargs)
Convenience fluent method for nanprod(). The arguments are the same as for nanprod(), with this array as data.
-
nansum(*args, **kwargs)
Convenience fluent method for nansum(). The arguments are the same as for nansum(), with this array as data.
-
norm(*args, **kwargs)
Convenience fluent method for norm(). The arguments are the same as for norm(), with this array as data.
-
one_hot(*args, **kwargs)
Convenience fluent method for one_hot(). The arguments are the same as for one_hot(), with this array as data.
-
ones_like(*args, **kwargs)
Convenience fluent method for ones_like(). The arguments are the same as for ones_like(), with this array as data.
-
optimize_for(backend, args=None, aux=None, ctx=None, shape_dict=None, type_dict=None, stype_dict=None, skip_infer=False, **kwargs)
Partitions the current symbol and optimizes it for a given backend; returns a new partitioned symbol.
- Parameters
backend (str) – The name of the backend, as registered in SubgraphBackendRegistry.
args (dict of str to NDArray, optional) – Input arguments to the symbol, required to infer shapes/types before partitioning. If the type is a dict of str to NDArray, then it maps the names of arguments to the corresponding NDArrays. Undefined arguments' NDArrays do not have to be specified in the dict.
aux (dict of str to NDArray, optional) – Input auxiliary arguments to the symbol. If the type is a dict of str to NDArray, then it maps the names of arguments to the corresponding NDArrays.
ctx (Context, optional) – Device context, used to infer stypes
shape_dict (Dict of str->tuple, optional) – Input shape dictionary. Used iff input NDArray is not in args.
type_dict (Dict of str->numpy.dtype, optional) – Input type dictionary. Used iff input NDArray is not in args.
stype_dict (Dict of str->str, optional) – Input storage type dictionary. Used iff input NDArray is not in args.
skip_infer (bool, optional) – If True, the optimization skips the shape, type and storage type inference pass.
kwargs (optional arguments) – Passed on to PrePartition and PostPartition functions of SubgraphProperty
- Returns
out – The created symbol for target backend.
- Return type
SymbolHandle
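Example (a minimal sketch; assumes a build with the named subgraph backend registered, 'MKLDNN' being only illustrative):
>>> data = mx.sym.Variable('data')
>>> net = mx.sym.FullyConnected(data=data, num_hidden=10)
>>> net_opt = net.optimize_for('MKLDNN', shape_dict={'data': (1, 64)})  # illustrative backend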
-
pad(*args, **kwargs)
Convenience fluent method for pad(). The arguments are the same as for pad(), with this array as data.
-
pick(*args, **kwargs)
Convenience fluent method for pick(). The arguments are the same as for pick(), with this array as data.
-
prod(*args, **kwargs)
Convenience fluent method for prod(). The arguments are the same as for prod(), with this array as data.
-
radians(*args, **kwargs)
Convenience fluent method for radians(). The arguments are the same as for radians(), with this array as data.
-
rcbrt(*args, **kwargs)
Convenience fluent method for rcbrt(). The arguments are the same as for rcbrt(), with this array as data.
-
reciprocal(*args, **kwargs)
Convenience fluent method for reciprocal(). The arguments are the same as for reciprocal(), with this array as data.
-
relu(*args, **kwargs)
Convenience fluent method for relu(). The arguments are the same as for relu(), with this array as data.
-
repeat(*args, **kwargs)
Convenience fluent method for repeat(). The arguments are the same as for repeat(), with this array as data.
-
reshape(*args, **kwargs)
Convenience fluent method for reshape(). The arguments are the same as for reshape(), with this array as data.
-
reshape_like(*args, **kwargs)
Convenience fluent method for reshape_like(). The arguments are the same as for reshape_like(), with this array as data.
-
rint(*args, **kwargs)
Convenience fluent method for rint(). The arguments are the same as for rint(), with this array as data.
-
round(*args, **kwargs)
Convenience fluent method for round(). The arguments are the same as for round(), with this array as data.
-
rsqrt(*args, **kwargs)
Convenience fluent method for rsqrt(). The arguments are the same as for rsqrt(), with this array as data.
-
save(fname, remove_amp_cast=True)
Saves the symbol to a file.
You can also use pickle if you only work in Python. The advantage of the load/save functions is that the file contents are language agnostic: a model saved by one language binding can be loaded by a different language binding of MXNet. You also get the benefit of being able to directly load/save from cloud storage (S3, HDFS).
- Parameters
fname (str) – The name of the file, for example:
"s3://my-bucket/path/my-s3-symbol"
"hdfs://my-bucket/path/my-hdfs-symbol"
"/path-to/my-local-symbol"
remove_amp_cast (bool, optional) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.
See also
symbol.load()
Used to load symbol from file.
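Example (a minimal local round trip; the file name is illustrative):
>>> a = mx.sym.Variable('a')
>>> b = mx.sym.sin(a)
>>> b.save('sym-b.json')
>>> b2 = mx.sym.load('sym-b.json')
>>> b.tojson() == b2.tojson()
True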
-
shape_array(*args, **kwargs)
Convenience fluent method for shape_array(). The arguments are the same as for shape_array(), with this array as data.
-
sigmoid(*args, **kwargs)
Convenience fluent method for sigmoid(). The arguments are the same as for sigmoid(), with this array as data.
-
sign(*args, **kwargs)
Convenience fluent method for sign(). The arguments are the same as for sign(), with this array as data.
-
simple_bind(ctx, grad_req='write', type_dict=None, stype_dict=None, group2ctx=None, shared_arg_names=None, shared_exec=None, shared_buffer=None, **kwargs)
Binds the current symbol to get an executor, allocating all the arguments needed. Allows specifying data types.
This function simplifies the binding procedure: you need to specify only input data shapes. Before binding the executor, the function allocates arguments and auxiliary states that were not explicitly specified.
Example
>>> x = mx.sym.Variable('x') >>> y = mx.sym.FullyConnected(x, num_hidden=4) >>> exe = y.simple_bind(mx.cpu(), x=(5,4), grad_req='null') >>> exe.forward() [<NDArray 5x4 @cpu(0)>] >>> exe.outputs[0].asnumpy() array([[ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]], dtype=float32) >>> exe.arg_arrays [<NDArray 5x4 @cpu(0)>, <NDArray 4x4 @cpu(0)>, <NDArray 4 @cpu(0)>] >>> exe.grad_arrays [<NDArray 5x4 @cpu(0)>, <NDArray 4x4 @cpu(0)>, <NDArray 4 @cpu(0)>]
- Parameters
ctx (Context) – The device context the generated executor to run on.
grad_req (string) –
{'write', 'add', 'null'}, or list of str or dict of str to str, optional. Specifies how the gradient is updated into args_grad:
'write' means the gradient is written to the specified args_grad NDArray every time.
'add' means the gradient is added to the specified NDArray every time.
'null' means no action is taken; the gradient may not be calculated.
type_dict (Dict of str->numpy.dtype) – Input type dictionary, name->dtype
stype_dict (Dict of str->str) – Input storage type dictionary, name->storage_type
group2ctx (Dict of string to mx.Context) – The dict mapping the ctx_group attribute to the context assignment.
shared_arg_names (List of string) – The argument names whose NDArray of shared_exec can be reused for initializing the current executor.
shared_exec (Executor) – The executor whose arg_arrays, grad_arrays, and aux_arrays can be reused for initializing the current executor.
shared_buffer (Dict of string to NDArray) – The dict mapping argument names to the NDArrays that can be reused for initializing the current executor. This buffer will be checked for reuse if one argument name of the current executor is not found in shared_arg_names. The NDArrays are expected to have default storage type.
kwargs (Dict of str->shape) – Input shape dictionary, name->shape
- Returns
executor – The generated executor
- Return type
mxnet.Executor
-
sin(*args, **kwargs)
Convenience fluent method for sin(). The arguments are the same as for sin(), with this array as data.
-
sinh(*args, **kwargs)
Convenience fluent method for sinh(). The arguments are the same as for sinh(), with this array as data.
-
size_array(*args, **kwargs)
Convenience fluent method for size_array(). The arguments are the same as for size_array(), with this array as data.
-
slice(*args, **kwargs)
Convenience fluent method for slice(). The arguments are the same as for slice(), with this array as data.
-
slice_axis(*args, **kwargs)
Convenience fluent method for slice_axis(). The arguments are the same as for slice_axis(), with this array as data.
-
slice_like(*args, **kwargs)
Convenience fluent method for slice_like(). The arguments are the same as for slice_like(), with this array as data.
-
softmax(*args, **kwargs)
Convenience fluent method for softmax(). The arguments are the same as for softmax(), with this array as data.
-
softmin(*args, **kwargs)
Convenience fluent method for softmin(). The arguments are the same as for softmin(), with this array as data.
-
sort(*args, **kwargs)
Convenience fluent method for sort(). The arguments are the same as for sort(), with this array as data.
-
space_to_depth(*args, **kwargs)
Convenience fluent method for space_to_depth(). The arguments are the same as for space_to_depth(), with this array as data.
-
split(*args, **kwargs)
Convenience fluent method for split(). The arguments are the same as for split(), with this array as data.
-
split_v2(*args, **kwargs)
Convenience fluent method for split_v2(). The arguments are the same as for split_v2(), with this array as data.
-
sqrt(*args, **kwargs)
Convenience fluent method for sqrt(). The arguments are the same as for sqrt(), with this array as data.
-
square(*args, **kwargs)
Convenience fluent method for square(). The arguments are the same as for square(), with this array as data.
-
squeeze(axis=None, inplace=False, **kwargs)
Convenience fluent method for squeeze(). The arguments are the same as for squeeze(), with this array as data.
-
sum(*args, **kwargs)
Convenience fluent method for sum(). The arguments are the same as for sum(), with this array as data.
-
swapaxes(*args, **kwargs)
Convenience fluent method for swapaxes(). The arguments are the same as for swapaxes(), with this array as data.
-
take(*args, **kwargs)
Convenience fluent method for take(). The arguments are the same as for take(), with this array as data.
-
tan(*args, **kwargs)
Convenience fluent method for tan(). The arguments are the same as for tan(), with this array as data.
-
tanh(*args, **kwargs)
Convenience fluent method for tanh(). The arguments are the same as for tanh(), with this array as data.
-
tile(*args, **kwargs)
Convenience fluent method for tile(). The arguments are the same as for tile(), with this array as data.
-
tojson(remove_amp_cast=True)
Saves the symbol to a JSON string.
See also
symbol.load_json()
Used to load symbol from JSON string.
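Example (a minimal round trip through a JSON string):
>>> a = mx.sym.Variable('a')
>>> b = a + 1
>>> json_str = b.tojson()
>>> b2 = mx.sym.load_json(json_str)
>>> b.tojson() == b2.tojson()
True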
-
topk(*args, **kwargs)
Convenience fluent method for topk(). The arguments are the same as for topk(), with this array as data.
-
transpose(*args, **kwargs)
Convenience fluent method for transpose(). The arguments are the same as for transpose(), with this array as data.
-
trunc(*args, **kwargs)
Convenience fluent method for trunc(). The arguments are the same as for trunc(), with this array as data.
-
zeros_like(*args, **kwargs)
Convenience fluent method for zeros_like(). The arguments are the same as for zeros_like(), with this array as data.
-
mxnet.symbol.var(name, attr=None, shape=None, lr_mult=None, wd_mult=None, dtype=None, init=None, stype=None, **kwargs)
Creates a symbolic variable with the specified name.
Example
>>> data = mx.sym.Variable('data', attr={'a': 'b'}) >>> data <Symbol data> >>> csr_data = mx.sym.Variable('csr_data', stype='csr') >>> csr_data <Symbol csr_data> >>> row_sparse_weight = mx.sym.Variable('weight', stype='row_sparse') >>> row_sparse_weight <Symbol weight>
- Parameters
name (str) – Variable name.
attr (Dict of strings) – Additional attributes to set on the variable. Format {string : string}.
shape (tuple) – The shape of a variable. If specified, this will be used during the shape inference. If one has specified a different shape for this variable using a keyword argument when calling shape inference, this shape information will be ignored.
lr_mult (float) – The learning rate multiplier for input variable.
wd_mult (float) – Weight decay multiplier for input variable.
dtype (str or numpy.dtype) – The dtype for input variable. If not specified, this value will be inferred.
init (initializer (mxnet.init.*)) – Initializer for this variable to (optionally) override the default initializer.
stype (str) – The storage type of the variable, such as ‘row_sparse’, ‘csr’, ‘default’, etc
kwargs (Additional attribute variables) – Additional attributes must start and end with double underscores.
- Returns
variable – A symbol corresponding to an input to the computation graph.
- Return type
Symbol
-
mxnet.symbol.Variable(name, attr=None, shape=None, lr_mult=None, wd_mult=None, dtype=None, init=None, stype=None, **kwargs)
Creates a symbolic variable with the specified name. Alias of mxnet.symbol.var.
Example
>>> data = mx.sym.Variable('data', attr={'a': 'b'}) >>> data <Symbol data> >>> csr_data = mx.sym.Variable('csr_data', stype='csr') >>> csr_data <Symbol csr_data> >>> row_sparse_weight = mx.sym.Variable('weight', stype='row_sparse') >>> row_sparse_weight <Symbol weight>
- Parameters
name (str) – Variable name.
attr (Dict of strings) – Additional attributes to set on the variable. Format {string : string}.
shape (tuple) – The shape of a variable. If specified, this will be used during the shape inference. If one has specified a different shape for this variable using a keyword argument when calling shape inference, this shape information will be ignored.
lr_mult (float) – The learning rate multiplier for input variable.
wd_mult (float) – Weight decay multiplier for input variable.
dtype (str or numpy.dtype) – The dtype for input variable. If not specified, this value will be inferred.
init (initializer (mxnet.init.*)) – Initializer for this variable to (optionally) override the default initializer.
stype (str) – The storage type of the variable, such as ‘row_sparse’, ‘csr’, ‘default’, etc
kwargs (Additional attribute variables) – Additional attributes must start and end with double underscores.
- Returns
variable – A symbol corresponding to an input to the computation graph.
- Return type
Symbol
-
mxnet.symbol.Group(symbols, create_fn=<class 'mxnet.symbol.symbol.Symbol'>)
Creates a symbol that contains a collection of other symbols, grouped together. A classic symbol (mx.sym.Symbol) will be returned if all the symbols in the list are of that type; a numpy symbol (mx.sym.np._Symbol) will be returned if all the symbols in the list are of that type. A TypeError will be raised if a list of mixed classic and numpy symbols is provided.
Example
>>> a = mx.sym.Variable('a') >>> b = mx.sym.Variable('b') >>> mx.sym.Group([a,b]) <Symbol Grouped>
- Parameters
symbols (list) – List of symbols to be grouped.
create_fn (mx.sym.Symbol or mx.sym.np._Symbol) – Symbol class for creating the grouped symbol.
- Returns
sym – A group symbol.
- Return type
Symbol
-
mxnet.symbol.load(fname)
Loads a symbol from a JSON file.
You can also use pickle if you only work in Python. The advantage of load/save is that the file is language agnostic: a file saved using save can be loaded by other language bindings of MXNet. You also get the benefit of being able to directly load/save from cloud storage (S3, HDFS).
- Parameters
fname (str) –
The name of the file, examples:
s3://my-bucket/path/my-s3-symbol
hdfs://my-bucket/path/my-hdfs-symbol
/path-to/my-local-symbol
- Returns
sym – The loaded symbol.
- Return type
Symbol
See also
Symbol.save()
Used to save symbol into file.
-
mxnet.symbol.load_json(json_str)
Loads a symbol from a JSON string.
- Parameters
json_str (str) – A JSON string.
- Returns
sym – The loaded symbol.
- Return type
Symbol
See also
Symbol.tojson()
Used to save symbol into json string.
-
mxnet.symbol.pow(base, exp)
Returns the element-wise result of the base elements raised to the powers from the exp elements.
Both inputs can be a Symbol or a scalar number. Broadcasting is not supported; use broadcast_pow instead.
sym.pow is being deprecated; please use sym.power instead.
- Parameters
base (Symbol or scalar) – The base value.
exp (Symbol or scalar) – The exponent value.
- Returns
The elements of base raised to the corresponding exponents in exp.
- Return type
Symbol or scalar
Examples
>>> mx.sym.pow(2, 3) 8 >>> x = mx.sym.Variable('x') >>> y = mx.sym.Variable('y') >>> z = mx.sym.pow(x, 2) >>> z.eval(x=mx.nd.array([1,2]))[0].asnumpy() array([ 1., 4.], dtype=float32) >>> z = mx.sym.pow(3, y) >>> z.eval(y=mx.nd.array([2,3]))[0].asnumpy() array([ 9., 27.], dtype=float32) >>> z = mx.sym.pow(x, y) >>> z.eval(x=mx.nd.array([3,4]), y=mx.nd.array([2,3]))[0].asnumpy() array([ 9., 64.], dtype=float32)
-
mxnet.symbol.power(base, exp)
Returns the element-wise result of the base elements raised to the powers from the exp elements.
Both inputs can be a Symbol or a scalar number. Broadcasting is not supported; use broadcast_pow instead.
- Parameters
base (Symbol or scalar) – The base value.
exp (Symbol or scalar) – The exponent value.
- Returns
The elements of base raised to the corresponding exponents in exp.
- Return type
Symbol or scalar
Examples
>>> mx.sym.power(2, 3) 8 >>> x = mx.sym.Variable('x') >>> y = mx.sym.Variable('y') >>> z = mx.sym.power(x, 2) >>> z.eval(x=mx.nd.array([1,2]))[0].asnumpy() array([ 1., 4.], dtype=float32) >>> z = mx.sym.power(3, y) >>> z.eval(y=mx.nd.array([2,3]))[0].asnumpy() array([ 9., 27.], dtype=float32) >>> z = mx.sym.power(x, y) >>> z.eval(x=mx.nd.array([3,4]), y=mx.nd.array([2,3]))[0].asnumpy() array([ 9., 64.], dtype=float32)
-
mxnet.symbol.maximum(left, right)
Returns the element-wise maximum of the input elements.
Both inputs can be a Symbol or a scalar number. Broadcasting is not supported.
- Parameters
left (Symbol or scalar) – First input.
right (Symbol or scalar) – Second input.
- Returns
The element-wise maximum of the input symbols.
- Return type
Symbol or scalar
Examples
>>> mx.sym.maximum(2, 3.5) 3.5 >>> x = mx.sym.Variable('x') >>> y = mx.sym.Variable('y') >>> z = mx.sym.maximum(x, 4) >>> z.eval(x=mx.nd.array([3,5,2,10]))[0].asnumpy() array([ 4., 5., 4., 10.], dtype=float32) >>> z = mx.sym.maximum(x, y) >>> z.eval(x=mx.nd.array([3,4]), y=mx.nd.array([10,2]))[0].asnumpy() array([ 10., 4.], dtype=float32)
-
mxnet.symbol.minimum(left, right)
Returns the element-wise minimum of the input elements.
Both inputs can be a Symbol or a scalar number. Broadcasting is not supported.
- Parameters
left (Symbol or scalar) – First input.
right (Symbol or scalar) – Second input.
- Returns
The element-wise minimum of the input symbols.
- Return type
Symbol or scalar
Examples
>>> mx.sym.minimum(2, 3.5) 2 >>> x = mx.sym.Variable('x') >>> y = mx.sym.Variable('y') >>> z = mx.sym.minimum(x, 4) >>> z.eval(x=mx.nd.array([3,5,2,10]))[0].asnumpy() array([ 3., 4., 2., 4.], dtype=float32) >>> z = mx.sym.minimum(x, y) >>> z.eval(x=mx.nd.array([3,4]), y=mx.nd.array([10,2]))[0].asnumpy() array([ 3., 2.], dtype=float32)
-
mxnet.symbol.hypot(left, right)
Given the "legs" of a right triangle, returns its hypotenuse.
Equivalent to \(\sqrt{left^2 + right^2}\), element-wise. Both inputs can be a Symbol or a scalar number. Broadcasting is not supported.
- Parameters
left (Symbol or scalar) – First leg of the triangle(s).
right (Symbol or scalar) – Second leg of the triangle(s).
- Returns
The hypotenuse of the triangle(s).
- Return type
Symbol or scalar
Examples
>>> mx.sym.hypot(3, 4) 5.0 >>> x = mx.sym.Variable('x') >>> y = mx.sym.Variable('y') >>> z = mx.sym.hypot(x, 4) >>> z.eval(x=mx.nd.array([3,5,2]))[0].asnumpy() array([ 5., 6.40312433, 4.47213602], dtype=float32) >>> z = mx.sym.hypot(x, y) >>> z.eval(x=mx.nd.array([3,4]), y=mx.nd.array([10,2]))[0].asnumpy() array([ 10.44030666, 4.47213602], dtype=float32)
-
mxnet.symbol.eye(N, M=0, k=0, dtype=None, **kwargs)
Returns a new 2-D symbol filled with ones on the diagonal and zeros elsewhere.
- Parameters
N (int) – Number of rows in the output.
M (int, optional) – Number of columns in the output. If 0, defaults to N.
k (int, optional) – Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal.
dtype (str or numpy.dtype, optional) – The value type of the inner value, defaulting to np.float32.
- Returns
out – The created Symbol.
- Return type
Symbol
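Example (evaluated on CPU; array formatting may vary):
>>> e = mx.sym.eye(2)
>>> e.eval(ctx=mx.cpu())[0].asnumpy()
array([[ 1.,  0.],
       [ 0.,  1.]], dtype=float32)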
-
mxnet.symbol.zeros(shape, dtype=None, **kwargs)
Returns a new symbol of given shape and type, filled with zeros.
- Parameters
shape (int or sequence of ints) – Shape of the new array.
dtype (str or numpy.dtype, optional) – The value type of the inner value, defaulting to np.float32.
- Returns
out – The created Symbol.
- Return type
Symbol
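Example (evaluated on CPU; array formatting may vary):
>>> z = mx.sym.zeros((2, 3))
>>> z.eval(ctx=mx.cpu())[0].asnumpy()
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)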
-
mxnet.symbol.ones(shape, dtype=None, **kwargs)
Returns a new symbol of given shape and type, filled with ones.
- Parameters
shape (int or sequence of ints) – Shape of the new array.
dtype (str or numpy.dtype, optional) – The value type of the inner value, defaulting to np.float32.
- Returns
out – The created Symbol.
- Return type
Symbol
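Example (evaluated on CPU; array formatting may vary):
>>> o = mx.sym.ones((1, 3))
>>> o.eval(ctx=mx.cpu())[0].asnumpy()
array([[ 1.,  1.,  1.]], dtype=float32)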
-
mxnet.symbol.full(shape, val, dtype=None, **kwargs)
Returns a new symbol of given shape and type, filled with the given value val.
- Parameters
shape (int or sequence of ints) – Shape of the new array.
val (scalar) – Fill value.
dtype (str or numpy.dtype, optional) – The value type of the inner value, defaulting to np.float32.
- Returns
out – The created Symbol.
- Return type
Symbol
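Example (evaluated on CPU; array formatting may vary):
>>> f = mx.sym.full((2, 2), 3.0)
>>> f.eval(ctx=mx.cpu())[0].asnumpy()
array([[ 3.,  3.],
       [ 3.,  3.]], dtype=float32)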
-
mxnet.symbol.arange(start, stop=None, step=1.0, repeat=1, infer_range=False, name=None, dtype=None)
Returns evenly spaced values within a given interval.
Values are generated within the half-open interval [start, stop): the interval includes start but excludes stop. The function is similar to the built-in Python function range and to numpy.arange, but returns a Symbol.
- Parameters
start (number, optional) – Start of the interval. The interval includes this value. The default start value is 0.
stop (number) – End of the interval. The interval does not include this value.
step (number, optional) – Spacing between values.
repeat (int, optional) – The repeating time of all elements. E.g. with repeat=3, the element a will be repeated three times –> a, a, a.
infer_range (boolean, optional) – When set to True, infer the stop position from the start, step, repeat, and output tensor size.
dtype (str or numpy.dtype, optional) – The value type of the inner value, defaulting to np.float32.
- Returns
out – The created Symbol.
- Return type
Symbol
-
mxnet.symbol.linspace(start, stop, num, endpoint=True, name=None, dtype=None)
Returns evenly spaced numbers within a specified interval.
Values are generated within the half-open interval [start, stop) or the closed interval [start, stop], depending on whether endpoint is False or True. The function is similar to numpy.linspace, but returns a Symbol.
- Parameters
start (number) – Start of the interval.
stop (number) – End of the interval, unless endpoint is set to False. In that case, the sequence consists of all but the last of num + 1 evenly spaced samples, so that stop is excluded. Note that the step size changes when endpoint is False.
num (number) – Number of samples to generate. Must be non-negative.
endpoint (bool) – If True, stop is the last sample. Otherwise, it is not included. The default is True.
ctx (Context, optional) – Device context. The default is the current default context.
dtype (str or numpy.dtype, optional) – The data type of the NDArray. The default data type is np.float32.
- Returns
out – The created Symbol.
- Return type
Symbol
-
mxnet.symbol.histogram(a, bins=10, range=None, **kwargs)
Computes the histogram of the input data.
- Parameters
a (Symbol) – Input data. The histogram is computed over the flattened array.
bins (int or sequence of scalars) – If bins is an int, it defines the number of equal-width bins in the given range (10, by default). If bins is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths.
range ((float, float), required if bins is an integer) – The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()). Values outside the range are ignored. The first element of the range must be less than or equal to the second. range also affects the automatic bin computation: the range will be equally divided by the number of bins.
- Returns
out – The created Symbol.
- Return type
Symbol
-
mxnet.symbol.split_v2(ary, indices_or_sections, axis=0, squeeze_axis=False)
Splits an array into multiple sub-arrays.
- Parameters
ary (Symbol) – Array to be divided into sub-arrays.
indices_or_sections (int or tuple of ints) – If indices_or_sections is an integer, N, the array will be divided into N equal arrays along axis. If such a split is not possible, an error is raised. If indices_or_sections is a 1-D array of sorted integers, the entries indicate where along axis the array is split. For example, [2, 3] would, for axis=0, result in ary[:2], ary[2:3], and ary[3:]. If an index exceeds the dimension of the array along axis, an empty sub-array is returned correspondingly.
axis (int, optional) – The axis along which to split; default is 0.
squeeze_axis (boolean, optional) – Whether to squeeze the axis of the sub-arrays; only useful when the size of the sub-arrays is 1 on the axis. Default is False.
- Returns
out – The created Symbol.
- Return type
Symbol
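Example (evaluating the three sub-arrays on CPU; array formatting may vary):
>>> x = mx.sym.Variable('x')
>>> parts = mx.sym.split_v2(x, 3)
>>> res = parts.eval(ctx=mx.cpu(), x=mx.nd.array([1., 2., 3., 4., 5., 6.]))
>>> [r.asnumpy() for r in res]
[array([ 1.,  2.], dtype=float32), array([ 3.,  4.], dtype=float32), array([ 5.,  6.], dtype=float32)]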