Class org.apache.mxnet.javaapi.NDArrayBase

abstract class NDArrayBase extends AnyRef

Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new NDArrayBase()

Abstract Value Members

  1. abstract def Activation(data: NDArray, act_type: String, out: NDArray): Array[NDArray]

    Applies an activation function element-wise to the input.
    
    The following activation functions are supported:
    
    - `relu`: Rectified Linear Unit, :math:`y = max(x, 0)`
    - `sigmoid`: :math:`y = \frac{1}{1 + exp(-x)}`
    - `tanh`: Hyperbolic tangent, :math:`y = \frac{exp(x) - exp(-x)}{exp(x) + exp(-x)}`
    - `softrelu`: Soft ReLU, or SoftPlus, :math:`y = log(1 + exp(x))`
    - `softsign`: :math:`y = \frac{x}{1 + abs(x)}`
    
    
    
    Defined in src/operator/nn/activation.cc:L164
    data

    The input array.

    act_type

    Activation function to be applied.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
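
    A plain-Java sketch of these five functions (illustrative only; it does not
    use the MXNet runtime, and the class name is made up for the example)::

      public final class ActivationFunctions {
          static double relu(double x)     { return Math.max(x, 0.0); }
          static double sigmoid(double x)  { return 1.0 / (1.0 + Math.exp(-x)); }
          static double tanh(double x)     { return Math.tanh(x); }
          static double softrelu(double x) { return Math.log(1.0 + Math.exp(x)); } // SoftPlus
          static double softsign(double x) { return x / (1.0 + Math.abs(x)); }

          public static void main(String[] args) {
              for (double x : new double[]{-2.0, 0.0, 3.0}) {
                  System.out.printf("x=%5.1f relu=%.4f sigmoid=%.4f softsign=%.4f%n",
                          x, relu(x), sigmoid(x), softsign(x));
              }
          }
      }
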
  2. abstract def BatchNorm(po: BatchNormParam): Array[NDArray]

    Batch normalization.
    
    Normalizes a data batch by mean and variance, and applies a scale ``gamma`` as
    well as offset ``beta``.
    
    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis:
    
    .. math::
    
      data\_mean[i] = mean(data[:,i,:,...]) \\
      data\_var[i] = var(data[:,i,:,...])
    
    Then compute the normalized output, which has the same shape as input, as following:
    
    .. math::
    
      out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]
    
    Both *mean* and *var* return a scalar by treating the input as a vector.
    
    Assume the input has size *k* on axis 1, then both ``gamma`` and ``beta``
    have shape *(k,)*. If ``output_mean_var`` is set to be true, then it also outputs ``data_mean`` and
    the inverse of ``data_var``, which are needed for the backward pass. Note that gradients for these
    two outputs are blocked.
    
    Besides the inputs and the outputs, this operator accepts two auxiliary
    states, ``moving_mean`` and ``moving_var``, which are *k*-length
    vectors. They are global statistics for the whole dataset, which are updated
    by::
    
      moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
      moving_var = moving_var * momentum + data_var * (1 - momentum)
    
    If ``use_global_stats`` is set to be true, then ``moving_mean`` and
    ``moving_var`` are used instead of ``data_mean`` and ``data_var`` to compute
    the output. It is often used during inference.
    
    The parameter ``axis`` specifies which axis of the input shape denotes
    the 'channel' (separately normalized groups).  The default is 1.  Specifying -1 sets the channel
    axis to be the last item in the input shape.
    
    Both ``gamma`` and ``beta`` are learnable parameters. But if ``fix_gamma`` is true,
    then set ``gamma`` to 1 and its gradient to 0.
    
    .. Note::
      When ``fix_gamma`` is set to True, no sparse support is provided. If ``fix_gamma`` is set to False,
      sparse tensors will fall back to dense storage.
    
    
    
    Defined in src/operator/nn/batch_norm.cc:L608
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
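
    A plain-Java numerical sketch of the normalization and moving-statistics
    update above, for an input of shape (N, C, W) with axis 1 as the channel
    axis (illustrative only; gamma = 1, beta = 0, and the eps/momentum values
    are arbitrary assumptions, not MXNet defaults)::

      public final class BatchNormSketch {
          public static void main(String[] args) {
              double[][][] data = {{{1, 2}, {10, 20}}, {{3, 4}, {30, 40}}}; // (N=2, C=2, W=2)
              int n = data.length, c = data[0].length, w = data[0][0].length;
              double eps = 1e-5, momentum = 0.9;
              double[] mean = new double[c], variance = new double[c];
              double[] movingMean = new double[c], movingVar = new double[c];

              // data_mean[i] = mean(data[:,i,:]);  data_var[i] = var(data[:,i,:])
              for (int i = 0; i < c; i++) {
                  double sum = 0, sumSq = 0;
                  for (int b = 0; b < n; b++)
                      for (int x = 0; x < w; x++) {
                          sum += data[b][i][x];
                          sumSq += data[b][i][x] * data[b][i][x];
                      }
                  mean[i] = sum / (n * w);
                  variance[i] = sumSq / (n * w) - mean[i] * mean[i];
              }

              // out = (data - data_mean) / sqrt(data_var + eps) * gamma + beta
              double[][][] out = new double[n][c][w];
              for (int b = 0; b < n; b++)
                  for (int i = 0; i < c; i++)
                      for (int x = 0; x < w; x++)
                          out[b][i][x] = (data[b][i][x] - mean[i]) / Math.sqrt(variance[i] + eps);

              // auxiliary-state update described above
              for (int i = 0; i < c; i++) {
                  movingMean[i] = movingMean[i] * momentum + mean[i] * (1 - momentum);
                  movingVar[i]  = movingVar[i]  * momentum + variance[i] * (1 - momentum);
              }
              System.out.println(java.util.Arrays.deepToString(out));
          }
      }
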
  3. abstract def BatchNorm_v1(po: BatchNorm_v1Param): Array[NDArray]

    Batch normalization.
    
    This operator is DEPRECATED. Perform BatchNorm on the input.
    
    Normalizes a data batch by mean and variance, and applies a scale ``gamma`` as
    well as offset ``beta``.
    
    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis:
    
    .. math::
    
      data\_mean[i] = mean(data[:,i,:,...]) \\
      data\_var[i] = var(data[:,i,:,...])
    
    Then compute the normalized output, which has the same shape as input, as following:
    
    .. math::
    
      out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]
    
    Both *mean* and *var* return a scalar by treating the input as a vector.
    
    Assume the input has size *k* on axis 1, then both ``gamma`` and ``beta``
    have shape *(k,)*. If ``output_mean_var`` is set to be true, then it also outputs ``data_mean`` and
    ``data_var``, which are needed for the backward pass.
    
    Besides the inputs and the outputs, this operator accepts two auxiliary
    states, ``moving_mean`` and ``moving_var``, which are *k*-length
    vectors. They are global statistics for the whole dataset, which are updated
    by::
    
      moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
      moving_var = moving_var * momentum + data_var * (1 - momentum)
    
    If ``use_global_stats`` is set to be true, then ``moving_mean`` and
    ``moving_var`` are used instead of ``data_mean`` and ``data_var`` to compute
    the output. It is often used during inference.
    
    Both ``gamma`` and ``beta`` are learnable parameters. But if ``fix_gamma`` is true,
    then set ``gamma`` to 1 and its gradient to 0.
    
    There's no sparse support for this operator, and it will exhibit problematic behavior if used with
    sparse tensors.
    
    
    
    Defined in src/operator/batch_norm_v1.cc:L94
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  4. abstract def BilinearSampler(data: NDArray, grid: NDArray, cudnn_off: Boolean, out: NDArray): Array[NDArray]

    Applies bilinear sampling to input feature map.
    
    Bilinear sampling is the key operation of [NIPS2015] "Spatial Transformer Networks". The usage of the operator is very similar to the remap function in OpenCV,
    except that this operator has a backward pass.
    
    Given :math:`data` and :math:`grid`, then the output is computed by
    
    .. math::
      x_{src} = grid[batch, 0, y_{dst}, x_{dst}] \\
      y_{src} = grid[batch, 1, y_{dst}, x_{dst}] \\
      output[batch, channel, y_{dst}, x_{dst}] = G(data[batch, channel, y_{src}, x_{src}])
    
    :math:`x_{dst}`, :math:`y_{dst}` enumerate all spatial locations in :math:`output`, and :math:`G()` denotes the bilinear interpolation kernel.
    The out-of-boundary points will be padded with zeros. The shape of the output will be (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3]).
    
    The operator assumes that :math:`data` has 'NCHW' layout and :math:`grid` has been normalized to [-1, 1].
    
    BilinearSampler often cooperates with GridGenerator which generates sampling grids for BilinearSampler.
    GridGenerator supports two kinds of transformation: ``affine`` and ``warp``.
    If users want to design a CustomOp to manipulate :math:`grid`, please first refer to the code of GridGenerator.
    
    Example 1::
    
      ## Zoom out data two times
      data = array([[[[1, 4, 3, 6],
                      [1, 8, 8, 9],
                      [0, 4, 1, 5],
                      [1, 0, 1, 3]]]])

      affine_matrix = array([[2, 0, 0],
                             [0, 2, 0]])

      affine_matrix = reshape(affine_matrix, shape=(1, 6))

      grid = GridGenerator(data=affine_matrix, transform_type='affine', target_shape=(4, 4))

      out = BilinearSampler(data, grid)

      out
      [[[[ 0,    0,    0,   0],
         [ 0,    3.5,  6.5, 0],
         [ 0,    1.25, 2.5, 0],
         [ 0,    0,    0,   0]]]]
    
    
    Example 2::
    
      ## shift data horizontally by -1 pixel
    
      data = array([[[[1, 4, 3, 6],
                      [1, 8, 8, 9],
                      [0, 4, 1, 5],
                      [1, 0, 1, 3]]]])

      warp_matrix = array([[[[1, 1, 1, 1],
                             [1, 1, 1, 1],
                             [1, 1, 1, 1],
                             [1, 1, 1, 1]],
                            [[0, 0, 0, 0],
                             [0, 0, 0, 0],
                             [0, 0, 0, 0],
                             [0, 0, 0, 0]]]])

      grid = GridGenerator(data=warp_matrix, transform_type='warp')
      out = BilinearSampler(data, grid)

      out
      [[[[ 4,  3,  6,  0],
         [ 8,  8,  9,  0],
         [ 4,  1,  5,  0],
         [ 0,  1,  3,  0]]]]
    
    
    Defined in src/operator/bilinear_sampler.cc:L255
    data

    Input data to the BilinearSamplerOp.

    grid

    Input grid to the BilinearSamplerOp. The grid has two channels: x_src and y_src.

    cudnn_off

    Whether to turn cuDNN off.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
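
    A plain-Java sketch of the sampling rule for a single-channel map
    (illustrative only; it assumes the usual convention that a normalized grid
    value of -1 maps to pixel 0 and +1 maps to the last pixel, and it pads
    out-of-boundary reads with zeros)::

      public final class BilinearSampleSketch {
          // G(data[y_src, x_src]): bilinear interpolation with zero padding.
          static double sample(double[][] data, double ySrc, double xSrc) {
              int y0 = (int) Math.floor(ySrc), x0 = (int) Math.floor(xSrc);
              double dy = ySrc - y0, dx = xSrc - x0, v = 0;
              for (int i = 0; i <= 1; i++)
                  for (int j = 0; j <= 1; j++) {
                      int y = y0 + i, x = x0 + j;
                      if (y >= 0 && y < data.length && x >= 0 && x < data[0].length)
                          v += (i == 0 ? 1 - dy : dy) * (j == 0 ? 1 - dx : dx) * data[y][x];
                  }
              return v;
          }

          public static void main(String[] args) {
              double[][] data = {{1, 4, 3, 6}, {1, 8, 8, 9}, {0, 4, 1, 5}, {1, 0, 1, 3}};
              int h = data.length, w = data[0].length;
              // An identity grid: each output pixel samples its own location.
              for (int yDst = 0; yDst < h; yDst++) {
                  for (int xDst = 0; xDst < w; xDst++) {
                      double xNorm = 2.0 * xDst / (w - 1) - 1; // grid channel 0, in [-1, 1]
                      double yNorm = 2.0 * yDst / (h - 1) - 1; // grid channel 1, in [-1, 1]
                      double xSrc = (xNorm + 1) * (w - 1) / 2; // back to pixel coordinates
                      double ySrc = (yNorm + 1) * (h - 1) / 2;
                      System.out.printf("%6.2f", sample(data, ySrc, xSrc));
                  }
                  System.out.println();
              }
          }
      }
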
  5. abstract def BlockGrad(data: NDArray, out: NDArray): Array[NDArray]

    Stops gradient computation.
    
    Stops the accumulated gradient of the inputs from flowing through this operator
    in the backward direction. In other words, this operator prevents the contribution
    of its inputs from being taken into account when computing gradients.
    
    Example::
    
      v1 = [1, 2]
      v2 = [0, 1]
      a = Variable('a')
      b = Variable('b')
      b_stop_grad = stop_gradient(3 * b)
      loss = MakeLoss(b_stop_grad + a)
    
      executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
      executor.forward(is_train=True, a=v1, b=v2)
      executor.outputs
      [ 1.  5.]
    
      executor.backward()
      executor.grad_arrays
      [ 0.  0.]
      [ 1.  1.]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L325
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  6. abstract def CTCLoss(po: CTCLossParam): Array[NDArray]

    Connectionist Temporal Classification Loss.
    
    .. note:: The existing alias ``contrib_CTCLoss`` is deprecated.
    
    The shapes of the inputs and outputs:
    
    - **data**: `(sequence_length, batch_size, alphabet_size)`
    - **label**: `(batch_size, label_sequence_length)`
    - **out**: `(batch_size)`
    
    The `data` tensor consists of sequences of activation vectors (without applying softmax),
    with i-th channel in the last dimension corresponding to i-th label
    for i between 0 and alphabet_size-1 (i.e., always 0-indexed).
    Alphabet size should include one additional value reserved for blank label.
    When `blank_label` is ``"first"``, the ``0``-th channel is reserved for
    activation of the blank label; otherwise, if it is ``"last"``, the ``(alphabet_size-1)``-th channel is
    reserved for the blank label.
    
    ``label`` is an index matrix of integers. When `blank_label` is ``"first"``,
    the value 0 is then reserved for blank label, and should not be passed in this matrix. Otherwise,
    when `blank_label` is ``"last"``, the value `(alphabet_size-1)` is reserved for blank label.
    
    If a sequence of labels is shorter than *label_sequence_length*, use the special
    padding value at the end of the sequence to conform it to the correct
    length. The padding value is `0` when `blank_label` is ``"first"``, and `-1` otherwise.
    
    For example, suppose the vocabulary is `[a, b, c]`, and in one batch we have three sequences
    'ba', 'cbb', and 'abac'. When `blank_label` is ``"first"``, we can index the labels as
    `{'a': 1, 'b': 2, 'c': 3}`, and we reserve the 0-th channel for blank label in data tensor.
    The resulting `label` tensor should be padded to be::
    
      [[2, 1, 0, 0], [3, 2, 2, 0], [1, 2, 1, 3]]
    
    When `blank_label` is ``"last"``, we can index the labels as
    `{'a': 0, 'b': 1, 'c': 2}`, and we reserve the channel index 3 for blank label in data tensor.
    The resulting `label` tensor should be padded to be::
    
      [[1, 0, -1, -1], [2, 1, 1, -1], [0, 1, 0, 2]]
    
    ``out`` is a list of CTC loss values, one per example in the batch.
    
    See *Connectionist Temporal Classification: Labelling Unsegmented
    Sequence Data with Recurrent Neural Networks*, A. Graves *et al*. for more
    information on the definition and the algorithm.
    
    
    
    Defined in src/operator/nn/ctc_loss.cc:L100
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
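
    A plain-Java sketch of the label-padding convention above, reproducing the
    'ba'/'cbb'/'abac' example for both ``blank_label`` settings (illustrative
    only; class and method names are made up)::

      import java.util.Arrays;

      public final class CtcLabelPadding {
          static int[][] pad(String[] seqs, boolean blankFirst) {
              int maxLen = 0;
              for (String s : seqs) maxLen = Math.max(maxLen, s.length());
              int padValue = blankFirst ? 0 : -1;               // padding rule above
              int[][] label = new int[seqs.length][maxLen];
              for (int i = 0; i < seqs.length; i++) {
                  Arrays.fill(label[i], padValue);
                  for (int j = 0; j < seqs[i].length(); j++) {
                      int idx = seqs[i].charAt(j) - 'a';        // 'a'->0, 'b'->1, 'c'->2
                      label[i][j] = blankFirst ? idx + 1 : idx; // shift if blank is channel 0
                  }
              }
              return label;
          }

          public static void main(String[] args) {
              String[] seqs = {"ba", "cbb", "abac"};
              System.out.println(Arrays.deepToString(pad(seqs, true)));  // [[2,1,0,0],[3,2,2,0],[1,2,1,3]]
              System.out.println(Arrays.deepToString(pad(seqs, false))); // [[1,0,-1,-1],[2,1,1,-1],[0,1,0,2]]
          }
      }
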
  7. abstract def Convolution(po: ConvolutionParam): Array[NDArray]

    Compute *N*-D convolution on *(N+2)*-D input.
    
    In the 2-D convolution, given input data with shape *(batch_size,
    channel, height, width)*, the output is computed by
    
    .. math::
    
       out[n,i,:,:] = bias[i] + \sum_{j=0}^{channel} data[n,j,:,:] \star
       weight[i,j,:,:]
    
    where :math:`\star` is the 2-D cross-correlation operator.
    
    For general 2-D convolution, the shapes are
    
    - **data**: *(batch_size, channel, height, width)*
    - **weight**: *(num_filter, channel, kernel[0], kernel[1])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_height, out_width)*.
    
    Define::
    
      f(x,k,p,s,d) = floor((x+2*p-d*(k-1)-1)/s)+1
    
    then we have::
    
      out_height=f(height, kernel[0], pad[0], stride[0], dilate[0])
      out_width=f(width, kernel[1], pad[1], stride[1], dilate[1])
    
    If ``no_bias`` is set to be true, then the ``bias`` term is ignored.
    
    The default data ``layout`` is *NCHW*, namely *(batch_size, channel, height,
    width)*. We can choose other layouts such as *NWC*.
    
    If ``num_group`` is larger than 1, denoted by *g*, then split the input ``data``
    evenly into *g* parts along the channel axis, and also evenly split ``weight``
    along the first dimension. Next compute the convolution on the *i*-th part of
    the data with the *i*-th weight part. The output is obtained by concatenating all
    the *g* results.
    
    1-D convolution does not have *height* dimension but only *width* in space.
    
    - **data**: *(batch_size, channel, width)*
    - **weight**: *(num_filter, channel, kernel[0])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_width)*.
    
    3-D convolution adds an additional *depth* dimension besides *height* and
    *width*. The shapes are
    
    - **data**: *(batch_size, channel, depth, height, width)*
    - **weight**: *(num_filter, channel, kernel[0], kernel[1], kernel[2])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_depth, out_height, out_width)*.
    
    Both ``weight`` and ``bias`` are learnable parameters.
    
    There are other options to tune the performance.
    
    - **cudnn_tune**: enabling this option leads to higher startup time but may give
      faster speed. Options are
    
      - **off**: no tuning
      - **limited_workspace**: run tests and pick the fastest algorithm that doesn't
        exceed the workspace limit.
      - **fastest**: pick the fastest algorithm and ignore workspace limit.
      - **None** (default): the behavior is determined by environment variable
        ``MXNET_CUDNN_AUTOTUNE_DEFAULT``. 0 for off, 1 for limited workspace
        (default), 2 for fastest.
    
    - **workspace**: A large number leads to more (GPU) memory usage but may improve
      the performance.
    
    
    
    Defined in src/operator/nn/convolution.cc:L475
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
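
    The output-size rule above as a runnable plain-Java helper (illustrative
    only)::

      public final class ConvOutputShape {
          // f(x, k, p, s, d) = floor((x + 2*p - d*(k-1) - 1)/s) + 1
          static int f(int x, int k, int p, int s, int d) {
              return (x + 2 * p - d * (k - 1) - 1) / s + 1; // operands are positive, so / floors
          }

          public static void main(String[] args) {
              int height = 32, width = 32;
              int[] kernel = {3, 3}, pad = {1, 1}, stride = {2, 2}, dilate = {1, 1};
              int outHeight = f(height, kernel[0], pad[0], stride[0], dilate[0]);
              int outWidth  = f(width,  kernel[1], pad[1], stride[1], dilate[1]);
              System.out.println(outHeight + " x " + outWidth); // 16 x 16
          }
      }
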
  8. abstract def Convolution_v1(po: Convolution_v1Param): Array[NDArray]

    This operator is DEPRECATED. Apply convolution to input then add a bias.
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  9. abstract def Correlation(po: CorrelationParam): Array[NDArray]

    Applies correlation to inputs.
    
    The correlation layer performs multiplicative patch comparisons between two feature maps.
    
    Given two multi-channel feature maps :math:`f_{1}, f_{2}`, with :math:`w`, :math:`h`, and :math:`c` being their width, height, and number of channels,
    the correlation layer lets the network compare each patch from :math:`f_{1}` with each patch from :math:`f_{2}`.
    
    For now we consider only a single comparison of two patches. The 'correlation' of two patches centered at :math:`x_{1}` in the first map and
    :math:`x_{2}` in the second map is then defined as:
    
    .. math::
    
       c(x_{1}, x_{2}) = \sum_{o \in [-k,k] \times [-k,k]} <f_{1}(x_{1} + o), f_{2}(x_{2} + o)>
    
    for a square patch of size :math:`K:=2k+1`.
    
    Note that the equation above is identical to one step of a convolution in neural networks, but instead of convolving data with a filter, it convolves data with other
    data. For this reason, it has no training weights.
    
    Computing :math:`c(x_{1}, x_{2})` involves :math:`c * K^{2}` multiplications. Comparing all patch combinations involves :math:`w^{2}*h^{2}` such computations.
    
    Given a maximum displacement :math:`d`, for each location :math:`x_{1}` it computes correlations :math:`c(x_{1}, x_{2})` only in a neighborhood of size :math:`D:=2d+1`,
    by limiting the range of :math:`x_{2}`. We use strides :math:`s_{1}, s_{2}`, to quantize :math:`x_{1}` globally and to quantize :math:`x_{2}` within the neighborhood
    centered around :math:`x_{1}`.
    
    The final output is defined by the following expression:
    
    .. math::
      out[n, q, i, j] = c(x_{i, j}, x_{q})
    
    where :math:`i` and :math:`j` enumerate spatial locations in :math:`f_{1}`, and :math:`q` denotes the :math:`q^{th}` neighborhood of :math:`x_{i,j}`.
    
    
    Defined in src/operator/correlation.cc:L197
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  10. abstract def Deconvolution(po: DeconvolutionParam): Array[NDArray]

    Computes 1D or 2D transposed convolution (aka fractionally strided convolution) of the input tensor. This operation can be seen as the gradient of Convolution operation with respect to its input. Convolution usually reduces the size of the input. Transposed convolution works the other way, going from a smaller input to a larger output while preserving the connectivity pattern.
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  11. abstract def Dropout(po: DropoutParam): Array[NDArray]

    Applies dropout operation to input array.
    
    - During training, each element of the input is set to zero with probability p.
      The whole array is rescaled by :math:`1/(1-p)` to keep the expected
      sum of the input unchanged.
    
    - During testing, this operator does not change the input if mode is 'training'.
      If mode is 'always', the same computation as during training will be applied.
    
    Example::
    
      random.seed(998)
      input_array = array([[3., 0.5,  -0.5,  2., 7.],
                           [2., -0.4,   7.,  3., 0.2]])
      a = symbol.Variable('a')
      dropout = symbol.Dropout(a, p = 0.2)
      executor = dropout.simple_bind(a = input_array.shape)

      ## If training
      executor.forward(is_train = True, a = input_array)
      executor.outputs
      [[ 3.75   0.625 -0.     2.5    8.75 ]
       [ 2.5   -0.5    8.75   3.75   0.   ]]

      ## If testing
      executor.forward(is_train = False, a = input_array)
      executor.outputs
      [[ 3.     0.5   -0.5    2.     7.   ]
       [ 2.    -0.4    7.     3.     0.2  ]]
    
    
    Defined in src/operator/nn/dropout.cc:L95
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
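
    A plain-Java sketch of the training-time behaviour: each element is zeroed
    with probability p and the survivors are rescaled by 1/(1-p), keeping the
    expected sum unchanged (illustrative only; the fixed seed is arbitrary)::

      import java.util.Arrays;
      import java.util.Random;

      public final class DropoutSketch {
          public static void main(String[] args) {
              double p = 0.2;
              double[] input = {3.0, 0.5, -0.5, 2.0, 7.0};
              Random rng = new Random(998);
              double[] out = new double[input.length];
              for (int i = 0; i < input.length; i++)
                  out[i] = rng.nextDouble() < p ? 0.0 : input[i] / (1.0 - p);
              System.out.println(Arrays.toString(out));
              // At test time the input would be returned unchanged.
          }
      }
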
  12. abstract def ElementWiseSum(args: Array[NDArray], out: NDArray): Array[NDArray]

    Adds all input arguments element-wise.
    
    .. math::
       add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n
    
    ``add_n`` is potentially more efficient than calling ``add`` `n` times.
    
    The storage type of ``add_n`` output depends on storage types of inputs
    
    - add_n(row_sparse, row_sparse, ..) = row_sparse
    - add_n(default, csr, default) = default
    - add_n(any combination of more than 4 inputs with at least one default type) = default
    - otherwise, ``add_n`` falls all inputs back to default storage and generates an output with default storage
    
    
    
    Defined in src/operator/tensor/elemwise_sum.cc:L155
    args

    Positional input arguments

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  13. abstract def Embedding(po: EmbeddingParam): Array[NDArray]

    Maps integer indices to vector representations (embeddings).
    
    This operator maps words to real-valued vectors in a high-dimensional space,
    called word embeddings. These embeddings can capture semantic and syntactic properties of the words.
    For example, it has been noted that in the learned embedding spaces, similar words tend
    to be close to each other and dissimilar words far apart.
    
    For an input array of shape (d1, ..., dK),
    the shape of an output array is (d1, ..., dK, output_dim).
    All the input values should be integers in the range [0, input_dim).
    
    If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be
    (ip0, op0).
    
    When "sparse_grad" is False, if any index mentioned is too large, it is replaced by the index that
    addresses the last vector in an embedding matrix.
    When "sparse_grad" is True, an error will be raised if invalid indices are found.
    
    Examples::
    
      input_dim = 4
      output_dim = 5
    
      // Each row in weight matrix y represents a word. So, y = (w0,w1,w2,w3)
      y = [[  0.,   1.,   2.,   3.,   4.],
           [  5.,   6.,   7.,   8.,   9.],
           [ 10.,  11.,  12.,  13.,  14.],
           [ 15.,  16.,  17.,  18.,  19.]]

      // Input array x represents n-grams (2-gram). So, x = [(w1,w3), (w0,w2)]
      x = [[ 1.,  3.],
           [ 0.,  2.]]

      // Mapped input x to its vector representation y.
      Embedding(x, y, 4, 5) = [[[  5.,   6.,   7.,   8.,   9.],
                                [ 15.,  16.,  17.,  18.,  19.]],

                               [[  0.,   1.,   2.,   3.,   4.],
                                [ 10.,  11.,  12.,  13.,  14.]]]
    
    
    The storage type of weight can be either row_sparse or default.
    
    .. Note::
    
        If "sparse_grad" is set to True, the storage type of gradient w.r.t weights will be
        "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
        and Adam. Note that by default lazy updates is turned on, which may perform differently
        from standard updates. For more details, please check the Optimization API at:
        https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
    
    
    
    Defined in src/operator/tensor/indexing_op.cc:L597
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
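
    A plain-Java sketch of the lookup itself: every integer index selects one
    row of the (input_dim, output_dim) weight matrix, mirroring the example
    above (illustrative only)::

      import java.util.Arrays;

      public final class EmbeddingSketch {
          public static void main(String[] args) {
              double[][] weight = {              // input_dim = 4, output_dim = 5
                  { 0,  1,  2,  3,  4},
                  { 5,  6,  7,  8,  9},
                  {10, 11, 12, 13, 14},
                  {15, 16, 17, 18, 19}
              };
              int[][] x = {{1, 3}, {0, 2}};
              double[][][] out = new double[x.length][x[0].length][];
              for (int i = 0; i < x.length; i++)
                  for (int j = 0; j < x[i].length; j++)
                      out[i][j] = weight[x[i][j]]; // output shape (d1, d2, output_dim)
              System.out.println(Arrays.deepToString(out));
          }
      }
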
  14. abstract def FullyConnected(po: FullyConnectedParam): Array[NDArray]

    Applies a linear transformation: :math:`Y = XW^T + b`.
    
    If ``flatten`` is set to be true, then the shapes are:
    
    - **data**: `(batch_size, x1, x2, ..., xn)`
    - **weight**: `(num_hidden, x1 * x2 * ... * xn)`
    - **bias**: `(num_hidden,)`
    - **out**: `(batch_size, num_hidden)`
    
    If ``flatten`` is set to be false, then the shapes are:
    
    - **data**: `(x1, x2, ..., xn, input_dim)`
    - **weight**: `(num_hidden, input_dim)`
    - **bias**: `(num_hidden,)`
    - **out**: `(x1, x2, ..., xn, num_hidden)`
    
    The learnable parameters include both ``weight`` and ``bias``.
    
    If ``no_bias`` is set to be true, then the ``bias`` term is ignored.
    
    .. Note::
    
        The sparse support for FullyConnected is limited to forward evaluation with `row_sparse`
        weight and bias, where the length of `weight.indices` and `bias.indices` must be equal
        to `num_hidden`. This could be useful for model inference with `row_sparse` weights
        trained with importance sampling or noise contrastive estimation.
    
        To compute linear transformation with 'csr' sparse data, sparse.dot is recommended instead
        of sparse.FullyConnected.
    
    
    
    Defined in src/operator/nn/fully_connected.cc:L286
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
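
    A plain-Java sketch of :math:`Y = XW^T + b` for the ``flatten=true``
    shapes above (illustrative only)::

      import java.util.Arrays;

      public final class FullyConnectedSketch {
          // data (batch_size, input_dim), weight (num_hidden, input_dim), bias (num_hidden,)
          static double[][] forward(double[][] x, double[][] w, double[] b) {
              double[][] y = new double[x.length][w.length];
              for (int n = 0; n < x.length; n++)
                  for (int h = 0; h < w.length; h++) {
                      double acc = b[h];
                      for (int k = 0; k < w[h].length; k++)
                          acc += x[n][k] * w[h][k]; // row h of W is column h of W^T
                      y[n][h] = acc;
                  }
              return y;
          }

          public static void main(String[] args) {
              double[][] x = {{1, 2, 3}, {4, 5, 6}};      // (batch_size=2, input_dim=3)
              double[][] w = {{1, 0, -1}, {0.5, 0.5, 0}}; // (num_hidden=2, input_dim=3)
              double[] b = {0.0, 1.0};
              System.out.println(Arrays.deepToString(forward(x, w, b)));
          }
      }
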
  15. abstract def GridGenerator(data: NDArray, transform_type: String, target_shape: Shape, out: NDArray): Array[NDArray]

    Generates 2D sampling grid for bilinear sampling.
    data

    Input data to the function.

    transform_type

    The type of transformation. For affine, input data should be an affine matrix of size (batch, 6). For warp, input data should be an optical flow of size (batch, 2, h, w).

    target_shape

    Specifies the output shape (H, W). This is required if transformation type is affine. If transformation type is warp, this parameter is ignored.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  16. abstract def GroupNorm(po: GroupNormParam): Array[NDArray]

    Group normalization.
    
    The input channels are separated into ``num_groups`` groups, each containing ``num_channels / num_groups`` channels.
    The mean and standard deviation are calculated separately over each group.
    
    .. math::
    
      data = data.reshape((N, num_groups, C // num_groups, ...))
      out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis) + \epsilon}} * gamma + beta
    
    Both ``gamma`` and ``beta`` are learnable parameters.
    
    
    
    Defined in src/operator/nn/group_norm.cc:L76
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  17. abstract def IdentityAttachKLSparseReg(po: IdentityAttachKLSparseRegParam): Array[NDArray]

    Applies sparse regularization to the output of a sigmoid activation function.
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  18. abstract def InstanceNorm(data: NDArray, gamma: NDArray, beta: NDArray, eps: Float, out: NDArray): Array[NDArray]

    Applies instance normalization to the n-dimensional input array.
    
    This operator takes an n-dimensional input array (n > 2) and normalizes
    the input using the following formula:
    
    .. math::
    
      out = \frac{x - mean[data]}{ \sqrt{Var[data]} + \epsilon} * gamma + beta
    
    This layer is similar to batch normalization layer (`BatchNorm`)
    with two differences: first, the normalization is
    carried out per example (instance), not over a batch. Second, the
    same normalization is applied both at test and train time. This
    operation is also known as `contrast normalization`.
    
    If the input data is of shape [batch, channel, spatial_dim1, spatial_dim2, ...],
    `gamma` and `beta` parameters must be vectors of shape [channel].
    
    This implementation is based on this paper [1]_
    
    .. [1] Instance Normalization: The Missing Ingredient for Fast Stylization,
       D. Ulyanov, A. Vedaldi, V. Lempitsky, 2016 (arXiv:1607.08022v2).
    
    Examples::
    
      // Input of shape (2,1,2)
      x = [[[ 1.1,  2.2]],
           [[ 3.3,  4.4]]]

      // gamma parameter of length 1
      gamma = [1.5]

      // beta parameter of length 1
      beta = [0.5]

      // Instance normalization is calculated with the above formula
      InstanceNorm(x,gamma,beta) = [[[-0.997527  ,  1.99752665]],
                                    [[-0.99752653,  1.99752724]]]
    
    
    
    Defined in src/operator/instance_norm.cc:L94
    data

    An n-dimensional input array (n > 2) of the form [batch, channel, spatial_dim1, spatial_dim2, ...].

    gamma

    A vector of length 'channel', which multiplies the normalized input.

    beta

    A vector of length 'channel', which is added to the product of the normalized input and the weight.

    eps

    An epsilon parameter to prevent division by 0.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  19. abstract def L2Normalization(po: L2NormalizationParam): Array[NDArray]

    Normalize the input array using the L2 norm.
    
    For 1-D NDArray, it computes::
    
      out = data / sqrt(sum(data ** 2) + eps)
    
    For N-D NDArray, if the input array has shape (N, N, ..., N),
    
    with ``mode`` = ``instance``, it normalizes each instance in the multidimensional
    array by its L2 norm.::
    
      for i in 0...N
        out[i,:,:,...,:] = data[i,:,:,...,:] / sqrt(sum(data[i,:,:,...,:] ** 2) + eps)
    
    with ``mode`` = ``channel``, it normalizes each channel in the array by its L2 norm.::
    
      for i in 0...N
        out[:,i,:,...,:] = data[:,i,:,...,:] / sqrt(sum(data[:,i,:,...,:] ** 2) + eps)
    
    with ``mode`` = ``spatial``, it normalizes the cross channel norm for each position
    in the array by its L2 norm.::
    
      for dim in 2...N
        for i in 0...N
          out[.....,i,...] = take(out, indices=i, axis=dim) / sqrt(sum(take(out, indices=i, axis=dim) ** 2) + eps)
              -dim-
    
    Example::
    
      x = [[[1,2],
            [3,4]],
           [[2,2],
            [5,6]]]

      L2Normalization(x, mode='instance')
      =[[[ 0.18257418  0.36514837]
         [ 0.54772252  0.73029673]]
        [[ 0.24077171  0.24077171]
         [ 0.60192931  0.72231513]]]

      L2Normalization(x, mode='channel')
      =[[[ 0.31622776  0.44721359]
         [ 0.94868326  0.89442718]]
        [[ 0.37139067  0.31622776]
         [ 0.92847669  0.94868326]]]

      L2Normalization(x, mode='spatial')
      =[[[ 0.44721359  0.89442718]
         [ 0.60000002  0.80000001]]
        [[ 0.70710677  0.70710677]
         [ 0.6401844   0.76822126]]]
    
    
    
    Defined in src/operator/l2_normalization.cc:L195
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
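
    A plain-Java sketch of ``mode='instance'`` for the example input above:
    each instance is divided by the L2 norm over all of its elements
    (illustrative only; the eps value is an arbitrary assumption)::

      import java.util.Arrays;

      public final class L2NormalizationSketch {
          public static void main(String[] args) {
              double[][][] x = {{{1, 2}, {3, 4}}, {{2, 2}, {5, 6}}};
              double eps = 1e-10;
              double[][][] out = new double[x.length][][];
              for (int i = 0; i < x.length; i++) {
                  double sumSq = 0;
                  for (double[] row : x[i]) for (double v : row) sumSq += v * v;
                  double norm = Math.sqrt(sumSq + eps);
                  out[i] = new double[x[i].length][];
                  for (int r = 0; r < x[i].length; r++) {
                      out[i][r] = new double[x[i][r].length];
                      for (int j = 0; j < x[i][r].length; j++)
                          out[i][r][j] = x[i][r][j] / norm;
                  }
              }
              // First instance: norm = sqrt(1+4+9+16) ~ 5.477, so out[0][0][0] ~ 0.1826
              System.out.println(Arrays.deepToString(out));
          }
      }
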
  20. abstract def LRN(po: LRNParam): Array[NDArray]

    Applies local response normalization to the input.
    
    The local response normalization layer performs "lateral inhibition" by normalizing
    over local input regions.
    
    If :math:`a_{x,y}^{i}` is the activity of a neuron computed by applying kernel :math:`i` at position
    :math:`(x, y)` and then applying the ReLU nonlinearity, the response-normalized
    activity :math:`b_{x,y}^{i}` is given by the expression:
    
    .. math::
       b_{x,y}^{i} = \frac{a_{x,y}^{i}}{\Bigg({k + \frac{\alpha}{n} \sum_{j=max(0, i-\frac{n}{2})}^{min(N-1, i+\frac{n}{2})} (a_{x,y}^{j})^{2}}\Bigg)^{\beta}}
    
    where the sum runs over :math:`n` "adjacent" kernel maps at the same spatial position, and :math:`N` is the total
    number of kernels in the layer.
    
    
    
    Defined in src/operator/nn/lrn.cc:L157
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  21. abstract def LayerNorm(po: LayerNormParam): Array[NDArray]

    Layer normalization.
    
    Normalizes the channels of the input tensor by mean and variance, and applies a scale ``gamma`` as
    well as offset ``beta``.
    
    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis and then
    compute the normalized output, which has the same shape as input, as following:
    
    .. math::
    
      out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis) + \epsilon}} * gamma + beta
    
    Both ``gamma`` and ``beta`` are learnable parameters.
    
    Unlike BatchNorm and InstanceNorm,  the *mean* and *var* are computed along the channel dimension.
    
    Assume the input has size *k* on axis 1, then both ``gamma`` and ``beta``
    have shape *(k,)*. If ``output_mean_var`` is set to be true, then outputs both ``data_mean`` and
    ``data_std``. Note that no gradient will be passed through these two outputs.
    
    The parameter ``axis`` specifies which axis of the input shape denotes
    the 'channel' (separately normalized groups).  The default is -1, which sets the channel
    axis to be the last item in the input shape.
    
    
    
    Defined in src/operator/nn/layer_norm.cc:L201
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  22. abstract def LeakyReLU(po: LeakyReLUParam): Array[NDArray]

    Applies Leaky rectified linear unit activation element-wise to the input.
    
    Leaky ReLUs attempt to fix the "dying ReLU" problem by allowing a small `slope`
    when the input is negative; the slope is one when the input is positive.
    
    The following modified ReLU Activation functions are supported:
    
    - *elu*: Exponential Linear Unit. `y = x > 0 ? x : slope * (exp(x)-1)`
    - *selu*: Scaled Exponential Linear Unit. `y = lambda * (x > 0 ? x : alpha * (exp(x) - 1))` where
      *lambda = 1.0507009873554804934193349852946* and *alpha = 1.6732632423543772848170429916717*.
    - *leaky*: Leaky ReLU. `y = x > 0 ? x : slope * x`
    - *prelu*: Parametric ReLU. This is same as *leaky* except that `slope` is learnt during training.
    - *rrelu*: Randomized ReLU. same as *leaky* but the `slope` is uniformly and randomly chosen from
      *[lower_bound, upper_bound)* for training, while fixed to be
      *(lower_bound+upper_bound)/2* for inference.
    
    
    
    Defined in src/operator/leaky_relu.cc:L162
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
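
    The fixed-form variants above in plain Java (illustrative only; *rrelu*
    is omitted because its slope is sampled at run time, and the slope values
    chosen here are arbitrary)::

      public final class LeakyReluVariants {
          static final double LAMBDA = 1.0507009873554804934193349852946;
          static final double ALPHA  = 1.6732632423543772848170429916717;

          static double leaky(double x, double slope) { return x > 0 ? x : slope * x; }
          static double elu(double x, double slope)   { return x > 0 ? x : slope * (Math.exp(x) - 1); }
          static double selu(double x)                { return LAMBDA * (x > 0 ? x : ALPHA * (Math.exp(x) - 1)); }

          public static void main(String[] args) {
              for (double x : new double[]{-2, -0.5, 0, 1.5})
                  System.out.printf("x=%5.2f leaky=%7.4f elu=%7.4f selu=%7.4f%n",
                          x, leaky(x, 0.25), elu(x, 1.0), selu(x));
          }
      }
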
  23. abstract def LinearRegressionOutput(data: NDArray, label: NDArray, grad_scale: Float, out: NDArray): Array[NDArray]

    Computes and optimizes for squared loss during backward propagation.
    Just outputs ``data`` during forward propagation.
    
    If :math:`\hat{y}_i` is the predicted value of the i-th sample, and :math:`y_i` is the corresponding target value,
    then the squared loss estimated over :math:`n` samples is defined as
    
    :math:`\text{SquaredLoss}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert  \textbf{y}_i - \hat{\textbf{y}}_i  \rVert_2`
    
    .. note::
       Use the LinearRegressionOutput as the final output layer of a net.
    
    The storage type of ``label`` can be ``default`` or ``csr``
    
    - LinearRegressionOutput(default, default) = default
    - LinearRegressionOutput(default, csr) = default
    
    By default, gradients of this loss function are scaled by factor `1/m`, where m is the number of regression outputs of a training example.
    The parameter `grad_scale` can be used to change this scale to `grad_scale/m`.
    
    
    
    Defined in src/operator/regression_output.cc:L92
    data

    Input data to the function.

    label

    Input label to the function.

    grad_scale

    Scale the gradient by a float factor

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  24. abstract def LogisticRegressionOutput(data: NDArray, label: NDArray, grad_scale: Float, out: NDArray): Array[NDArray]

    Applies a logistic function to the input.
    
    The logistic function, also known as the sigmoid function, is computed as
    :math:`\frac{1}{1+exp(-\textbf{x})}`.
    
    Commonly, the sigmoid is used to squash the real-valued output of a linear model
    :math:`w^Tx+b` into the [0,1] range so that it can be interpreted as a probability.
    It is suitable for binary classification or probability prediction tasks.
    
    .. note::
       Use the LogisticRegressionOutput as the final output layer of a net.
    
    The storage type of ``label`` can be ``default`` or ``csr``
    
    - LogisticRegressionOutput(default, default) = default
    - LogisticRegressionOutput(default, csr) = default
    
    The loss function used is the Binary Cross Entropy Loss:
    
    :math:`-{(y\log(p) + (1 - y)\log(1 - p))}`
    
    Where `y` is the ground truth probability of positive outcome for a given example, and `p` the probability predicted by the model. By default, gradients of this loss function are scaled by factor `1/m`, where m is the number of regression outputs of a training example.
    The parameter `grad_scale` can be used to change this scale to `grad_scale/m`.
    
    
    
    Defined in src/operator/regression_output.cc:L152
    data

    Input data to the function.

    label

    Input label to the function.

    grad_scale

    Scale the gradient by a float factor

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  25. abstract def MAERegressionOutput(data: NDArray, label: NDArray, grad_scale: Float, out: NDArray): Array[NDArray]

    Computes mean absolute error of the input.
    
    MAE is a risk metric corresponding to the expected value of the absolute error.
    
    If :math:`\hat{y}_i` is the predicted value of the i-th sample, and :math:`y_i` is the corresponding target value,
    then the mean absolute error (MAE) estimated over :math:`n` samples is defined as
    
    :math:`\text{MAE}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_1`
    
    .. note::
       Use the MAERegressionOutput as the final output layer of a net.
    
    The storage type of ``label`` can be ``default`` or ``csr``
    
    - MAERegressionOutput(default, default) = default
    - MAERegressionOutput(default, csr) = default
    
    By default, gradients of this loss function are scaled by factor `1/m`, where m is the number of regression outputs of a training example.
    The parameter `grad_scale` can be used to change this scale to `grad_scale/m`.
    
    
    
    Defined in src/operator/regression_output.cc:L120
    data

    Input data to the function.

    label

    Input label to the function.

    grad_scale

    Scale the gradient by a float factor

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  26. abstract def MakeLoss(po: MakeLossParam): Array[NDArray]

    Make your own loss function in network construction.
    
    This operator accepts a customized loss function symbol as a terminal loss and
    the symbol should be an operator with no backward dependency.
    The output of this function is the gradient of loss with respect to the input data.
    
    For example, suppose you are making a cross entropy loss function. Assume ``out`` is the
    predicted output and ``label`` is the true label, then the cross entropy can be defined as::
    
      cross_entropy = label * log(out) + (1 - label) * log(1 - out)
      loss = MakeLoss(cross_entropy)
    
    We will need to use ``MakeLoss`` when we are creating our own loss function or we want to
    combine multiple loss functions. Also we may want to stop some variables' gradients
    from backpropagation. See more detail in ``BlockGrad`` or ``stop_gradient``.
    
    In addition, we can give a scale to the loss by setting ``grad_scale``,
    so that the gradient of the loss will be rescaled in the backpropagation.
    
    .. note:: This operator should be used as a Symbol instead of NDArray.
    
    
    
    Defined in src/operator/make_loss.cc:L70
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  27. abstract def Pooling(po: PoolingParam): Array[NDArray]

    Performs pooling on the input.
    
    The shapes for 1-D pooling are
    
    - **data** and **out**: *(batch_size, channel, width)* (NCW layout) or
      *(batch_size, width, channel)* (NWC layout),
    
    The shapes for 2-D pooling are
    
    - **data** and **out**: *(batch_size, channel, height, width)* (NCHW layout) or
      *(batch_size, height, width, channel)* (NHWC layout),
    
        out_height = f(height, kernel[0], pad[0], stride[0])
        out_width = f(width, kernel[1], pad[1], stride[1])
    
    The definition of *f* depends on ``pooling_convention``, which has two options:
    
    - **valid** (default)::
    
        f(x, k, p, s) = floor((x+2*p-k)/s)+1
    
    - **full**, which is compatible with Caffe::
    
        f(x, k, p, s) = ceil((x+2*p-k)/s)+1
    
    When ``global_pool`` is set to be true, then global pooling is performed. It will reset
    ``kernel=(height, width)`` and set the appropriate padding to 0.
    
    Four pooling options are supported by ``pool_type``:
    
    - **avg**: average pooling
    - **max**: max pooling
    - **sum**: sum pooling
    - **lp**: Lp pooling
    
    For 3-D pooling, an additional *depth* dimension is added before
    *height*. Namely the input data and output will have shape *(batch_size, channel, depth,
    height, width)* (NCDHW layout) or *(batch_size, depth, height, width, channel)* (NDHWC layout).
    
    Notes on Lp pooling:
    
    Lp pooling was first introduced by this paper: https://arxiv.org/pdf/1204.3968.pdf.
    L-1 pooling is simply sum pooling, while L-inf pooling is simply max pooling.
    We can see that Lp pooling stands between those two; in practice, the most common value for p is 2.
    
    For each window ``X``, the mathematical expression for Lp pooling is:
    
    :math:`f(X) = \sqrt[p]{\sum_{x}^{X} x^p}`
    
    
    
    Defined in src/operator/nn/pooling.cc:L416
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
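
    The two ``pooling_convention`` formulas side by side in plain Java
    (illustrative only); with x=7, k=2, p=0, s=2 they disagree, which is
    exactly where the convention matters::

      public final class PoolOutputShape {
          static int valid(int x, int k, int p, int s) {   // floor((x+2*p-k)/s)+1
              return Math.floorDiv(x + 2 * p - k, s) + 1;
          }
          static int full(int x, int k, int p, int s) {    // ceil((x+2*p-k)/s)+1
              return (int) Math.ceil((x + 2.0 * p - k) / s) + 1;
          }

          public static void main(String[] args) {
              System.out.println("valid: " + valid(7, 2, 0, 2)); // floor(5/2)+1 = 3
              System.out.println("full:  " + full(7, 2, 0, 2));  // ceil(5/2)+1  = 4
          }
      }
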
  28. abstract def Pooling_v1(po: Pooling_v1Param): Array[NDArray]

    This operator is DEPRECATED.
    Perform pooling on the input.
    
    The shapes for 2-D pooling is
    
    - **data**: *(batch_size, channel, height, width)*
    - **out**: *(batch_size, num_filter, out_height, out_width)*, with::
    
        out_height = f(height, kernel[0], pad[0], stride[0])
        out_width = f(width, kernel[1], pad[1], stride[1])
    
    The definition of *f* depends on ``pooling_convention``, which has two options:
    
    - **valid** (default)::
    
        f(x, k, p, s) = floor((x+2*p-k)/s)+1
    
    - **full**, which is compatible with Caffe::
    
        f(x, k, p, s) = ceil((x+2*p-k)/s)+1
    
    When ``global_pool`` is set to true, a global pooling is performed, namely resetting
    ``kernel=(height, width)``.
    
    Three pooling options are supported by ``pool_type``:
    
    - **avg**: average pooling
    - **max**: max pooling
    - **sum**: sum pooling
    
    1-D pooling is a special case of 2-D pooling with *width=1* and
    *kernel[1]=1*.
    
    For 3-D pooling, an additional *depth* dimension is added before
    *height*. Namely the input data will have shape *(batch_size, channel, depth,
    height, width)*.
    
    
    
    Defined in src/operator/pooling_v1.cc:L103
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  29. abstract def RNN(po: RNNParam): Array[NDArray]

    Applies recurrent layers to input data. Currently, vanilla RNN, LSTM and GRU are
    implemented, with both multi-layer and bidirectional support.
    
    When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE
    and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use
    pseudo-float16 (float32 math with float16 I/O) precision in order to use
    Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups.
    
    **Vanilla RNN**
    
    Applies a single-gate recurrent layer to input X. Two kinds of activation function are supported:
    ReLU and Tanh.
    
    With ReLU activation function:
    
    .. math::
        h_t = relu(W_{ih} * x_t + b_{ih}  +  W_{hh} * h_{(t-1)} + b_{hh})
    
    With Tanh activation function:
    
    .. math::
        h_t = \tanh(W_{ih} * x_t + b_{ih}  +  W_{hh} * h_{(t-1)} + b_{hh})
    
    Reference paper: Finding structure in time - Elman, 1988.
    https://crl.ucsd.edu/~elman/Papers/fsit.pdf
    
    **LSTM**
    
    Long Short-Term Memory - Hochreiter, 1997. http://www.bioinf.jku.at/publications/older/2604.pdf
    
    .. math::
      \begin{array}{ll}
                i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\
                f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\
                g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hc} h_{(t-1)} + b_{hg}) \\
                o_t = \mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\
                c_t = f_t * c_{(t-1)} + i_t * g_t \\
                h_t = o_t * \tanh(c_t)
                \end{array}
    
    When the projection size is set, LSTM can use the projection feature to reduce the parameter
    size and give some speedup without significant damage to the accuracy.
    
    Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech
    Recognition - Sak et al. 2014. https://arxiv.org/abs/1402.1128
    
    .. math::
      \begin{array}{ll}
                i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{ri} r_{(t-1)} + b_{ri}) \\
                f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{rf} r_{(t-1)} + b_{rf}) \\
                g_t = \tanh(W_{ig} x_t + b_{ig} + W_{rc} r_{(t-1)} + b_{rg}) \\
                o_t = \mathrm{sigmoid}(W_{io} x_t + b_{o} + W_{ro} r_{(t-1)} + b_{ro}) \\
                c_t = f_t * c_{(t-1)} + i_t * g_t \\
                h_t = o_t * \tanh(c_t) \\
                r_t = W_{hr} h_t
                \end{array}
    
    **GRU**
    
    Gated Recurrent Unit - Cho et al. 2014. http://arxiv.org/abs/1406.1078
    
    The definition of GRU here is slightly different from paper but compatible with CUDNN.
    
    .. math::
      \begin{array}{ll}
                r_t = \mathrm{sigmoid}(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\
                z_t = \mathrm{sigmoid}(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\
                n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\
                h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \\
                \end{array}
    
    
    Defined in src/operator/rnn.cc:L375
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  30. abstract def ROIPooling(data: NDArray, rois: NDArray, pooled_size: Shape, spatial_scale: Float, out: NDArray): Array[NDArray]

    Performs region of interest (ROI) pooling on the input array.
    
    ROI pooling is a variant of a max pooling layer, in which the output size is fixed and
    region of interest is a parameter. Its purpose is to perform max pooling on the inputs
    of non-uniform sizes to obtain fixed-size feature maps. ROI pooling is a neural-net
    layer mostly used in training a `Fast R-CNN` network for object detection.
    
    This operator takes a 4D feature map as an input array and region proposals as `rois`,
    then it pools over sub-regions of input and produces a fixed-sized output array
    regardless of the ROI size.
    
    To crop the feature map accordingly, you can resize the bounding box coordinates
    by changing the parameters `rois` and `spatial_scale`.
    
    The cropped feature maps are pooled by standard max pooling operation to a fixed size output
    indicated by a `pooled_size` parameter. batch_size will change to the number of region
    bounding boxes after `ROIPooling`.
    
    The size of each region of interest doesn't have to be perfectly divisible by
    the number of pooling sections (`pooled_size`).
    
    Example::
    
      x = [[[[  0.,   1.,   2.,   3.,   4.,   5.],
             [  6.,   7.,   8.,   9.,  10.,  11.],
             [ 12.,  13.,  14.,  15.,  16.,  17.],
             [ 18.,  19.,  20.,  21.,  22.,  23.],
             [ 24.,  25.,  26.,  27.,  28.,  29.],
             [ 30.,  31.,  32.,  33.,  34.,  35.],
             [ 36.,  37.,  38.,  39.,  40.,  41.],
             [ 42.,  43.,  44.,  45.,  46.,  47.]]]]

      // region of interest i.e. bounding box coordinates.
      y = [[0,0,0,4,4]]

      // returns array of shape (2,2) according to the given roi with max pooling.
      ROIPooling(x, y, (2,2), 1.0) = [[[[ 14.,  16.],
                                        [ 26.,  28.]]]]

      // region of interest is changed due to the change in `spatial_scale` parameter.
      ROIPooling(x, y, (2,2), 0.7) = [[[[  7.,   9.],
                                        [ 19.,  21.]]]]
    
    
    
    Defined in src/operator/roi_pooling.cc:L224
    data

    The input array to the pooling operator, a 4D feature map.

    rois

    Bounding box coordinates, a 2D array of [[batch_index, x1, y1, x2, y2]], where (x1, y1) and (x2, y2) are the top-left and bottom-right corners of the designated region of interest. batch_index indicates the index of the corresponding image in the input array.

    pooled_size

    ROI pooling output shape (h,w)

    spatial_scale

    Ratio of input feature map height (or w) to raw image height (or w). Equals the reciprocal of total stride in convolutional layers

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  31. abstract def SVMOutput(po: SVMOutputParam): Array[NDArray]

    Computes support vector machine based transformation of the input.
    
    This tutorial demonstrates using SVM as output layer for classification instead of softmax:
    https://github.com/apache/mxnet/tree/v1.x/example/svm_mnist.
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  32. abstract def SequenceLast(po: SequenceLastParam): Array[NDArray]

    Takes the last element of a sequence.
    
    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns a (n-1)-dimensional array
    of the form [batch_size, other_feature_dims].
    
    Parameter `sequence_length` is used to handle variable-length sequences. `sequence_length` should be
    an input array of positive ints of dimension [batch_size]. To use this parameter,
    set `use_sequence_length` to `True`, otherwise each example in the batch is assumed
    to have the max sequence length.
    
    .. note:: Alternatively, you can also use the `take` operator.
    
    Example::
    
       x = [[[  1.,   2.,   3.],
             [  4.,   5.,   6.],
             [  7.,   8.,   9.]],

            [[ 10.,  11.,  12.],
             [ 13.,  14.,  15.],
             [ 16.,  17.,  18.]],

            [[ 19.,  20.,  21.],
             [ 22.,  23.,  24.],
             [ 25.,  26.,  27.]]]

       // returns last sequence when sequence_length parameter is not used
       SequenceLast(x) = [[ 19.,  20.,  21.],
                          [ 22.,  23.,  24.],
                          [ 25.,  26.,  27.]]

       // sequence_length is used
       SequenceLast(x, sequence_length=[1,1,1], use_sequence_length=True) =
                [[  1.,   2.,   3.],
                 [  4.,   5.,   6.],
                 [  7.,   8.,   9.]]

       // sequence_length is used
       SequenceLast(x, sequence_length=[1,2,3], use_sequence_length=True) =
                [[  1.,   2.,   3.],
                 [ 13.,  14.,  15.],
                 [ 25.,  26.,  27.]]
    
    
    
    Defined in src/operator/sequence_last.cc:L105
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
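
    A plain-Java sketch of the gather this operator performs: for input of
    shape [max_sequence_length, batch_size, feature_dim], pick timestep
    ``sequence_length[b] - 1`` for each batch element ``b`` (illustrative
    only)::

      import java.util.Arrays;

      public final class SequenceLastSketch {
          static double[][] sequenceLast(double[][][] x, int[] sequenceLength) {
              double[][] out = new double[x[0].length][];
              for (int b = 0; b < x[0].length; b++)
                  out[b] = x[sequenceLength[b] - 1][b]; // last valid timestep of batch b
              return out;
          }

          public static void main(String[] args) {
              double[][][] x = {
                  {{1, 2, 3},    {4, 5, 6},    {7, 8, 9}},
                  {{10, 11, 12}, {13, 14, 15}, {16, 17, 18}},
                  {{19, 20, 21}, {22, 23, 24}, {25, 26, 27}}
              };
              // Matches the docstring example for sequence_length=[1,2,3]:
              // [[1,2,3],[13,14,15],[25,26,27]]
              System.out.println(Arrays.deepToString(sequenceLast(x, new int[]{1, 2, 3})));
          }
      }
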
  33. abstract def SequenceMask(po: SequenceMaskParam): Array[NDArray]

    Sets all elements outside the sequence to a constant value.
    
    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape.
    
    Parameter `sequence_length` is used to handle variable-length sequences. `sequence_length`
    should be an input array of positive ints of dimension [batch_size].
    To use this parameter, set `use_sequence_length` to `True`,
    otherwise each example in the batch is assumed to have the max sequence length and
    this operator works as the `identity` operator.
    
    Example::
    
       x = [[[  1.,   2.,   3.],
             [  4.,   5.,   6.]],

            [[  7.,   8.,   9.],
             [ 10.,  11.,  12.]],

            [[ 13.,  14.,  15.],
             [ 16.,  17.,  18.]]]

       // Batch 1
       B1 = [[  1.,   2.,   3.],
             [  7.,   8.,   9.],
             [ 13.,  14.,  15.]]

       // Batch 2
       B2 = [[  4.,   5.,   6.],
             [ 10.,  11.,  12.],
             [ 16.,  17.,  18.]]

       // works as identity operator when sequence_length parameter is not used
       SequenceMask(x) = [[[  1.,   2.,   3.],
                           [  4.,   5.,   6.]],

                          [[  7.,   8.,   9.],
                           [ 10.,  11.,  12.]],

                          [[ 13.,  14.,  15.],
                           [ 16.,  17.,  18.]]]

       // sequence_length [1,1] means 1 of each batch will be kept
       // and other rows are masked with default mask value = 0
       SequenceMask(x, sequence_length=[1,1], use_sequence_length=True) =
                    [[[  1.,   2.,   3.],
                      [  4.,   5.,   6.]],

                     [[  0.,   0.,   0.],
                      [  0.,   0.,   0.]],

                     [[  0.,   0.,   0.],
                      [  0.,   0.,   0.]]]

       // sequence_length [2,3] means 2 of batch B1 and 3 of batch B2 will be kept
       // and other rows are masked with value = 1
       SequenceMask(x, sequence_length=[2,3], use_sequence_length=True, value=1) =
                    [[[  1.,   2.,   3.],
                      [  4.,   5.,   6.]],

                     [[  7.,   8.,   9.],
                      [ 10.,  11.,  12.]],

                     [[  1.,   1.,   1.],
                      [ 16.,  17.,  18.]]]
    
    
    
    Defined in src/operator/sequence_mask.cc:L185
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  34. abstract def SequenceReverse(po: SequenceReverseParam): Array[NDArray]

    Reverses the elements of each sequence.
    
    This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims]
    and returns an array of the same shape.
    
    Parameter `sequence_length` is used to handle variable-length sequences.
    `sequence_length` should be an input array of positive ints of dimension [batch_size].
    To use this parameter, set `use_sequence_length` to `True`,
    otherwise each example in the batch is assumed to have the max sequence length.
    
    Example::
    
       x = `[ `[ [  1.,   2.,   3.],
             [  4.,   5.,   6.] ],
    
            `[ [  7.,   8.,   9.],
             [ 10.,  11.,  12.] ],
    
            `[ [ 13.,  14.,   15.],
             [ 16.,  17.,   18.] ] ]
    
       // Batch 1
       B1 = `[ [  1.,   2.,   3.],
             [  7.,   8.,   9.],
             [ 13.,  14.,  15.] ]
    
       // Batch 2
       B2 = `[ [  4.,   5.,   6.],
             [ 10.,  11.,  12.],
             [ 16.,  17.,  18.] ]
    
       // returns reverse sequence when sequence_length parameter is not used
       SequenceReverse(x) = `[ `[ [ 13.,  14.,   15.],
                              [ 16.,  17.,   18.] ],
    
                             `[ [  7.,   8.,   9.],
                              [ 10.,  11.,  12.] ],
    
                             `[ [  1.,   2.,   3.],
                              [  4.,   5.,   6.] ] ]
    
       // sequence_length [2,2] means 2 rows of
       // both batch B1 and B2 will be reversed.
       SequenceReverse(x, sequence_length=[2,2], use_sequence_length=True) =
                         `[ `[ [  7.,   8.,   9.],
                           [ 10.,  11.,  12.] ],
    
                          `[ [  1.,   2.,   3.],
                           [  4.,   5.,   6.] ],
    
                          `[ [ 13.,  14.,   15.],
                           [ 16.,  17.,   18.] ] ]
    
       // sequence_length [2,3] means 2 rows of batch B1 and 3 rows of batch B2
       // will be reversed.
       SequenceReverse(x, sequence_length=[2,3], use_sequence_length=True) =
                        `[ `[ [  7.,   8.,   9.],
                          [ 16.,  17.,  18.] ],
    
                         `[ [  1.,   2.,   3.],
                          [ 10.,  11.,  12.] ],
    
                          `[ [ 13.,  14.,   15.],
                          [  4.,   5.,   6.] ] ]
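     A runnable sketch of the example above (assuming the MXNet 1.x Python API,
     whose ``mx.nd`` operator names mirror these Java methods)::

       import mxnet as mx

       # shape (max_sequence_length=3, batch_size=2, features=3)
       x = mx.nd.arange(1, 19).reshape((3, 2, 3))
       # reverse the first 2 steps of batch 0 and all 3 steps of batch 1
       seq_len = mx.nd.array([2, 3])
       y = mx.nd.SequenceReverse(x, sequence_length=seq_len,
                                 use_sequence_length=True)
       print(y.asnumpy())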
    
    
    
    Defined in src/operator/sequence_reverse.cc:L121
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  35. abstract def SliceChannel(po: SliceChannelParam): Array[NDArray]

    Permalink

    Splits an array along a particular axis into multiple sub-arrays.
    
    .. note:: ``SliceChannel`` is deprecated. Use ``split`` instead.
    
    **Note** that `num_outputs` should evenly divide the length of the axis
    along which to split the array.
    
    Example::
    
       x  = `[ `[ [ 1.]
              [ 2.] ]
             `[ [ 3.]
              [ 4.] ]
             `[ [ 5.]
              [ 6.] ] ]
       x.shape = (3, 2, 1)
    
       y = split(x, axis=1, num_outputs=2) // a list of 2 arrays with shape (3, 1, 1)
       y = `[ `[ [ 1.] ]
            `[ [ 3.] ]
            `[ [ 5.] ] ]
    
           `[ `[ [ 2.] ]
            `[ [ 4.] ]
            `[ [ 6.] ] ]
    
       y[0].shape = (3, 1, 1)
    
       z = split(x, axis=0, num_outputs=3) // a list of 3 arrays with shape (1, 2, 1)
       z = `[ `[ [ 1.]
             [ 2.] ] ]
    
           `[ `[ [ 3.]
             [ 4.] ] ]
    
           `[ `[ [ 5.]
             [ 6.] ] ]
    
       z[0].shape = (1, 2, 1)
    
    `squeeze_axis=1` removes the axis with length 1 from the shapes of the output arrays.
    **Note** that setting `squeeze_axis` to ``1`` removes axis with length 1 only
    along the `axis` which it is split.
    Also `squeeze_axis` can be set to true only if ``input.shape[axis] == num_outputs``.
    
    Example::
    
       z = split(x, axis=0, num_outputs=3, squeeze_axis=1) // a list of 3 arrays with shape (2, 1)
       z = `[ [ 1.]
            [ 2.] ]
    
           `[ [ 3.]
            [ 4.] ]
    
           `[ [ 5.]
            [ 6.] ]
       z[0].shape = (2, 1)
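     A minimal sketch of the two calls above (assuming the MXNet 1.x Python API)::

       import mxnet as mx

       x = mx.nd.array([1., 2., 3., 4., 5., 6.]).reshape((3, 2, 1))
       y = mx.nd.split(x, axis=1, num_outputs=2)   # two arrays of shape (3, 1, 1)
       z = mx.nd.split(x, axis=0, num_outputs=3,
                       squeeze_axis=1)             # three arrays of shape (2, 1)
       print(y[0].shape, z[0].shape)               # (3, 1, 1) (2, 1)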
    
    
    
    Defined in src/operator/slice_channel.cc:L106
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  36. abstract def SoftmaxActivation(data: NDArray, mode: String, out: NDArray): Array[NDArray]

    Permalink

    Applies softmax activation to input. This is intended for internal layers.
    
    .. note::
    
      This operator has been deprecated, please use `softmax`.
    
    If `mode` = ``instance``, this operator will compute a softmax for each instance in the batch.
    This is the default mode.
    
    If `mode` = ``channel``, this operator will compute a k-class softmax at each position
    of each instance, where `k` = ``num_channel``. This mode can only be used when the input array
    has at least 3 dimensions.
    This can be used for `fully convolutional network`, `image segmentation`, etc.
    
    Example::
    
      >>> input_array = mx.nd.array(`[ [3., 0.5, -0.5, 2., 7.],
      >>>                            [2., -.4, 7.,   3., 0.2] ])
      >>> softmax_act = mx.nd.SoftmaxActivation(input_array)
      >>> print softmax_act.asnumpy()
      `[ [  1.78322066e-02   1.46375655e-03   5.38485940e-04   6.56010211e-03   9.73605454e-01]
       [  6.56221947e-03   5.95310994e-04   9.73919690e-01   1.78379621e-02   1.08472735e-03] ]
    
    
    
    Defined in src/operator/nn/softmax_activation.cc:L58
    data

    The input array.

    mode

    Specifies how to compute the softmax. If set to ``instance``, it computes softmax for each instance. If set to ``channel``, it computes cross-channel softmax for each position of each instance.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  37. abstract def SoftmaxOutput(po: SoftmaxOutputParam): Array[NDArray]

    Permalink

    Computes the gradient of cross entropy loss with respect to softmax output.
    
    - This operator computes the gradient in two steps.
      The cross entropy loss does not actually need to be computed.
    
      - Applies softmax function on the input array.
      - Computes and returns the gradient of cross entropy loss w.r.t. the softmax output.
    
     - The softmax function, cross entropy loss and gradient are given by:
    
      - Softmax Function:
    
        .. math:: \text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}
    
      - Cross Entropy Function:
    
        .. math:: \text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)
    
      - The gradient of cross entropy loss w.r.t softmax output:
    
        .. math:: \text{gradient} = \text{output} - \text{label}
    
    - During forward propagation, the softmax function is computed for each instance in the input array.
    
       For a general *N*-D input array with shape :math:`(d_1, d_2, ..., d_n)`, the size is
       :math:`s = d_1 \cdot d_2 \cdots d_n`. We can use the parameters `preserve_shape`
       and `multi_output` to specify the way to compute softmax:
    
      - By default, `preserve_shape` is ``false``. This operator will reshape the input array
        into a 2-D array with shape :math:`(d_1, \frac{s}{d_1})` and then compute the softmax function for
        each row in the reshaped array, and afterwards reshape it back to the original shape
        :math:`(d_1, d_2, ..., d_n)`.
      - If `preserve_shape` is ``true``, the softmax function will be computed along
        the last axis (`axis` = ``-1``).
      - If `multi_output` is ``true``, the softmax function will be computed along
        the second axis (`axis` = ``1``).
    
    - During backward propagation, the gradient of cross-entropy loss w.r.t softmax output array is computed.
      The provided label can be a one-hot label array or a probability label array.
    
      - If the parameter `use_ignore` is ``true``, `ignore_label` can specify input instances
        with a particular label to be ignored during backward propagation. **This has no effect when
         softmax `output` has the same shape as `label`**.
    
        Example::
    
          data = `[ [1,2,3,4],[2,2,2,2],[3,3,3,3],[4,4,4,4] ]
          label = [1,0,2,3]
          ignore_label = 1
          SoftmaxOutput(data=data, label = label,\
                        multi_output=true, use_ignore=true,\
                        ignore_label=ignore_label)
          ## forward softmax output
          `[ [ 0.0320586   0.08714432  0.23688284  0.64391428]
           [ 0.25        0.25        0.25        0.25      ]
           [ 0.25        0.25        0.25        0.25      ]
           [ 0.25        0.25        0.25        0.25      ] ]
          ## backward gradient output
          `[ [ 0.    0.    0.    0.  ]
           [-0.75  0.25  0.25  0.25]
           [ 0.25  0.25 -0.75  0.25]
           [ 0.25  0.25  0.25 -0.75] ]
          ## notice that the first row is all 0 because label[0] is 1, which is equal to ignore_label.
    
      - The parameter `grad_scale` can be used to rescale the gradient, which is often used to
        give each loss function different weights.
    
       - This operator also supports various ways to normalize the gradient via the `normalization`
         parameter. Normalization is applied if the softmax output has a different shape than the labels.
         The `normalization` mode can be set to one of the following (a runnable sketch follows this list):
    
        - ``'null'``: do nothing.
        - ``'batch'``: divide the gradient by the batch size.
        - ``'valid'``: divide the gradient by the number of instances which are not ignored.
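     A forward-pass sketch of the ``use_ignore`` example above (assuming the MXNet 1.x
     Python API; the gradient ``output - label`` is only produced during the backward pass)::

       import mxnet as mx

       data  = mx.nd.array([[1, 2, 3, 4], [2, 2, 2, 2],
                            [3, 3, 3, 3], [4, 4, 4, 4]])
       label = mx.nd.array([1, 0, 2, 3])
       out = mx.nd.SoftmaxOutput(data=data, label=label,
                                 use_ignore=True, ignore_label=1)
       print(out.asnumpy())   # row-wise softmax of ``data``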
    
    
    
    Defined in src/operator/softmax_output.cc:L242
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  38. abstract def SpatialTransformer(po: SpatialTransformerParam): Array[NDArray]

    Permalink

    Applies a spatial transformer to input feature map.
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  39. abstract def SwapAxis(po: SwapAxisParam): Array[NDArray]

    Permalink

    Interchanges two axes of an array.
    
    Examples::
    
       x = `[ [1, 2, 3] ]
      swapaxes(x, 0, 1) = `[ [ 1],
                           [ 2],
                           [ 3] ]
    
      x = `[ `[ [ 0, 1],
            [ 2, 3] ],
           `[ [ 4, 5],
            [ 6, 7] ] ]  // (2,2,2) array
    
     swapaxes(x, 0, 2) = `[ `[ [ 0, 4],
                           [ 2, 6] ],
                          `[ [ 1, 5],
                           [ 3, 7] ] ]
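     A minimal sketch (assuming the MXNet 1.x Python API, where the operator is
     exposed as ``swapaxes`` with parameters ``dim1`` and ``dim2``)::

       import mxnet as mx

       x = mx.nd.arange(8).reshape((2, 2, 2))
       y = mx.nd.swapaxes(x, dim1=0, dim2=2)   # axes 0 and 2 interchanged
       print(y.asnumpy())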
    
    
    Defined in src/operator/swapaxis.cc:L69
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  40. abstract def UpSampling(po: UpSamplingParam): Array[NDArray]

    Permalink

    Upsamples the given input data.
    
    Two algorithms (``sample_type``) are available for upsampling:
    
    - Nearest Neighbor
    - Bilinear
    
    **Nearest Neighbor Upsampling**
    
    Input data is expected to be NCHW.
    
    Example::
    
      x = `[ [`[ [1. 1. 1.]
             [1. 1. 1.]
             [1. 1. 1.] ] ] ]
    
      UpSampling(x, scale=2, sample_type='nearest') = `[ [`[ [1. 1. 1. 1. 1. 1.]
                                                         [1. 1. 1. 1. 1. 1.]
                                                         [1. 1. 1. 1. 1. 1.]
                                                         [1. 1. 1. 1. 1. 1.]
                                                         [1. 1. 1. 1. 1. 1.]
                                                         [1. 1. 1. 1. 1. 1.] ] ] ]
    
    **Bilinear Upsampling**
    
     Uses the `deconvolution` algorithm under the hood. You need to provide both the input data and the kernel.
    
    Input data is expected to be NCHW.
    
     `num_filter` is expected to be the same as the number of channels.
    
    Example::
    
      x = `[ [`[ [1. 1. 1.]
             [1. 1. 1.]
             [1. 1. 1.] ] ] ]
    
      w = `[ [`[ [1. 1. 1. 1.]
             [1. 1. 1. 1.]
             [1. 1. 1. 1.]
             [1. 1. 1. 1.] ] ] ]
    
      UpSampling(x, w, scale=2, sample_type='bilinear', num_filter=1) = `[ [`[ [1. 2. 2. 2. 2. 1.]
                                                                           [2. 4. 4. 4. 4. 2.]
                                                                           [2. 4. 4. 4. 4. 2.]
                                                                           [2. 4. 4. 4. 4. 2.]
                                                                           [2. 4. 4. 4. 4. 2.]
                                                                           [1. 2. 2. 2. 2. 1.] ] ] ]
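     A minimal nearest-neighbor sketch (assuming the MXNet 1.x Python API;
     ``num_args=1`` is passed explicitly since there is a single input)::

       import mxnet as mx

       x = mx.nd.ones((1, 1, 3, 3))    # NCHW input
       y = mx.nd.UpSampling(x, scale=2, sample_type='nearest', num_args=1)
       print(y.shape)                  # (1, 1, 6, 6)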
    
    
    Defined in src/operator/nn/upsampling.cc:L172
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  41. abstract def abs(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise absolute value of the input.
    
    Example::
    
       abs([-2, 0, 3]) = [2, 0, 3]
    
    The storage type of ``abs`` output depends upon the input storage type:
    
       - abs(default) = default
       - abs(row_sparse) = row_sparse
       - abs(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L720
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  42. abstract def adam_update(po: adam_updateParam): Array[NDArray]

    Permalink

     Update function for the Adam optimizer. Adam is seen as a generalization
     of AdaGrad.
    
    Adam update consists of the following steps, where g represents gradient and m, v
    are 1st and 2nd order moment estimates (mean and variance).
    
    .. math::
    
     g_t = \nabla J(W_{t-1})\\
     m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t\\
     v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\
     W_t = W_{t-1} - \alpha \frac{ m_t }{ \sqrt{ v_t } + \epsilon }
    
    It updates the weights using::
    
     m = beta1*m + (1-beta1)*grad
     v = beta2*v + (1-beta2)*(grad**2)
     w += - learning_rate * m / (sqrt(v) + epsilon)
    
    However, if grad's storage type is ``row_sparse``, ``lazy_update`` is True and the storage
    type of weight is the same as those of m and v,
    only the row slices whose indices appear in grad.indices are updated (for w, m and v)::
    
     for row in grad.indices:
         m[row] = beta1*m[row] + (1-beta1)*grad[row]
         v[row] = beta2*v[row] + (1-beta2)*(grad[row]**2)
         w[row] += - learning_rate * m[row] / (sqrt(v[row]) + epsilon)
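     A minimal single-step sketch (assuming the MXNet 1.x Python API; ``lr``,
     ``beta1``, ``beta2`` and ``epsilon`` map onto the update equations above)::

       import mxnet as mx

       w = mx.nd.ones((4,))          # weight
       g = mx.nd.ones((4,)) * 0.1    # gradient
       m = mx.nd.zeros((4,))         # 1st-moment state
       v = mx.nd.zeros((4,))         # 2nd-moment state
       mx.nd.adam_update(w, g, m, v, out=w, lr=0.01,
                         beta1=0.9, beta2=0.999, epsilon=1e-8)
       print(w.asnumpy())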
    
    
    
    Defined in src/operator/optimizer_op.cc:L687
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  43. abstract def add_n(args: Array[NDArray], out: NDArray): Array[NDArray]

    Permalink

    Adds all input arguments element-wise.
    
    .. math::
       add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n
    
     ``add_n`` is potentially more efficient than calling ``add`` `n` times.
    
    The storage type of ``add_n`` output depends on storage types of inputs
    
    - add_n(row_sparse, row_sparse, ..) = row_sparse
    - add_n(default, csr, default) = default
     - add_n(any combination of more than four inputs with at least one default-storage input) = default
     - otherwise, ``add_n`` falls back to default storage for all inputs and generates a default-storage output
    
    
    
    Defined in src/operator/tensor/elemwise_sum.cc:L155
    args

    Positional input arguments

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  44. abstract def all_finite(data: NDArray, init_output: Boolean, out: NDArray): Array[NDArray]

    Permalink

     Checks whether all floating-point numbers in the array are finite (used for AMP).
    
    
    Defined in src/operator/contrib/all_finite.cc:L100
    data

    Array

    init_output

    Initialize output to 1.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  45. abstract def amp_cast(data: NDArray, dtype: String, out: NDArray): Array[NDArray]

    Permalink

     Cast function between low-precision float and FP32, used by AMP.
     
     It casts only between low-precision float and FP32 and does nothing for other types.
    
    
    Defined in src/operator/tensor/amp_cast.cc:L125
    data

    The input.

    dtype

    Output data type.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  46. abstract def amp_multicast(data: Array[NDArray], num_outputs: Integer, cast_narrow: Boolean, out: NDArray): Array[NDArray]

    Permalink

     Cast function used by AMP that casts its inputs to the widest common type.
     
     It casts only between low-precision float and FP32 and does nothing for other types.
    
    
    
    Defined in src/operator/tensor/amp_cast.cc:L169
    data

    Weights

    num_outputs

    Number of input/output pairs to be cast to the widest type.

    cast_narrow

    Whether to cast to the narrowest type

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  47. abstract def arccos(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise inverse cosine of the input array.
    
    The input should be in range `[-1, 1]`.
    The output is in the closed interval :math:`[0, \pi]`
    
    .. math::
       arccos([-1, -.707, 0, .707, 1]) = [\pi, 3\pi/4, \pi/2, \pi/4, 0]
    
    The storage type of ``arccos`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L233
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  48. abstract def arccosh(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

     Returns the element-wise inverse hyperbolic cosine of the input array.
    
    The storage type of ``arccosh`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L535
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  49. abstract def arcsin(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise inverse sine of the input array.
    
    The input should be in the range `[-1, 1]`.
    The output is in the closed interval of [:math:`-\pi/2`, :math:`\pi/2`].
    
    .. math::
       arcsin([-1, -.707, 0, .707, 1]) = [-\pi/2, -\pi/4, 0, \pi/4, \pi/2]
    
    The storage type of ``arcsin`` output depends upon the input storage type:
    
       - arcsin(default) = default
       - arcsin(row_sparse) = row_sparse
       - arcsin(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L187
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  50. abstract def arcsinh(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

     Returns the element-wise inverse hyperbolic sine of the input array.
    
    The storage type of ``arcsinh`` output depends upon the input storage type:
    
       - arcsinh(default) = default
       - arcsinh(row_sparse) = row_sparse
       - arcsinh(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L494
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  51. abstract def arctan(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise inverse tangent of the input array.
    
    The output is in the closed interval :math:`[-\pi/2, \pi/2]`
    
    .. math::
       arctan([-1, 0, 1]) = [-\pi/4, 0, \pi/4]
    
    The storage type of ``arctan`` output depends upon the input storage type:
    
       - arctan(default) = default
       - arctan(row_sparse) = row_sparse
       - arctan(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L282
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  52. abstract def arctanh(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

     Returns the element-wise inverse hyperbolic tangent of the input array.
    
    The storage type of ``arctanh`` output depends upon the input storage type:
    
       - arctanh(default) = default
       - arctanh(row_sparse) = row_sparse
       - arctanh(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L579
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  53. abstract def argmax(po: argmaxParam): Array[NDArray]

    Permalink

    Returns indices of the maximum values along an axis.
    
    In the case of multiple occurrences of maximum values, the indices corresponding to the first occurrence
    are returned.
    
    Examples::
    
      x = `[ [ 0.,  1.,  2.],
           [ 3.,  4.,  5.] ]
    
      // argmax along axis 0
      argmax(x, axis=0) = [ 1.,  1.,  1.]
    
      // argmax along axis 1
      argmax(x, axis=1) = [ 2.,  2.]
    
      // argmax along axis 1 keeping same dims as an input array
      argmax(x, axis=1, keepdims=True) = `[ [ 2.],
                                          [ 2.] ]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L51
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  54. abstract def argmax_channel(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns argmax indices of each channel from the input array.
    
    The result will be an NDArray of shape (num_channel,).
    
    In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence
    are returned.
    
    Examples::
    
      x = `[ [ 0.,  1.,  2.],
           [ 3.,  4.,  5.] ]
    
      argmax_channel(x) = [ 2.,  2.]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L96
    data

    The input array

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  55. abstract def argmin(po: argminParam): Array[NDArray]

    Permalink

    Returns indices of the minimum values along an axis.
    
    In the case of multiple occurrences of minimum values, the indices corresponding to the first occurrence
    are returned.
    
    Examples::
    
      x = `[ [ 0.,  1.,  2.],
           [ 3.,  4.,  5.] ]
    
      // argmin along axis 0
      argmin(x, axis=0) = [ 0.,  0.,  0.]
    
      // argmin along axis 1
      argmin(x, axis=1) = [ 0.,  0.]
    
      // argmin along axis 1 keeping same dims as an input array
      argmin(x, axis=1, keepdims=True) = `[ [ 0.],
                                          [ 0.] ]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L76
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  56. abstract def argsort(po: argsortParam): Array[NDArray]

    Permalink

    Returns the indices that would sort an input array along the given axis.
    
     This function performs sorting along the given axis and returns an array of indices with the same shape
     as the input array that index the data in sorted order.
    
    Examples::
    
      x = `[ [ 0.3,  0.2,  0.4],
           [ 0.1,  0.3,  0.2] ]
    
      // sort along axis -1
      argsort(x) = `[ [ 1.,  0.,  2.],
                    [ 0.,  2.,  1.] ]
    
      // sort along axis 0
      argsort(x, axis=0) = `[ [ 1.,  0.,  1.]
                            [ 0.,  1.,  0.] ]
    
      // flatten and then sort
      argsort(x, axis=None) = [ 3.,  1.,  5.,  0.,  4.,  2.]
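     A minimal sketch of the calls above (assuming the MXNet 1.x Python API)::

       import mxnet as mx

       x = mx.nd.array([[0.3, 0.2, 0.4],
                        [0.1, 0.3, 0.2]])
       print(mx.nd.argsort(x).asnumpy())           # sort along the last axis
       print(mx.nd.argsort(x, axis=0).asnumpy())   # sort along axis 0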
    
    
    Defined in src/operator/tensor/ordering_op.cc:L184
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  57. abstract def batch_dot(po: batch_dotParam): Array[NDArray]

    Permalink

    Batchwise dot product.
    
     ``batch_dot`` is used to compute the dot product of ``x`` and ``y`` when ``x`` and
     ``y`` are batched data, namely N-D (N >= 3) arrays with shape `(B_0, ..., B_i, :, :)`.
    
    For example, given ``x`` with shape `(B_0, ..., B_i, N, M)` and ``y`` with shape
    `(B_0, ..., B_i, M, K)`, the result array will have shape `(B_0, ..., B_i, N, K)`,
    which is computed by::
    
       batch_dot(x,y)[b_0, ..., b_i, :, :] = dot(x[b_0, ..., b_i, :, :], y[b_0, ..., b_i, :, :])
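     A minimal shape-level sketch (assuming the MXNet 1.x Python API)::

       import mxnet as mx

       x = mx.nd.ones((8, 2, 3))    # batch of 8 matrices, each (2, 3)
       y = mx.nd.ones((8, 3, 4))    # batch of 8 matrices, each (3, 4)
       z = mx.nd.batch_dot(x, y)
       print(z.shape)               # (8, 2, 4): one dot product per batch element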
    
    
    
    Defined in src/operator/tensor/dot.cc:L127
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  58. abstract def batch_take(a: NDArray, indices: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Takes elements from a data batch.
    
    .. note::
      `batch_take` is deprecated. Use `pick` instead.
    
    Given an input array of shape ``(d0, d1)`` and indices of shape ``(i0,)``, the result will be
    an output array of shape ``(i0,)`` with::
    
      output[i] = input[i, indices[i] ]
    
    Examples::
    
      x = `[ [ 1.,  2.],
           [ 3.,  4.],
           [ 5.,  6.] ]
    
      // takes elements with specified indices
      batch_take(x, [0,1,0]) = [ 1.  4.  5.]
    
    
    
    Defined in src/operator/tensor/indexing_op.cc:L835
    a

    The input array

    indices

    The index array

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  59. abstract def broadcast_add(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise sum of the input arrays with broadcasting.
    
    `broadcast_plus` is an alias to the function `broadcast_add`.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_add(x, y) = `[ [ 1.,  1.,  1.],
                              [ 2.,  2.,  2.] ]
    
       broadcast_plus(x, y) = `[ [ 1.,  1.,  1.],
                               [ 2.,  2.,  2.] ]
    
    Supported sparse operations:
    
       broadcast_add(csr, dense(1D)) = dense
       broadcast_add(dense(1D), csr) = dense
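     A minimal sketch of the example above (assuming the MXNet 1.x Python API)::

       import mxnet as mx

       x = mx.nd.ones((2, 3))
       y = mx.nd.array([[0.], [1.]])   # shape (2, 1) broadcasts along axis 1
       print(mx.nd.broadcast_add(x, y).asnumpy())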
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L57
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  60. abstract def broadcast_axes(po: broadcast_axesParam): Array[NDArray]

    Permalink

    Broadcasts the input array over particular axes.
    
    Broadcasting is allowed on axes with size 1, such as from `(2,1,3,1)` to
    `(2,8,3,9)`. Elements will be duplicated on the broadcasted axes.
    
    `broadcast_axes` is an alias to the function `broadcast_axis`.
    
    Example::
    
       // given x of shape (1,2,1)
       x = `[ `[ [ 1.],
             [ 2.] ] ]
    
       // broadcast x on axis 2
       broadcast_axis(x, axis=2, size=3) = `[ `[ [ 1.,  1.,  1.],
                                             [ 2.,  2.,  2.] ] ]
       // broadcast x on axes 0 and 2
       broadcast_axis(x, axis=(0,2), size=(2,3)) = `[ `[ [ 1.,  1.,  1.],
                                                     [ 2.,  2.,  2.] ],
                                                    `[ [ 1.,  1.,  1.],
                                                     [ 2.,  2.,  2.] ] ]
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L92
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  61. abstract def broadcast_axis(po: broadcast_axisParam): Array[NDArray]

    Permalink

    Broadcasts the input array over particular axes.
    
    Broadcasting is allowed on axes with size 1, such as from `(2,1,3,1)` to
    `(2,8,3,9)`. Elements will be duplicated on the broadcasted axes.
    
    `broadcast_axes` is an alias to the function `broadcast_axis`.
    
    Example::
    
       // given x of shape (1,2,1)
       x = `[ `[ [ 1.],
             [ 2.] ] ]
    
       // broadcast x on axis 2
       broadcast_axis(x, axis=2, size=3) = `[ `[ [ 1.,  1.,  1.],
                                             [ 2.,  2.,  2.] ] ]
       // broadcast x on axes 0 and 2
       broadcast_axis(x, axis=(0,2), size=(2,3)) = `[ `[ [ 1.,  1.,  1.],
                                                     [ 2.,  2.,  2.] ],
                                                    `[ [ 1.,  1.,  1.],
                                                     [ 2.,  2.,  2.] ] ]
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L92
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  62. abstract def broadcast_div(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise division of the input arrays with broadcasting.
    
    Example::
    
       x = `[ [ 6.,  6.,  6.],
            [ 6.,  6.,  6.] ]
    
       y = `[ [ 2.],
            [ 3.] ]
    
       broadcast_div(x, y) = `[ [ 3.,  3.,  3.],
                              [ 2.,  2.,  2.] ]
    
    Supported sparse operations:
    
       broadcast_div(csr, dense(1D)) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L186
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  63. abstract def broadcast_equal(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **equal to** (==) comparison operation with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_equal(x, y) = `[ [ 0.,  0.,  0.],
                                [ 1.,  1.,  1.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L45
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  64. abstract def broadcast_greater(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **greater than** (>) comparison operation with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_greater(x, y) = `[ [ 1.,  1.,  1.],
                                  [ 0.,  0.,  0.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L81
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  65. abstract def broadcast_greater_equal(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **greater than or equal to** (>=) comparison operation with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_greater_equal(x, y) = `[ [ 1.,  1.,  1.],
                                        [ 1.,  1.,  1.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L99
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  66. abstract def broadcast_hypot(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

     Returns the hypotenuse of a right-angled triangle, given its "legs"
     with broadcasting.
     
     It is equivalent to computing :math:`\sqrt{x_1^2 + x_2^2}`.
    
    Example::
    
       x = `[ [ 3.,  3.,  3.] ]
    
       y = `[ [ 4.],
            [ 4.] ]
    
       broadcast_hypot(x, y) = `[ [ 5.,  5.,  5.],
                                [ 5.,  5.,  5.] ]
    
       z = `[ [ 0.],
            [ 4.] ]
    
       broadcast_hypot(x, z) = `[ [ 3.,  3.,  3.],
                                [ 5.,  5.,  5.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L157
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  67. abstract def broadcast_lesser(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **lesser than** (<) comparison operation with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_lesser(x, y) = `[ [ 0.,  0.,  0.],
                                 [ 0.,  0.,  0.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L117
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  68. abstract def broadcast_lesser_equal(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **lesser than or equal to** (<=) comparison operation with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_lesser_equal(x, y) = `[ [ 0.,  0.,  0.],
                                       [ 1.,  1.,  1.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L135
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  69. abstract def broadcast_like(po: broadcast_likeParam): Array[NDArray]

    Permalink

    Broadcasts lhs to have the same shape as rhs.
    
    Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations
    with arrays of different shapes efficiently without creating multiple copies of arrays.
    Also see, `Broadcasting <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_ for more explanation.
    
    Broadcasting is allowed on axes with size 1, such as from `(2,1,3,1)` to
    `(2,8,3,9)`. Elements will be duplicated on the broadcasted axes.
    
    For example::
    
        broadcast_like(`[ [1,2,3] ], `[ [5,6,7],[7,8,9] ]) = `[ [ 1.,  2.,  3.],
                                                        [ 1.,  2.,  3.] ]
    
       broadcast_like([9], [1,2,3,4,5], lhs_axes=(0,), rhs_axes=(-1,)) = [9,9,9,9,9]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L178
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  70. abstract def broadcast_logical_and(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **logical and** with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_logical_and(x, y) = `[ [ 0.,  0.,  0.],
                                      [ 1.,  1.,  1.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L153
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  71. abstract def broadcast_logical_or(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **logical or** with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  0.],
            [ 1.,  1.,  0.] ]
    
       y = `[ [ 1.],
            [ 0.] ]
    
       broadcast_logical_or(x, y) = `[ [ 1.,  1.,  1.],
                                     [ 1.,  1.,  0.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L171
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  72. abstract def broadcast_logical_xor(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **logical xor** with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  0.],
            [ 1.,  1.,  0.] ]
    
       y = `[ [ 1.],
            [ 0.] ]
    
       broadcast_logical_xor(x, y) = `[ [ 0.,  0.,  1.],
                                      [ 1.,  1.,  0.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L189
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  73. abstract def broadcast_maximum(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise maximum of the input arrays with broadcasting.
    
    This function compares two input arrays and returns a new array having the element-wise maxima.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_maximum(x, y) = `[ [ 1.,  1.,  1.],
                                  [ 1.,  1.,  1.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L80
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  74. abstract def broadcast_minimum(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise minimum of the input arrays with broadcasting.
    
    This function compares two input arrays and returns a new array having the element-wise minima.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
        broadcast_minimum(x, y) = `[ [ 0.,  0.,  0.],
                                  [ 1.,  1.,  1.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L116
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  75. abstract def broadcast_minus(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise difference of the input arrays with broadcasting.
    
    `broadcast_minus` is an alias to the function `broadcast_sub`.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_sub(x, y) = `[ [ 1.,  1.,  1.],
                              [ 0.,  0.,  0.] ]
    
       broadcast_minus(x, y) = `[ [ 1.,  1.,  1.],
                                [ 0.,  0.,  0.] ]
    
    Supported sparse operations:
    
       broadcast_sub/minus(csr, dense(1D)) = dense
       broadcast_sub/minus(dense(1D), csr) = dense
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L105
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  76. abstract def broadcast_mod(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise modulo of the input arrays with broadcasting.
    
    Example::
    
       x = `[ [ 8.,  8.,  8.],
            [ 8.,  8.,  8.] ]
    
       y = `[ [ 2.],
            [ 3.] ]
    
       broadcast_mod(x, y) = `[ [ 0.,  0.,  0.],
                              [ 2.,  2.,  2.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L221
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  77. abstract def broadcast_mul(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise product of the input arrays with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_mul(x, y) = `[ [ 0.,  0.,  0.],
                              [ 1.,  1.,  1.] ]
    
    Supported sparse operations:
    
       broadcast_mul(csr, dense(1D)) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L145
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  78. abstract def broadcast_not_equal(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of element-wise **not equal to** (!=) comparison operation with broadcasting.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_not_equal(x, y) = `[ [ 1.,  1.,  1.],
                                    [ 0.,  0.,  0.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L63
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  79. abstract def broadcast_plus(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise sum of the input arrays with broadcasting.
    
    `broadcast_plus` is an alias to the function `broadcast_add`.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_add(x, y) = `[ [ 1.,  1.,  1.],
                              [ 2.,  2.,  2.] ]
    
       broadcast_plus(x, y) = `[ [ 1.,  1.,  1.],
                               [ 2.,  2.,  2.] ]
    
    Supported sparse operations:
    
       broadcast_add(csr, dense(1D)) = dense
       broadcast_add(dense(1D), csr) = dense
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L57
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  80. abstract def broadcast_power(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

     Returns the result of the first array's elements raised to powers from the second array, element-wise with broadcasting.
    
    Example::
    
        x = `[ [ 2.,  2.,  2.],
             [ 2.,  2.,  2.] ]
     
        y = `[ [ 1.],
             [ 2.] ]
     
        broadcast_power(x, y) = `[ [ 2.,  2.,  2.],
                                 [ 4.,  4.,  4.] ]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L44
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  81. abstract def broadcast_sub(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise difference of the input arrays with broadcasting.
    
    `broadcast_minus` is an alias to the function `broadcast_sub`.
    
    Example::
    
       x = `[ [ 1.,  1.,  1.],
            [ 1.,  1.,  1.] ]
    
       y = `[ [ 0.],
            [ 1.] ]
    
       broadcast_sub(x, y) = `[ [ 1.,  1.,  1.],
                              [ 0.,  0.,  0.] ]
    
       broadcast_minus(x, y) = `[ [ 1.,  1.,  1.],
                                [ 0.,  0.,  0.] ]
    
    Supported sparse operations:
    
       broadcast_sub/minus(csr, dense(1D)) = dense
       broadcast_sub/minus(dense(1D), csr) = dense
    
    
    
    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L105
    lhs

    First input to the function

    rhs

    Second input to the function

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  82. abstract def broadcast_to(data: NDArray, shape: Shape, out: NDArray): Array[NDArray]

    Permalink

    Broadcasts the input array to a new shape.
    
    Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations
    with arrays of different shapes efficiently without creating multiple copies of arrays.
    Also see, `Broadcasting <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_ for more explanation.
    
    Broadcasting is allowed on axes with size 1, such as from `(2,1,3,1)` to
    `(2,8,3,9)`. Elements will be duplicated on the broadcasted axes.
    
    For example::
    
        broadcast_to(`[ [1,2,3] ], shape=(2,3)) = `[ [ 1.,  2.,  3.],
                                                [ 1.,  2.,  3.] ]
     
     A dimension that you do not want to change can also be set to `0`, which means copying the original size.
     So with `shape=(2,0)` we obtain the same result as in the above example.
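     A minimal sketch of both forms (assuming the MXNet 1.x Python API)::

       import mxnet as mx

       x = mx.nd.array([[1, 2, 3]])               # shape (1, 3)
       y = mx.nd.broadcast_to(x, shape=(2, 3))    # duplicate along axis 0
       z = mx.nd.broadcast_to(x, shape=(2, 0))    # 0 keeps the original size 3
       print(y.shape, z.shape)                    # (2, 3) (2, 3)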
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L116
    data

    The input

    shape

    The shape of the desired array. We can set the dim to zero if it's same as the original. E.g A = broadcast_to(B, shape=(10, 0, 0)) has the same meaning as A = broadcast_axis(B, axis=0, size=10).

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  83. abstract def cast(data: NDArray, dtype: String, out: NDArray): Array[NDArray]

    Permalink

    Casts all elements of the input to a new type.
    
    .. note:: ``Cast`` is deprecated. Use ``cast`` instead.
    
    Example::
    
       cast([0.9, 1.3], dtype='int32') = [0, 1]
       cast([1e20, 11.1], dtype='float16') = [inf, 11.09375]
       cast([300, 11.1, 10.9, -1, -3], dtype='uint8') = [44, 11, 10, 255, 253]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L664
    data

    The input.

    dtype

    Output data type.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  84. abstract def cast_storage(data: NDArray, stype: String, out: NDArray): Array[NDArray]

    Permalink

    Casts tensor storage type to the new type.
    
    When an NDArray with default storage type is cast to csr or row_sparse storage,
    the result is compact, which means:
    
    - for csr, zero values will not be retained
    - for row_sparse, row slices of all zeros will not be retained
    
    The storage type of ``cast_storage`` output depends on stype parameter:
    
    - cast_storage(csr, 'default') = default
    - cast_storage(row_sparse, 'default') = default
    - cast_storage(default, 'csr') = csr
    - cast_storage(default, 'row_sparse') = row_sparse
    - cast_storage(csr, 'csr') = csr
    - cast_storage(row_sparse, 'row_sparse') = row_sparse
    
    Example::
    
        dense = `[ [ 0.,  1.,  0.],
                 [ 2.,  0.,  3.],
                 [ 0.,  0.,  0.],
                 [ 0.,  0.,  0.] ]
    
        # cast to row_sparse storage type
        rsp = cast_storage(dense, 'row_sparse')
        rsp.indices = [0, 1]
        rsp.values = `[ [ 0.,  1.,  0.],
                      [ 2.,  0.,  3.] ]
    
        # cast to csr storage type
        csr = cast_storage(dense, 'csr')
        csr.indices = [1, 0, 2]
        csr.values = [ 1.,  2.,  3.]
        csr.indptr = [0, 1, 3, 3, 3]
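     A minimal sketch of the example above (assuming the MXNet 1.x Python API)::

       import mxnet as mx

       dense = mx.nd.array([[0., 1., 0.],
                            [2., 0., 3.],
                            [0., 0., 0.],
                            [0., 0., 0.]])
       rsp = mx.nd.cast_storage(dense, stype='row_sparse')
       csr = mx.nd.cast_storage(dense, stype='csr')
       print(rsp.stype, csr.stype)   # row_sparse csr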
    
    
    
    Defined in src/operator/tensor/cast_storage.cc:L71
    data

    The input.

    stype

    Output storage type.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  85. abstract def cbrt(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise cube-root value of the input.
    
    .. math::
       cbrt(x) = \sqrt[3]{x}
    
    Example::
    
       cbrt([1, 8, -125]) = [1, 2, -5]
    
    The storage type of ``cbrt`` output depends upon the input storage type:
    
       - cbrt(default) = default
       - cbrt(row_sparse) = row_sparse
       - cbrt(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L270
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  86. abstract def ceil(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise ceiling of the input.
    
    The ceil of the scalar x is the smallest integer i, such that i >= x.
    
    Example::
    
       ceil([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1.,  2.,  2.,  3.]
    
    The storage type of ``ceil`` output depends upon the input storage type:
    
       - ceil(default) = default
       - ceil(row_sparse) = row_sparse
       - ceil(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L817
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  87. abstract def choose_element_0index(po: choose_element_0indexParam): Array[NDArray]

    Permalink

    Picks elements from an input array according to the input indices along the given axis.
    
    Given an input array of shape ``(d0, d1)`` and indices of shape ``(i0,)``, the result will be
    an output array of shape ``(i0,)`` with::
    
      output[i] = input[i, indices[i] ]
    
    By default, if any index mentioned is too large, it is replaced by the index that addresses
    the last element along an axis (the `clip` mode).
    
    This function supports n-dimensional input and (n-1)-dimensional indices arrays.
    
    Examples::
    
      x = `[ [ 1.,  2.],
           [ 3.,  4.],
           [ 5.,  6.] ]
    
      // picks elements with specified indices along axis 0
      pick(x, y=[0,1], 0) = [ 1.,  4.]
    
      // picks elements with specified indices along axis 1
      pick(x, y=[0,1,0], 1) = [ 1.,  4.,  5.]
    
      // picks elements with specified indices along axis 1 using 'wrap' mode
       // to wrap indices that would normally be out of bounds
      pick(x, y=[2,-1,-2], 1, mode='wrap') = [ 1.,  4.,  5.]
    
      y = `[ [ 1.],
           [ 0.],
           [ 2.] ]
    
      // picks elements with specified indices along axis 1 and dims are maintained
      pick(x, y, 1, keepdims=True) = `[ [ 2.],
                                     [ 3.],
                                     [ 6.] ]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L150
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  88. abstract def clip(data: NDArray, a_min: Float, a_max: Float, out: NDArray): Array[NDArray]

    Permalink

     Clips (limits) the values in an array.
     
     Given an interval, values outside the interval are clipped to the interval edges.
     Clipping ``x`` between `a_min` and `a_max` would be::
     
     .. math::
     
        clip(x, a_min, a_max) = \max(\min(x, a_max), a_min)
     
     Example::
     
         x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
     
         clip(x,1,8) = [ 1.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  8.]
     
     The storage type of ``clip`` output depends on the storage type of the input and the
     a_min, a_max parameter values:
     
        - clip(default) = default
        - clip(row_sparse, a_min <= 0, a_max >= 0) = row_sparse
        - clip(csr, a_min <= 0, a_max >= 0) = csr
        - clip(row_sparse, a_min < 0, a_max < 0) = default
        - clip(row_sparse, a_min > 0, a_max > 0) = default
        - clip(csr, a_min < 0, a_max < 0) = csr
        - clip(csr, a_min > 0, a_max > 0) = csr
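     A minimal sketch of the example above (assuming the MXNet 1.x Python API)::

       import mxnet as mx

       x = mx.nd.arange(10)
       print(mx.nd.clip(x, a_min=1, a_max=8).asnumpy())
       # [1. 1. 2. 3. 4. 5. 6. 7. 8. 8.]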
    
    
    Defined in src/operator/tensor/matrix_op.cc:L676
    data

    Input array.

    a_min

    Minimum value

    a_max

    Maximum value

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  89. abstract def col2im(po: col2imParam): Array[NDArray]

    Permalink

     Combines the output column matrix of im2col back into an image array.
    
    Like :class:`~mxnet.ndarray.im2col`, this operator is also used in the vanilla convolution
     implementation. Despite the name, col2im is not the reverse operation of im2col. Since there
     may be overlaps between neighbouring sliding blocks, the column elements cannot be directly
     put back into the image. Instead, they are accumulated (i.e., summed) into the image,
     just as in gradient computation, so col2im is the gradient of im2col and vice versa.
    
    Using the notation in im2col, given an input column array of shape
    :math:`(N, C \times  \prod(\text{kernel}), W)`, this operator accumulates the column elements
    into output array of shape :math:`(N, C, \text{output_size}[0], \text{output_size}[1], \dots)`.
     Only 1-D, 2-D, and 3-D spatial dimensions are supported by this operator.
    
    
    
    Defined in src/operator/nn/im2col.cc:L181
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  90. abstract def concat(data: Array[NDArray], num_args: Integer, dim: Integer, out: NDArray): Array[NDArray]

    Permalink

    Joins input arrays along a given axis.
    
    .. note:: `Concat` is deprecated. Use `concat` instead.
    
    The dimensions of the input arrays should be the same except the axis along
    which they will be concatenated.
    The dimension of the output array along the concatenated axis will be equal
    to the sum of the corresponding dimensions of the input arrays.
    
    The storage type of ``concat`` output depends on storage types of inputs
    
    - concat(csr, csr, ..., csr, dim=0) = csr
    - otherwise, ``concat`` generates output with default storage
    
    Example::
    
       x = `[ [1,1],[2,2] ]
       y = `[ [3,3],[4,4],[5,5] ]
       z = `[ [6,6], [7,7],[8,8] ]
    
       concat(x,y,z,dim=0) = `[ [ 1.,  1.],
                              [ 2.,  2.],
                              [ 3.,  3.],
                              [ 4.,  4.],
                              [ 5.,  5.],
                              [ 6.,  6.],
                              [ 7.,  7.],
                              [ 8.,  8.] ]
    
       Note that you cannot concat x,y,z along dimension 1 since dimension
       0 is not the same for all the input arrays.
    
       concat(y,z,dim=1) = `[ [ 3.,  3.,  6.,  6.],
                             [ 4.,  4.,  7.,  7.],
                             [ 5.,  5.,  8.,  8.] ]
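     A minimal sketch of the two calls above (assuming the MXNet 1.x Python API)::

       import mxnet as mx

       x = mx.nd.array([[1, 1], [2, 2]])
       y = mx.nd.array([[3, 3], [4, 4], [5, 5]])
       z = mx.nd.array([[6, 6], [7, 7], [8, 8]])
       print(mx.nd.concat(x, y, z, dim=0).shape)   # (8, 2)
       print(mx.nd.concat(y, z, dim=1).shape)      # (3, 4)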
    
    
    
    Defined in src/operator/nn/concat.cc:L384
    data

    List of arrays to concatenate

    num_args

    Number of inputs to be concatenated.

    dim

    The dimension along which to concatenate.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  91. abstract def cos(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the element-wise cosine of the input array.
    
    The input should be in radians (:math:`2\pi` rad equals 360 degrees).
    
    .. math::
       cos([0, \pi/4, \pi/2]) = [1, 0.707, 0]
    
    The storage type of ``cos`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L90
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  92. abstract def cosh(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

     Returns the hyperbolic cosine of the input array, computed element-wise.
    
    .. math::
       cosh(x) = 0.5\times(exp(x) + exp(-x))
    
    The storage type of ``cosh`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L409
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  93. abstract def crop(data: NDArray, begin: Shape, end: Shape, step: Shape, out: NDArray): Array[NDArray]

    Permalink

     Slices a region of the array.
     
     .. note:: ``crop`` is deprecated. Use ``slice`` instead.
     
     This function returns a sliced array between the indices given
     by `begin` and `end` with the corresponding `step`.
     
     For an input array of ``shape=(d_0, d_1, ..., d_n-1)``,
     slice operation with ``begin=(b_0, b_1...b_m-1)``,
     ``end=(e_0, e_1, ..., e_m-1)``, and ``step=(s_0, s_1, ..., s_m-1)``,
     where m <= n, results in an array with the shape
     ``(|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1)``.
     
     The resulting array's *k*-th dimension contains elements
     from the *k*-th dimension of the input array starting
     from index ``b_k`` (inclusive) with step ``s_k``
     until reaching ``e_k`` (exclusive).
     
     If the *k*-th elements are `None` in the sequence of `begin`, `end`,
     and `step`, the following rule will be used to set default values.
     If `s_k` is `None`, set `s_k=1`. If `s_k > 0`, set `b_k=0`, `e_k=d_k`;
     else, set `b_k=d_k-1`, `e_k=-1`.
     
     The storage type of ``slice`` output depends on storage types of inputs
     
     - slice(csr) = csr
     - otherwise, ``slice`` generates output with default storage
     
     .. note:: When input data storage type is csr, it only supports
        step=(), or step=(None,), or step=(1,) to generate a csr output.
        For other step parameter values, it falls back to slicing
        a dense tensor.
     
     Example::
     
       x = `[ [  1.,   2.,   3.,   4.],
            [  5.,   6.,   7.,   8.],
            [  9.,  10.,  11.,  12.] ]
     
       slice(x, begin=(0,1), end=(2,4)) = `[ [ 2.,  3.,  4.],
                                          [ 6.,  7.,  8.] ]
     
       slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = `[ [9., 11.],
                                                                 [5.,  7.],
                                                                 [1.,  3.] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L481
    data

    Source input

    begin

    starting indices for the slice operation, supports negative indices.

    end

    ending indices for the slice operation, supports negative indices.

    step

    step for the slice operation, supports negative values.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
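    A minimal Python sketch of the equivalent ``slice`` call (assuming the ``mxnet`` package; ``crop`` is deprecated in favor of ``slice``)::

      import mxnet as mx

      x = mx.nd.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
      # rows [0, 2), columns [1, 4)
      print(mx.nd.slice(x, begin=(0, 1), end=(2, 4)).asnumpy())
      # a negative step reverses the row order; None uses the default bounds
      print(mx.nd.slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)).asnumpy())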
  94. abstract def ctc_loss(po: ctc_lossParam): Array[NDArray]

    Permalink

    Connectionist Temporal Classification Loss.
    
    .. note:: The existing alias ``contrib_CTCLoss`` is deprecated.
    
    The shapes of the inputs and outputs:
    
    - **data**: `(sequence_length, batch_size, alphabet_size)`
    - **label**: `(batch_size, label_sequence_length)`
    - **out**: `(batch_size)`
    
     The `data` tensor consists of sequences of activation vectors (without applying softmax),
     with the i-th channel in the last dimension corresponding to the i-th label
     for i between 0 and alphabet_size-1 (i.e., always 0-indexed).
     The alphabet size should include one additional value reserved for the blank label.
     When `blank_label` is ``"first"``, the ``0``-th channel is reserved for
     activation of the blank label; otherwise, if it is ``"last"``, the ``(alphabet_size-1)``-th
     channel is reserved for the blank label.
    
    ``label`` is an index matrix of integers. When `blank_label` is ``"first"``,
    the value 0 is then reserved for blank label, and should not be passed in this matrix. Otherwise,
    when `blank_label` is ``"last"``, the value `(alphabet_size-1)` is reserved for blank label.
    
    If a sequence of labels is shorter than *label_sequence_length*, use the special
    padding value at the end of the sequence to conform it to the correct
    length. The padding value is `0` when `blank_label` is ``"first"``, and `-1` otherwise.
    
    For example, suppose the vocabulary is `[a, b, c]`, and in one batch we have three sequences
    'ba', 'cbb', and 'abac'. When `blank_label` is ``"first"``, we can index the labels as
    `{'a': 1, 'b': 2, 'c': 3}`, and we reserve the 0-th channel for blank label in data tensor.
    The resulting `label` tensor should be padded to be::
    
       [ [2, 1, 0, 0], [3, 2, 2, 0], [1, 2, 1, 3] ]
    
    When `blank_label` is ``"last"``, we can index the labels as
    `{'a': 0, 'b': 1, 'c': 2}`, and we reserve the channel index 3 for blank label in data tensor.
    The resulting `label` tensor should be padded to be::
    
       [ [1, 0, -1, -1], [2, 1, 1, -1], [0, 1, 0, 2] ]
    
    ``out`` is a list of CTC loss values, one per example in the batch.
    
    See *Connectionist Temporal Classification: Labelling Unsegmented
    Sequence Data with Recurrent Neural Networks*, A. Graves *et al*. for more
    information on the definition and the algorithm.
    
    
    
    Defined in src/operator/nn/ctc_loss.cc:L100
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
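    A minimal Python sketch (assuming the ``mxnet`` package; ``mx.nd.ctc_loss`` is the Python counterpart, and the label indexing below follows the default ``blank_label="first"``)::

      import mxnet as mx

      seq_len, batch_size, alphabet_size = 20, 2, 4  # alphabet includes the blank
      data = mx.nd.random.uniform(shape=(seq_len, batch_size, alphabet_size))
      # two label sequences over {a: 1, b: 2, c: 3}, padded with the blank value 0
      label = mx.nd.array([[2, 1, 0, 0],
                           [3, 2, 2, 0]])
      loss = mx.nd.ctc_loss(data, label)
      print(loss.shape)  # (2,) -- one loss value per example in the batch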
  95. abstract def cumsum(po: cumsumParam): Array[NDArray]

    Permalink

    Return the cumulative sum of the elements along a given axis.
    
    Defined in src/operator/numpy/np_cumsum.cc:L70
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  96. abstract def degrees(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Converts each element of the input array from radians to degrees.
    
    .. math::
       degrees([0, \pi/2, \pi, 3\pi/2, 2\pi]) = [0, 90, 180, 270, 360]
    
    The storage type of ``degrees`` output depends upon the input storage type:
    
       - degrees(default) = default
       - degrees(row_sparse) = row_sparse
       - degrees(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L332
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  97. abstract def depth_to_space(data: NDArray, block_size: Integer, out: NDArray): Array[NDArray]

    Permalink

    Rearranges(permutes) data from depth into blocks of spatial data.
    Similar to ONNX DepthToSpace operator:
    https://github.com/onnx/onnx/blob/master/docs/Operators.md#DepthToSpace.
    The output is a new tensor where the values from depth dimension are moved in spatial blocks
    to height and width dimension. The reverse of this operation is ``space_to_depth``.
    .. math::
         \begin{gather*}
         x \prime = reshape(x, [N, block\_size, block\_size, C / (block\_size ^ 2), H, W]) \\
         x \prime \prime = transpose(x \prime, [0, 3, 4, 1, 5, 2]) \\
         y = reshape(x \prime \prime, [N, C / (block\_size ^ 2), H * block\_size, W * block\_size])
         \end{gather*}
     where :math:`x` is an input tensor with default layout :math:`[N, C, H, W]` (batch, channels, height, width)
     and :math:`y` is the output tensor with layout :math:`[N, C / (block\_size ^ 2), H * block\_size, W * block\_size]`
    Example::
      x = [ [ [ [0, 1, 2],
                [3, 4, 5] ],
              [ [6, 7, 8],
                [9, 10, 11] ],
              [ [12, 13, 14],
                [15, 16, 17] ],
              [ [18, 19, 20],
                [21, 22, 23] ] ] ]
      depth_to_space(x, 2) = [ [ [ [0, 6, 1, 7, 2, 8],
                                   [12, 18, 13, 19, 14, 20],
                                   [3, 9, 4, 10, 5, 11],
                                   [15, 21, 16, 22, 17, 23] ] ] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L971
    data

    Input ndarray

    block_size

    Blocks of [block_size, block_size] are moved

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
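    A minimal Python sketch (assuming the ``mxnet`` package; ``mx.nd.depth_to_space`` is the Python counterpart)::

      import mxnet as mx

      x = mx.nd.arange(24).reshape((1, 4, 2, 3))  # layout [N, C, H, W]
      y = mx.nd.depth_to_space(x, 2)              # block_size = 2
      print(y.shape)  # (1, 1, 4, 6): channels / 4, height and width * 2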
  98. abstract def diag(po: diagParam): Array[NDArray]

    Permalink

    Extracts a diagonal or constructs a diagonal array.
    
    ``diag``'s behavior depends on the input array dimensions:
    
    - 1-D arrays: constructs a 2-D array with the input as its diagonal, all other elements are zero.
    - N-D arrays: extracts the diagonals of the sub-arrays with axes specified by ``axis1`` and ``axis2``.
      The output shape would be decided by removing the axes numbered ``axis1`` and ``axis2`` from the
      input shape and appending to the result a new axis with the size of the diagonals in question.
    
      For example, when the input shape is `(2, 3, 4, 5)`, ``axis1`` and ``axis2`` are 0 and 2
      respectively and ``k`` is 0, the resulting shape would be `(3, 5, 2)`.
    
    Examples::
    
      x = [ [1, 2, 3],
            [4, 5, 6] ]

      diag(x) = [1, 5]

      diag(x, k=1) = [2, 6]

      diag(x, k=-1) = [4]

      x = [1, 2, 3]

      diag(x) = [ [1, 0, 0],
                  [0, 2, 0],
                  [0, 0, 3] ]

      diag(x, k=1) = [ [0, 1, 0],
                       [0, 0, 2],
                       [0, 0, 0] ]

      diag(x, k=-1) = [ [0, 0, 0],
                        [1, 0, 0],
                        [0, 2, 0] ]

      x = [ [ [1, 2],
              [3, 4] ],
            [ [5, 6],
              [7, 8] ] ]

      diag(x) = [ [1, 7],
                  [2, 8] ]

      diag(x, k=1) = [ [3],
                       [4] ]

      diag(x, axis1=-2, axis2=-1) = [ [1, 4],
                                      [5, 8] ]
    
    
    
    Defined in src/operator/tensor/diag_op.cc:L86
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
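    A minimal Python sketch (assuming the ``mxnet`` package; ``mx.nd.diag`` is the Python counterpart)::

      import mxnet as mx

      x = mx.nd.array([[1, 2, 3], [4, 5, 6]])
      print(mx.nd.diag(x).asnumpy())       # [1. 5.] -- main diagonal extracted
      print(mx.nd.diag(x, k=1).asnumpy())  # [2. 6.] -- diagonal above the main one

      v = mx.nd.array([1, 2, 3])
      print(mx.nd.diag(v).asnumpy())       # 3x3 matrix with v on the diagonal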
  99. abstract def dot(po: dotParam): Array[NDArray]

    Permalink

    Dot product of two arrays.
    
    ``dot``'s behavior depends on the input array dimensions:
    
    - 1-D arrays: inner product of vectors
    - 2-D arrays: matrix multiplication
    - N-D arrays: a sum product over the last axis of the first input and the first
      axis of the second input
    
      For example, given 3-D ``x`` with shape `(n,m,k)` and ``y`` with shape `(k,r,s)`, the
      result array will have shape `(n,m,r,s)`. It is computed by::
    
        dot(x,y)[i,j,a,b] = sum(x[i,j,:]*y[:,a,b])
    
      Example::
    
        x = reshape([0,1,2,3,4,5,6,7], shape=(2,2,2))
        y = reshape([7,6,5,4,3,2,1,0], shape=(2,2,2))
        dot(x,y)[0,0,1,1] = 0
        sum(x[0,0,:]*y[:,1,1]) = 0
    
    The storage type of ``dot`` output depends on storage types of inputs, transpose option and
    forward_stype option for output storage type. Implemented sparse operations include:
    
    - dot(default, default, transpose_a=True/False, transpose_b=True/False) = default
    - dot(csr, default, transpose_a=True) = default
    - dot(csr, default, transpose_a=True) = row_sparse
    - dot(csr, default) = default
    - dot(csr, row_sparse) = default
    - dot(default, csr) = csr (CPU only)
    - dot(default, csr, forward_stype='default') = default
    - dot(default, csr, transpose_b=True, forward_stype='default') = default
    
    If the combination of input storage types and forward_stype does not match any of the
    above patterns, ``dot`` will fallback and generate output with default storage.
    
    .. Note::
    
        If the storage type of the lhs is "csr", the storage type of gradient w.r.t rhs will be
        "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
        and Adam. Note that by default lazy updates are turned on, which may perform differently
        from standard updates. For more details, please check the Optimization API at:
        https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
    
    
    
    Defined in src/operator/tensor/dot.cc:L77
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
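    A minimal Python sketch of the N-D case (assuming the ``mxnet`` package; ``mx.nd.dot`` is the Python counterpart)::

      import mxnet as mx

      x = mx.nd.arange(8).reshape((2, 2, 2))
      y = mx.nd.array([7, 6, 5, 4, 3, 2, 1, 0]).reshape((2, 2, 2))
      z = mx.nd.dot(x, y)
      print(z.shape)                   # (2, 2, 2, 2)
      print(z[0, 0, 1, 1].asscalar())  # 0.0, i.e. sum(x[0,0,:] * y[:,1,1])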
  100. abstract def elemwise_add(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Adds arguments element-wise.
    
    The storage type of ``elemwise_add`` output depends on storage types of inputs
    
       - elemwise_add(row_sparse, row_sparse) = row_sparse
       - elemwise_add(csr, csr) = csr
       - elemwise_add(default, csr) = default
       - elemwise_add(csr, default) = default
       - elemwise_add(default, rsp) = default
       - elemwise_add(rsp, default) = default
       - otherwise, ``elemwise_add`` generates output with default storage
    lhs

    first input

    rhs

    second input

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
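    A minimal Python sketch showing the sparse storage rule (assuming the ``mxnet`` package; ``mx.nd.elemwise_add`` is the Python counterpart)::

      import mxnet as mx

      a = mx.nd.array([[0, 1], [2, 0]]).tostype('row_sparse')
      b = mx.nd.array([[1, 0], [0, 3]]).tostype('row_sparse')
      c = mx.nd.elemwise_add(a, b)
      print(c.stype)      # 'row_sparse' -- row_sparse + row_sparse stays sparse
      print(c.asnumpy())  # [[1. 1.] [2. 3.]]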
  101. abstract def elemwise_div(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Divides arguments element-wise.
    
    The storage type of ``elemwise_div`` output is always dense
    lhs

    first input

    rhs

    second input

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  102. abstract def elemwise_mul(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Multiplies arguments element-wise.
    
    The storage type of ``elemwise_mul`` output depends on storage types of inputs
    
       - elemwise_mul(default, default) = default
       - elemwise_mul(row_sparse, row_sparse) = row_sparse
       - elemwise_mul(default, row_sparse) = row_sparse
       - elemwise_mul(row_sparse, default) = row_sparse
       - elemwise_mul(csr, csr) = csr
       - otherwise, ``elemwise_mul`` generates output with default storage
    lhs

    first input

    rhs

    second input

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  103. abstract def elemwise_sub(lhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Subtracts arguments element-wise.
    
    The storage type of ``elemwise_sub`` output depends on storage types of inputs
    
       - elemwise_sub(row_sparse, row_sparse) = row_sparse
       - elemwise_sub(csr, csr) = csr
       - elemwise_sub(default, csr) = default
       - elemwise_sub(csr, default) = default
       - elemwise_sub(default, rsp) = default
       - elemwise_sub(rsp, default) = default
       - otherwise, ``elemwise_sub`` generates output with default storage
    lhs

    first input

    rhs

    second input

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  104. abstract def erf(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise gauss error function of the input.
    
    Example::
    
       erf([0, -1., 10.]) = [0., -0.8427, 1.]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L886
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  105. abstract def erfinv(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise inverse gauss error function of the input.
    
    Example::
    
       erfinv([0, 0.5, -1.]) = [0., 0.4769, -inf]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L908
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
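    A minimal Python sketch checking that ``erf`` and ``erfinv`` are inverses of each other (assuming the ``mxnet`` package)::

      import mxnet as mx

      x = mx.nd.array([0.0, 0.5, -0.9])
      y = mx.nd.erf(mx.nd.erfinv(x))  # round-trip recovers x
      print(y.asnumpy())              # approximately [0. 0.5 -0.9]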
  106. abstract def exp(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise exponential value of the input.
    
    .. math::
       exp(x) = e^x \approx 2.718^x
    
    Example::
    
       exp([0, 1, 2]) = [1., 2.71828175, 7.38905621]
    
    The storage type of ``exp`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L64
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  107. abstract def expand_dims(data: NDArray, axis: Integer, out: NDArray): Array[NDArray]

    Permalink

    Inserts a new axis of size 1 into the array shape
    For example, given ``x`` with shape ``(2,3,4)``, then ``expand_dims(x, axis=1)``
    will return a new array with shape ``(2,1,3,4)``.
    
    
    Defined in src/operator/tensor/matrix_op.cc:L394
    data

    Source input

    axis

    Position where new axis is to be inserted. Suppose that the input NDArray's dimension is ndim, the range of the inserted axis is [-ndim, ndim]

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
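    A minimal Python sketch (assuming the ``mxnet`` package; ``mx.nd.expand_dims`` is the Python counterpart)::

      import mxnet as mx

      x = mx.nd.zeros((2, 3, 4))
      print(mx.nd.expand_dims(x, axis=1).shape)   # (2, 1, 3, 4)
      print(mx.nd.expand_dims(x, axis=-1).shape)  # (2, 3, 4, 1) -- negative axis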
  108. abstract def expm1(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns ``exp(x) - 1`` computed element-wise on the input.
    
    This function provides greater precision than ``exp(x) - 1`` for small values of ``x``.
    
    The storage type of ``expm1`` output depends upon the input storage type:
    
       - expm1(default) = default
       - expm1(row_sparse) = row_sparse
       - expm1(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L244
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
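    A minimal Python sketch of the precision advantage (assuming the ``mxnet`` package)::

      import mxnet as mx

      x = mx.nd.array([1e-8])
      print(mx.nd.expm1(x).asnumpy())      # ~1e-8, computed accurately
      print((mx.nd.exp(x) - 1).asnumpy())  # 0.0 in float32 -- precision lost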
  109. abstract def fill_element_0index(lhs: NDArray, mhs: NDArray, rhs: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Fills one element of each line (row for Python, column for R/Julia) in lhs according to the index indicated by rhs and the values indicated by mhs. This function assumes rhs uses a 0-based index.
    lhs

    Left operand to the function.

    mhs

    Middle operand to the function.

    rhs

    Right operand to the function.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  110. abstract def fix(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise rounded value to the nearest integer towards zero of the input.
    
    Example::
    
       fix([-2.1, -1.9, 1.9, 2.1]) = [-2., -1.,  1., 2.]
    
    The storage type of ``fix`` output depends upon the input storage type:
    
       - fix(default) = default
       - fix(row_sparse) = row_sparse
       - fix(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L874
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  111. abstract def flatten(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Flattens the input array into a 2-D array by collapsing the higher dimensions.
    .. note:: `Flatten` is deprecated. Use `flatten` instead.
    For an input array with shape ``(d1, d2, ..., dk)``, `flatten` operation reshapes
    the input array into an output array of shape ``(d1, d2*...*dk)``.
    Note that the behavior of this function is different from numpy.ndarray.flatten,
    which behaves similarly to mxnet.ndarray.reshape((-1,)).
    Example::
        x = [ [ [1,2,3],
                [4,5,6],
                [7,8,9] ],
              [ [1,2,3],
                [4,5,6],
                [7,8,9] ] ]

        flatten(x) = [ [ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.],
                       [ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L249
    data

    Input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
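    A minimal Python sketch (assuming the ``mxnet`` package; ``mx.nd.flatten`` is the Python counterpart)::

      import mxnet as mx

      x = mx.nd.zeros((2, 3, 4))
      print(mx.nd.flatten(x).shape)  # (2, 12): trailing dimensions collapsed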
  112. abstract def flip(data: NDArray, axis: Shape, out: NDArray): Array[NDArray]

    Permalink

    Reverses the order of elements along given axis while preserving array shape.
    Note: reverse and flip are equivalent. We use reverse in the following examples.
    Examples::
      x = [ [ 0.,  1.,  2.,  3.,  4.],
            [ 5.,  6.,  7.,  8.,  9.] ]
      reverse(x, axis=0) = [ [ 5.,  6.,  7.,  8.,  9.],
                             [ 0.,  1.,  2.,  3.,  4.] ]
      reverse(x, axis=1) = [ [ 4.,  3.,  2.,  1.,  0.],
                             [ 9.,  8.,  7.,  6.,  5.] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L831
    data

    Input data array

    axis

    The axis which to reverse elements.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
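    A minimal Python sketch (assuming the ``mxnet`` package; ``mx.nd.reverse`` and ``mx.nd.flip`` are the equivalent Python counterparts)::

      import mxnet as mx

      x = mx.nd.arange(10).reshape((2, 5))
      print(mx.nd.reverse(x, axis=0).asnumpy())  # rows swapped
      print(mx.nd.reverse(x, axis=1).asnumpy())  # each row reversed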
  113. abstract def floor(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise floor of the input.
    
    The floor of the scalar x is the largest integer i, such that i <= x.
    
    Example::
    
       floor([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-3., -2.,  1.,  1.,  2.]
    
    The storage type of ``floor`` output depends upon the input storage type:
    
       - floor(default) = default
       - floor(row_sparse) = row_sparse
       - floor(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L836
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  114. abstract def ftml_update(po: ftml_updateParam): Array[NDArray]

    Permalink

    The FTML optimizer described in
    *FTML - Follow the Moving Leader in Deep Learning*,
    available at http://proceedings.mlr.press/v70/zheng17a/zheng17a.pdf.
    
    .. math::
    
      g_t = \nabla J(W_{t-1}) \\
      v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
      d_t = \frac{ 1 - \beta_1^t }{ \eta_t } (\sqrt{ \frac{ v_t }{ 1 - \beta_2^t } } + \epsilon) \\
      \sigma_t = d_t - \beta_1 d_{t-1} \\
      z_t = \beta_1 z_{t-1} + (1 - \beta_1^t) g_t - \sigma_t W_{t-1} \\
      W_t = - \frac{ z_t }{ d_t }
    
    
    
    Defined in src/operator/optimizer_op.cc:L639
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  115. abstract def ftrl_update(po: ftrl_updateParam): Array[NDArray]

    Permalink

    Update function for Ftrl optimizer.
    Referenced from *Ad Click Prediction: a View from the Trenches*, available at
    http://dl.acm.org/citation.cfm?id=2488200.
    
    It updates the weights using::
    
     rescaled_grad = clip(grad * rescale_grad, clip_gradient)
     z += rescaled_grad - (sqrt(n + rescaled_grad**2) - sqrt(n)) * weight / learning_rate
     n += rescaled_grad**2
     w = (sign(z) * lamda1 - z) / ((beta + sqrt(n)) / learning_rate + wd) * (abs(z) > lamda1)
    
    If w, z and n are all of ``row_sparse`` storage type,
    only the row slices whose indices appear in grad.indices are updated (for w, z and n)::
    
     for row in grad.indices:
         rescaled_grad[row] = clip(grad[row] * rescale_grad, clip_gradient)
         z[row] += rescaled_grad[row] - (sqrt(n[row] + rescaled_grad[row]**2) - sqrt(n[row])) * weight[row] / learning_rate
         n[row] += rescaled_grad[row]**2
         w[row] = (sign(z[row]) * lamda1 - z[row]) / ((beta + sqrt(n[row])) / learning_rate + wd) * (abs(z[row]) > lamda1)
    
    
    
    Defined in src/operator/optimizer_op.cc:L875
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
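    A minimal Python sketch (assuming the ``mxnet`` package; ``mx.nd.ftrl_update`` is the Python counterpart, with states ``z`` and ``n`` initialized to zero as in the Ftrl optimizer)::

      import mxnet as mx

      weight = mx.nd.ones((3,))
      grad = mx.nd.array([0.1, -0.2, 0.3])
      z = mx.nd.zeros_like(weight)  # accumulated update direction
      n = mx.nd.zeros_like(weight)  # accumulated squared gradients
      mx.nd.ftrl_update(weight, grad, z, n, lr=0.01, out=weight)
      print(weight.asnumpy())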
  116. abstract def gamma(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array.
    
    The storage type of ``gamma`` output is always dense
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  117. abstract def gammaln(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise log of the absolute value of the gamma function of the input.
    
    The storage type of ``gammaln`` output is always dense
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  118. abstract def gather_nd(data: NDArray, indices: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Gather elements or slices from `data` and store to a tensor whose
    shape is defined by `indices`.
    
    Given `data` with shape `(X_0, X_1, ..., X_{N-1})` and indices with shape
    `(M, Y_0, ..., Y_{K-1})`, the output will have shape `(Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1})`,
    where `M <= N`. If `M == N`, output shape will simply be `(Y_0, ..., Y_{K-1})`.
    
    The elements in the output are defined as follows::
    
      output[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}] = data[indices[0, y_0, ..., y_{K-1}],
                                                          ...,
                                                          indices[M-1, y_0, ..., y_{K-1}],
                                                          x_M, ..., x_{N-1}]
    
    Examples::
    
      data = [ [0, 1], [2, 3] ]
      indices = [ [1, 1, 0], [0, 1, 0] ]
      gather_nd(data, indices) = [2, 3, 0]

      data = [ [ [1, 2], [3, 4] ], [ [5, 6], [7, 8] ] ]
      indices = [ [0, 1], [1, 0] ]
      gather_nd(data, indices) = [ [3, 4], [5, 6] ]
    data

    data

    indices

    indices

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
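    A minimal Python sketch (assuming the ``mxnet`` package; ``mx.nd.gather_nd`` is the Python counterpart)::

      import mxnet as mx

      data = mx.nd.array([[0, 1], [2, 3]])
      indices = mx.nd.array([[1, 1, 0], [0, 1, 0]])
      # picks data[1,0], data[1,1], data[0,0]
      print(mx.nd.gather_nd(data, indices).asnumpy())  # [2. 3. 0.]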
  119. abstract def hard_sigmoid(po: hard_sigmoidParam): Array[NDArray]

    Permalink

    Computes hard sigmoid of x element-wise.
    
    .. math::
       y = max(0, min(1, alpha * x + beta))
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L161
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  120. abstract def identity(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns a copy of the input.
    
    From: src/operator/tensor/elemwise_unary_op_basic.cc:244
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  121. abstract def im2col(po: im2colParam): Array[NDArray]

    Permalink

    Extract sliding blocks from input array.
    
    This operator is used in the vanilla convolution implementation to transform the sliding
    blocks of an image into a column matrix, so that the convolution operation can be computed
    by matrix multiplication between the column matrix and the convolution weight. Due to the close
    relation between im2col and convolution, the concepts of **kernel**, **stride**,
    **dilate** and **pad** in this operator are inherited from the convolution operation.

    Given the input data of shape :math:`(N, C, *)`, where :math:`N` is the batch size,
    :math:`C` is the channel size, and :math:`*` is the arbitrary spatial dimension,
    the output column array always has shape :math:`(N, C \times \prod(\text{kernel}), W)`,
    where :math:`C \times \prod(\text{kernel})` is the block size and :math:`W` is the
    block number, i.e. the spatial size of the convolution output with the same input parameters.
    Only 1-D, 2-D and 3-D spatial dimensions are supported by this operator.
    
    
    
    Defined in src/operator/nn/im2col.cc:L99
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  122. abstract def khatri_rao(args: Array[NDArray], out: NDArray): Array[NDArray]

    Permalink

    Computes the Khatri-Rao product of the input matrices.
    
    Given a collection of :math:`n` input matrices,
    
    .. math::
       A_1 \in \mathbb{R}^{M_1 \times M}, \ldots, A_n \in \mathbb{R}^{M_n \times N},
    
    the (column-wise) Khatri-Rao product is defined as the matrix,
    
    .. math::
       X = A_1 \otimes \cdots \otimes A_n \in \mathbb{R}^{(M_1 \cdots M_n) \times N},
    
    where the :math:`k`-th column is equal to the column-wise outer product
    :math:`{A_1}_k \otimes \cdots \otimes {A_n}_k`, where :math:`{A_i}_k` is the :math:`k`-th
    column of the :math:`i`-th matrix.
    
    Example::
    
      >>> A = mx.nd.array([ [1, -1],
      >>>                   [2, -3] ])
      >>> B = mx.nd.array([ [1, 4],
      >>>                   [2, 5],
      >>>                   [3, 6] ])
      >>> C = mx.nd.khatri_rao(A, B)
      >>> print(C.asnumpy())
      [ [  1.  -4.]
        [  2.  -5.]
        [  3.  -6.]
        [  2. -12.]
        [  4. -15.]
        [  6. -18.] ]
    
    
    
    Defined in src/operator/contrib/krprod.cc:L108
    args

    Positional input matrices

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  123. abstract def lamb_update_phase1(po: lamb_update_phase1Param): Array[NDArray]

    Permalink

    Phase I of the LAMB update. It performs the following operations and returns *g*:
    
    Link to paper: https://arxiv.org/pdf/1904.00962.pdf
    
    .. math::
        \begin{gather*}
        grad = grad * rescale_grad
        if (grad < -clip_gradient)
        then
             grad = -clip_gradient
        if (grad > clip_gradient)
        then
             grad = clip_gradient
    
        mean = beta1 * mean + (1 - beta1) * grad;
        variance = beta2 * variance + (1. - beta2) * grad ^ 2;
    
        if (bias_correction)
        then
             mean_hat = mean / (1. - beta1^t);
             var_hat = variance / (1 - beta2^t);
             g = mean_hat / (var_hat^(1/2) + epsilon) + wd * weight;
        else
             g = mean / (variance^(1/2) + epsilon) + wd * weight;
        \end{gather*}
    
    
    
    Defined in src/operator/optimizer_op.cc:L952
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  124. abstract def lamb_update_phase2(po: lamb_update_phase2Param): Array[NDArray]

    Permalink

    Phase II of the LAMB update. It performs the following operations and updates the weight.
    
    Link to paper: https://arxiv.org/pdf/1904.00962.pdf
    
    .. math::
        \begin{gather*}
        if (lower_bound >= 0)
        then
             r1 = max(r1, lower_bound)
        if (upper_bound >= 0)
        then
             r1 = min(r1, upper_bound)
    
        if (r1 == 0 or r2 == 0)
        then
             lr = lr
        else
             lr = lr * (r1/r2)
        weight = weight - lr * g
        \end{gather*}
    
    
    
    Defined in src/operator/optimizer_op.cc:L991
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  125. abstract def linalg_det(A: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Compute the determinant of a matrix.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, *A* is a square matrix. We compute:
    
      *out* = *det(A)*
    
    If *n>2*, *det* is performed separately on the trailing two dimensions
    for all inputs (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    .. note:: There is no gradient backwarded when A is non-invertible (which is
              equivalent to det(A) = 0) because zero is rarely hit upon in float
              point computation and the Jacobi's formula on determinant gradient
              is not computationally efficient when A is non-invertible.
    
    Examples::
    
       Single matrix determinant
       A = [ [1., 4.], [2., 3.] ]
       det(A) = [-5.]

       Batch matrix determinant
       A = [ [ [1., 4.], [2., 3.] ],
             [ [2., 3.], [1., 4.] ] ]
       det(A) = [-5., 5.]
    
    
    Defined in src/operator/tensor/la_op.cc:L974
    A

    Tensor of square matrix

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
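    A minimal Python sketch (assuming a recent ``mxnet`` package that provides the ``mx.nd.linalg`` namespace)::

      import mxnet as mx

      A = mx.nd.array([[1., 4.], [2., 3.]])
      print(mx.nd.linalg.det(A).asnumpy())  # [-5.]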
  126. abstract def linalg_extractdiag(A: NDArray, offset: Integer, out: NDArray): Array[NDArray]

    Permalink

    Extracts the diagonal entries of a square matrix.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, then *A* represents a single square matrix whose diagonal elements are extracted as a 1-dimensional tensor.
    
    If *n>2*, then *A* represents a batch of square matrices on the trailing two dimensions. The extracted diagonals are returned as an *n-1*-dimensional tensor.
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
        Single matrix diagonal extraction
        A = [ [1.0, 2.0],
              [3.0, 4.0] ]

        extractdiag(A) = [1.0, 4.0]

        extractdiag(A, 1) = [2.0]

        Batch matrix diagonal extraction
        A = [ [ [1.0, 2.0],
                [3.0, 4.0] ],
              [ [5.0, 6.0],
                [7.0, 8.0] ] ]

        extractdiag(A) = [ [1.0, 4.0],
                           [5.0, 8.0] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L494
    A

    Tensor of square matrices

    offset

    Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  127. abstract def linalg_extracttrian(po: linalg_extracttrianParam): Array[NDArray]

    Permalink

    Extracts a triangular sub-matrix from a square matrix.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, then *A* represents a single square matrix from which a triangular sub-matrix is extracted as a 1-dimensional tensor.
    
    If *n>2*, then *A* represents a batch of square matrices on the trailing two dimensions. The extracted triangular sub-matrices are returned as an *n-1*-dimensional tensor.
    
    The *offset* and *lower* parameters determine the triangle to be extracted:
    
    - When *offset = 0* either the lower or upper triangle with respect to the main diagonal is extracted depending on the value of parameter *lower*.
    - When *offset = k > 0* the upper triangle with respect to the k-th diagonal above the main diagonal is extracted.
    - When *offset = k < 0* the lower triangle with respect to the k-th diagonal below the main diagonal is extracted.
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
        Single triangular extraction
        A = [ [1.0, 2.0],
              [3.0, 4.0] ]

        extracttrian(A) = [1.0, 3.0, 4.0]
        extracttrian(A, lower=False) = [1.0, 2.0, 4.0]
        extracttrian(A, 1) = [2.0]
        extracttrian(A, -1) = [3.0]

        Batch triangular extraction
        A = [ [ [1.0, 2.0],
                [3.0, 4.0] ],
              [ [5.0, 6.0],
                [7.0, 8.0] ] ]

        extracttrian(A) = [ [1.0, 3.0, 4.0],
                            [5.0, 7.0, 8.0] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L604
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  128. abstract def linalg_gelqf(A: NDArray, out: NDArray): Array[NDArray]

    Permalink

    LQ factorization for general matrix.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, we compute the LQ factorization (LAPACK *gelqf*, followed by *orglq*). *A*
    must have shape *(x, y)* with *x <= y*, and must have full rank *=x*. The LQ
    factorization consists of *L* with shape *(x, x)* and *Q* with shape *(x, y)*, so
    that:
    
       *A* = *L* \* *Q*
    
    Here, *L* is lower triangular (upper triangle equal to zero) with nonzero diagonal,
    and *Q* is row-orthonormal, meaning that
    
       *Q* \* *Q*\ :sup:`T`
    
    is equal to the identity matrix of shape *(x, x)*.
    
    If *n>2*, *gelqf* is performed separately on the trailing two dimensions for all
    inputs (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single LQ factorization
       A = [ [1., 2., 3.], [4., 5., 6.] ]
       Q, L = gelqf(A)
       Q = [ [-0.26726124, -0.53452248, -0.80178373],
             [0.87287156, 0.21821789, -0.43643578] ]
       L = [ [-3.74165739, 0.],
             [-8.55235974, 1.96396101] ]

       Batch LQ factorization
       A = [ [ [1., 2., 3.], [4., 5., 6.] ],
             [ [7., 8., 9.], [10., 11., 12.] ] ]
       Q, L = gelqf(A)
       Q = [ [ [-0.26726124, -0.53452248, -0.80178373],
               [0.87287156, 0.21821789, -0.43643578] ],
             [ [-0.50257071, -0.57436653, -0.64616234],
               [0.7620735, 0.05862104, -0.64483142] ] ]
       L = [ [ [-3.74165739, 0.],
               [-8.55235974, 1.96396101] ],
             [ [-13.92838828, 0.],
               [-19.09768702, 0.52758934] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L797
    A

    Tensor of input matrices to be factorized

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  129. abstract def linalg_gemm(po: linalg_gemmParam): Array[NDArray]

    Permalink

    Performs general matrix multiplication and accumulation.
    Input are tensors *A*, *B*, *C*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.
    
    If *n=2*, the BLAS3 function *gemm* is performed:
    
       *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*) + *beta* \* *C*
    
    Here, *alpha* and *beta* are scalar parameters, and *op()* is either the identity or
    matrix transposition (depending on *transpose_a*, *transpose_b*).
    
    If *n>2*, *gemm* is performed separately for a batch of matrices. The column indices of the matrices
    are given by the last dimensions of the tensors, the row indices by the axis specified with the *axis*
    parameter. By default, the trailing two dimensions will be used for matrix encoding.
    
    For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes
    calls. For example let *A*, *B*, *C* be 5 dimensional tensors. Then gemm(*A*, *B*, *C*, axis=1) is equivalent
    to the following without the overhead of the additional swapaxis operations::
    
        A1 = swapaxes(A, dim1=1, dim2=3)
        B1 = swapaxes(B, dim1=1, dim2=3)
        C = swapaxes(C, dim1=1, dim2=3)
        C = gemm(A1, B1, C)
        C = swapaxes(C, dim1=1, dim2=3)
    
    When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE
    and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use
    pseudo-float16 precision (float32 math with float16 I/O) precision in order to use
    Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups.
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single matrix multiply-add
       A = [ [1.0, 1.0], [1.0, 1.0] ]
       B = [ [1.0, 1.0], [1.0, 1.0], [1.0, 1.0] ]
       C = [ [1.0, 1.0, 1.0], [1.0, 1.0, 1.0] ]
       gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0)
               = [ [14.0, 14.0, 14.0], [14.0, 14.0, 14.0] ]

       Batch matrix multiply-add
       A = [ [ [1.0, 1.0] ], [ [0.1, 0.1] ] ]
       B = [ [ [1.0, 1.0] ], [ [0.1, 0.1] ] ]
       C = [ [ [10.0] ], [ [0.01] ] ]
       gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0)
               = [ [ [104.0] ], [ [0.14] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L88
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  130. abstract def linalg_gemm2(po: linalg_gemm2Param): Array[NDArray]

    Permalink

    Performs general matrix multiplication.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.
    
    If *n=2*, the BLAS3 function *gemm* is performed:
    
       *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*)
    
    Here *alpha* is a scalar parameter and *op()* is either the identity or the matrix
    transposition (depending on *transpose_a*, *transpose_b*).
    
    If *n>2*, *gemm* is performed separately for a batch of matrices. The column indices of the matrices
    are given by the last dimensions of the tensors, the row indices by the axis specified with the *axis*
    parameter. By default, the trailing two dimensions will be used for matrix encoding.
    
    For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes
    calls. For example let *A*, *B* be 5 dimensional tensors. Then gemm(*A*, *B*, axis=1) is equivalent to
    the following without the overhead of the additional swapaxis operations::
    
        A1 = swapaxes(A, dim1=1, dim2=3)
        B1 = swapaxes(B, dim1=1, dim2=3)
        C = gemm2(A1, B1)
        C = swapaxes(C, dim1=1, dim2=3)
    
    When the input data is of type float32 and the environment variables MXNET_CUDA_ALLOW_TENSOR_CORE
    and MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION are set to 1, this operator will try to use
    pseudo-float16 precision (float32 math with float16 I/O) precision in order to use
    Tensor Cores on suitable NVIDIA GPUs. This can sometimes give significant speedups.
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single matrix multiply
       A = [ [1.0, 1.0], [1.0, 1.0] ]
       B = [ [1.0, 1.0], [1.0, 1.0], [1.0, 1.0] ]
       gemm2(A, B, transpose_b=True, alpha=2.0)
                = [ [4.0, 4.0, 4.0], [4.0, 4.0, 4.0] ]

       Batch matrix multiply
       A = [ [ [1.0, 1.0] ], [ [0.1, 0.1] ] ]
       B = [ [ [1.0, 1.0] ], [ [0.1, 0.1] ] ]
       gemm2(A, B, transpose_b=True, alpha=2.0)
               = [ [ [4.0] ], [ [0.04] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L162
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
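    A minimal Python sketch (assuming a recent ``mxnet`` package with the ``mx.nd.linalg`` namespace)::

      import mxnet as mx

      A = mx.nd.array([[1., 1.], [1., 1.]])
      B = mx.nd.array([[1., 1.], [1., 1.], [1., 1.]])
      # out = alpha * A * B^T
      C = mx.nd.linalg.gemm2(A, B, transpose_b=True, alpha=2.0)
      print(C.asnumpy())  # [[4. 4. 4.] [4. 4. 4.]]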
  131. abstract def linalg_inverse(A: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Compute the inverse of a matrix.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, *A* is a square matrix. We compute:
    
      *out* = *A*\ :sup:`-1`
    
    If *n>2*, *inverse* is performed separately on the trailing two dimensions
    for all inputs (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single matrix inverse
       A = [ [1., 4.], [2., 3.] ]
       inverse(A) = [ [-0.6, 0.8], [0.4, -0.2] ]

       Batch matrix inverse
       A = [ [ [1., 4.], [2., 3.] ],
             [ [1., 3.], [2., 4.] ] ]
       inverse(A) = [ [ [-0.6, 0.8], [0.4, -0.2] ],
                      [ [-2., 1.5], [1., -0.5] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L919
    A

    Tensor of square matrix

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  132. abstract def linalg_makediag(A: NDArray, offset: Integer, out: NDArray): Array[NDArray]

    Permalink

    Constructs a square matrix with the input as diagonal.
    Input is a tensor *A* of dimension *n >= 1*.
    
    If *n=1*, then *A* represents the diagonal entries of a single square matrix. This matrix will be returned as a 2-dimensional tensor.
    If *n>1*, then *A* represents a batch of diagonals of square matrices. The batch of diagonal matrices will be returned as an *n+1*-dimensional tensor.
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
        Single diagonal matrix construction
        A = [1.0, 2.0]
    
        makediag(A)    = [ [1.0, 0.0],
                           [0.0, 2.0] ]

        makediag(A, 1) = [ [0.0, 1.0, 0.0],
                           [0.0, 0.0, 2.0],
                           [0.0, 0.0, 0.0] ]

        Batch diagonal matrix construction
        A = [ [1.0, 2.0],
              [3.0, 4.0] ]

        makediag(A) = [ [ [1.0, 0.0],
                          [0.0, 2.0] ],
                        [ [3.0, 0.0],
                          [0.0, 4.0] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L546
    A

    Tensor of diagonal entries

    offset

    Offset of the diagonal versus the main diagonal. 0 corresponds to the main diagonal, a negative/positive value to diagonals below/above the main diagonal.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  133. abstract def linalg_maketrian(po: linalg_maketrianParam): Array[NDArray]

    Permalink

    Constructs a square matrix with the input representing a specific triangular sub-matrix.
    This is basically the inverse of *linalg.extracttrian*. Input is a tensor *A* of dimension *n >= 1*.
    
    If *n=1*, then *A* represents the entries of a triangular matrix which is lower triangular if *offset<0*, or if *offset=0* and *lower=true*. The resulting matrix is derived by first constructing the square
    matrix with the entries outside the triangle set to zero and then adding *offset* times an additional
    diagonal with zero entries to the square matrix.
    
    If *n>1*, then *A* represents a batch of triangular sub-matrices. The batch of corresponding square matrices is returned as an *n+1*-dimensional tensor.
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
        Single matrix construction
        A = [1.0, 2.0, 3.0]

        maketrian(A)              = [ [1.0, 0.0],
                                      [2.0, 3.0] ]

        maketrian(A, lower=false) = [ [1.0, 2.0],
                                      [0.0, 3.0] ]

        maketrian(A, offset=1)    = [ [0.0, 1.0, 2.0],
                                      [0.0, 0.0, 3.0],
                                      [0.0, 0.0, 0.0] ]
        maketrian(A, offset=-1)   = [ [0.0, 0.0, 0.0],
                                      [1.0, 0.0, 0.0],
                                      [2.0, 3.0, 0.0] ]

        Batch matrix construction
        A = [ [1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0] ]

        maketrian(A)           = [ [ [1.0, 0.0],
                                     [2.0, 3.0] ],
                                   [ [4.0, 0.0],
                                     [5.0, 6.0] ] ]

        maketrian(A, offset=1) = [ [ [0.0, 1.0, 2.0],
                                     [0.0, 0.0, 3.0],
                                     [0.0, 0.0, 0.0] ],
                                   [ [0.0, 4.0, 5.0],
                                     [0.0, 0.0, 6.0],
                                     [0.0, 0.0, 0.0] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L672
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  134. abstract def linalg_potrf(A: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Performs Cholesky factorization of a symmetric positive-definite matrix.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, the Cholesky factor *B* of the symmetric, positive definite matrix *A* is
    computed. *B* is triangular (entries of upper or lower triangle are all zero), has
    positive diagonal entries, and:
    
      *A* = *B* \* *B*\ :sup:`T`  if *lower* = *true*
      *A* = *B*\ :sup:`T` \* *B*  if *lower* = *false*
    
    If *n>2*, *potrf* is performed separately on the trailing two dimensions for all inputs
    (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single matrix factorization
       A = [ [4.0, 1.0], [1.0, 4.25] ]
       potrf(A) = [ [2.0, 0], [0.5, 2.0] ]

       Batch matrix factorization
       A = [ [ [4.0, 1.0], [1.0, 4.25] ], [ [16.0, 4.0], [4.0, 17.0] ] ]
       potrf(A) = [ [ [2.0, 0], [0.5, 2.0] ], [ [4.0, 0], [1.0, 4.0] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L213
    A

    Tensor of input matrices to be decomposed

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
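    A minimal Python sketch (assuming a recent ``mxnet`` package with the ``mx.nd.linalg`` namespace)::

      import mxnet as mx

      A = mx.nd.array([[4.0, 1.0], [1.0, 4.25]])
      B = mx.nd.linalg.potrf(A)  # lower Cholesky factor
      print(B.asnumpy())         # [[2.  0. ] [0.5 2. ]]
      # check: B * B^T recovers A
      print(mx.nd.linalg.gemm2(B, B, transpose_b=True).asnumpy())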
  135. abstract def linalg_potri(A: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Performs matrix inversion from a Cholesky factorization.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, *A* is a triangular matrix (entries of upper or lower triangle are all zero)
    with positive diagonal. We compute:
    
      *out* = *A*\ :sup:`-T` \* *A*\ :sup:`-1` if *lower* = *true*
      *out* = *A*\ :sup:`-1` \* *A*\ :sup:`-T` if *lower* = *false*
    
    In other words, if *A* is the Cholesky factor of a symmetric positive definite matrix
    *B* (obtained by *potrf*), then
    
      *out* = *B*\ :sup:`-1`
    
    If *n>2*, *potri* is performed separately on the trailing two dimensions for all inputs
    (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    
    .. note:: Use this operator only if you are certain you need the inverse of *B*, and
              cannot use the Cholesky factor *A* (*potrf*), together with backsubstitution
              (*trsm*). The latter is numerically much safer, and also cheaper.
    
    Examples::
    
       Single matrix inverse
       A = [ [2.0, 0], [0.5, 2.0] ]
       potri(A) = [ [0.26563, -0.0625], [-0.0625, 0.25] ]

       Batch matrix inverse
       A = [ [ [2.0, 0], [0.5, 2.0] ], [ [4.0, 0], [1.0, 4.0] ] ]
       potri(A) = [ [ [0.26563, -0.0625], [-0.0625, 0.25] ],
                    [ [0.06641, -0.01562], [-0.01562, 0.0625] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L274
    A

    Tensor of lower triangular matrices

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  136. abstract def linalg_slogdet(A: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Compute the sign and log of the determinant of a matrix.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, *A* is a square matrix. We compute:
    
      *sign* = *sign(det(A))*
      *logabsdet* = *log(abs(det(A)))*
    
    If *n>2*, *slogdet* is performed separately on the trailing two dimensions
    for all inputs (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    .. note:: The gradient is not properly defined on sign, so the gradient of
              it is not backwarded.
    .. note:: No gradient is backwarded when A is non-invertible. Please see
              the docs of operator det for detail.
    
    Examples::
    
       Single matrix signed log determinant
       A = [ [2., 3.], [1., 4.] ]
       sign, logabsdet = slogdet(A)
       sign = [1.]
       logabsdet = [1.609438]

       Batch matrix signed log determinant
       A = [ [ [2., 3.], [1., 4.] ],
             [ [1., 2.], [2., 4.] ],
             [ [1., 2.], [4., 3.] ] ]
       sign, logabsdet = slogdet(A)
       sign = [1., 0., -1.]
       logabsdet = [1.609438, -inf, 1.609438]
    
    
    Defined in src/operator/tensor/la_op.cc:L1033
    A

    Tensor of square matrix

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  137. abstract def linalg_sumlogdiag(A: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the sum of the logarithms of the diagonal elements of a square matrix.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, *A* must be square with positive diagonal entries. We sum the natural
    logarithms of the diagonal elements; the result has shape (1,).
    
    If *n>2*, *sumlogdiag* is performed separately on the trailing two dimensions for all
    inputs (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single matrix reduction
       A = [ [1.0, 1.0], [1.0, 7.0] ]
       sumlogdiag(A) = [1.9459]

       Batch matrix reduction
       A = [ [ [1.0, 1.0], [1.0, 7.0] ], [ [3.0, 0], [0, 17.0] ] ]
       sumlogdiag(A) = [1.9459, 3.9318]
    
    
    Defined in src/operator/tensor/la_op.cc:L444
    A

    Tensor of square matrices

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  138. abstract def linalg_syrk(po: linalg_syrkParam): Array[NDArray]

    Permalink

    Multiplication of matrix with its transpose.
    Input is a tensor *A* of dimension *n >= 2*.
    
    If *n=2*, the operator performs the BLAS3 function *syrk*:
    
      *out* = *alpha* \* *A* \* *A*\ :sup:`T`
    
    if *transpose=False*, or
    
      *out* = *alpha* \* *A*\ :sup:`T` \ \* *A*
    
    if *transpose=True*.
    
    If *n>2*, *syrk* is performed separately on the trailing two dimensions for all
    inputs (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single matrix multiply
       A = [ [1., 2., 3.], [4., 5., 6.] ]
       syrk(A, alpha=1., transpose=False)
                = [ [14., 32.],
                    [32., 77.] ]
       syrk(A, alpha=1., transpose=True)
                = [ [17., 22., 27.],
                    [22., 29., 36.],
                    [27., 36., 45.] ]

       Batch matrix multiply
       A = [ [ [1., 1.] ], [ [0.1, 0.1] ] ]
       syrk(A, alpha=2., transpose=False) = [ [ [4.] ], [ [0.04] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L729
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  139. abstract def linalg_trmm(po: linalg_trmmParam): Array[NDArray]

    Permalink

    Performs multiplication with a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.
    
    If *n=2*, *A* must be triangular. The operator performs the BLAS3 function
    *trmm*:
    
       *out* = *alpha* \* *op*\ (*A*) \* *B*
    
    if *rightside=False*, or
    
       *out* = *alpha* \* *B* \* *op*\ (*A*)
    
    if *rightside=True*. Here, *alpha* is a scalar parameter, and *op()* is either the
    identity or the matrix transposition (depending on *transpose*).
    
    If *n>2*, *trmm* is performed separately on the trailing two dimensions for all inputs
    (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single triangular matrix multiply
       A = [ [1.0, 0], [1.0, 1.0] ]
       B = [ [1.0, 1.0, 1.0], [1.0, 1.0, 1.0] ]
       trmm(A, B, alpha=2.0) = [ [2.0, 2.0, 2.0], [4.0, 4.0, 4.0] ]

       Batch triangular matrix multiply
       A = [ [ [1.0, 0], [1.0, 1.0] ], [ [1.0, 0], [1.0, 1.0] ] ]
       B = [ [ [1.0, 1.0, 1.0], [1.0, 1.0, 1.0] ], [ [0.5, 0.5, 0.5], [0.5, 0.5, 0.5] ] ]
       trmm(A, B, alpha=2.0) = [ [ [2.0, 2.0, 2.0], [4.0, 4.0, 4.0] ],
                                 [ [1.0, 1.0, 1.0], [2.0, 2.0, 2.0] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L332
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  140. abstract def linalg_trsm(po: linalg_trsmParam): Array[NDArray]

    Permalink

    Solves matrix equation involving a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.
    
    If *n=2*, *A* must be triangular. The operator performs the BLAS3 function
    *trsm*, solving for *out* in:
    
       *op*\ (*A*) \* *out* = *alpha* \* *B*
    
    if *rightside=False*, or
    
       *out* \* *op*\ (*A*) = *alpha* \* *B*
    
    if *rightside=True*. Here, *alpha* is a scalar parameter, and *op()* is either the
    identity or the matrix transposition (depending on *transpose*).
    
    If *n>2*, *trsm* is performed separately on the trailing two dimensions for all inputs
    (batch mode).
    
    .. note:: The operator supports float32 and float64 data types only.
    
    Examples::
    
       Single matrix solve
       A = [ [1.0, 0], [1.0, 1.0] ]
       B = [ [2.0, 2.0, 2.0], [4.0, 4.0, 4.0] ]
       trsm(A, B, alpha=0.5) = [ [1.0, 1.0, 1.0], [1.0, 1.0, 1.0] ]

       Batch matrix solve
       A = [ [ [1.0, 0], [1.0, 1.0] ], [ [1.0, 0], [1.0, 1.0] ] ]
       B = [ [ [2.0, 2.0, 2.0], [4.0, 4.0, 4.0] ],
             [ [4.0, 4.0, 4.0], [8.0, 8.0, 8.0] ] ]
       trsm(A, B, alpha=0.5) = [ [ [1.0, 1.0, 1.0], [1.0, 1.0, 1.0] ],
                                 [ [2.0, 2.0, 2.0], [2.0, 2.0, 2.0] ] ]
    
    
    Defined in src/operator/tensor/la_op.cc:L395
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
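    A minimal Python sketch (assuming a recent ``mxnet`` package with the ``mx.nd.linalg`` namespace)::

      import mxnet as mx

      A = mx.nd.array([[1.0, 0.0], [1.0, 1.0]])
      B = mx.nd.array([[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]])
      # solve A * X = 0.5 * B for X
      X = mx.nd.linalg.trsm(A, B, alpha=0.5)
      print(X.asnumpy())  # [[1. 1. 1.] [1. 1. 1.]]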
  141. abstract def log(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise natural logarithmic value of the input.
    
    The natural logarithm is the logarithm in base *e*, so that ``log(exp(x)) = x``
    
    The storage type of ``log`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L77
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  142. abstract def log10(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise base-10 logarithmic value of the input.
    
    ``10**log10(x) = x``
    
    The storage type of ``log10`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L94
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  143. abstract def log1p(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise ``log(1 + x)`` value of the input.
    
    This function is more accurate than ``log(1 + x)``  for small ``x`` so that
    :math:`1+x\approx 1`
    
    The storage type of ``log1p`` output depends upon the input storage type:
    
       - log1p(default) = default
       - log1p(row_sparse) = row_sparse
       - log1p(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L199
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
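    A minimal Python sketch of the precision advantage (assuming the ``mxnet`` package)::

      import mxnet as mx

      x = mx.nd.array([1e-8])
      print(mx.nd.log1p(x).asnumpy())    # ~1e-8, computed accurately
      print(mx.nd.log(1 + x).asnumpy())  # 0.0 in float32 -- precision lost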
  144. abstract def log2(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise base-2 logarithmic value of the input.
    
    ``2**log2(x) = x``
    
    The storage type of ``log2`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_logexp.cc:L106
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  145. abstract def log_softmax(po: log_softmaxParam): Array[NDArray]

    Permalink

    Computes the log softmax of the input.
    This is equivalent to computing softmax followed by log.
    
    Examples::
    
      >>> x = mx.nd.array([1, 2, .1])
      >>> mx.nd.log_softmax(x).asnumpy()
      array([-1.41702998, -0.41702995, -2.31702995], dtype=float32)
    
      >>> x = mx.nd.array([ [1, 2, .1], [.1, 2, 1] ])
      >>> mx.nd.log_softmax(x, axis=0).asnumpy()
      array([ [-0.34115392, -0.69314718, -1.24115396],
              [-1.24115396, -0.69314718, -0.34115392] ], dtype=float32)
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  146. abstract def logical_not(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the result of the logical NOT (!) function
    
    Example:
      logical_not([-2., 0., 1.]) = [0., 1., 0.]
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  147. abstract def make_loss(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Make your own loss function in network construction.
    
    This operator accepts a customized loss function symbol as a terminal loss and
    the symbol should be an operator with no backward dependency.
    The output of this function is the gradient of loss with respect to the input data.
    
    For example, suppose you are making a cross entropy loss function. Assume ``out`` is the
    predicted output and ``label`` is the true label; then the cross entropy can be defined as::
    
      cross_entropy = label * log(out) + (1 - label) * log(1 - out)
      loss = make_loss(cross_entropy)
    
    We will need to use ``make_loss`` when we are creating our own loss function or we want to
    combine multiple loss functions. We may also want to stop some variables' gradients
    from backpropagation. See more details in ``BlockGrad`` or ``stop_gradient``.
    
    The storage type of ``make_loss`` output depends upon the input storage type:
    
       - make_loss(default) = default
       - make_loss(row_sparse) = row_sparse
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L358
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
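
    A minimal sketch of the cross-entropy recipe above, using the Python ``mx.nd``
    frontend (assumed to expose the same operator; values are illustrative)::

       import mxnet as mx

       out = mx.nd.array([0.8, 0.2])     # predicted probabilities
       label = mx.nd.array([1.0, 0.0])   # true labels
       cross_entropy = label * mx.nd.log(out) + (1 - label) * mx.nd.log(1 - out)
       loss = mx.nd.make_loss(cross_entropy)
       print(loss.asnumpy())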
  148. abstract def max(po: maxParam): Array[NDArray]

    Permalink

    Computes the max of array elements over given axes.
    
    Defined in src/operator/tensor/./broadcast_reduce_op.h:L31
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  149. abstract def max_axis(po: max_axisParam): Array[NDArray]

    Permalink

    Computes the max of array elements over given axes.
    
    Defined in src/operator/tensor/./broadcast_reduce_op.h:L31
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  150. abstract def mean(po: meanParam): Array[NDArray]

    Permalink

    Computes the mean of array elements over given axes.
    
    Defined in src/operator/tensor/./broadcast_reduce_op.h:L83
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  151. abstract def min(po: minParam): Array[NDArray]

    Permalink

    Computes the min of array elements over given axes.
    
    Defined in src/operator/tensor/./broadcast_reduce_op.h:L46
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  152. abstract def min_axis(po: min_axisParam): Array[NDArray]

    Permalink

    Computes the min of array elements over given axes.
    
    Defined in src/operator/tensor/./broadcast_reduce_op.h:L46
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
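
    The reduction operators above (``max``, ``max_axis``, ``mean``, ``min``,
    ``min_axis``) share the same axis convention; a minimal sketch using the
    Python ``mx.nd`` frontend (assumed equivalent to the generated API above)::

       import mxnet as mx

       x = mx.nd.array([[1, 2, 3], [4, 5, 6]])
       print(mx.nd.max(x, axis=0).asnumpy())    # [4. 5. 6.]
       print(mx.nd.min(x, axis=1).asnumpy())    # [1. 4.]
       print(mx.nd.mean(x, axis=1).asnumpy())   # [2. 5.]
       print(mx.nd.mean(x).asnumpy())           # [3.5], reduces over all axes by default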
  153. abstract def moments(po: momentsParam): Array[NDArray]

    Permalink

    Calculate the mean and variance of `data`.
    
    The mean and variance are calculated by aggregating the contents of data across axes.
    If x is 1-D and axes = [0] this is just the mean and variance of a vector.
    
    Example:
    
         x = `[ [1, 2, 3], [4, 5, 6] ]
         mean, var = moments(data=x, axes=[0])
         mean = [2.5, 3.5, 4.5]
         var = [2.25, 2.25, 2.25]
         mean, var = moments(data=x, axes=[1])
         mean = [2.0, 5.0]
         var = [0.66666667, 0.66666667]
         mean, var = moments(data=x, axes=[0, 1])
         mean = [3.5]
         var = [2.9166667]
    
    
    
    Defined in src/operator/nn/moments.cc:L53
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  154. abstract def mp_lamb_update_phase1(po: mp_lamb_update_phase1Param): Array[NDArray]

    Permalink

    Mixed-precision version of Phase I of the LAMB update.
    It performs the following operations and returns g:
    
              Link to paper: https://arxiv.org/pdf/1904.00962.pdf
    
              .. math::
                  \begin{gather*}
                  grad32 = grad(float16) * rescale_grad
                  if (grad < -clip_gradient)
                  then
                       grad = -clip_gradient
                  if (grad > clip_gradient)
                  then
                       grad = clip_gradient
    
                  mean = beta1 * mean + (1 - beta1) * grad;
                  variance = beta2 * variance + (1. - beta2) * grad ^ 2;
    
                  if (bias_correction)
                  then
                        mean_hat = mean / (1. - beta1^t);
                        var_hat = variance / (1. - beta2^t);
                        g = mean_hat / (var_hat^(1/2) + epsilon) + wd * weight32;
                   else
                        g = mean / (variance^(1/2) + epsilon) + wd * weight32;
                  \end{gather*}
    
    
    
    Defined in src/operator/optimizer_op.cc:L1032
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  155. abstract def mp_lamb_update_phase2(po: mp_lamb_update_phase2Param): Array[NDArray]

    Permalink

    Mixed-precision version of Phase II of the LAMB update.
    It performs the following operations and updates the weights.
    
              Link to paper: https://arxiv.org/pdf/1904.00962.pdf
    
              .. math::
                  \begin{gather*}
                  if (lower_bound >= 0)
                  then
                       r1 = max(r1, lower_bound)
                  if (upper_bound >= 0)
                  then
                        r1 = min(r1, upper_bound)
    
                  if (r1 == 0 or r2 == 0)
                  then
                       lr = lr
                  else
                       lr = lr * (r1/r2)
                  weight32 = weight32 - lr * g
                  weight(float16) = weight32
                  \end{gather*}
    
    
    
    Defined in src/operator/optimizer_op.cc:L1074
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  156. abstract def mp_nag_mom_update(po: mp_nag_mom_updateParam): Array[NDArray]

    Permalink

    Update function for the multi-precision Nesterov Accelerated Gradient (NAG) optimizer.
    
    
    Defined in src/operator/optimizer_op.cc:L744
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  157. abstract def mp_sgd_mom_update(po: mp_sgd_mom_updateParam): Array[NDArray]

    Permalink

    Update function for the multi-precision SGD optimizer
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  158. abstract def mp_sgd_update(po: mp_sgd_updateParam): Array[NDArray]

    Permalink

    Update function for the multi-precision SGD optimizer
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  159. abstract def multi_all_finite(po: multi_all_finiteParam): Array[NDArray]

    Permalink

    Check whether all the floating-point numbers in all the arrays are finite (used for AMP)
    
    
    Defined in src/operator/contrib/all_finite.cc:L132
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  160. abstract def multi_lars(lrs: NDArray, weights_sum_sq: NDArray, grads_sum_sq: NDArray, wds: NDArray, eta: Float, eps: Float, rescale_grad: Float, out: NDArray): Array[NDArray]

    Permalink

    Compute the LARS coefficients of multiple weights and grads from their sums of squares.
    
    
    Defined in src/operator/contrib/multi_lars.cc:L36
    lrs

    Learning rates to scale by LARS coefficient

    weights_sum_sq

    sum of square of weights arrays

    grads_sum_sq

    sum of square of gradients arrays

    wds

    weight decays

    eta

    LARS eta

    eps

    LARS eps

    rescale_grad

    Gradient rescaling factor

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
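
    A hedged sketch of a call via the Python ``mx.nd`` frontend, assuming it exposes
    ``multi_lars`` with the argument names listed above (values are illustrative)::

       import mxnet as mx

       lrs = mx.nd.array([0.1, 0.1])     # base learning rates, one per layer
       w_sq = mx.nd.array([4.0, 9.0])    # per-layer sums of squared weights
       g_sq = mx.nd.array([1.0, 1.0])    # per-layer sums of squared gradients
       wds = mx.nd.array([1e-4, 1e-4])   # per-layer weight decays
       scaled = mx.nd.multi_lars(lrs, w_sq, g_sq, wds,
                                 eta=0.001, eps=1e-8, rescale_grad=1.0)
       print(scaled.asnumpy())           # LARS-scaled learning rates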
  161. abstract def multi_mp_sgd_mom_update(po: multi_mp_sgd_mom_updateParam): Array[NDArray]

    Permalink

    Momentum update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
    
    Momentum update typically yields better convergence rates on neural networks.
    Mathematically it is expressed as:
    
    .. math::
    
      v_1 = \alpha * \nabla J(W_0)\\
      v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
      W_t = W_{t-1} + v_t
    
    It updates the weights using::
    
      v = momentum * v - learning_rate * gradient
      weight += v
    
    Where the parameter ``momentum`` is the decay rate of momentum estimates at each epoch.
    
    
    
    Defined in src/operator/optimizer_op.cc:L471
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
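
    The single-tensor analogue ``sgd_mom_update`` applies the same rule; a minimal
    sketch via the Python ``mx.nd`` frontend (hedged; the (weight, grad, mom) inputs
    are assumed from MXNet's standard optimizer operators)::

       import mxnet as mx

       weight = mx.nd.ones((3,))
       grad = mx.nd.ones((3,)) * 0.5
       mom = mx.nd.zeros((3,))
       # v = momentum * v - lr * grad; weight += v  (state and weight updated in place)
       mx.nd.sgd_mom_update(weight, grad, mom, lr=0.1, momentum=0.9, out=weight)
       print(weight.asnumpy())   # [0.95 0.95 0.95]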
  162. abstract def multi_mp_sgd_update(po: multi_mp_sgd_updateParam): Array[NDArray]

    Permalink

    Update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
    
    It updates the weights using::
    
     weight = weight - learning_rate * (gradient + wd * weight)
    
    
    
    Defined in src/operator/optimizer_op.cc:L416
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  163. abstract def multi_sgd_mom_update(po: multi_sgd_mom_updateParam): Array[NDArray]

    Permalink

    Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
    
    Momentum update typically yields better convergence rates on neural networks.
    Mathematically it is expressed as:
    
    .. math::
    
      v_1 = \alpha * \nabla J(W_0)\\
      v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
      W_t = W_{t-1} + v_t
    
    It updates the weights using::
    
      v = momentum * v - learning_rate * gradient
      weight += v
    
    Where the parameter ``momentum`` is the decay rate of momentum estimates at each epoch.
    
    
    
    Defined in src/operator/optimizer_op.cc:L373
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  164. abstract def multi_sgd_update(po: multi_sgd_updateParam): Array[NDArray]

    Permalink

    Update function for Stochastic Gradient Descent (SGD) optimizer.
    
    It updates the weights using::
    
     weight = weight - learning_rate * (gradient + wd * weight)
    
    
    
    Defined in src/operator/optimizer_op.cc:L328
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  165. abstract def multi_sum_sq(data: Array[NDArray], num_arrays: Integer, out: NDArray): Array[NDArray]

    Permalink

    Compute the sums of squares of multiple arrays
    
    
    Defined in src/operator/contrib/multi_sum_sq.cc:L35
    data

    Arrays

    num_arrays

    number of input arrays.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
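
    A hedged sketch via the Python ``mx.nd`` frontend, assuming the operator takes
    its input arrays positionally followed by ``num_arrays``::

       import mxnet as mx

       a = mx.nd.array([1.0, 2.0])
       b = mx.nd.array([3.0, 4.0, 5.0])
       out = mx.nd.multi_sum_sq(a, b, num_arrays=2)
       print(out.asnumpy())   # [ 5. 50.], one sum of squares per input array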
  166. abstract def nag_mom_update(po: nag_mom_updateParam): Array[NDArray]

    Permalink

    Update function for the Nesterov Accelerated Gradient (NAG) optimizer.
    It updates the weights using the following formula,
    
    .. math::
      v_t = \gamma v_{t-1} + \eta * \nabla J(W_{t-1} - \gamma v_{t-1})\\
      W_t = W_{t-1} - v_t
    
    Where
    :math:`\eta` is the learning rate of the optimizer,
    :math:`\gamma` is the decay rate of the momentum estimate,
    :math:`v_t` is the update vector at time step `t`, and
    :math:`W_t` is the weight vector at time step `t`.
    
    
    
    Defined in src/operator/optimizer_op.cc:L725
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
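
    A hedged sketch via the Python ``mx.nd`` frontend, assuming the usual
    (weight, grad, mom) inputs of MXNet's optimizer operators::

       import mxnet as mx

       weight = mx.nd.ones((2,))
       grad = mx.nd.ones((2,)) * 0.5
       mom = mx.nd.zeros((2,))
       # applies the NAG formula above, updating weight (and mom) in place
       mx.nd.nag_mom_update(weight, grad, mom, lr=0.1, momentum=0.9, wd=0.0, out=weight)
       print(weight.asnumpy())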
  167. abstract def nanprod(po: nanprodParam): Array[NDArray]

    Permalink

    Computes the product of array elements over given axes treating Not a Numbers (``NaN``) as one.
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_prod_value.cc:L46
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  168. abstract def nansum(po: nansumParam): Array[NDArray]

    Permalink

    Computes the sum of array elements over given axes treating Not a Numbers (``NaN``) as zero.
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_sum_value.cc:L101
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  169. abstract def negative(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Numerical negative of the argument, element-wise.
    
    The storage type of ``negative`` output depends upon the input storage type:
    
       - negative(default) = default
       - negative(row_sparse) = row_sparse
       - negative(csr) = csr
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
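
    The storage-type behaviour can be observed directly; a minimal sketch via the
    Python ``mx.nd`` frontend (assumed to expose the same operator)::

       import mxnet as mx

       x = mx.nd.array([[0, -2], [0, 3]]).tostype('row_sparse')
       y = mx.nd.negative(x)
       print(y.stype)       # 'row_sparse' -- input storage type is preserved
       print(y.asnumpy())   # element-wise negated values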
  170. abstract def norm(po: normParam): Array[NDArray]

    Permalink

    Computes the norm on an NDArray.
    
    This operator computes the norm on an NDArray with the specified axis, depending
    on the value of the ord parameter. By default, it computes the L2 norm on the entire
    array. Currently only ord=2 supports sparse ndarrays.
    
    Examples::
    
      x = `[ `[ [1, 2],
            [3, 4] ],
           `[ [2, 2],
            [5, 6] ] ]
    
      norm(x, ord=2, axis=1) = `[ [3.1622777 4.472136 ]
                                [5.3851647 6.3245554] ]
    
      norm(x, ord=1, axis=1) = `[ [4., 6.],
                                [7., 8.] ]
    
      rsp = x.cast_storage('row_sparse')
    
      norm(rsp) = [5.47722578]
    
      csr = x.cast_storage('csr')
    
      norm(csr) = [5.47722578]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_norm_value.cc:L88
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  171. abstract def normal(po: normalParam): Array[NDArray]

    Permalink

    Draw random samples from a normal (Gaussian) distribution.
    
    .. note:: The existing alias ``normal`` is deprecated.
    
    Samples are distributed according to a normal distribution parametrized by *loc* (mean) and *scale*
    (standard deviation).
    
    Example::
    
       normal(loc=0, scale=1, shape=(2,2)) = `[ [ 1.89171135, -1.16881478],
                                              [-1.23474145,  1.55807114] ]
    
    
    Defined in src/operator/random/sample_op.cc:L112
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  172. abstract def one_hot(po: one_hotParam): Array[NDArray]

    Permalink

    Returns a one-hot array.
    
    The locations represented by `indices` take value `on_value`, while all
    other locations take value `off_value`.
    
    `one_hot` operation with `indices` of shape ``(i0, i1)`` and `depth` of ``d`` would result
    in an output array of shape ``(i0, i1, d)`` with::
    
      output[i,j,:] = off_value
      output[i,j,indices[i,j] ] = on_value
    
    Examples::
    
      one_hot([1,0,2,0], 3) = `[ [ 0.  1.  0.]
                               [ 1.  0.  0.]
                               [ 0.  0.  1.]
                               [ 1.  0.  0.] ]
    
      one_hot([1,0,2,0], 3, on_value=8, off_value=1,
              dtype='int32') = `[ [1 8 1]
                                [8 1 1]
                                [1 1 8]
                                [8 1 1] ]
    
      one_hot(`[ [1,0],[1,0],[2,0] ], 3) = `[ `[ [ 0.  1.  0.]
                                          [ 1.  0.  0.] ]
    
                                         `[ [ 0.  1.  0.]
                                          [ 1.  0.  0.] ]
    
                                         `[ [ 0.  0.  1.]
                                          [ 1.  0.  0.] ] ]
    
    
    Defined in src/operator/tensor/indexing_op.cc:L882
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  173. abstract def ones_like(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Return an array of ones with the same shape and type
    as the input array.
    
    Examples::
    
      x = `[ [ 0.,  0.,  0.],
           [ 0.,  0.,  0.] ]
    
      ones_like(x) = `[ [ 1.,  1.,  1.],
                      [ 1.,  1.,  1.] ]
    data

    The input

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  174. abstract def pad(data: NDArray, mode: String, pad_width: Shape, constant_value: Double, out: NDArray): Array[NDArray]

    Permalink

    Pads an input array with a constant or edge values of the array.
    
    .. note:: `Pad` is deprecated. Use `pad` instead.
    
    .. note:: Current implementation only supports 4D and 5D input arrays with padding applied
       only on axes 1, 2 and 3. Expects axes 4 and 5 in `pad_width` to be zero.
    
    This operation pads an input array with either a `constant_value` or edge values
    along each axis of the input array. The amount of padding is specified by `pad_width`.
    
    `pad_width` is a tuple of integer padding widths for each axis of the format
    ``(before_1, after_1, ... , before_N, after_N)``. The `pad_width` should be of length ``2*N``
    where ``N`` is the number of dimensions of the array.
    
    For dimension ``N`` of the input array, ``before_N`` and ``after_N`` indicate how many values
    to add before and after the elements of the array along dimension ``N``.
    The widths of the higher two dimensions ``before_1``, ``after_1``, ``before_2``,
    ``after_2`` must be 0.
    
    Example::
    
       x = `[ [`[ [  1.   2.   3.]
              [  4.   5.   6.] ]
    
             `[ [  7.   8.   9.]
              [ 10.  11.  12.] ] ]
    
    
            `[ `[ [ 11.  12.  13.]
              [ 14.  15.  16.] ]
    
             `[ [ 17.  18.  19.]
              [ 20.  21.  22.] ] ] ]
    
       pad(x,mode="edge", pad_width=(0,0,0,0,1,1,1,1)) =
    
             `[ [`[ [  1.   1.   2.   3.   3.]
                [  1.   1.   2.   3.   3.]
                [  4.   4.   5.   6.   6.]
                [  4.   4.   5.   6.   6.] ]
    
               `[ [  7.   7.   8.   9.   9.]
                [  7.   7.   8.   9.   9.]
                [ 10.  10.  11.  12.  12.]
                [ 10.  10.  11.  12.  12.] ] ]
    
    
              `[ `[ [ 11.  11.  12.  13.  13.]
                [ 11.  11.  12.  13.  13.]
                [ 14.  14.  15.  16.  16.]
                [ 14.  14.  15.  16.  16.] ]
    
               `[ [ 17.  17.  18.  19.  19.]
                [ 17.  17.  18.  19.  19.]
                [ 20.  20.  21.  22.  22.]
                [ 20.  20.  21.  22.  22.] ] ] ]
    
       pad(x, mode="constant", constant_value=0, pad_width=(0,0,0,0,1,1,1,1)) =
    
             `[ [`[ [  0.   0.   0.   0.   0.]
                [  0.   1.   2.   3.   0.]
                [  0.   4.   5.   6.   0.]
                [  0.   0.   0.   0.   0.] ]
    
               `[ [  0.   0.   0.   0.   0.]
                [  0.   7.   8.   9.   0.]
                [  0.  10.  11.  12.   0.]
                [  0.   0.   0.   0.   0.] ] ]
    
    
              `[ `[ [  0.   0.   0.   0.   0.]
                [  0.  11.  12.  13.   0.]
                [  0.  14.  15.  16.   0.]
                [  0.   0.   0.   0.   0.] ]
    
               `[ [  0.   0.   0.   0.   0.]
                [  0.  17.  18.  19.   0.]
                [  0.  20.  21.  22.   0.]
                [  0.   0.   0.   0.   0.] ] ] ]
    
    
    
    
    Defined in src/operator/pad.cc:L765
    data

    An n-dimensional input array.

    mode

    Padding type to use. "constant" pads with constant_value, "edge" pads using the edge values of the input array, and "reflect" pads by reflecting values with respect to the edges.

    pad_width

    Widths of the padding regions applied to the edges of each axis. It is a tuple of integer padding widths for each axis of the format (before_1, after_1, ... , before_N, after_N). It should be of length 2*N where N is the number of dimensions of the array. This is equivalent to pad_width in numpy.pad, but flattened.

    constant_value

    The value used for padding when mode is "constant".

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  175. abstract def pick(po: pickParam): Array[NDArray]

    Permalink

    Picks elements from an input array according to the input indices along the given axis.
    
    Given an input array of shape ``(d0, d1)`` and indices of shape ``(i0,)``, the result will be
    an output array of shape ``(i0,)`` with::
    
      output[i] = input[i, indices[i] ]
    
    By default, if any index mentioned is too large, it is replaced by the index that addresses
    the last element along an axis (the `clip` mode).
    
    This function supports n-dimensional input and (n-1)-dimensional indices arrays.
    
    Examples::
    
      x = `[ [ 1.,  2.],
           [ 3.,  4.],
           [ 5.,  6.] ]
    
      // picks elements with specified indices along axis 0
      pick(x, y=[0,1], 0) = [ 1.,  4.]
    
      // picks elements with specified indices along axis 1
      pick(x, y=[0,1,0], 1) = [ 1.,  4.,  5.]
    
      // picks elements with specified indices along axis 1 using 'wrap' mode
      // to place indices that would normally be out of bounds
      pick(x, y=[2,-1,-2], 1, mode='wrap') = [ 1.,  4.,  5.]
    
      y = `[ [ 1.],
           [ 0.],
           [ 2.] ]
    
      // picks elements with specified indices along axis 1 and dims are maintained
      pick(x, y, 1, keepdims=True) = `[ [ 2.],
                                     [ 3.],
                                     [ 6.] ]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L150
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  176. abstract def preloaded_multi_mp_sgd_mom_update(po: preloaded_multi_mp_sgd_mom_updateParam): Array[NDArray]

    Permalink

    Momentum update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
    
    Momentum update typically yields better convergence rates on neural networks.
    Mathematically it is expressed as:
    
    .. math::
    
      v_1 = \alpha * \nabla J(W_0)\\
      v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
      W_t = W_{t-1} + v_t
    
    It updates the weights using::
    
      v = momentum * v - learning_rate * gradient
      weight += v
    
    Where the parameter ``momentum`` is the decay rate of momentum estimates at each epoch.
    
    
    
    Defined in src/operator/contrib/preloaded_multi_sgd.cc:L199
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  177. abstract def preloaded_multi_mp_sgd_update(po: preloaded_multi_mp_sgd_updateParam): Array[NDArray]

    Permalink

    Update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
    
    It updates the weights using::
    
     weight = weight - learning_rate * (gradient + wd * weight)
    
    
    
    Defined in src/operator/contrib/preloaded_multi_sgd.cc:L139
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  178. abstract def preloaded_multi_sgd_mom_update(po: preloaded_multi_sgd_mom_updateParam): Array[NDArray]

    Permalink

    Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
    
    Momentum update typically yields better convergence rates on neural networks.
    Mathematically it is expressed as:
    
    .. math::
    
      v_1 = \alpha * \nabla J(W_0)\\
      v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
      W_t = W_{t-1} + v_t
    
    It updates the weights using::
    
      v = momentum * v - learning_rate * gradient
      weight += v
    
    Where the parameter ``momentum`` is the decay rate of momentum estimates at each epoch.
    
    
    
    Defined in src/operator/contrib/preloaded_multi_sgd.cc:L90
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  179. abstract def preloaded_multi_sgd_update(po: preloaded_multi_sgd_updateParam): Array[NDArray]

    Permalink

    Update function for Stochastic Gradient Descent (SGD) optimizer.
    
    It updates the weights using::
    
     weight = weight - learning_rate * (gradient + wd * weight)
    
    
    
    Defined in src/operator/contrib/preloaded_multi_sgd.cc:L41
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  180. abstract def prod(po: prodParam): Array[NDArray]

    Permalink

    Computes the product of array elements over given axes.
    
    Defined in src/operator/tensor/./broadcast_reduce_op.h:L30
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  181. abstract def radians(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Converts each element of the input array from degrees to radians.
    
    .. math::
       radians([0, 90, 180, 270, 360]) = [0, \pi/2, \pi, 3\pi/2, 2\pi]
    
    The storage type of ``radians`` output depends upon the input storage type:
    
       - radians(default) = default
       - radians(row_sparse) = row_sparse
       - radians(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L351
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  182. abstract def random_exponential(po: random_exponentialParam): Array[NDArray]

    Permalink

    Draw random samples from an exponential distribution.
    
    Samples are distributed according to an exponential distribution parametrized by *lambda* (rate).
    
    Example::
    
       exponential(lam=4, shape=(2,2)) = `[ [ 0.0097189 ,  0.08999364],
                                          [ 0.04146638,  0.31715935] ]
    
    
    Defined in src/operator/random/sample_op.cc:L136
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  183. abstract def random_gamma(po: random_gammaParam): Array[NDArray]

    Permalink

    Draw random samples from a gamma distribution.
    
    Samples are distributed according to a gamma distribution parametrized by *alpha* (shape) and *beta* (scale).
    
    Example::
    
       gamma(alpha=9, beta=0.5, shape=(2,2)) = `[ [ 7.10486984,  3.37695289],
                                                [ 3.91697288,  3.65933681] ]
    
    
    Defined in src/operator/random/sample_op.cc:L124
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  184. abstract def random_generalized_negative_binomial(po: random_generalized_negative_binomialParam): Array[NDArray]

    Permalink

    Draw random samples from a generalized negative binomial distribution.
    
    Samples are distributed according to a generalized negative binomial distribution parametrized by
    *mu* (mean) and *alpha* (dispersion). *alpha* is defined as *1/k* where *k* is the failure limit of the
    number of unsuccessful experiments (generalized to real numbers).
    Samples will always be returned as a floating point data type.
    
    Example::
    
       generalized_negative_binomial(mu=2.0, alpha=0.3, shape=(2,2)) = `[ [ 2.,  1.],
                                                                        [ 6.,  4.] ]
    
    
    Defined in src/operator/random/sample_op.cc:L178
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  185. abstract def random_negative_binomial(po: random_negative_binomialParam): Array[NDArray]

    Permalink

    Draw random samples from a negative binomial distribution.
    
    Samples are distributed according to a negative binomial distribution parametrized by
    *k* (limit of unsuccessful experiments) and *p* (failure probability in each experiment).
    Samples will always be returned as a floating point data type.
    
    Example::
    
       negative_binomial(k=3, p=0.4, shape=(2,2)) = `[ [ 4.,  7.],
                                                     [ 2.,  5.] ]
    
    
    Defined in src/operator/random/sample_op.cc:L163
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  186. abstract def random_normal(po: random_normalParam): Array[NDArray]

    Permalink

    Draw random samples from a normal (Gaussian) distribution.
    
    .. note:: The existing alias ``normal`` is deprecated.
    
    Samples are distributed according to a normal distribution parametrized by *loc* (mean) and *scale*
    (standard deviation).
    
    Example::
    
       normal(loc=0, scale=1, shape=(2,2)) = `[ [ 1.89171135, -1.16881478],
                                              [-1.23474145,  1.55807114] ]
    
    
    Defined in src/operator/random/sample_op.cc:L112
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  187. abstract def random_pdf_dirichlet(sample: NDArray, alpha: NDArray, is_log: Boolean, out: NDArray): Array[NDArray]

    Permalink

    Computes the value of the PDF of *sample* of
    Dirichlet distributions with parameter *alpha*.
    
    The shape of *alpha* must match the leftmost subshape of *sample*.  That is, *sample*
    can have the same shape as *alpha*, in which case the output contains one density per
    distribution, or *sample* can be a tensor of tensors with that shape, in which case
    the output is a tensor of densities such that the densities at index *i* in the output
    are given by the samples at index *i* in *sample* parameterized by the value of *alpha*
    at index *i*.
    
    Examples::
    
        random_pdf_dirichlet(sample=`[ [1,2],[2,3],[3,4] ], alpha=[2.5, 2.5]) =
            [38.413498, 199.60245, 564.56085]
    
        sample = `[ `[ [1, 2, 3], [10, 20, 30], [100, 200, 300] ],
                  `[ [0.1, 0.2, 0.3], [0.01, 0.02, 0.03], [0.001, 0.002, 0.003] ] ]
    
        random_pdf_dirichlet(sample=sample, alpha=[0.1, 0.4, 0.9]) =
            `[ [2.3257459e-02, 5.8420084e-04, 1.4674458e-05],
             [9.2589635e-01, 3.6860607e+01, 1.4674468e+03] ]
    
    
    Defined in src/operator/random/pdf_op.cc:L315
    sample

    Samples from the distributions.

    alpha

    Concentration parameters of the distributions.

    is_log

    If set, compute the density of the log-probability instead of the probability.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  188. abstract def random_pdf_exponential(sample: NDArray, lam: NDArray, is_log: Boolean, out: NDArray): Array[NDArray]

    Permalink

    Computes the value of the PDF of *sample* of
    exponential distributions with parameters *lam* (rate).
    
    The shape of *lam* must match the leftmost subshape of *sample*.  That is, *sample*
    can have the same shape as *lam*, in which case the output contains one density per
    distribution, or *sample* can be a tensor of tensors with that shape, in which case
    the output is a tensor of densities such that the densities at index *i* in the output
    are given by the samples at index *i* in *sample* parameterized by the value of *lam*
    at index *i*.
    
    Examples::
    
      random_pdf_exponential(sample=`[ [1, 2, 3] ], lam=[1]) =
          `[ [0.36787945, 0.13533528, 0.04978707] ]
    
      sample = `[ [1,2,3],
                [1,2,3],
                [1,2,3] ]
    
      random_pdf_exponential(sample=sample, lam=[1,0.5,0.25]) =
          `[ [0.36787945, 0.13533528, 0.04978707],
           [0.30326533, 0.18393973, 0.11156508],
           [0.1947002,  0.15163267, 0.11809164] ]
    
    
    Defined in src/operator/random/pdf_op.cc:L304
    sample

    Samples from the distributions.

    lam

    Lambda (rate) parameters of the distributions.

    is_log

    If set, compute the density of the log-probability instead of the probability.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  189. abstract def random_pdf_gamma(sample: NDArray, alpha: NDArray, is_log: Boolean, beta: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the value of the PDF of *sample* of
    gamma distributions with parameters *alpha* (shape) and *beta* (rate).
    
    *alpha* and *beta* must have the same shape, which must match the leftmost subshape
    of *sample*.  That is, *sample* can have the same shape as *alpha* and *beta*, in which
    case the output contains one density per distribution, or *sample* can be a tensor
    of tensors with that shape, in which case the output is a tensor of densities such that
    the densities at index *i* in the output are given by the samples at index *i* in *sample*
    parameterized by the values of *alpha* and *beta* at index *i*.
    
    Examples::
    
      random_pdf_gamma(sample=`[ [1,2,3,4,5] ], alpha=[5], beta=[1]) =
          `[ [0.01532831, 0.09022352, 0.16803136, 0.19536681, 0.17546739] ]
    
      sample = `[ [1, 2, 3, 4, 5],
                [2, 3, 4, 5, 6],
                [3, 4, 5, 6, 7] ]
    
      random_pdf_gamma(sample=sample, alpha=[5,6,7], beta=[1,1,1]) =
          `[ [0.01532831, 0.09022352, 0.16803136, 0.19536681, 0.17546739],
           [0.03608941, 0.10081882, 0.15629345, 0.17546739, 0.16062315],
           [0.05040941, 0.10419563, 0.14622283, 0.16062315, 0.14900276] ]
    
    
    Defined in src/operator/random/pdf_op.cc:L302
    sample

    Samples from the distributions.

    alpha

    Alpha (shape) parameters of the distributions.

    is_log

    If set, compute the density of the log-probability instead of the probability.

    beta

    Beta (scale) parameters of the distributions.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  190. abstract def random_pdf_generalized_negative_binomial(sample: NDArray, mu: NDArray, is_log: Boolean, alpha: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the value of the PDF of *sample* of
    generalized negative binomial distributions with parameters *mu* (mean)
    and *alpha* (dispersion).  This can be understood as a reparameterization of
    the negative binomial, where *k* = *1 / alpha* and *p* = *1 / (mu \* alpha + 1)*.
    
    *mu* and *alpha* must have the same shape, which must match the leftmost subshape
    of *sample*.  That is, *sample* can have the same shape as *mu* and *alpha*, in which
    case the output contains one density per distribution, or *sample* can be a tensor
    of tensors with that shape, in which case the output is a tensor of densities such that
    the densities at index *i* in the output are given by the samples at index *i* in *sample*
    parameterized by the values of *mu* and *alpha* at index *i*.
    
    Examples::
    
        random_pdf_generalized_negative_binomial(sample=`[ [1, 2, 3, 4] ], alpha=[1], mu=[1]) =
            `[ [0.25, 0.125, 0.0625, 0.03125] ]
    
        sample = `[ [1,2,3,4],
                  [1,2,3,4] ]
        random_pdf_generalized_negative_binomial(sample=sample, alpha=[1, 0.6666], mu=[1, 1.5]) =
            `[ [0.25,       0.125,      0.0625,     0.03125   ],
             [0.26517063, 0.16573331, 0.09667706, 0.05437994] ]
    
    
    Defined in src/operator/random/pdf_op.cc:L313
    sample

    Samples from the distributions.

    mu

    Means of the distributions.

    is_log

    If set, compute the density of the log-probability instead of the probability.

    alpha

    Alpha (dispersion) parameters of the distributions.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  191. abstract def random_pdf_negative_binomial(sample: NDArray, k: NDArray, is_log: Boolean, p: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the value of the PDF of samples of
    negative binomial distributions with parameters *k* (failure limit) and *p* (failure probability).
    
    *k* and *p* must have the same shape, which must match the leftmost subshape
    of *sample*.  That is, *sample* can have the same shape as *k* and *p*, in which
    case the output contains one density per distribution, or *sample* can be a tensor
    of tensors with that shape, in which case the output is a tensor of densities such that
    the densities at index *i* in the output are given by the samples at index *i* in *sample*
    parameterized by the values of *k* and *p* at index *i*.
    
    Examples::
    
        random_pdf_negative_binomial(sample=`[ [1,2,3,4] ], k=[1], p=[0.5]) =
            `[ [0.25, 0.125, 0.0625, 0.03125] ]
    
        # Note that k may be real-valued
        sample = `[ [1,2,3,4],
                  [1,2,3,4] ]
        random_pdf_negative_binomial(sample=sample, k=[1, 1.5], p=[0.5, 0.5]) =
            `[ [0.25,       0.125,      0.0625,     0.03125   ],
             [0.26516506, 0.16572815, 0.09667476, 0.05437956] ]
    
    
    Defined in src/operator/random/pdf_op.cc:L309
    sample

    Samples from the distributions.

    k

    Limits of unsuccessful experiments.

    is_log

    If set, compute the density of the log-probability instead of the probability.

    p

    Failure probabilities in each experiment.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  192. abstract def random_pdf_normal(sample: NDArray, mu: NDArray, is_log: Boolean, sigma: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the value of the PDF of *sample* of
    normal distributions with parameters *mu* (mean) and *sigma* (standard deviation).
    
    *mu* and *sigma* must have the same shape, which must match the leftmost subshape
    of *sample*.  That is, *sample* can have the same shape as *mu* and *sigma*, in which
    case the output contains one density per distribution, or *sample* can be a tensor
    of tensors with that shape, in which case the output is a tensor of densities such that
    the densities at index *i* in the output are given by the samples at index *i* in *sample*
    parameterized by the values of *mu* and *sigma* at index *i*.
    
    Examples::
    
        sample = `[ [-2, -1, 0, 1, 2] ]
        random_pdf_normal(sample=sample, mu=[0], sigma=[1]) =
            `[ [0.05399097, 0.24197073, 0.3989423, 0.24197073, 0.05399097] ]
    
        random_pdf_normal(sample=sample*2, mu=[0,0], sigma=[1,2]) =
            `[ [0.05399097, 0.24197073, 0.3989423,  0.24197073, 0.05399097],
             [0.12098537, 0.17603266, 0.19947115, 0.17603266, 0.12098537] ]
    
    
    Defined in src/operator/random/pdf_op.cc:L299
    sample

    Samples from the distributions.

    mu

    Means of the distributions.

    is_log

    If set, compute the density of the log-probability instead of the probability.

    sigma

    Standard deviations of the distributions.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  193. abstract def random_pdf_poisson(sample: NDArray, lam: NDArray, is_log: Boolean, out: NDArray): Array[NDArray]

    Permalink

    Computes the value of the PDF of *sample* of
    Poisson distributions with parameters *lam* (rate).
    
    The shape of *lam* must match the leftmost subshape of *sample*.  That is, *sample*
    can have the same shape as *lam*, in which case the output contains one density per
    distribution, or *sample* can be a tensor of tensors with that shape, in which case
    the output is a tensor of densities such that the densities at index *i* in the output
    are given by the samples at index *i* in *sample* parameterized by the value of *lam*
    at index *i*.
    
    Examples::
    
        random_pdf_poisson(sample=`[ [0,1,2,3] ], lam=[1]) =
            `[ [0.36787945, 0.36787945, 0.18393973, 0.06131324] ]
    
        sample = `[ [0,1,2,3],
                  [0,1,2,3],
                  [0,1,2,3] ]
    
        random_pdf_poisson(sample=sample, lam=[1,2,3]) =
            `[ [0.36787945, 0.36787945, 0.18393973, 0.06131324],
             [0.13533528, 0.27067056, 0.27067056, 0.18044704],
             [0.04978707, 0.14936121, 0.22404182, 0.22404182] ]
    
    
    Defined in src/operator/random/pdf_op.cc:L306
    sample

    Samples from the distributions.

    lam

    Lambda (rate) parameters of the distributions.

    is_log

    If set, compute the density of the log-probability instead of the probability.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  194. abstract def random_pdf_uniform(sample: NDArray, low: NDArray, is_log: Boolean, high: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the value of the PDF of *sample* of
    uniform distributions on the intervals given by *[low,high)*.
    
    *low* and *high* must have the same shape, which must match the leftmost subshape
    of *sample*.  That is, *sample* can have the same shape as *low* and *high*, in which
    case the output contains one density per distribution, or *sample* can be a tensor
    of tensors with that shape, in which case the output is a tensor of densities such that
    the densities at index *i* in the output are given by the samples at index *i* in *sample*
    parameterized by the values of *low* and *high* at index *i*.
    
    Examples::
    
        random_pdf_uniform(sample=`[ [1,2,3,4] ], low=[0], high=[10]) = [0.1, 0.1, 0.1, 0.1]
    
        sample = `[ `[ [1, 2, 3],
                   [1, 2, 3] ],
                  `[ [1, 2, 3],
                   [1, 2, 3] ] ]
        low  = `[ [0, 0],
                [0, 0] ]
        high = `[ [ 5, 10],
                [15, 20] ]
        random_pdf_uniform(sample=sample, low=low, high=high) =
            `[ `[ [0.2,        0.2,        0.2    ],
              [0.1,        0.1,        0.1    ] ],
             `[ [0.06667,    0.06667,    0.06667],
              [0.05,       0.05,       0.05   ] ] ]
    
    
    
    Defined in src/operator/random/pdf_op.cc:L297
    sample

    Samples from the distributions.

    low

    Lower bounds of the distributions.

    is_log

    If set, compute the density of the log-probability instead of the probability.

    high

    Upper bounds of the distributions.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  195. abstract def random_poisson(po: random_poissonParam): Array[NDArray]

    Permalink

    Draw random samples from a Poisson distribution.
    
    Samples are distributed according to a Poisson distribution parametrized by *lambda* (rate).
    Samples will always be returned as a floating point data type.
    
    Example::
    
       poisson(lam=4, shape=(2,2)) = `[ [ 5.,  2.],
                                      [ 4.,  6.] ]
    
    
    Defined in src/operator/random/sample_op.cc:L149
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  196. abstract def random_randint(po: random_randintParam): Array[NDArray]

    Permalink

    Draw random samples from a discrete uniform distribution.
    
    Samples are uniformly distributed over the half-open interval *[low, high)*
    (includes *low*, but excludes *high*).
    
    Example::
    
       randint(low=0, high=5, shape=(2,2)) = `[ [ 0,  2],
                                              [ 3,  1] ]
    
    
    
    Defined in src/operator/random/sample_op.cc:L193
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  197. abstract def random_uniform(po: random_uniformParam): Array[NDArray]

    Permalink

    Draw random samples from a uniform distribution.
    
    .. note:: The existing alias ``uniform`` is deprecated.
    
    Samples are uniformly distributed over the half-open interval *[low, high)*
    (includes *low*, but excludes *high*).
    
    Example::
    
       uniform(low=0, high=1, shape=(2,2)) = `[ [ 0.60276335,  0.85794562],
                                              [ 0.54488319,  0.84725171] ]
    
    
    
    Defined in src/operator/random/sample_op.cc:L95
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  198. abstract def ravel_multi_index(data: NDArray, shape: Shape, out: NDArray): Array[NDArray]

    Permalink

    Converts a batch of index arrays into an array of flat indices. The operator follows numpy conventions, so a single multi-index is given by a column of the input matrix. The leading dimension of the shape may be left unspecified by using -1 as a placeholder.
    
    Examples::
    
       A = `[ [3,6,6],[4,5,1] ]
       ravel(A, shape=(7,6)) = [22,41,37]
       ravel(A, shape=(-1,6)) = [22,41,37]
    
    
    
    Defined in src/operator/tensor/ravel.cc:L41
    data

    Batch of multi-indices

    shape

    Shape of the array into which the multi-indices apply.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
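
    The column convention matches ``numpy.ravel_multi_index``; a minimal sketch via
    the Python ``mx.nd`` frontend (assumed to expose the same operator)::

       import mxnet as mx
       import numpy as np

       A = mx.nd.array([[3, 6, 6], [4, 5, 1]])
       print(mx.nd.ravel_multi_index(A, shape=(7, 6)).asnumpy())    # [22. 41. 37.]
       print(np.ravel_multi_index([[3, 6, 6], [4, 5, 1]], (7, 6)))  # [22 41 37]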
  199. abstract def rcbrt(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise inverse cube-root value of the input.
    
    .. math::
       rcbrt(x) = 1/\sqrt[3]{x}
    
    Example::
    
       rcbrt([1,8,-125]) = [1.0, 0.5, -0.2]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L323
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  200. abstract def reciprocal(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the reciprocal of the argument, element-wise.
    
    Calculates 1/x.
    
    Example::
    
        reciprocal([-2, 1, 3, 1.6, 0.2]) = [-0.5, 1.0, 0.33333334, 0.625, 5.0]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L43
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  201. abstract def relu(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes rectified linear activation.
    
    .. math::
       max(features, 0)
    
    The storage type of ``relu`` output depends upon the input storage type:
    
       - relu(default) = default
       - relu(row_sparse) = row_sparse
       - relu(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L85
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  202. abstract def repeat(data: NDArray, repeats: Integer, axis: Integer, out: NDArray): Array[NDArray]

    Permalink

    Repeats elements of an array.
    By default, ``repeat`` flattens the input array into 1-D and then repeats the
    elements::
      x = `[ [ 1, 2],
           [ 3, 4] ]
      repeat(x, repeats=2) = [ 1.,  1.,  2.,  2.,  3.,  3.,  4.,  4.]
    The parameter ``axis`` specifies the axis along which to perform repeat::
      repeat(x, repeats=2, axis=1) = `[ [ 1.,  1.,  2.,  2.],
                                      [ 3.,  3.,  4.,  4.] ]
      repeat(x, repeats=2, axis=0) = `[ [ 1.,  2.],
                                      [ 1.,  2.],
                                      [ 3.,  4.],
                                      [ 3.,  4.] ]
      repeat(x, repeats=2, axis=-1) = `[ [ 1.,  1.,  2.,  2.],
                                       [ 3.,  3.,  4.,  4.] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L743
    data

    Input data array

    repeats

    The number of repetitions for each element.

    axis

    The axis along which to repeat values. The negative numbers are interpreted counting from the backward. By default, use the flattened input array, and return a flat output array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  203. abstract def reset_arrays(data: Array[NDArray], num_arrays: Integer, out: NDArray): Array[NDArray]

    Permalink

    Set multiple arrays to zero.
    
    
    Defined in src/operator/contrib/reset_arrays.cc:L35
    data

    Arrays

    num_arrays

    number of input arrays.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
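
    A hedged sketch via the Python ``mx.nd`` frontend, assuming the operator takes
    its input arrays positionally followed by ``num_arrays`` and zeroes them in place::

       import mxnet as mx

       a = mx.nd.array([1.0, 2.0])
       b = mx.nd.array([3.0, 4.0])
       mx.nd.reset_arrays(a, b, num_arrays=2)
       print(a.asnumpy(), b.asnumpy())   # both arrays zeroed in place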
  204. abstract def reshape(po: reshapeParam): Array[NDArray]

    Permalink

    Reshapes the input array.
    .. note:: ``Reshape`` is deprecated, use ``reshape``
    Given an array and a shape, this function returns a copy of the array in the new shape.
    The shape is a tuple of integers such as (2,3,4). The size of the new shape should be the same as the size of the input array.
    Example::
      reshape([1,2,3,4], shape=(2,2)) = `[ [1,2], [3,4] ]
    Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:
    - ``0``  copy this dimension from the input to the output shape.
      Example::
      - input shape = (2,3,4), shape = (4,0,2), output shape = (4,3,2)
      - input shape = (2,3,4), shape = (2,0,0), output shape = (2,3,4)
    - ``-1`` infers the dimension of the output shape by using the remainder of the input dimensions
      keeping the size of the new array the same as that of the input array.
      At most one dimension of shape can be -1.
      Example::
      - input shape = (2,3,4), shape = (6,1,-1), output shape = (6,1,4)
      - input shape = (2,3,4), shape = (3,-1,8), output shape = (3,1,8)
      - input shape = (2,3,4), shape=(-1,), output shape = (24,)
    - ``-2`` copy all/remainder of the input dimensions to the output shape.
      Example::
      - input shape = (2,3,4), shape = (-2,), output shape = (2,3,4)
      - input shape = (2,3,4), shape = (2,-2), output shape = (2,3,4)
      - input shape = (2,3,4), shape = (-2,1,1), output shape = (2,3,4,1,1)
    - ``-3`` use the product of two consecutive dimensions of the input shape as the output dimension.
      Example::
      - input shape = (2,3,4), shape = (-3,4), output shape = (6,4)
      - input shape = (2,3,4,5), shape = (-3,-3), output shape = (6,20)
      - input shape = (2,3,4), shape = (0,-3), output shape = (2,12)
      - input shape = (2,3,4), shape = (-3,-2), output shape = (6,4)
    - ``-4`` split one dimension of the input into two dimensions passed subsequent to -4 in shape (can contain -1).
      Example::
      - input shape = (2,3,4), shape = (-4,1,2,-2), output shape =(1,2,3,4)
      - input shape = (2,3,4), shape = (2,-4,-1,3,-2), output shape = (2,1,3,4)
    If the argument `reverse` is set to 1, then the special values are inferred from right to left.
      Example::
      - without reverse=1, for input shape = (10,5,4), shape = (-1,0), output shape would be (40,5)
      - with reverse=1, output shape will be (50,4).
    
    
    Defined in src/operator/tensor/matrix_op.cc:L174
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
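
    The special shape values can be checked quickly; a minimal sketch via the Python
    ``mx.nd`` frontend (assumed to expose the same operator)::

       import mxnet as mx

       x = mx.nd.arange(24).reshape((2, 3, 4))
       print(mx.nd.reshape(x, shape=(4, 0, 2)).shape)      # (4, 3, 2): 0 copies dim 1
       print(mx.nd.reshape(x, shape=(6, 1, -1)).shape)     # (6, 1, 4): -1 is inferred
       print(mx.nd.reshape(x, shape=(-3, 4)).shape)        # (6, 4): -3 merges dims 0 and 1
       print(mx.nd.reshape(x, shape=(-4, 1, 2, -2)).shape) # (1, 2, 3, 4): -4 splits dim 0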
  205. abstract def reshape_like(po: reshape_likeParam): Array[NDArray]

    Permalink

    Reshape some or all dimensions of `lhs` to have the same shape as some or all dimensions of `rhs`.
    
    Returns a **view** of the `lhs` array with a new shape without altering any data.
    
    Example::
    
      x = [1, 2, 3, 4, 5, 6]
      y = `[ [0, -4], [3, 2], [2, 2] ]
      reshape_like(x, y) = `[ [1, 2], [3, 4], [5, 6] ]
    
    More precise control over how dimensions are inherited is achieved by specifying
    slices over the `lhs` and `rhs` array dimensions. Only the sliced `lhs` dimensions
    are reshaped to the `rhs` sliced dimensions, with the non-sliced `lhs` dimensions staying the same.
    
      Examples::
    
      - lhs shape = (30,7), rhs shape = (15,2,4), lhs_begin=0, lhs_end=1, rhs_begin=0, rhs_end=2, output shape = (15,2,7)
      - lhs shape = (3, 5), rhs shape = (1,15,4), lhs_begin=0, lhs_end=2, rhs_begin=1, rhs_end=2, output shape = (15,)
    
    Negative indices are supported, and `None` can be used for either `lhs_end` or `rhs_end` to indicate the end of the range.
    
      Example::
    
      - lhs shape = (30, 12), rhs shape = (4, 2, 2, 3), lhs_begin=-1, lhs_end=None, rhs_begin=1, rhs_end=None, output shape = (30, 2, 2, 3)
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L511
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  206. abstract def reverse(data: NDArray, axis: Shape, out: NDArray): Array[NDArray]

    Permalink

    Reverses the order of elements along given axis while preserving array shape.
    Note: reverse and flip are equivalent. We use reverse in the following examples.
    Examples::
      x = `[ [ 0.,  1.,  2.,  3.,  4.],
           [ 5.,  6.,  7.,  8.,  9.] ]
      reverse(x, axis=0) = `[ [ 5.,  6.,  7.,  8.,  9.],
                            [ 0.,  1.,  2.,  3.,  4.] ]
      reverse(x, axis=1) = `[ [ 4.,  3.,  2.,  1.,  0.],
                            [ 9.,  8.,  7.,  6.,  5.] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L831
    data

    Input data array

    axis

    The axis which to reverse elements.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  207. abstract def rint(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise rounded value to the nearest integer of the input.
    
    .. note::
       - For input ``n.5``, ``rint`` returns ``n`` while ``round`` returns ``n+1``.
       - For input ``-n.5``, both ``rint`` and ``round`` return ``-n-1``.
    
    Example::
    
       rint([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2.,  1., -2.,  2.,  2.]
    
    The storage type of ``rint`` output depends upon the input storage type:
    
       - rint(default) = default
       - rint(row_sparse) = row_sparse
       - rint(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L798
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  208. abstract def rmsprop_update(po: rmsprop_updateParam): Array[NDArray]

    Permalink

    Update function for the `RMSProp` optimizer.
    
    `RMSProp` is a variant of stochastic gradient descent where the gradients are
    divided by a cache which grows with the sum of squares of recent gradients.
    
    `RMSProp` is similar to `AdaGrad`, a popular variant of `SGD` which adaptively
    tunes the learning rate of each parameter. `AdaGrad` lowers the learning rate for
    each parameter monotonically over the course of training.
    While this is analytically motivated for convex optimizations, it may not be ideal
    for non-convex problems. `RMSProp` deals with this heuristically by allowing the
    learning rates to rebound as the denominator decays over time.
    
    Define the Root Mean Square (RMS) error criterion of the gradient as
    :math:`RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}`, where :math:`g` represents the
    gradient and :math:`E[g^2]_t` is the decaying average over past squared gradients.
    
    The :math:`E[g^2]_t` is given by:
    
    .. math::
      E[g^2]_t = \gamma * E[g^2]_{t-1} + (1-\gamma) * g_t^2
    
    The update step is
    
    .. math::
      \theta_{t+1} = \theta_t - \frac{\eta}{RMS[g]_t} g_t
    
    The RMSProp code follows the version in
    http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
    Tieleman & Hinton, 2012.
    
    Hinton suggests the momentum term :math:`\gamma` to be 0.9 and the learning rate
    :math:`\eta` to be 0.001.
    
    
    
    Defined in src/operator/optimizer_op.cc:L796
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
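
    A hedged sketch via the Python ``mx.nd`` frontend, assuming the usual
    (weight, grad, n) inputs with ``n`` holding the running :math:`E[g^2]` state::

       import mxnet as mx

       weight = mx.nd.ones((2,))
       grad = mx.nd.ones((2,)) * 0.5
       n = mx.nd.zeros((2,))   # running average of squared gradients
       mx.nd.rmsprop_update(weight, grad, n,
                            lr=0.001, gamma1=0.9, epsilon=1e-8, out=weight)
       print(weight.asnumpy())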
  209. abstract def rmspropalex_update(po: rmspropalex_updateParam): Array[NDArray]

    Permalink

    Update function for the `RMSPropAlex` optimizer.
    
    `RMSPropAlex` is a centered version of `RMSProp`: it also tracks the first
    moment :math:`E[g]_t` and subtracts its square from the cache, as the update
    equations below show.
    
    Let :math:`E[g^2]_t` be the decaying average over past squared gradients and
    :math:`E[g]_t` the decaying average over past gradients.
    
    .. math::
      E[g^2]_t = \gamma_1 * E[g^2]_{t-1} + (1 - \gamma_1) * g_t^2\\
      E[g]_t = \gamma_1 * E[g]_{t-1} + (1 - \gamma_1) * g_t\\
      \Delta_t = \gamma_2 * \Delta_{t-1} - \frac{\eta}{\sqrt{E[g^2]_t - E[g]_t^2 + \epsilon}} g_t\\
    
    The update step is
    
    .. math::
      \theta_{t+1} = \theta_t + \Delta_t
    
    The RMSPropAlex code follows the version in
    http://arxiv.org/pdf/1308.0850v5.pdf Eq(38) - Eq(45) by Alex Graves, 2013.
    
    Graves suggests the momentum term :math:`\gamma_1` to be 0.95, :math:`\gamma_2`
    to be 0.9 and the learning rate :math:`\eta` to be 0.0001.
    
    
    Defined in src/operator/optimizer_op.cc:L835
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
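    A corresponding plain-Java sketch of the centered variant above, where `eg`, `eg2`
    and `delta` play the roles of :math:`E[g]_t`, :math:`E[g^2]_t` and :math:`\Delta_t`
    (hypothetical names; not part of the MXNet Java API)::

      public class RmsPropAlexSketch {
          public static void update(double[] w, double[] g, double[] eg, double[] eg2,
                                    double[] delta, double lr, double g1, double g2, double eps) {
              for (int i = 0; i < w.length; i++) {
                  eg2[i] = g1 * eg2[i] + (1 - g1) * g[i] * g[i];  // E[g^2]_t
                  eg[i]  = g1 * eg[i]  + (1 - g1) * g[i];         // E[g]_t
                  // Delta_t = gamma_2 * Delta_{t-1} - eta / sqrt(E[g^2]_t - E[g]_t^2 + eps) * g_t
                  delta[i] = g2 * delta[i]
                           - lr / Math.sqrt(eg2[i] - eg[i] * eg[i] + eps) * g[i];
                  w[i] += delta[i];                               // theta_{t+1} = theta_t + Delta_t
              }
          }

          public static void main(String[] args) {
              double[] w = {1.0}, g = {0.2};
              // Graves' suggested values: gamma_1 = 0.95, gamma_2 = 0.9, eta = 0.0001
              update(w, g, new double[1], new double[1], new double[1], 1e-4, 0.95, 0.9, 1e-8);
              System.out.println(w[0]);
          }
      }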
  210. abstract def round(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the element-wise value of the input rounded to the nearest integer.
    
    Example::
    
       round([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2.,  2., -2.,  2.,  2.]
    
    The storage type of ``round`` output depends upon the input storage type:
    
      - round(default) = default
      - round(row_sparse) = row_sparse
      - round(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L777
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  211. abstract def rsqrt(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise inverse square-root value of the input.
    
    .. math::
       rsqrt(x) = 1/\sqrt{x}
    
    Example::
    
       rsqrt([4,9,16]) = [0.5, 0.33333334, 0.25]
    
    The storage type of ``rsqrt`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L221
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  212. abstract def sample_exponential(po: sample_exponentialParam): Array[NDArray]

    Permalink

    Concurrent sampling from multiple
    exponential distributions with parameters lambda (rate).
    
    The parameters of the distributions are provided as an input array.
    Let *[s]* be the shape of the input array, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.
    
    For any valid *n*-dimensional index *i* with respect to the input array, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input value at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input array.
    
    Examples::
    
       lam = [ 1.0, 8.5 ]
    
       // Draw a single sample for each distribution
       sample_exponential(lam) = [ 0.51837951,  0.09994757]
    
       // Draw a vector containing two samples for each distribution
       sample_exponential(lam, shape=(2)) = `[ [ 0.51837951,  0.19866663],
                                             [ 0.09994757,  0.50447971] ]
    
    
    Defined in src/operator/random/multisample_op.cc:L283
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
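    For illustration, a sketch of the shape contract above for a 1-D input and a 1-D shape
    parameter, drawing each sample by inverse-CDF as x = -ln(1 - u) / lambda (hypothetical
    names; not part of the MXNet Java API)::

      import java.util.Random;

      public class SampleExponentialSketch {
          // Output has shape [s] x [t]: one row of shapeT samples per input rate.
          public static double[][] sampleExponential(double[] lam, int shapeT, Random rng) {
              double[][] out = new double[lam.length][shapeT];
              for (int i = 0; i < lam.length; i++)
                  for (int j = 0; j < shapeT; j++)
                      out[i][j] = -Math.log(1.0 - rng.nextDouble()) / lam[i];
              return out;
          }

          public static void main(String[] args) {
              double[][] s = sampleExponential(new double[]{1.0, 8.5}, 2, new Random(0));
              for (double[] row : s) System.out.println(java.util.Arrays.toString(row));
          }
      }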
  213. abstract def sample_gamma(po: sample_gammaParam): Array[NDArray]

    Permalink

    Concurrent sampling from multiple
    gamma distributions with parameters *alpha* (shape) and *beta* (scale).
    
    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.
    
    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.
    
    Examples::
    
       alpha = [ 0.0, 2.5 ]
       beta = [ 1.0, 0.7 ]
    
       // Draw a single sample for each distribution
       sample_gamma(alpha, beta) = [ 0.        ,  2.25797319]
    
       // Draw a vector containing two samples for each distribution
       sample_gamma(alpha, beta, shape=(2)) = `[ [ 0.        ,  0.        ],
                                               [ 2.25797319,  1.70734084] ]
    
    
    Defined in src/operator/random/multisample_op.cc:L281
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  214. abstract def sample_generalized_negative_binomial(po: sample_generalized_negative_binomialParam): Array[NDArray]

    Permalink

    Concurrent sampling from multiple
    generalized negative binomial distributions with parameters *mu* (mean) and *alpha* (dispersion).
    
    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.
    
    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.
    
    Samples will always be returned as a floating point data type.
    
    Examples::
    
       mu = [ 2.0, 2.5 ]
       alpha = [ 1.0, 0.1 ]
    
       // Draw a single sample for each distribution
       sample_generalized_negative_binomial(mu, alpha) = [ 0.,  3.]
    
       // Draw a vector containing two samples for each distribution
       sample_generalized_negative_binomial(mu, alpha, shape=(2)) = `[ [ 0.,  3.],
                                                                     [ 3.,  1.] ]
    
    
    Defined in src/operator/random/multisample_op.cc:L292
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  215. abstract def sample_multinomial(po: sample_multinomialParam): Array[NDArray]

    Permalink

    Concurrent sampling from multiple multinomial distributions.
    
    *data* is an *n* dimensional array whose last dimension has length *k*, where
    *k* is the number of possible outcomes of each multinomial distribution. This
    operator will draw *shape* samples from each distribution. If shape is empty
    one sample will be drawn from each distribution.
    
    If *get_prob* is true, a second array containing the log likelihood of the drawn
    samples will also be returned. This is usually used in reinforcement learning,
    where you can provide the reward as the head gradient for this array to estimate
    the gradient.
    
    Note that the input distribution must be normalized, i.e. *data* must sum to
    1 along its last axis.
    
    Examples::
    
       probs = `[ [0, 0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1, 0] ]
    
       // Draw a single sample for each distribution
       sample_multinomial(probs) = [3, 0]
    
       // Draw a vector containing two samples for each distribution
       sample_multinomial(probs, shape=(2)) = `[ [4, 2],
                                               [0, 0] ]
    
       // requests log likelihood
       sample_multinomial(probs, get_prob=True) = [2, 1], [0.2, 0.3]
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
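    For illustration, a sketch of drawing one sample per row by CDF inversion, assuming
    each row of `probs` sums to 1 as required above (hypothetical names; not part of the
    MXNet Java API)::

      import java.util.Random;

      public class SampleMultinomialSketch {
          public static int sampleOne(double[] probs, Random rng) {
              double u = rng.nextDouble(), cdf = 0.0;
              for (int k = 0; k < probs.length; k++) {
                  cdf += probs[k];
                  if (u < cdf) return k;  // first outcome whose cumulative mass exceeds u
              }
              return probs.length - 1;    // guard against floating-point round-off
          }

          public static void main(String[] args) {
              double[][] probs = {{0, 0.1, 0.2, 0.3, 0.4}, {0.4, 0.3, 0.2, 0.1, 0}};
              Random rng = new Random(42);
              for (double[] row : probs) System.out.println(sampleOne(row, rng));
          }
      }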
  216. abstract def sample_negative_binomial(po: sample_negative_binomialParam): Array[NDArray]

    Permalink

    Concurrent sampling from multiple
    negative binomial distributions with parameters *k* (failure limit) and *p* (failure probability).
    
    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.
    
    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.
    
    Samples will always be returned as a floating point data type.
    
    Examples::
    
       k = [ 20, 49 ]
       p = [ 0.4 , 0.77 ]
    
       // Draw a single sample for each distribution
       sample_negative_binomial(k, p) = [ 15.,  16.]
    
       // Draw a vector containing two samples for each distribution
       sample_negative_binomial(k, p, shape=(2)) = `[ [ 15.,  50.],
                                                    [ 16.,  12.] ]
    
    
    Defined in src/operator/random/multisample_op.cc:L288
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  217. abstract def sample_normal(po: sample_normalParam): Array[NDArray]

    Permalink

    Concurrent sampling from multiple
    normal distributions with parameters *mu* (mean) and *sigma* (standard deviation).
    
    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.
    
    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.
    
    Examples::
    
       mu = [ 0.0, 2.5 ]
       sigma = [ 1.0, 3.7 ]
    
       // Draw a single sample for each distribution
       sample_normal(mu, sigma) = [-0.56410581,  0.95934606]
    
       // Draw a vector containing two samples for each distribution
       sample_normal(mu, sigma, shape=(2)) = `[ [-0.56410581,  0.2928229 ],
                                              [ 0.95934606,  4.48287058] ]
    
    
    Defined in src/operator/random/multisample_op.cc:L278
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  218. abstract def sample_poisson(po: sample_poissonParam): Array[NDArray]

    Permalink

    Concurrent sampling from multiple
    Poisson distributions with parameters lambda (rate).
    
    The parameters of the distributions are provided as an input array.
    Let *[s]* be the shape of the input array, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.
    
    For any valid *n*-dimensional index *i* with respect to the input array, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input value at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input array.
    
    Samples will always be returned as a floating point data type.
    
    Examples::
    
       lam = [ 1.0, 8.5 ]
    
       // Draw a single sample for each distribution
       sample_poisson(lam) = [  0.,  13.]
    
       // Draw a vector containing two samples for each distribution
       sample_poisson(lam, shape=(2)) = `[ [  0.,   4.],
                                         [ 13.,   8.] ]
    
    
    Defined in src/operator/random/multisample_op.cc:L285
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  219. abstract def sample_uniform(po: sample_uniformParam): Array[NDArray]

    Permalink

    Concurrent sampling from multiple
    uniform distributions on the intervals given by *[low,high)*.
    
    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.
    
    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.
    
    Examples::
    
       low = [ 0.0, 2.5 ]
       high = [ 1.0, 3.7 ]
    
       // Draw a single sample for each distribution
       sample_uniform(low, high) = [ 0.40451524,  3.18687344]
    
       // Draw a vector containing two samples for each distribution
       sample_uniform(low, high, shape=(2)) = `[ [ 0.40451524,  0.18017688],
                                               [ 3.18687344,  3.68352246] ]
    
    
    Defined in src/operator/random/multisample_op.cc:L276
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  220. abstract def scatter_nd(data: NDArray, indices: NDArray, shape: Shape, out: NDArray): Array[NDArray]

    Permalink

    Scatters data into a new tensor according to indices.
    
    Given `data` with shape `(Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1})` and indices with shape
    `(M, Y_0, ..., Y_{K-1})`, the output will have shape `(X_0, X_1, ..., X_{N-1})`,
    where `M <= N`. If `M == N`, data shape should simply be `(Y_0, ..., Y_{K-1})`.
    
    The elements in the output are defined as follows::
    
      output[indices[0, y_0, ..., y_{K-1}],
             ...,
             indices[M-1, y_0, ..., y_{K-1}],
             x_M, ..., x_{N-1}] = data[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}]
    
    all other entries in output are 0.
    
    .. warning::
    
        If the indices have duplicates, the result will be non-deterministic and
        the gradient of `scatter_nd` will not be correct!!
    
    
    Examples::
    
      data = [2, 3, 0]
      indices = `[ [1, 1, 0], [0, 1, 0] ]
      shape = (2, 2)
      scatter_nd(data, indices, shape) = `[ [0, 0], [2, 3] ]
    
      data = `[ `[ [1, 2], [3, 4] ], `[ [5, 6], [7, 8] ] ]
      indices = `[ [0, 1], [1, 1] ]
      shape = (2, 2, 2, 2)
      scatter_nd(data, indices, shape) = `[ [`[ [0, 0],
                                            [0, 0] ],
    
                                           `[ [1, 2],
                                            [3, 4] ] ],
    
                                          `[ `[ [0, 0],
                                            [0, 0] ],
    
                                           `[ [5, 6],
                                            [7, 8] ] ] ]
    data

    data

    indices

    indices

    shape

    Shape of output.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
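    For illustration, a sketch of the first example above (the M == N case with a 2-D
    output): output[indices[0][y]][indices[1][y]] = data[y], with every other entry 0.
    With duplicate indices this sequential sketch keeps the last write; the parallel
    operator makes no such guarantee, hence the warning above (hypothetical names; not
    part of the MXNet Java API)::

      public class ScatterNdSketch {
          public static double[][] scatterNd2d(double[] data, int[][] indices, int rows, int cols) {
              double[][] out = new double[rows][cols];  // Java initializes entries to 0
              for (int y = 0; y < data.length; y++)
                  out[indices[0][y]][indices[1][y]] = data[y];
              return out;
          }

          public static void main(String[] args) {
              double[][] out = scatterNd2d(new double[]{2, 3, 0},
                                           new int[][]{{1, 1, 0}, {0, 1, 0}}, 2, 2);
              for (double[] row : out) System.out.println(java.util.Arrays.toString(row));
              // prints [0.0, 0.0] then [2.0, 3.0], as in the first example
          }
      }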
  221. abstract def sgd_mom_update(po: sgd_mom_updateParam): Array[NDArray]

    Permalink

    Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
    
    The momentum update typically yields better convergence rates when training neural
    networks. Mathematically it looks like:
    
    .. math::
    
      v_1 = \alpha * \nabla J(W_0)\\
      v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
      W_t = W_{t-1} + v_t
    
    It updates the weights using::
    
      v = momentum * v - learning_rate * gradient
      weight += v
    
    Where the parameter ``momentum`` is the decay rate of momentum estimates at each epoch.
    
    However, if grad's storage type is ``row_sparse``, ``lazy_update`` is True and weight's storage
    type is the same as momentum's storage type,
    only the row slices whose indices appear in grad.indices are updated (for both weight and momentum)::
    
      for row in gradient.indices:
          v[row] = momentum[row] * v[row] - learning_rate * gradient[row]
          weight[row] += v[row]
    
    
    
    Defined in src/operator/optimizer_op.cc:L564
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  222. abstract def sgd_update(po: sgd_updateParam): Array[NDArray]

    Permalink

    Update function for Stochastic Gradient Descent (SGD) optimizer.
    
    It updates the weights using::
    
     weight = weight - learning_rate * (gradient + wd * weight)
    
    However, if gradient is of ``row_sparse`` storage type and ``lazy_update`` is True,
    only the row slices whose indices appear in grad.indices are updated::
    
     for row in gradient.indices:
         weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])
    
    
    
    Defined in src/operator/optimizer_op.cc:L523
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
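    For illustration, plain-Java sketches of this rule and of the momentum variant from
    the previous entry, dense case only; the lazy ``row_sparse`` path is omitted
    (hypothetical names; not part of the MXNet Java API)::

      public class SgdSketch {
          // sgd_update: weight = weight - lr * (gradient + wd * weight)
          public static void sgdUpdate(double[] w, double[] g, double lr, double wd) {
              for (int i = 0; i < w.length; i++) w[i] -= lr * (g[i] + wd * w[i]);
          }

          // sgd_mom_update: v = momentum * v - lr * gradient; weight += v
          public static void sgdMomUpdate(double[] w, double[] g, double[] v,
                                          double lr, double momentum) {
              for (int i = 0; i < w.length; i++) {
                  v[i] = momentum * v[i] - lr * g[i];
                  w[i] += v[i];
              }
          }

          public static void main(String[] args) {
              double[] w = {1.0, -1.0}, g = {0.5, 0.5}, v = new double[2];
              sgdUpdate(w, g, 0.1, 0.0);
              sgdMomUpdate(w, g, v, 0.1, 0.9);
              System.out.println(java.util.Arrays.toString(w));
          }
      }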
  223. abstract def shape_array(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns a 1D int64 array containing the shape of data.
    
    Example::
    
      shape_array(`[ [1,2,3,4], [5,6,7,8] ]) = [2,4]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L573
    data

    Input Array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  224. abstract def shuffle(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Randomly shuffle the elements.
    
    This shuffles the array along the first axis.
    The order of the elements in each subarray does not change.
    For example, if a 2D array is given, the order of the rows randomly changes,
    but the order of the elements in each row does not change.
    data

    Data to be shuffled.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  225. abstract def sigmoid(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes sigmoid of x element-wise.
    
    .. math::
       y = 1 / (1 + exp(-x))
    
    The storage type of ``sigmoid`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L119
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  226. abstract def sign(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise sign of the input.
    
    Example::
    
       sign([-2, 0, 3]) = [-1, 0, 1]
    
    The storage type of ``sign`` output depends upon the input storage type:
    
       - sign(default) = default
       - sign(row_sparse) = row_sparse
       - sign(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L758
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  227. abstract def signsgd_update(po: signsgd_updateParam): Array[NDArray]

    Permalink

    Update function for SignSGD optimizer.
    
    .. math::
    
     g_t = \nabla J(W_{t-1})\\
     W_t = W_{t-1} - \eta_t \text{sign}(g_t)
    
    It updates the weights using::
    
     weight = weight - learning_rate * sign(gradient)
    
    .. note::
       - sparse ndarray not supported for this optimizer yet.
    
    
    Defined in src/operator/optimizer_op.cc:L62
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  228. abstract def signum_update(po: signum_updateParam): Array[NDArray]

    Permalink

    SIGN momentUM (Signum) optimizer.
    
    .. math::
    
     g_t = \nabla J(W_{t-1})\\
     m_t = \beta m_{t-1} + (1 - \beta) g_t\\
     W_t = W_{t-1} - \eta_t \text{sign}(m_t)
    
    It updates the weights using::
     state = momentum * state + (1-momentum) * gradient
     weight = weight - learning_rate * sign(state)
    
    Where the parameter ``momentum`` is the decay rate of momentum estimates at each epoch.
    
    .. note::
       - sparse ndarray not supported for this optimizer yet.
    
    
    Defined in src/operator/optimizer_op.cc:L91
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  229. abstract def sin(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the element-wise sine of the input array.
    
    The input should be in radians (:math:`2\pi` rad equals 360 degrees).
    
    .. math::
       sin([0, \pi/4, \pi/2]) = [0, 0.707, 1]
    
    The storage type of ``sin`` output depends upon the input storage type:
    
       - sin(default) = default
       - sin(row_sparse) = row_sparse
       - sin(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L47
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  230. abstract def sinh(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the hyperbolic sine of the input array, computed element-wise.
    
    .. math::
       sinh(x) = 0.5\times(exp(x) - exp(-x))
    
    The storage type of ``sinh`` output depends upon the input storage type:
    
       - sinh(default) = default
       - sinh(row_sparse) = row_sparse
       - sinh(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L371
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  231. abstract def size_array(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns a 1D int64 array containing the size of data.
    
    Example::
    
      size_array(`[ [1,2,3,4], [5,6,7,8] ]) = [8]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L624
    data

    Input Array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  232. abstract def slice(data: NDArray, begin: Shape, end: Shape, step: Shape, out: NDArray): Array[NDArray]

    Permalink

    Slices a region of the array.
    .. note:: ``crop`` is deprecated. Use ``slice`` instead.
    This function returns a sliced array between the indices given
    by `begin` and `end` with the corresponding `step`.
    For an input array of ``shape=(d_0, d_1, ..., d_n-1)``,
    slice operation with ``begin=(b_0, b_1...b_m-1)``,
    ``end=(e_0, e_1, ..., e_m-1)``, and ``step=(s_0, s_1, ..., s_m-1)``,
    where m <= n, results in an array with the shape
    ``(|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1)``.
    The resulting array's *k*-th dimension contains elements
    from the *k*-th dimension of the input array starting
    from index ``b_k`` (inclusive) with step ``s_k``
    until reaching ``e_k`` (exclusive).
    If the *k*-th elements are `None` in the sequence of `begin`, `end`,
    and `step`, the following rule will be used to set default values.
    If `s_k` is `None`, set `s_k=1`. If `s_k > 0`, set `b_k=0`, `e_k=d_k`;
    else, set `b_k=d_k-1`, `e_k=-1`.
    The storage type of ``slice`` output depends on storage types of inputs
    - slice(csr) = csr
    - otherwise, ``slice`` generates output with default storage
    .. note:: When input data storage type is csr, it only supports
       step=(), or step=(None,), or step=(1,) to generate a csr output.
       For other step parameter values, it falls back to slicing
       a dense tensor.
    Example::
      x = `[ [  1.,   2.,   3.,   4.],
           [  5.,   6.,   7.,   8.],
           [  9.,  10.,  11.,  12.] ]
      slice(x, begin=(0,1), end=(2,4)) = `[ [ 2.,  3.,  4.],
                                         [ 6.,  7.,  8.] ]
      slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = `[ [9., 11.],
                                                                [5.,  7.],
                                                                [1.,  3.] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L481
    data

    Source input

    begin

    starting indices for the slice operation, supports negative indices.

    end

    ending indices for the slice operation, supports negative indices.

    step

    step for the slice operation, supports negative values.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
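    For illustration, a 1-D sketch of the begin/end/step defaulting rules above, using
    `Integer`/null to stand in for optional indices (hypothetical names; not part of the
    MXNet Java API)::

      public class SliceSketch {
          public static double[] slice1d(double[] x, Integer begin, Integer end, Integer step) {
              int s = (step == null) ? 1 : step;
              // defaults: b_k = 0, e_k = d_k for s_k > 0; b_k = d_k - 1, e_k = -1 otherwise
              int b = (begin == null) ? (s > 0 ? 0 : x.length - 1)
                                      : (begin < 0 ? begin + x.length : begin);
              int e = (end == null) ? (s > 0 ? x.length : -1)
                                    : (end < 0 ? end + x.length : end);
              int n = Math.max(0, (Math.abs(e - b) + Math.abs(s) - 1) / Math.abs(s));  // ceil(|e-b|/|s|)
              double[] out = new double[n];
              for (int k = 0, i = b; k < n; k++, i += s) out[k] = x[i];
              return out;
          }

          public static void main(String[] args) {
              double[] x = {1, 2, 3, 4};
              System.out.println(java.util.Arrays.toString(slice1d(x, 0, 3, null)));     // [1.0, 2.0, 3.0]
              System.out.println(java.util.Arrays.toString(slice1d(x, null, null, -2))); // [4.0, 2.0]
          }
      }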
  233. abstract def slice_axis(data: NDArray, axis: Integer, begin: Integer, end: Integer, out: NDArray): Array[NDArray]

    Permalink

    Slices along a given axis.
    Returns an array slice along a given `axis` starting from the `begin` index
    to the `end` index.
    Examples::
      x = `[ [  1.,   2.,   3.,   4.],
           [  5.,   6.,   7.,   8.],
           [  9.,  10.,  11.,  12.] ]
      slice_axis(x, axis=0, begin=1, end=3) = `[ [  5.,   6.,   7.,   8.],
                                               [  9.,  10.,  11.,  12.] ]
      slice_axis(x, axis=1, begin=0, end=2) = `[ [  1.,   2.],
                                               [  5.,   6.],
                                               [  9.,  10.] ]
      slice_axis(x, axis=1, begin=-3, end=-1) = `[ [  2.,   3.],
                                                 [  6.,   7.],
                                                 [ 10.,  11.] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L570
    data

    Source input

    axis

    Axis along which to be sliced, supports negative indexes.

    begin

    The beginning index along the axis to be sliced, supports negative indexes.

    end

    The ending index along the axis to be sliced, supports negative indexes.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  234. abstract def slice_like(data: NDArray, shape_like: NDArray, axes: Shape, out: NDArray): Array[NDArray]

    Permalink

    Slices a region of the array like the shape of another array.
    This function is similar to ``slice``; however, `begin` is always `0` for every axis,
    and the `end` of specific axes is inferred from the second input `shape_like`.
    Given a second `shape_like` input of ``shape=(d_0, d_1, ..., d_n-1)``,
    a ``slice_like`` operator with the default empty `axes` performs the
    following operation:
    `` out = slice(input, begin=(0, 0, ..., 0), end=(d_0, d_1, ..., d_n-1))``.
    When `axes` is not empty, it is used to specify which axes are being sliced.
    Given a 4-d input data, ``slice_like`` operator with ``axes=(0, 2, -1)``
    will perform the following operation:
    `` out = slice(input, begin=(0, 0, 0, 0), end=(d_0, None, d_2, d_3))``.
    Note that the first and second inputs are allowed to have different numbers of
    dimensions; however, you have to make sure that `axes` is specified and does not
    exceed the dimension limits.
    For example, given `input_1` with ``shape=(2,3,4,5)`` and `input_2` with
    ``shape=(1,2,3)``, it is not allowed to use:
    `` out = slice_like(a, b)`` because ndim of `input_1` is 4, and ndim of `input_2`
    is 3.
    The following is allowed in this situation:
    `` out = slice_like(a, b, axes=(0, 2))``
    Example::
      x = `[ [  1.,   2.,   3.,   4.],
           [  5.,   6.,   7.,   8.],
           [  9.,  10.,  11.,  12.] ]
      y = `[ [  0.,   0.,   0.],
           [  0.,   0.,   0.] ]
      slice_like(x, y) = `[ [ 1.,  2.,  3.]
                          [ 5.,  6.,  7.] ]
      slice_like(x, y, axes=(0, 1)) = `[ [ 1.,  2.,  3.]
                                       [ 5.,  6.,  7.] ]
      slice_like(x, y, axes=(0)) = `[ [ 1.,  2.,  3.,  4.]
                                    [ 5.,  6.,  7.,  8.] ]
      slice_like(x, y, axes=(-1)) = `[ [  1.,   2.,   3.]
                                     [  5.,   6.,   7.]
                                     [  9.,  10.,  11.] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L624
    data

    Source input

    shape_like

    Shape like input

    axes

    List of axes on which input data will be sliced according to the corresponding size of the second input. By default will slice on all axes. Negative axes are supported.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  235. abstract def smooth_l1(data: NDArray, scalar: Float, out: NDArray): Array[NDArray]

    Permalink

    Calculate Smooth L1 Loss(lhs, scalar) by summing
    
    .. math::
    
        f(x) =
        \begin{cases}
        (\sigma x)^2/2,& \text{if }|x| < 1/\sigma^2\\
        |x|-0.5/\sigma^2,& \text{otherwise}
        \end{cases}
    
    where :math:`x` is an element of the tensor *lhs* and :math:`\sigma` is the scalar.
    
    Example::
    
      smooth_l1([1, 2, 3, 4]) = [0.5, 1.5, 2.5, 3.5]
      smooth_l1([1, 2, 3, 4], scalar=1) = [0.5, 1.5, 2.5, 3.5]
    
    
    
    Defined in src/operator/tensor/elemwise_binary_scalar_op_extended.cc:L108
    data

    source input

    scalar

    scalar input

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
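    For illustration, an element-wise sketch of the piecewise definition above, with the
    condition taken on |x| as in the standard smooth-L1 loss (hypothetical names; not part
    of the MXNet Java API)::

      public class SmoothL1Sketch {
          public static double smoothL1(double x, double sigma) {
              double s2 = sigma * sigma;
              return (Math.abs(x) < 1.0 / s2)
                      ? 0.5 * (sigma * x) * (sigma * x)   // quadratic region near zero
                      : Math.abs(x) - 0.5 / s2;           // linear region elsewhere
          }

          public static void main(String[] args) {
              for (double x : new double[]{1, 2, 3, 4})
                  System.out.print(smoothL1(x, 1.0) + " ");  // 0.5 1.5 2.5 3.5, as above
          }
      }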
  236. abstract def softmax(po: softmaxParam): Array[NDArray]

    Permalink

    Applies the softmax function.
    
    The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.
    
    .. math::
       softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}
    
    for :math:`j = 1, ..., K`
    
    t is the temperature parameter in the softmax function. By default, t equals 1.0
    
    Example::
    
      x = `[ [ 1.  1.  1.]
           [ 1.  1.  1.] ]
    
      softmax(x,axis=0) = `[ [ 0.5  0.5  0.5]
                           [ 0.5  0.5  0.5] ]
    
      softmax(x,axis=1) = `[ [ 0.33333334,  0.33333334,  0.33333334],
                           [ 0.33333334,  0.33333334,  0.33333334] ]
    
    
    
    Defined in src/operator/nn/softmax.cc:L135
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
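    For illustration, a row-wise sketch of the temperature softmax above, with the usual
    max-subtraction for numerical stability (hypothetical names; not part of the MXNet
    Java API)::

      public class SoftmaxSketch {
          public static double[] softmax(double[] z, double t) {
              double max = Double.NEGATIVE_INFINITY;
              for (double v : z) max = Math.max(max, v);
              double[] out = new double[z.length];
              double sum = 0.0;
              for (int j = 0; j < z.length; j++) {
                  out[j] = Math.exp((z[j] - max) / t);  // exp(z_j / t), shifted by the row max
                  sum += out[j];
              }
              for (int j = 0; j < z.length; j++) out[j] /= sum;
              return out;
          }

          public static void main(String[] args) {
              System.out.println(java.util.Arrays.toString(softmax(new double[]{1, 1, 1}, 1.0)));
              // [0.333..., 0.333..., 0.333...], matching the axis=1 example
          }
      }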
  237. abstract def softmax_cross_entropy(data: NDArray, label: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Calculate cross entropy of softmax output and one-hot label.
    
    - This operator computes the cross entropy in two steps:
      - Applies softmax function on the input array.
      - Computes and returns the cross entropy loss between the softmax output and the labels.
    
    - The softmax function and cross entropy loss are given by:
    
      - Softmax Function:
    
      .. math:: \text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}
    
      - Cross Entropy Function:
    
      .. math:: \text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)
    
    Example::
    
      x = `[ [1, 2, 3],
           [11, 7, 5] ]
    
      label = [2, 0]
    
      softmax(x) = `[ [0.09003057, 0.24472848, 0.66524094],
                    [0.97962922, 0.01794253, 0.00242826] ]
    
      softmax_cross_entropy(data, label) = - log(0.66524094) - log(0.97962922) = 0.4281871
    
    
    
    Defined in src/operator/loss_binary_op.cc:L58
    data

    Input data

    label

    Input label

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  238. abstract def softmin(po: softminParam): Array[NDArray]

    Permalink

    Applies the softmin function.
    
    The resulting array contains elements in the range (0,1) and the elements along the given axis sum
    up to 1.
    
    .. math::
       softmin(\mathbf{z/t})_j = \frac{e^{-z_j/t}}{\sum_{k=1}^K e^{-z_k/t}}
    
    for :math:`j = 1, ..., K`
    
    t is the temperature parameter in the softmin function. By default, t equals 1.0
    
    Example::
    
      x = `[ [ 1.  2.  3.]
           [ 3.  2.  1.] ]
    
      softmin(x,axis=0) = `[ [ 0.88079703,  0.5,  0.11920292],
                           [ 0.11920292,  0.5,  0.88079703] ]
    
      softmin(x,axis=1) = `[ [ 0.66524094,  0.24472848,  0.09003057],
                           [ 0.09003057,  0.24472848,  0.66524094] ]
    
    
    
    Defined in src/operator/nn/softmin.cc:L56
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  239. abstract def softsign(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes softsign of x element-wise.
    
    .. math::
       y = x / (1 + abs(x))
    
    The storage type of ``softsign`` output is always dense
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L191
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  240. abstract def sort(po: sortParam): Array[NDArray]

    Permalink

    Returns a sorted copy of an input array along the given axis.
    
    Examples::
    
      x = `[ [ 1, 4],
           [ 3, 1] ]
    
      // sorts along the last axis
      sort(x) = `[ [ 1.,  4.],
                 [ 1.,  3.] ]
    
      // flattens and then sorts
      sort(x, axis=None) = [ 1.,  1.,  3.,  4.]
    
      // sorts along the first axis
      sort(x, axis=0) = `[ [ 1.,  1.],
                         [ 3.,  4.] ]
    
      // in a descend order
      sort(x, is_ascend=0) = `[ [ 4.,  1.],
                              [ 3.,  1.] ]
    
    
    
    Defined in src/operator/tensor/ordering_op.cc:L132
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  241. abstract def space_to_depth(data: NDArray, block_size: Integer, out: NDArray): Array[NDArray]

    Permalink

    Rearranges (permutes) blocks of spatial data into depth.
    Similar to ONNX SpaceToDepth operator:
    https://github.com/onnx/onnx/blob/master/docs/Operators.md#SpaceToDepth
    The output is a new tensor where the values from height and width dimension are
    moved to the depth dimension. The reverse of this operation is ``depth_to_space``.
    .. math::
        \begin{gather*}
        x \prime = reshape(x, [N, C, H / block\_size, block\_size, W / block\_size, block\_size]) \\
        x \prime \prime = transpose(x \prime, [0, 3, 5, 1, 2, 4]) \\
        y = reshape(x \prime \prime, [N, C * (block\_size ^ 2), H / block\_size, W / block\_size])
        \end{gather*}
    where :math:`x` is an input tensor with default layout as :math:`[N, C, H, W]`: [batch, channels, height, width]
    and :math:`y` is the output tensor of layout :math:`[N, C * (block\_size ^ 2), H / block\_size, W / block\_size]`
    Example::
      x = `[ [`[ [0, 6, 1, 7, 2, 8],
             [12, 18, 13, 19, 14, 20],
             [3, 9, 4, 10, 5, 11],
             [15, 21, 16, 22, 17, 23] ] ] ]
      space_to_depth(x, 2) = `[ [`[ [0, 1, 2],
                                [3, 4, 5] ],
                               `[ [6, 7, 8],
                                [9, 10, 11] ],
                               `[ [12, 13, 14],
                                [15, 16, 17] ],
                               `[ [18, 19, 20],
                                [21, 22, 23] ] ] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L1018
    data

    Input ndarray

    block_size

    Blocks of [block_size, block_size] are moved

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  242. abstract def split(po: splitParam): Array[NDArray]

    Permalink

    Splits an array along a particular axis into multiple sub-arrays.
    
    .. note:: ``SliceChannel`` is deprecated. Use ``split`` instead.
    
    **Note** that `num_outputs` should evenly divide the length of the axis
    along which to split the array.
    
    Example::
    
       x  = `[ `[ [ 1.]
              [ 2.] ]
             `[ [ 3.]
              [ 4.] ]
             `[ [ 5.]
              [ 6.] ] ]
       x.shape = (3, 2, 1)
    
       y = split(x, axis=1, num_outputs=2) // a list of 2 arrays with shape (3, 1, 1)
       y = `[ `[ [ 1.] ]
            `[ [ 3.] ]
            `[ [ 5.] ] ]
    
           `[ `[ [ 2.] ]
            `[ [ 4.] ]
            `[ [ 6.] ] ]
    
       y[0].shape = (3, 1, 1)
    
       z = split(x, axis=0, num_outputs=3) // a list of 3 arrays with shape (1, 2, 1)
       z = `[ `[ [ 1.]
             [ 2.] ] ]
    
           `[ `[ [ 3.]
             [ 4.] ] ]
    
           `[ `[ [ 5.]
             [ 6.] ] ]
    
       z[0].shape = (1, 2, 1)
    
    `squeeze_axis=1` removes the axis with length 1 from the shapes of the output arrays.
    **Note** that setting `squeeze_axis` to ``1`` removes the axis with length 1 only
    along the `axis` on which the array is split.
    Also, `squeeze_axis` can be set to ``1`` only if ``input.shape[axis] == num_outputs``.
    
    Example::
    
       z = split(x, axis=0, num_outputs=3, squeeze_axis=1) // a list of 3 arrays with shape (2, 1)
       z = `[ [ 1.]
            [ 2.] ]
    
           `[ [ 3.]
            [ 4.] ]
    
           `[ [ 5.]
            [ 6.] ]
       z[0].shape = (2, 1)
    
    
    
    Defined in src/operator/slice_channel.cc:L106
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  243. abstract def sqrt(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise square-root value of the input.
    
    .. math::
       \textrm{sqrt}(x) = \sqrt{x}
    
    Example::
    
       sqrt([4, 9, 16]) = [2, 3, 4]
    
    The storage type of ``sqrt`` output depends upon the input storage type:
    
       - sqrt(default) = default
       - sqrt(row_sparse) = row_sparse
       - sqrt(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L170
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  244. abstract def square(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns element-wise squared value of the input.
    
    .. math::
       square(x) = x^2
    
    Example::
    
       square([2, 3, 4]) = [4, 9, 16]
    
    The storage type of ``square`` output depends upon the input storage type:
    
       - square(default) = default
       - square(row_sparse) = row_sparse
       - square(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_pow.cc:L119
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  245. abstract def squeeze(data: NDArray, axis: Shape, out: NDArray): Array[NDArray]

    Permalink

    Removes single-dimensional entries from the shape of an array.
    The output tensor shape is defined the same way as in numpy.squeeze in most cases.
    See the following note for the exception.
    Examples::
      data = `[ `[ [0], [1], [2] ] ]
      squeeze(data) = [0, 1, 2]
      squeeze(data, axis=0) = `[ [0], [1], [2] ]
      squeeze(data, axis=2) = `[ [0, 1, 2] ]
      squeeze(data, axis=(0, 2)) = [0, 1, 2]
    .. Note::
      The output of this operator will keep at least one dimension not removed. For example,
      squeeze(`[ `[ [4] ] ]) = [4], while in numpy.squeeze, the output will become a scalar.
    data

    data to squeeze

    axis

    Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  246. abstract def stack(data: Array[NDArray], axis: Integer, num_args: Integer, out: NDArray): Array[NDArray]

    Permalink

    Join a sequence of arrays along a new axis.
    The axis parameter specifies the index of the new axis in the dimensions of the
    result. For example, if axis=0 it will be the first dimension and if axis=-1 it
    will be the last dimension.
    Examples::
      x = [1, 2]
      y = [3, 4]
      stack(x, y) = `[ [1, 2],
                     [3, 4] ]
      stack(x, y, axis=1) = `[ [1, 3],
                             [2, 4] ]
    data

    List of arrays to stack

    axis

    The axis in the result array along which the input arrays are stacked.

    num_args

    Number of inputs to be stacked.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  247. abstract def stop_gradient(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Stops gradient computation.
    
    Stops the accumulated gradient of the inputs from flowing through this operator
    in the backward direction. In other words, this operator prevents the contribution
    of its inputs from being taken into account when computing gradients.
    
    Example::
    
      v1 = [1, 2]
      v2 = [0, 1]
      a = Variable('a')
      b = Variable('b')
      b_stop_grad = stop_gradient(3 * b)
      loss = MakeLoss(b_stop_grad + a)
    
      executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
      executor.forward(is_train=True, a=v1, b=v2)
      executor.outputs
      [ 1.  5.]
    
      executor.backward()
      executor.grad_arrays
      [ 0.  0.]
      [ 1.  1.]
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L325
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  248. abstract def sum(po: sumParam): Array[NDArray]

    Permalink

    Computes the sum of array elements over given axes.
    
    .. Note::
    
      `sum` and `sum_axis` are equivalent.
      For an ndarray of csr storage type, summation along axis 0 and axis 1 is supported.
      Setting keepdims or exclude to True will cause a fallback to the dense operator.
    
    Example::
    
      data = `[ `[ [1, 2], [2, 3], [1, 3] ],
              `[ [1, 4], [4, 3], [5, 2] ],
              `[ [7, 1], [7, 2], [7, 3] ] ]
    
      sum(data, axis=1)
      `[ [  4.   8.]
       [ 10.   9.]
       [ 21.   6.] ]
    
      sum(data, axis=[1,2])
      [ 12.  19.  27.]
    
      data = `[ [1, 2, 0],
              [3, 0, 1],
              [4, 1, 0] ]
    
      csr = cast_storage(data, 'csr')
    
      sum(csr, axis=0)
      [ 8.  3.  1.]
    
      sum(csr, axis=1)
      [ 3.  4.  5.]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_sum_value.cc:L66
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  249. abstract def sum_axis(po: sum_axisParam): Array[NDArray]

    Permalink

    Computes the sum of array elements over given axes.
    
    .. Note::
    
      `sum` and `sum_axis` are equivalent.
      For an ndarray of csr storage type, summation along axis 0 and axis 1 is supported.
      Setting keepdims or exclude to True will cause a fallback to the dense operator.
    
    Example::
    
      data = `[ `[ [1, 2], [2, 3], [1, 3] ],
              `[ [1, 4], [4, 3], [5, 2] ],
              `[ [7, 1], [7, 2], [7, 3] ] ]
    
      sum(data, axis=1)
      `[ [  4.   8.]
       [ 10.   9.]
       [ 21.   6.] ]
    
      sum(data, axis=[1,2])
      [ 12.  19.  27.]
    
      data = `[ [1, 2, 0],
              [3, 0, 1],
              [4, 1, 0] ]
    
      csr = cast_storage(data, 'csr')
    
      sum(csr, axis=0)
      [ 8.  3.  1.]
    
      sum(csr, axis=1)
      [ 3.  4.  5.]
    
    
    
    Defined in src/operator/tensor/broadcast_reduce_sum_value.cc:L66
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  250. abstract def swapaxes(po: swapaxesParam): Array[NDArray]

    Permalink

    Interchanges two axes of an array.
    
    Examples::
    
      x = `[ [1, 2, 3] ]
      swapaxes(x, 0, 1) = `[ [ 1],
                           [ 2],
                           [ 3] ]
    
      x = `[ `[ [ 0, 1],
            [ 2, 3] ],
           `[ [ 4, 5],
            [ 6, 7] ] ]  // (2,2,2) array
    
     swapaxes(x, 0, 2) = `[ `[ [ 0, 4],
                           [ 2, 6] ],
                          `[ [ 1, 5],
                           [ 3, 7] ] ]
    
    
    Defined in src/operator/swapaxis.cc:L69
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  251. abstract def take(po: takeParam): Array[NDArray]

    Permalink

    Takes elements from an input array along the given axis.
    
    This function slices the input array along a particular axis with the provided indices.
    
    Given a data tensor of rank r >= 1 and an indices tensor of rank q, this operator gathers
    entries along the chosen axis of data (by default the outermost one, axis=0) indexed by
    indices, and concatenates them into an output tensor of rank q + (r - 1).
    
    Examples::
    
      x = [4.  5.  6.]
    
      // Trivial case, take the second element along the first axis.
    
      take(x, [1]) = [ 5. ]
    
      // The other trivial case, axis=-1, take the third element along the first axis
    
      take(x, [3], axis=-1, mode='clip') = [ 6. ]
    
      x = `[ [ 1.,  2.],
           [ 3.,  4.],
           [ 5.,  6.] ]
    
      // In this case we will get rows 0 and 1, then 1 and 2. Along axis 0
    
      take(x, `[ [0,1],[1,2] ]) = `[ `[ [ 1.,  2.],
                                 [ 3.,  4.] ],
    
                                `[ [ 3.,  4.],
                                 [ 5.,  6.] ] ]
    
      // In this case we will get rows 0 and 1, then 1 and 2 (calculated by wrapping around).
      // Along axis 1
    
      take(x, `[ [0, 3], [-1, -2] ], axis=1, mode='wrap') = `[ `[ [ 1.  2.]
                                                           [ 2.  1.] ]
    
                                                          `[ [ 3.  4.]
                                                           [ 4.  3.] ]
    
                                                          `[ [ 5.  6.]
                                                           [ 6.  5.] ] ]
    
    The storage type of ``take`` output depends upon the input storage type:
    
       - take(default, default) = default
       - take(csr, default, axis=0) = csr
    
    
    
    Defined in src/operator/tensor/indexing_op.cc:L776
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  252. abstract def tan(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Computes the element-wise tangent of the input array.
    
    The input should be in radians (:math:`2\pi` rad equals 360 degrees).
    
    .. math::
       tan([0, \pi/4, \pi/2]) = [0, 1, -inf]
    
    The storage type of ``tan`` output depends upon the input storage type:
    
       - tan(default) = default
       - tan(row_sparse) = row_sparse
       - tan(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L140
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  253. abstract def tanh(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Returns the hyperbolic tangent of the input array, computed element-wise.
    
    .. math::
       tanh(x) = sinh(x) / cosh(x)
    
    The storage type of ``tanh`` output depends upon the input storage type:
    
       - tanh(default) = default
       - tanh(row_sparse) = row_sparse
       - tanh(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L451
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  254. abstract def tile(data: NDArray, reps: Shape, out: NDArray): Array[NDArray]

    Permalink

    Repeats the whole array multiple times.
    If ``reps`` has length *d* and the input array has *n* dimensions, there are
    three cases:
    - **n=d**. Repeat *i*-th dimension of the input by ``reps[i]`` times::
        x = `[ [1, 2],
             [3, 4] ]
        tile(x, reps=(2,3)) = `[ [ 1.,  2.,  1.,  2.,  1.,  2.],
                               [ 3.,  4.,  3.,  4.,  3.,  4.],
                               [ 1.,  2.,  1.,  2.,  1.,  2.],
                               [ 3.,  4.,  3.,  4.,  3.,  4.] ]
    - **n>d**. ``reps`` is promoted to length *n* by pre-pending 1's to it. Thus for
      an input shape ``(2,3)``, ``reps=(2,)`` is treated as ``(1,2)``::
        tile(x, reps=(2,)) = `[ [ 1.,  2.,  1.,  2.],
                              [ 3.,  4.,  3.,  4.] ]
    - **n<d**. The input is promoted to be d-dimensional by prepending new axes. So a
      shape ``(2,2)`` array is promoted to ``(1,2,2)`` for 3-D replication::
        tile(x, reps=(2,2,3)) = `[ `[ [ 1.,  2.,  1.,  2.,  1.,  2.],
                                  [ 3.,  4.,  3.,  4.,  3.,  4.],
                                  [ 1.,  2.,  1.,  2.,  1.,  2.],
                                  [ 3.,  4.,  3.,  4.,  3.,  4.] ],
                                 `[ [ 1.,  2.,  1.,  2.,  1.,  2.],
                                  [ 3.,  4.,  3.,  4.,  3.,  4.],
                                  [ 1.,  2.,  1.,  2.,  1.,  2.],
                                  [ 3.,  4.,  3.,  4.,  3.,  4.] ] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L795
    data

    Input data array

    reps

    The number of times for repeating the tensor a. Each dim size of reps must be a positive integer. If reps has length d, the result will have dimension of max(d, a.ndim); If a.ndim < d, a is promoted to be d-dimensional by prepending new axes. If a.ndim > d, reps is promoted to a.ndim by pre-pending 1's to it.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  255. abstract def topk(po: topkParam): Array[NDArray]

    Permalink

    Returns the indices of the top *k* elements in an input array along the given
     axis (by default).
     If ret_typ is set to 'value', it returns the values of the top *k* elements (instead of indices).
     If ret_typ is set to 'both', both values and indices are returned.
     The returned elements are sorted.
    
    Examples::
    
      x = `[ [ 0.3,  0.2,  0.4],
           [ 0.1,  0.3,  0.2] ]
    
      // returns an index of the largest element on last axis
      topk(x) = `[ [ 2.],
                 [ 1.] ]
    
      // returns the value of top-2 largest elements on last axis
      topk(x, ret_typ='value', k=2) = `[ [ 0.4,  0.3],
                                       [ 0.3,  0.2] ]
    
      // returns the value of top-2 smallest elements on last axis
      topk(x, ret_typ='value', k=2, is_ascend=1) = `[ [ 0.2 ,  0.3],
                                                   [ 0.1 ,  0.2] ]
    
      // returns the value of top-2 largest elements on axis 0
      topk(x, axis=0, ret_typ='value', k=2) = `[ [ 0.3,  0.3,  0.4],
                                               [ 0.1,  0.2,  0.2] ]
    
      // flattens and then returns list of both values and indices
      topk(x, ret_typ='both', k=2) = `[ `[ [ 0.4,  0.3], [ 0.3,  0.2] ] ,  `[ [ 2.,  0.], [ 1.,  2.] ] ]
    
    
    
    Defined in src/operator/tensor/ordering_op.cc:L67
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  256. abstract def transpose(data: NDArray, axes: Shape, out: NDArray): Array[NDArray]

    Permalink

    Permutes the dimensions of an array.
    Examples::
      x = `[ [ 1, 2],
           [ 3, 4] ]
      transpose(x) = `[ [ 1.,  3.],
                      [ 2.,  4.] ]
      x = `[ `[ [ 1.,  2.],
            [ 3.,  4.] ],
           `[ [ 5.,  6.],
            [ 7.,  8.] ] ]
      transpose(x) = `[ `[ [ 1.,  5.],
                       [ 3.,  7.] ],
                      `[ [ 2.,  6.],
                       [ 4.,  8.] ] ]
      transpose(x, axes=(1,0,2)) = `[ `[ [ 1.,  2.],
                                     [ 5.,  6.] ],
                                    `[ [ 3.,  4.],
                                     [ 7.,  8.] ] ]
    
    
    Defined in src/operator/tensor/matrix_op.cc:L327
    data

    Source input

    axes

    Target axis order. By default the axes will be inverted.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  257. abstract def trunc(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Return the element-wise truncated value of the input.
    
    The truncated value of the scalar x is the nearest integer i which is closer to
    zero than x is. In short, the fractional part of the signed number x is discarded.
    
    Example::
    
       trunc([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1.,  1.,  1.,  2.]
    
    The storage type of ``trunc`` output depends upon the input storage type:
    
       - trunc(default) = default
       - trunc(row_sparse) = row_sparse
       - trunc(csr) = csr
    
    
    
    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L856
    data

    The input array.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  258. abstract def uniform(po: uniformParam): Array[NDArray]

    Permalink

    Draw random samples from a uniform distribution.
    
    .. note:: The existing alias ``uniform`` is deprecated.
    
    Samples are uniformly distributed over the half-open interval *[low, high)*
    (includes *low*, but excludes *high*).
    
    Example::
    
       uniform(low=0, high=1, shape=(2,2)) = `[ [ 0.60276335,  0.85794562],
                                              [ 0.54488319,  0.84725171] ]
    
    
    
    Defined in src/operator/random/sample_op.cc:L95
    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
  259. abstract def unravel_index(data: NDArray, shape: Shape, out: NDArray): Array[NDArray]

    Permalink

    Converts an array of flat indices into a batch of index arrays. The operator follows numpy conventions, so a single multi-index is given by a column of the output matrix. The leading dimension may be left unspecified by using -1 as a placeholder.
    
    Examples::
    
       A = [22,41,37]
       unravel(A, shape=(7,6)) = `[ [3,6,6],[4,5,1] ]
       unravel(A, shape=(-1,6)) = `[ [3,6,6],[4,5,1] ]
    
    
    
    Defined in src/operator/tensor/ravel.cc:L67
    data

    Array of flat indices

    shape

    Shape of the array into which the multi-indices apply.

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
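    For illustration, a sketch of the row-major divmod this operator performs; each column
    of the result is one multi-index, as in the numpy convention above (hypothetical names;
    not part of the MXNet Java API; the -1 placeholder is not handled)::

      public class UnravelSketch {
          public static int[][] unravel(int[] flat, int[] shape) {
              int[][] out = new int[shape.length][flat.length];
              for (int j = 0; j < flat.length; j++) {
                  int rem = flat[j];
                  for (int d = shape.length - 1; d >= 0; d--) {
                      out[d][j] = rem % shape[d];  // index along dimension d
                      rem /= shape[d];
                  }
              }
              return out;
          }

          public static void main(String[] args) {
              int[][] idx = unravel(new int[]{22, 41, 37}, new int[]{7, 6});
              for (int[] row : idx) System.out.println(java.util.Arrays.toString(row));
              // [3, 6, 6] then [4, 5, 1], matching the example
          }
      }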
  260. abstract def where(condition: NDArray, x: NDArray, y: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Return the elements, either from x or y, depending on the condition.
    
    Given three ndarrays, condition, x, and y, return an ndarray with elements from x or y,
    depending on whether the corresponding elements of condition are true or false. x and y must
    have the same shape. If condition has the same shape as x, each element in the output array
    is taken from x if the corresponding element of condition is true, and from y otherwise.
    
    If condition does not have the same shape as x, it must be a 1D array whose size is
    the same as x's first dimension. Each row of the output array is taken from x's row
    if the corresponding element of condition is true, and from y's row otherwise.
    
    Note that all non-zero values are interpreted as ``True`` in condition.
    
    Examples::
    
      x = `[ [1, 2], [3, 4] ]
      y = `[ [5, 6], [7, 8] ]
      cond = `[ [0, 1], [-1, 0] ]
    
      where(cond, x, y) = `[ [5, 2], [3, 8] ]
    
      csr_cond = cast_storage(cond, 'csr')
    
      where(csr_cond, x, y) = `[ [5, 2], [3, 8] ]
    
    
    
    Defined in src/operator/tensor/control_flow_op.cc:L56
    condition

    condition array

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()
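    For illustration, an element-wise sketch of the same-shape case above: any non-zero
    condition entry selects from x, zero selects from y (hypothetical names; not part of
    the MXNet Java API)::

      public class WhereSketch {
          public static double[][] where(double[][] cond, double[][] x, double[][] y) {
              double[][] out = new double[x.length][x[0].length];
              for (int i = 0; i < x.length; i++)
                  for (int j = 0; j < x[0].length; j++)
                      out[i][j] = (cond[i][j] != 0.0) ? x[i][j] : y[i][j];
              return out;
          }

          public static void main(String[] args) {
              double[][] x = {{1, 2}, {3, 4}}, y = {{5, 6}, {7, 8}}, cond = {{0, 1}, {-1, 0}};
              for (double[] row : where(cond, x, y))
                  System.out.println(java.util.Arrays.toString(row));
              // [5.0, 2.0] then [3.0, 8.0], as in the example
          }
      }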
  261. abstract def zeros_like(data: NDArray, out: NDArray): Array[NDArray]

    Permalink

    Return an array of zeros with the same shape, type and storage type
    as the input array.
    
    The storage type of ``zeros_like`` output depends on the storage type of the input
    
    - zeros_like(row_sparse) = row_sparse
    - zeros_like(csr) = csr
    - zeros_like(default) = default
    
    Examples::
    
      x = `[ [ 1.,  1.,  1.],
           [ 1.,  1.,  1.] ]
    
      zeros_like(x) = `[ [ 0.,  0.,  0.],
                       [ 0.,  0.,  0.] ]
    data

    The input

    returns

    Array[org.apache.mxnet.javaapi.NDArray]

    Annotations
    @Experimental()

Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Permalink
    Definition Classes
    Any
  5. def clone(): AnyRef

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  9. final def getClass(): Class[_]

    Permalink
    Definition Classes
    AnyRef → Any
  10. def hashCode(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  11. final def isInstanceOf[T0]: Boolean

    Permalink
    Definition Classes
    Any
  12. final def ne(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  13. final def notify(): Unit

    Permalink
    Definition Classes
    AnyRef
  14. final def notifyAll(): Unit

    Permalink
    Definition Classes
    AnyRef
  15. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  16. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  17. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  18. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  19. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from AnyRef

Inherited from Any

Ungrouped