# mx.nd.multi.mp.sgd.mom.update

## Description

Momentum update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.

The momentum update typically gives better convergence rates than plain SGD when training neural networks. Mathematically, it takes the following form:

$$\begin{split}v_1 &= -\alpha \nabla J(W_0)\\ v_t &= \gamma v_{t-1} - \alpha \nabla J(W_{t-1})\\ W_t &= W_{t-1} + v_t\end{split}$$

In pseudocode:

```
v = momentum * v - learning_rate * gradient
weight += v
```

Here the parameter `momentum` is the decay rate γ of the momentum estimates at each epoch, and `learning_rate` corresponds to α.
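As a sanity check, here is a minimal plain-R sketch of the update rule above; the variable names (`v`, `weight`, `grad`) are illustrative and not part of the MXNet API:

```r
# Plain-R illustration of the momentum SGD update rule
# (the operator applies the same rule to NDArrays in a single fused call).
momentum      <- 0.9
learning_rate <- 0.01

weight <- c(1.0, -2.0, 0.5)        # current weights, W_{t-1}
grad   <- c(0.2,  0.1, -0.3)       # gradient of the loss, ∇J(W_{t-1})
v      <- numeric(length(weight))  # momentum state, v_0 = 0

v      <- momentum * v - learning_rate * grad  # v_t = γ v_{t-1} - α ∇J(W_{t-1})
weight <- weight + v                           # W_t = W_{t-1} + v_t
```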


## Arguments

| Argument | Description |
| --- | --- |
| `data` | NDArray-or-Symbol[]. Weights. |
| `lrs` | tuple of `<float>`, required. Learning rates. |
| `wds` | tuple of `<float>`, required. Weight decay multipliers. Weight decay augments the objective function with a regularization term that penalizes large weights; the penalty scales with the square of each weight's magnitude. |
| `momentum` | float, optional, default=0. The decay rate of momentum estimates at each epoch. |
| `rescale.grad` | float, optional, default=1. Rescales the gradient before the update, i.e. grad = rescale.grad * grad. |
| `clip.gradient` | float, optional, default=-1. Clips the gradient to the range [-clip.gradient, clip.gradient]; if clip.gradient <= 0, gradient clipping is turned off. |
| `num.weights` | int, optional, default=1. Number of updated weights. |

## Value

out — The result mx.ndarray
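Below is a hedged sketch of a call with a single weight (`num.weights = 1`). The multi-precision variant maintains a float32 master copy of each weight alongside the working (typically float16) weight; the interleaving of the input NDArrays shown here (weight, gradient, momentum state, float32 copy) and the vector form of `lrs`/`wds` are assumptions about the generated R binding, so treat this as illustrative rather than authoritative:

```r
library(mxnet)

# Assumed layout: one (weight, grad, momentum, float32 master copy) group
# per weight being updated; all tensors share the weight's shape.
weight   <- mx.nd.array(matrix(rnorm(4), 2, 2))
grad     <- mx.nd.array(matrix(rnorm(4), 2, 2))
mom      <- mx.nd.zeros(c(2, 2))            # momentum state, starts at zero
weight32 <- mx.nd.array(as.array(weight))   # float32 master copy of the weight

out <- mx.nd.multi.mp.sgd.mom.update(weight, grad, mom, weight32,
                                     lrs = c(0.01), wds = c(1e-4),
                                     momentum = 0.9, num.weights = 1)
```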