mx.symbol.preloaded_multi_mp_sgd_mom_update
Description
Momentum update function for multi-precision Stochastic Gradient Descent (SGD) optimizer.
Momentum update has better convergence rates on neural networks. It updates the weights using:
v = momentum * v - learning_rate * gradient
weight += v
where the parameter `momentum` is the decay rate of the momentum estimates at each epoch.
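To make the update rule concrete, here is a minimal base-R sketch of the same two-step computation. This is plain R only; the `lr`, `grad`, `weight`, and `v` values are made-up stand-ins for illustration, not part of the mxnet API:

```r
momentum <- 0.9            # decay rate of the momentum estimates
lr       <- 0.01           # learning rate
weight   <- c(1.0, -0.5)   # FP32 master copy of the weights
grad     <- c(0.2, 0.1)    # gradient (already rescaled and clipped)
v        <- c(0.0, 0.0)    # momentum buffer

# The two update steps from the description above:
v      <- momentum * v - lr * grad
weight <- weight + v
```

In the multi-precision operator itself, this arithmetic is carried out on the FP32 master copy of each weight, from which the low-precision weight is then derived.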
Usage
mx.symbol.preloaded_multi_mp_sgd_mom_update(...)
Arguments
| Argument | Description |
|---|---|
| `data` | NDArray-or-Symbol[]. Weights, gradients, momentums, learning rates and weight decays. |
| `momentum` | float, optional, default=0. The decay rate of momentum estimates at each epoch. |
| `rescale.grad` | float, optional, default=1. Rescale gradient to grad = rescale_grad*grad. |
| `clip.gradient` | float, optional, default=-1. Clip gradient to the range of [-clip_gradient, clip_gradient]. If clip_gradient <= 0, gradient clipping is turned off: grad = max(min(grad, clip_gradient), -clip_gradient). |
| `num.weights` | int, optional, default=1. Number of updated weights. |
| `name` | string, optional. Name of the resulting symbol. |
Value
`out`: the result `mx.symbol`.
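A hypothetical call sketch follows. The flattened input ordering shown here (weight, gradient, momentum, FP32 master weight, then the preloaded learning-rate and weight-decay arrays) is an assumption, so verify it against the source link below before relying on it:

```r
library(mxnet)

# Placeholder symbols for a single weight set (num.weights = 1).
w   <- mx.symbol.Variable("w")    # low-precision weight
g   <- mx.symbol.Variable("g")    # gradient
m   <- mx.symbol.Variable("m")    # momentum buffer
w32 <- mx.symbol.Variable("w32")  # FP32 master copy of the weight
lr  <- mx.symbol.Variable("lr")   # preloaded learning rate(s)
wd  <- mx.symbol.Variable("wd")   # preloaded weight decay(s)

# Assumed input order -- check the operator source before use.
out <- mx.symbol.preloaded_multi_mp_sgd_mom_update(
  w, g, m, w32, lr, wd,
  momentum = 0.9,
  num.weights = 1
)
```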
Link to Source Code: http://github.com/apache/incubator-mxnet/blob/1.6.0/src/operator/contrib/preloaded_multi_sgd.cc#L200
