Base class of a learning rate scheduler.
The training progress is represented by num_update, which can be roughly viewed as the number of minibatches executed so far. Its value is non-decreasing and increases by at most one at a time. The exact value is the upper bound of the number of updates applied to any weight/index.
num_update: Int, the maximal number of updates applied to a weight so far.
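To make this contract concrete, a minimal sketch of such a base class might look like the following (the class name, the base_lr attribute, and its default value are illustrative assumptions, not necessarily the library's exact API):

    class LRScheduler(object):
        """Base class of a learning rate scheduler.

        A scheduler returns a new learning rate based on the number of
        updates performed so far.
        """

        def __init__(self, base_lr=0.01):
            # base_lr: the initial learning rate before any scheduling is applied
            self.base_lr = base_lr

        def __call__(self, num_update):
            """Return the learning rate for the given num_update.

            num_update is non-decreasing and is an upper bound on the number
            of updates applied to any individual weight/index.
            """
            raise NotImplementedError("subclasses must override __call__")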
factor: Float, the factor by which to reduce the learning rate.
step: Int, change the learning rate after every n updates.
Class for reducing the learning rate by a factor every n steps.
Assume the weight has been updated n times; the learning rate will then be base_lr * factor^(floor(n/step)).
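Building on the base-class sketch above, a factor scheduler could be written roughly like this (the constructor signature and the validation checks are illustrative assumptions):

    class FactorScheduler(LRScheduler):
        """Reduce the learning rate by `factor` every `step` updates."""

        def __init__(self, step, factor=1.0, base_lr=0.01):
            super(FactorScheduler, self).__init__(base_lr)
            if step < 1:
                raise ValueError("step must be at least 1")
            if factor > 1.0:
                raise ValueError("factor must be no more than 1")
            self.step = step
            self.factor = factor

        def __call__(self, num_update):
            # lr = base_lr * factor^(floor(num_update / step))
            return self.base_lr * (self.factor ** (num_update // self.step))

For example, with step=1000, factor=0.9, and base_lr=0.1, the scheduler returns 0.1 * 0.9**2 = 0.081 at num_update=2500, since floor(2500/1000) = 2.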