Per-coordinate learning rate parameter beta.
Clip the gradient to the range [-clip_gradient, clip_gradient], i.e. grad = max(min(grad, clip_gradient), -clip_gradient). If clip_gradient <= 0, gradient clipping is turned off.
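A minimal NumPy sketch of this clipping rule (the helper name clip_grad is illustrative, not part of the library):

    import numpy as np

    def clip_grad(grad, clip_gradient):
        # Clipping is disabled when clip_gradient <= 0.
        if clip_gradient <= 0:
            return grad
        # Equivalent to max(min(grad, clip_gradient), -clip_gradient).
        return np.clip(grad, -clip_gradient, clip_gradient)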
The L1 regularization coefficient.
Rescale the gradient as grad = rescale_grad * grad.
Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
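Written out (a sketch of the conventional penalty, with wd as the decay coefficient), the augmented objective is

    \tilde{L}(w) = L(w) + \frac{wd}{2} \lVert w \rVert_2^2

so each weight w_i contributes a term proportional to w_i^2.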
This Param object is used specifically for ftrl_update.
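For reference, here is a self-contained NumPy sketch of the FTRL step these parameters configure, following the update rule documented for ftrl_update; the function ftrl_step and its defaults are illustrative, not the library API itself (parameter names mirror the descriptions above):

    import numpy as np

    def ftrl_step(weight, grad, z, n, lr,
                  lamda1=0.01, beta=1.0, wd=0.0,
                  rescale_grad=1.0, clip_gradient=-1.0):
        # Rescale, then clip the gradient (clipping is off when clip_gradient <= 0).
        g = rescale_grad * grad
        if clip_gradient > 0:
            g = np.clip(g, -clip_gradient, clip_gradient)
        # Accumulate FTRL state: z (shifted gradient sum) and n (squared-gradient sum).
        z += g - (np.sqrt(n + g * g) - np.sqrt(n)) * weight / lr
        n += g * g
        # Closed-form weight: lamda1 (the L1 coefficient) zeroes small coordinates,
        # wd adds L2 shrinkage, and beta smooths the per-coordinate rate
        # lr / (beta + sqrt(n)).
        weight[:] = ((np.sign(z) * lamda1 - z)
                     / ((beta + np.sqrt(n)) / lr + wd)
                     * (np.abs(z) > lamda1))
        return weight

    # Example: one update on a 3-dimensional weight vector.
    w, z, n = np.zeros(3), np.zeros(3), np.zeros(3)
    ftrl_step(w, np.array([0.5, -2.0, 0.01]), z, n, lr=0.1)

Note how the closed-form weight update makes the L1 term produce exact zeros: any coordinate with |z| <= lamda1 is set to 0, which is what gives FTRL its sparsity.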