public interface RegularizationPenalty
extends scala.Serializable
Regularization penalties are used to restrict the optimization problem to solutions with certain desirable characteristics, such as sparsity for the L1 penalty, or penalizing large weights for the L2 penalty.
The regularization term R(w) is added to the objective function, f(w) = L(w) + lambda * R(w), where lambda is the regularization parameter used to tune the amount of regularization applied.
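To make the contract concrete, here is a minimal, self-contained sketch of the interface together with an L2 penalty. This is an illustration rather than Flink's actual source: `double[]` stands in for Flink's `Vector` type, and the names `SimplePenalty` and `SimpleL2Penalty` are hypothetical.

```java
import java.io.Serializable;

// Sketch of the RegularizationPenalty contract; double[] stands in for
// Flink's Vector type (an assumption made for self-containment).
interface SimplePenalty extends Serializable {
    double[] takeStep(double[] weightVector, double[] gradient,
                      double regularizationConstant, double learningRate);

    double regLoss(double oldLoss, double[] weightVector,
                   double regularizationConstant);
}

// L2 penalty: R(w) = 0.5 * ||w||^2, giving f(w) = L(w) + lambda * 0.5 * ||w||^2.
class SimpleL2Penalty implements SimplePenalty {
    @Override
    public double[] takeStep(double[] w, double[] gradient,
                             double lambda, double learningRate) {
        // Gradient step including the L2 term's gradient lambda * w:
        // w_new = w - learningRate * (gradient + lambda * w)
        double[] updated = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            updated[i] = w[i] - learningRate * (gradient[i] + lambda * w[i]);
        }
        return updated;
    }

    @Override
    public double regLoss(double oldLoss, double[] w, double lambda) {
        // Adds lambda * 0.5 * ||w||^2 to the unregularized loss.
        double squaredNorm = 0.0;
        for (double wi : w) {
            squaredNorm += wi * wi;
        }
        return oldLoss + lambda * 0.5 * squaredNorm;
    }
}
```

Note how the regularization constant enters both methods: it scales the extra gradient term in `takeStep` and the penalty term in `regLoss`, matching f(w) = L(w) + lambda * R(w).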
Modifier and Type | Method and Description
---|---
`double` | `regLoss(double oldLoss, Vector weightVector, double regularizationConstant)`: Adds regularization to the loss value
`Vector` | `takeStep(Vector weightVector, Vector gradient, double regularizationConstant, double learningRate)`: Calculates the new weights based on the gradient and regularization penalty
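The L1 penalty mentioned above encourages sparsity. One common way to realize its `takeStep` (a hedged sketch with hypothetical names and `double[]` in place of Flink's `Vector`, not necessarily Flink's implementation) is a soft-thresholding step: apply the plain gradient step `w - learningRate * gradient`, then shrink each coordinate toward zero, clamping small values to exactly zero.

```java
// Hypothetical stand-alone L1 sketch: R(w) = ||w||_1 promotes sparsity.
class SimpleL1Penalty {
    double[] takeStep(double[] w, double[] gradient,
                      double lambda, double learningRate) {
        double shrink = learningRate * lambda;
        double[] updated = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            // Plain gradient descent step: w - learningRate * gradient
            double stepped = w[i] - learningRate * gradient[i];
            // Soft-threshold: move toward zero by `shrink`, and set
            // coordinates smaller than `shrink` to exactly 0 (sparsity).
            updated[i] = Math.signum(stepped)
                    * Math.max(0.0, Math.abs(stepped) - shrink);
        }
        return updated;
    }

    double regLoss(double oldLoss, double[] w, double lambda) {
        // f(w) = L(w) + lambda * ||w||_1
        double l1Norm = 0.0;
        for (double wi : w) {
            l1Norm += Math.abs(wi);
        }
        return oldLoss + lambda * l1Norm;
    }
}
```

The clamping to exactly zero is what produces sparse solutions, in contrast to the L2 penalty, which only shrinks weights without zeroing them out.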
`Vector takeStep(Vector weightVector, Vector gradient, double regularizationConstant, double learningRate)`

Calculates the new weights based on the gradient and regularization penalty. Weights are updated using the gradient descent step `w - learningRate * gradient`, with `w` being the weight vector.

Parameters:

- `weightVector` - The weights to be updated
- `gradient` - The gradient used to update the weights
- `regularizationConstant` - The regularization parameter to be applied
- `learningRate` - The effective step size for this iteration

`double regLoss(double oldLoss, Vector weightVector, double regularizationConstant)`

Adds regularization to the loss value.

Parameters:

- `oldLoss` - The loss to be updated
- `weightVector` - The weights used to update the loss
- `regularizationConstant` - The regularization parameter to be applied

Copyright © 2014–2018 The Apache Software Foundation. All rights reserved.