Implementation of the Quadratic loss function. More...

#include <quadratic.h>

Public Member Functions

double loss (const Eigen::MatrixXd &eta, const Eigen::MatrixXd &y)
    Calculates the quadratic (least-squares) loss.

double dual (const Eigen::MatrixXd &theta, const Eigen::MatrixXd &y, const Eigen::VectorXd &w)
    Computes the dual function for the quadratic loss.

Eigen::MatrixXd hessianDiagonal (const Eigen::MatrixXd &eta)
    Calculates the Hessian diagonal.

Eigen::MatrixXd preprocessResponse (const Eigen::MatrixXd &y)
    Preprocesses the response for the quadratic model.

void updateWeightsAndWorkingResponse (Eigen::MatrixXd &w, Eigen::MatrixXd &z, const Eigen::MatrixXd &eta, const Eigen::MatrixXd &y)
    Updates weights and working response for the IRLS algorithm.

Eigen::MatrixXd link (const Eigen::MatrixXd &mu)
    The link function.

Eigen::MatrixXd inverseLink (const Eigen::MatrixXd &eta)
    The inverse link function, also known as the mean function.

Eigen::MatrixXd predict (const Eigen::MatrixXd &eta)
    Returns the predicted response, which is the same as the linear predictor.

Public Member Functions inherited from slope::Loss

virtual ~Loss ()=default
    Destructor for the Loss class.

Eigen::MatrixXd residual (const Eigen::MatrixXd &eta, const Eigen::MatrixXd &y)
    Calculates the generalized residual.

virtual void updateIntercept (Eigen::VectorXd &beta0, const Eigen::MatrixXd &eta, const Eigen::MatrixXd &y)
    Updates the intercept with a gradient descent update.

virtual double deviance (const Eigen::MatrixXd &eta, const Eigen::MatrixXd &y)
    Computes the deviance, which is two times the difference between the log-likelihood of the model and that of the null (intercept-only) model.

Loss (double lipschitz_constant)
    Constructs a loss function with the specified Lipschitz constant.
Implementation of the Quadratic loss function.
The Quadratic class provides methods for computing loss, dual function, residuals, and weight updates for the Quadratic case in the SLOPE algorithm. It is particularly suited for regression problems where the error terms are assumed to follow a normal distribution.
- Note
- This class inherits from the base Loss class and implements all required virtual functions.
Definition at line 27 of file quadratic.h.
◆ Quadratic()

slope::Quadratic::Quadratic ()

inline explicit
◆ dual()
double slope::Quadratic::dual (const Eigen::MatrixXd &theta, const Eigen::MatrixXd &y, const Eigen::VectorXd &w)

virtual
Computes the dual function for the quadratic loss.
Calculates the Fenchel conjugate of the quadratic loss function
- Parameters
-
| theta | Dual variables vector (n x 1) |
| y | Observed values vector (n x 1) |
| w | Observation weights vector (n x 1) |
- Returns
- Double precision dual value
- See also
- loss() for the primal function
Implements slope::Loss.
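The Fenchel conjugate of the quadratic loss has a simple closed form, which can be sketched in plain C++ (std::vector stands in for the Eigen types so the example is self-contained). This is an unweighted illustration derived from the primal \( \frac{1}{2n} \|\eta - y\|^2 \); the library's dual() additionally takes observation weights w, so its exact expression may differ:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Fenchel conjugate of f(eta) = (1/(2n)) * ||eta - y||^2:
//   f*(theta) = sup_eta { <theta, eta> - f(eta) } = <theta, y> + (n/2) * ||theta||^2,
// obtained by zeroing the gradient (eta - y)/n - theta, i.e. eta = y + n * theta.
// Unweighted sketch only; slope::Quadratic::dual also accepts weights w.
double quadratic_conjugate(const std::vector<double>& theta,
                           const std::vector<double>& y) {
    const double n = static_cast<double>(theta.size());
    double dot = 0.0, sq = 0.0;
    for (std::size_t i = 0; i < theta.size(); ++i) {
        dot += theta[i] * y[i];   // <theta, y>
        sq += theta[i] * theta[i];
    }
    return dot + 0.5 * n * sq;
}
```

For example, with theta = (0.5) and y = (1), the supremum of \( \theta\eta - \tfrac{1}{2}(\eta - 1)^2 \) is attained at eta = 1.5 and equals 0.625, matching the closed form.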
◆ hessianDiagonal()
Eigen::MatrixXd slope::Quadratic::hessianDiagonal (const Eigen::MatrixXd &eta)

virtual
Calculates the Hessian diagonal.
- Parameters
-
| eta | Linear predictor matrix (n x m) |
- Returns
- A matrix of ones (n x m)
Implements slope::Loss.
◆ inverseLink()
Eigen::MatrixXd slope::Quadratic::inverseLink (const Eigen::MatrixXd &eta)

virtual

The inverse link function, also known as the mean function.
- Parameters
-
| eta | The linear predictor |
- Returns
- The identity function applied to eta.
Implements slope::Loss.
◆ link()
Eigen::MatrixXd slope::Quadratic::link (const Eigen::MatrixXd &mu)

virtual
The link function.
- Parameters
-
| mu | Mean of the distribution. |
- Returns
- The identity function.
Implements slope::Loss.
◆ loss()
double slope::Quadratic::loss (const Eigen::MatrixXd &eta, const Eigen::MatrixXd &y)

virtual
Calculates the quadratic (least-squares) loss.
Computes the squared error loss between predicted and actual values, normalized by twice the number of observations.
- Parameters
-
| eta | Vector of predicted values (n x 1) |
| y | Matrix of actual values (n x 1) |
- Returns
- Double precision loss value
- Note
- The loss is calculated as: \( \frac{1}{2n} \sum_{i=1}^n (\eta_i - y_i)^2 \)
Implements slope::Loss.
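The formula in the note can be sketched in plain C++ (std::vector stands in for Eigen::MatrixXd so the example stays self-contained; the library itself operates on Eigen types):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Quadratic loss as documented: (1 / (2n)) * sum_i (eta_i - y_i)^2,
// i.e. the squared error normalized by twice the number of observations.
double quadratic_loss(const std::vector<double>& eta,
                      const std::vector<double>& y) {
    double sum = 0.0;
    for (std::size_t i = 0; i < eta.size(); ++i) {
        const double r = eta[i] - y[i];  // residual for observation i
        sum += r * r;
    }
    return sum / (2.0 * static_cast<double>(eta.size()));
}
```

For eta = (1, 2) and y = (0, 4), the loss is (1 + 4) / (2 * 2) = 1.25.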
◆ predict()
Eigen::MatrixXd slope::Quadratic::predict (const Eigen::MatrixXd &eta)

virtual
Return predicted response, which is the same as the linear predictor.
- Parameters
-
| eta | The linear predictor |
- Returns
- The predicted response.
Implements slope::Loss.
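Because the quadratic (Gaussian) case uses the identity link, link(), inverseLink(), and predict() all reduce to the same map. A minimal sketch, with std::vector in place of Eigen::MatrixXd:

```cpp
#include <cassert>
#include <vector>

// Identity link: for the quadratic loss the mean and the linear
// predictor coincide, so the input is returned unchanged. This one
// function illustrates link(), inverseLink(), and predict() alike.
std::vector<double> identity_link(const std::vector<double>& mu) {
    return mu;  // eta = mu
}
```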
◆ preprocessResponse()
Eigen::MatrixXd slope::Quadratic::preprocessResponse (const Eigen::MatrixXd &y)

virtual
Preprocesses the response for the quadratic model.
Doesn't perform any transformation on the response.
- Parameters
-
| y | Response vector (n x 1) |
- Returns
- The response, unchanged
Implements slope::Loss.
◆ updateWeightsAndWorkingResponse()
void slope::Quadratic::updateWeightsAndWorkingResponse (Eigen::MatrixXd &w, Eigen::MatrixXd &z, const Eigen::MatrixXd &eta, const Eigen::MatrixXd &y)

virtual
Updates weights and working response for IRLS algorithm.
For quadratic case, weights are set to 1 and working response equals the original response. This implementation is particularly simple compared to other GLM families.
- Parameters
-
| [out] | w | Weights vector to be updated (n x 1) |
| [out] | z | Working response vector to be updated (n x 1) |
| [in] | eta | Current predictions vector (n x 1) |
| [in] | y | Matrix of observed values (n x 1) |
- Note
- For quadratic regression, this is particularly simple as weights remain constant and working response equals the original response
Reimplemented from slope::Loss.
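As the note says, the quadratic IRLS update is trivial: the weights become all ones and the working response is just the observed y. A sketch with std::vector standing in for Eigen::MatrixXd:

```cpp
#include <cassert>
#include <vector>

// Quadratic-case IRLS update: constant unit weights, and the working
// response equals the observed response; eta is unused here.
void update_weights_and_working_response(std::vector<double>& w,
                                         std::vector<double>& z,
                                         const std::vector<double>& /*eta*/,
                                         const std::vector<double>& y) {
    w.assign(y.size(), 1.0);  // w_i = 1 for all observations
    z = y;                    // z = y
}
```

Other GLM families (e.g. logistic) recompute both w and z from the current eta on every iteration; the quadratic case is the degenerate fixed-point where neither depends on eta.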
The documentation for this class was generated from the following file: quadratic.h