LASSO and Elastic Net Computation

Batch Processing
LASSO and Elastic Net algorithms follow the general workflow described in Regression Usage Model.
Training
For a description of common input and output parameters, refer to Regression Usage Model. Both LASSO and Elastic Net algorithms have the following input parameters in addition to the common input parameters:
| Input ID | Input |
|---|---|
| `weights` | Optional input. Pointer to the \(1 \times n\) numeric table with weights of samples. The input can be an object of any class derived from `NumericTable` except for `PackedTriangularMatrix`, `PackedSymmetricMatrix`, and `CSRNumericTable`. By default, all weights are equal to 1. |
| `gramMatrix` | Optional input. Pointer to the \(p \times p\) numeric table with the pre-computed Gram matrix. The input can be an object of any class derived from `NumericTable` except for `CSRNumericTable`. By default, the table is set to an empty numeric table. It is used only when the number of features is less than the number of observations. |
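The Gram matrix expected by this input is simply \(X^T X\) computed over the feature columns of the training data. A minimal sketch of pre-computing it (plain Python with NumPy; illustrative only, not the oneDAL API):

```python
import numpy as np

# Hypothetical training data: n = 4 samples, p = 2 features
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])

# The p x p Gram matrix is X^T X; supplying it as the optional input
# lets the training algorithm reuse it instead of recomputing it.
gram = X.T @ X
print(gram.shape)  # (2, 2)
```
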
Choose the appropriate tab to see the parameters used in LASSO and Elastic Net batch training algorithms:
| Parameter | Default Value | Description |
|---|---|---|
| `algorithmFPType` | `float` | The floating-point type that the algorithm uses for intermediate computations. Can be `float` or `double`. |
| `method` | `defaultDense` | The computation method used by LASSO regression. The only training method supported so far is the default dense method. |
| `interceptFlag` | `true` | A flag that indicates whether or not to compute the intercept term \(\beta_0\). |
| `lassoParameters` | A numeric table of size \(1 \times 1\) that contains the default LASSO parameter equal to \(0.1\). | \(L_1\) coefficients \(\lambda_i\). A numeric table of size \(1 \times k\) (where \(k\) is the number of dependent variables) or \(1 \times 1\). If the table is of size \(1 \times 1\), the same value is used for all dependent variables; if it is of size \(1 \times k\), each value applies to the corresponding dependent variable. This parameter can be an object of any class derived from `NumericTable`, except for `PackedTriangularMatrix`, `PackedSymmetricMatrix`, and `CSRNumericTable`. |
| `optimizationSolver` | Coordinate Descent solver | Optimization procedure used at the training stage. |
| `resultsToCompute` | \(0\) | The 64-bit integer flag that specifies which extra characteristics of LASSO regression to compute. Provide `computeGramMatrix` to request computation of the Gram matrix. |
| `dataUseInComputation` | `doNotUse` | A flag that indicates permission to overwrite input data. Provide `doNotUse` to restrict or `doUse` to allow modification of input data. |
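For intuition about the role of the \(L_1\) coefficients \(\lambda_i\) and the Coordinate Descent solver, here is a toy LASSO coordinate-descent loop with the classic soft-thresholding update (plain Python with NumPy; the function names are hypothetical and this is a simplified sketch, not the library's solver):

```python
import numpy as np

def soft_threshold(z, lam):
    # S(z, lam) = sign(z) * max(|z| - lam, 0): the closed-form
    # one-dimensional LASSO solution used by coordinate descent
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    # Minimizes (1/2n) * ||y - X beta||^2 + lam * ||beta||_1
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual with feature j's contribution removed
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # only feature 0 matters
beta = lasso_cd(X, y, lam=0.1)
print(np.round(beta, 2))  # feature 0 dominant, others shrunk toward zero
```

Larger values of \(\lambda\) shrink more coefficients exactly to zero, which is the feature-selection property the table's `lassoParameters` entry controls.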
| Parameter | Default Value | Description |
|---|---|---|
| `algorithmFPType` | `float` | The floating-point type that the algorithm uses for intermediate computations. Can be `float` or `double`. |
| `method` | `defaultDense` | The computation method used by Elastic Net regression. The only training method supported so far is the default dense method. |
| `interceptFlag` | `true` | A flag that indicates whether or not to compute the intercept term \(\beta_0\). |
| `penaltyL1` | A numeric table of size \(1 \times 1\) that contains the default Elastic Net parameter equal to \(0.5\). | \(L_1\) regularization coefficient (`penaltyL1` is \(\lambda_1\) as described in Elastic Net). A numeric table of size \(1 \times k\) (where \(k\) is the number of dependent variables) or \(1 \times 1\). If the table is of size \(1 \times 1\), the same value is used for all dependent variables; if it is of size \(1 \times k\), each value applies to the corresponding dependent variable. This parameter can be an object of any class derived from `NumericTable`, except for `PackedTriangularMatrix`, `PackedSymmetricMatrix`, and `CSRNumericTable`. |
| `penaltyL2` | A numeric table of size \(1 \times 1\) that contains the default Elastic Net parameter equal to \(0.5\). | \(L_2\) regularization coefficient (`penaltyL2` is \(\lambda_2\) as described in Elastic Net). A numeric table of size \(1 \times k\) (where \(k\) is the number of dependent variables) or \(1 \times 1\). If the table is of size \(1 \times 1\), the same value is used for all dependent variables; if it is of size \(1 \times k\), each value applies to the corresponding dependent variable. This parameter can be an object of any class derived from `NumericTable`, except for `PackedTriangularMatrix`, `PackedSymmetricMatrix`, and `CSRNumericTable`. |
| `optimizationSolver` | Coordinate Descent solver | Optimization procedure used at the training stage. |
| `resultsToCompute` | \(0\) | The 64-bit integer flag that specifies which extra characteristics of Elastic Net regression to compute. Provide `computeGramMatrix` to request computation of the Gram matrix. |
| `dataUseInComputation` | `doNotUse` | A flag that indicates permission to overwrite input data. Provide `doNotUse` to restrict or `doUse` to allow modification of input data. |
Note

Common combinations of Elastic Net regularization parameters [Friedman2010] might be computed as shown below:

- compromise between L1 (LASSO penalty) and L2 (ridge-regression penalty) regularization:

  \[\text{alpha} = \frac{\text{penaltyL1}}{\text{penaltyL1} + \text{penaltyL2}}\]

- control of the full regularization strength:

  \[\text{lambda} = \text{penaltyL1} + \text{penaltyL2}\]
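The mapping can also be inverted: given alpha and lambda, \(\text{penaltyL1} = \text{alpha} \cdot \text{lambda}\) and \(\text{penaltyL2} = (1 - \text{alpha}) \cdot \text{lambda}\). A quick numerical check (plain Python; the variable names mirror the formulas in the note):

```python
penaltyL1, penaltyL2 = 0.3, 0.1

# Friedman-style parametrization derived from the two penalties
alpha = penaltyL1 / (penaltyL1 + penaltyL2)  # L1/L2 mixing weight
lam = penaltyL1 + penaltyL2                  # overall regularization strength

# Round trip back to the individual penalty coefficients
assert abs(alpha * lam - penaltyL1) < 1e-12
assert abs((1.0 - alpha) * lam - penaltyL2) < 1e-12
```
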
In addition, both LASSO and Elastic Net algorithms have the following optional results:
| Result ID | Result |
|---|---|
| `gramMatrix` | Pointer to the computed Gram matrix of size \(p \times p\). |
Prediction
For a description of the input and output, refer to Regression Usage Model.
At the prediction stage, LASSO and Elastic Net algorithms have the following parameters:
| Parameter | Default Value | Description |
|---|---|---|
| `algorithmFPType` | `float` | The floating-point type that the algorithm uses for intermediate computations. Can be `float` or `double`. |
| `method` | `defaultDense` | Default performance-oriented computation method, the only method supported by regression-based prediction. |
Examples
C++: lasso_reg_dense_batch.cpp
Java*: LassoRegDenseBatch.java
C++: elastic_net_dense_batch.cpp
Java*: ElasticNetDenseBatch.java
Performance Considerations

For better performance when the number of samples is larger than the number of features in the training data set, certain coordinates of the gradient and Hessian are computed from the corresponding components of the Gram matrix. When the number of features is larger than the number of observations, the cost of each iteration via the Gram matrix depends on the number of features, so in that case the computation is performed via residual update [Friedman2010].
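The two code paths compute the same quantity. In the sketch below (plain Python with NumPy, independent of the oneDAL implementation), the gradient of the least-squares term obtained through the pre-computed Gram matrix, \((X^T X)\beta - X^T y\), matches the one obtained from the residual \(r = y - X\beta\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5            # more samples than features: the Gram path pays off
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
beta = rng.normal(size=p)

# Gram path: precompute once (O(n * p^2)); each coordinate then costs O(p)
gram = X.T @ X
Xty = X.T @ y
grad_via_gram = gram @ beta - Xty

# Residual-update path: each coordinate costs O(n)
residual = y - X @ beta
grad_via_residual = -X.T @ residual

assert np.allclose(grad_via_gram, grad_via_residual)
```
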
To get the best overall performance for LASSO and Elastic Net training, do the following:

If the number of features is less than the number of samples, use a homogeneous numeric table.
If the number of features is greater than the number of samples, use the SOA layout rather than the AOS layout.