
LASSO Regression Explained with Examples

by Narender Kumar | Spark By {Examples}

In this blog, we will see how Lasso regression uses regularization to overcome overfitting in linear regression models. Regularization is one of the most widely used methods for making a model generalize better to unseen data.

1. What is Lasso Regression?

Lasso regression (short for “Least Absolute Shrinkage and Selection Operator”) is a type of linear regression that is used for feature selection and regularization. It works by adding a penalty term to the standard linear regression cost function, which encourages the model to use fewer variables or features in the final regression equation. The penalty term is based on the absolute values of the coefficients in the equation, which means that some of the coefficients may be set to zero if they are deemed to be less important.

The lasso regression model can be particularly useful when dealing with high-dimensional datasets, where the number of variables or features is much larger than the number of observations. In these cases, traditional linear regression models may suffer from overfitting, where the model is too complex and fits the training data too closely, resulting in poor performance on new, unseen data. Lasso regression can help to address this problem by identifying the most important variables and reducing the complexity of the model.

Also Read: https://sparkbyexamples.com/machine-learning/linear-regression-with-examples/

1.1 Lasso Meaning

“Lasso” is a term that refers to a looped rope or cord used to capture or restrain animals, typically in a western or cowboy context. In the context of the Lasso regression model, the name refers to the way that the model “lassos” or restricts the coefficients of the linear regression equation to be smaller in absolute value. This restriction encourages the model to use fewer variables or features in the equation, resulting in a simpler, more interpretable model that is less likely to overfit the training data. The term “Lasso” was coined by Robert Tibshirani, who developed the Lasso regression model in 1996.

1.2 Mathematical equation of Lasso Regression

The Lasso regression objective can be expressed in two equivalent forms. The penalized form is:

minimize RSS + λ * ||β||₁

and the constrained form is:

minimize RSS subject to ∑ᵢ |βᵢ| ≤ t

where:

RSS is the residual sum of squares (i.e., the sum of the squared differences between the predicted and actual values)

β is the vector of regression coefficients

λ is the regularization parameter, which controls the strength of the penalty on the absolute values of the coefficients

||.||₁ represents the L1 norm, which is simply the sum of the absolute values of the coefficients

t is the maximum allowed value for the sum of the absolute values of the coefficients in the constrained form; every choice of t corresponds to some value of λ in the penalized form.
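To make the objective concrete, here is a minimal sketch (with made-up toy numbers, not from the article) that evaluates RSS + λ * ||β||₁ for a candidate coefficient vector:

import numpy as np

# Toy data: 5 observations, 2 features (illustrative values only)
X_toy = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y_toy = np.array([3.1, 2.9, 7.2, 6.8, 10.1])

beta = np.array([1.0, 1.0])   # candidate coefficient vector
lam = 0.5                     # regularization strength (lambda)

rss = np.sum((y_toy - X_toy @ beta) ** 2)   # residual sum of squares
l1_penalty = lam * np.sum(np.abs(beta))     # lambda * ||beta||_1
objective = rss + l1_penalty

print(f"RSS = {rss:.3f}, L1 penalty = {l1_penalty:.3f}, objective = {objective:.3f}")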

1.3 Example of how to use Lasso Regression in Python

Here is an example of how to use Lasso regression in Python:

Data Set Link: https://github.com/Narenderbeniwal/Spark-By-Example

# Import necessary libraries
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Load the Boston Housing dataset from the scikit-learn GitHub repository.
# The first line of this CSV holds the row/column counts, so we skip it.
# (This file was removed from recent scikit-learn versions; if the master
# URL no longer resolves, pin an older tag such as 1.0 in the URL.)
data = pd.read_csv('https://raw.githubusercontent.com/scikit-learn/scikit-learn/master/sklearn/datasets/data/boston_house_prices.csv', skiprows=1)

# Split data into predictors (X) and response variable (y)
X = data.drop('MEDV', axis=1)
y = data['MEDV']

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Initialize Lasso regression model
lasso = Lasso(alpha=0.1)

# Fit the model on the training data
lasso.fit(X_train, y_train)

# Make predictions on the test data
y_pred = lasso.predict(X_test)

# Calculate the mean squared error
mse = mean_squared_error(y_test, y_pred)
print("Mean Squared Error:", mse)

# Print the coefficients of the model
coefficients = pd.DataFrame({'Features': X_train.columns, 'Coefficients': lasso.coef_})
print(coefficients)

In this example, we load the Boston Housing dataset from a CSV hosted in the scikit-learn GitHub repository. We then split the data into predictors (X) and response variable (y) and split the data into training and testing sets.

Next, we initialize a Lasso regression model with a regularization parameter of 0.1 and fit it to the training data using the fit method. We then make predictions on the test data using the predict method and calculate the mean squared error using the mean_squared_error function from the sklearn.metrics module.

Finally, we print the coefficients of the model using a Pandas DataFrame, which shows us the relationship between each feature and the response variable.

# Output
Mean Squared Error: 27.08595667528009
Features Coefficients
0 CRIM -0.117790
1 ZN 0.044201
2 INDUS 0.001439
3 CHAS 2.439364
4 NOX -16.632210
5 RM 3.844710
6 AGE 0.011159
7 DIS -1.391767
8 RAD 0.243762
9 TAX -0.011868
10 PTRATIO -0.962174
11 B 0.009450
12 LSTAT -0.541900

As we can see from the output, Lasso regression has produced coefficients for each feature in the dataset. The coefficients can be used to understand the impact of each feature on the target variable, and they also help in feature selection. At alpha=0.1 the coefficients are shrunk toward zero, but none is exactly zero here; with a larger alpha, Lasso would drive some coefficients exactly to zero, effectively removing those features from the model.
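To see that behavior, here is a short sketch (continuing from the variables above; alpha=1.0 is just an illustrative choice of a stronger penalty) that refits the model and lists the features whose coefficients were driven to zero:

# Refit with a stronger penalty
lasso_strong = Lasso(alpha=1.0)
lasso_strong.fit(X_train, y_train)

# Features whose coefficients were shrunk exactly to zero
zeroed = X_train.columns[lasso_strong.coef_ == 0]
print("Features removed by Lasso:", list(zeroed))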

1.4 Role of alpha

To better understand the role of alpha, we plot the Lasso coefficients as a function of alpha (reusing X_train and y_train from the earlier example):

import numpy as np
import matplotlib.pyplot as plt

alphas = np.linspace(0.01, 500, 100)
lasso = Lasso(max_iter=10000)
coefs = []

# Fit a Lasso model for each alpha and record the coefficients
for a in alphas:
    lasso.set_params(alpha=a)
    lasso.fit(X_train, y_train)
    coefs.append(lasso.coef_)

ax = plt.gca()

ax.plot(alphas, coefs)
ax.set_xscale('log')
plt.axis('tight')
plt.xlabel('alpha')
plt.ylabel('Coefficients')
plt.title('Lasso coefficients as a function of alpha');

(Plot: Lasso coefficients as a function of alpha)

Remember that if alpha = 0, then the lasso gives the least squares fit, and when alpha becomes very large, the lasso gives the null model in which all coefficient estimates equal zero.

Moving from left to right in our plot, we observe that at first the Lasso models contain many predictors with large coefficient estimates. As alpha increases, the coefficient estimates shrink toward zero, and one by one they reach exactly zero.

2. Evaluation Metrics for Lasso Regression

Lasso regression is a type of linear regression that uses regularization to prevent overfitting and improve the model’s generalization performance. The evaluation metrics used for Lasso regression are similar to those used for linear regression, with the addition of metrics that evaluate the effectiveness of the regularization.

2.1 Mean Squared Error (MSE):

MSE measures the average squared difference between the predicted and actual values. A lower MSE indicates a better model fit.

2.2 Root Mean Squared Error (RMSE):

RMSE is the square root of the MSE, which gives a measure of the average magnitude of the error. A lower RMSE indicates a better model fit.

2.3 R-squared (R2):

R-squared measures the proportion of variance in the dependent variable that is explained by the independent variables. A higher R-squared indicates a better model fit.

2.4 Adjusted R-squared (Adjusted R2):

The adjusted R2 is similar to the R-squared, but it adjusts for the number of predictors in the model. This is important because adding more predictors can artificially increase the R-squared. A higher adjusted R-squared indicates a better model fit.

2.5 Regularization Parameter (α):

The regularization parameter is used to control the strength of the penalty on the model coefficients. A larger value of α results in a stronger penalty and a sparser model.

2.6 Number of Non-Zero Coefficients:

In Lasso regression, the regularization penalty can shrink some of the coefficients to exactly zero, resulting in a sparse model. The number of non-zero coefficients can be used to evaluate the effectiveness of the regularization and feature selection.

These evaluation metrics can be used to tune the model’s hyperparameters and assess its performance on a validation set. A good model should have low MSE and RMSE, high R-squared and adjusted R-squared, a suitable value of α, and a sparse model with few non-zero coefficients.
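As a quick illustration, here is a minimal sketch (reusing y_test, y_pred, X_test, and the fitted lasso from the example in section 1.3) that computes these metrics with scikit-learn and NumPy:

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_test, y_pred)

# Adjusted R2 penalizes R2 for the number of predictors p
n, p = X_test.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Sparsity: how many coefficients did Lasso keep?
n_nonzero = np.sum(lasso.coef_ != 0)

print(f"MSE: {mse:.3f}  RMSE: {rmse:.3f}")
print(f"R2: {r2:.3f}  Adjusted R2: {adj_r2:.3f}")
print(f"Non-zero coefficients: {n_nonzero} of {len(lasso.coef_)}")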

3. Advantages of LASSO regression

Lasso regression is a widely used linear regression technique that offers several advantages over traditional linear regression:

3.1 Feature Selection:

Lasso regression can be used for feature selection, where it identifies the most important predictors and eliminates the rest. This can be particularly useful when dealing with high-dimensional datasets, where the number of predictors is large and many of them may be irrelevant or redundant.
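One common way to do this in scikit-learn is via SelectFromModel, which keeps only the features with non-zero Lasso coefficients. Here is a minimal sketch reusing X_train and y_train from the example in section 1.3 (the alpha value is an illustrative choice):

from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

# Keep only the features whose Lasso coefficients are non-zero
selector = SelectFromModel(Lasso(alpha=0.5, max_iter=10000))
selector.fit(X_train, y_train)

selected = X_train.columns[selector.get_support()]
print("Selected features:", list(selected))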

3.2 Regularization:

Lasso regression includes a regularization penalty in the objective function, which helps prevent overfitting and improve the model’s generalization performance. The penalty shrinks the coefficients towards zero, resulting in a simpler model that is less prone to overfitting.

3.3 Sparsity:

Lasso regression encourages sparsity in the model by shrinking some coefficients to exactly zero. This results in a model with fewer predictors, which is easier to interpret and can lead to better prediction accuracy.

3.4 Bias-Variance Tradeoff:

Lasso regression offers a tradeoff between bias and variance by controlling the strength of the regularization penalty. By increasing the penalty, Lasso can reduce the variance in the model at the cost of increased bias, or vice versa.

3.5 Interpretable:

Lasso regression produces a model that is easy to interpret, as it includes only a subset of the predictors and assigns zero coefficients to the irrelevant ones. This can help in understanding the relationships between the predictors and the target variable.

3.6 Versatility:

Lasso regression can be applied to a wide range of regression problems, including linear and non-linear regression, as well as generalized linear models. It is also compatible with different optimization algorithms and can handle both small and large datasets.

Overall, Lasso regression is a powerful and flexible technique that offers several advantages over traditional linear regression, making it a popular choice in data analysis and machine learning applications.

4. Disadvantages of LASSO Regression

While Lasso regression is a popular and useful regression technique, it also has some limitations and drawbacks that should be considered:

4.1 Feature Selection Bias:

The Lasso regularization penalty can result in some features being completely excluded from the model, even if they may be important predictors of the target variable. This can lead to biased feature selection, especially if the true relationship between the predictors and the target is not sparse.

4.2 Parameter Instability:

Lasso can be sensitive to small changes in the input data, resulting in high variance in the estimated coefficients. This can lead to instability in the model parameters, which can make it difficult to interpret the results.

4.3 Bias-Variance Tradeoff:

The Lasso regularization penalty can reduce the variance in the model by shrinking the coefficients towards zero, but it can also introduce bias by underestimating the true coefficients. The optimal tradeoff between bias and variance depends on the specific problem and dataset.

4.4 Overfitting:

While Lasso is designed to prevent overfitting by introducing a penalty term, it is still possible for the model to overfit the training data if the regularization parameter is not properly tuned. This can lead to poor generalization performance on new data.

4.5 Selection of the regularization parameter:

The choice of the regularization parameter α is crucial for the performance of the Lasso model. However, there is no analytical solution for finding the optimal value of α, and it must be determined empirically by cross-validation. This can be time-consuming and may not always lead to the best choice of α.
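In practice, scikit-learn's LassoCV automates this cross-validation search. Here is a minimal sketch reusing X_train and y_train from the example in section 1.3 (the alpha grid and fold count are illustrative choices):

import numpy as np
from sklearn.linear_model import LassoCV

# 5-fold cross-validation over a grid of candidate alphas
lasso_cv = LassoCV(alphas=np.logspace(-3, 1, 50), cv=5, max_iter=10000)
lasso_cv.fit(X_train, y_train)

print("Best alpha found by cross-validation:", lasso_cv.alpha_)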

4.6 Multicollinearity:

Lasso can be sensitive to multicollinearity, which is when two or more predictors are highly correlated. In this case, Lasso may select one of the correlated predictors and exclude the other, even if both are important for predicting the target variable.
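A small synthetic sketch (made-up data, not from the article) illustrates this: with two nearly identical predictors, Lasso will typically keep one and shrink the other to zero.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)   # nearly a copy of x1
X_demo = np.column_stack([x1, x2])
y_demo = 3 * x1 + rng.normal(scale=0.1, size=200)

model = Lasso(alpha=0.1).fit(X_demo, y_demo)
print("Coefficients:", model.coef_)   # typically one coefficient near 3, the other at 0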

Overall, while Lasso regression is a useful technique for regularization and feature selection, it is important to carefully consider its limitations and potential drawbacks when applying it to a specific problem or dataset.

5. When to use LASSO Regression?

Lasso regression is particularly useful when dealing with high-dimensional datasets, where the number of predictors (features) is large and many of them may be irrelevant or redundant. In such cases, traditional linear regression techniques may overfit the data and fail to generalize well to new data.

Lasso regression can help in reducing the number of predictors and selecting the most important ones, thereby improving the model’s accuracy and interpretability. This is particularly useful in situations where the focus is on understanding the relationships between the predictors and the target variable, rather than simply predicting the outcome.

6. When not to use LASSO Regression?

While Lasso regression can be a powerful technique for feature selection and regularization in linear regression models, it may not always be the best choice for every situation. Here are some situations where Lasso regression may not be the best approach:

Small sample sizes, where the penalized coefficient estimates can be highly unstable

Nonlinear relationships, since Lasso assumes a linear model of the predictors

Strongly correlated predictors, where Lasso may arbitrarily keep one and drop the others

Categorical predictors with many levels, where the penalty can split related dummy variables

Data with heavy outliers, since the squared-error loss is sensitive to them

7. Conclusion

In conclusion, Lasso regression is a powerful and versatile technique for feature selection and regularization in linear regression models. It offers several advantages over traditional linear regression, including feature selection, regularization, sparsity, bias-variance tradeoff, interpretability, and versatility. Lasso regression is particularly useful when dealing with high-dimensional datasets, where the number of predictors is large and many of them may be irrelevant or redundant.

