One of the most widely used tools in machine learning, statistics and applied mathematics in general is regression. I say regression, but there are many regression models, and the one I will cover here is the well-known generalized linear regression. The idea behind this complex name is quite simple: the observation we try to fit is assumed to be a linear combination of input explanatory variables. Mathematically, if the observation vector is called $latex \mathbf{\widehat{y}}$, this can be formulated as

$latex \mathbf{\widehat{y}} = \sum_{k} w_k \mathbf{x_k}$

where $latex \mathbf{x_k}$ stands for the explanatory vector *k*, and the $latex w_k$ are the weights of the explanatory variables. Furthermore, as we usually need an intercept, by convention we set $latex x_0=1$ so that the weight $latex w_0$ represents the intercept.
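To make this convention concrete, here is a tiny numerical sketch (the values and variable names are mine, purely for illustration):

```python
import numpy as np

# One observation with two explanatory variables, plus the constant x_0 = 1
# that carries the intercept
x = np.array([1.0, 2.0, 3.0])    # [x_0, x_1, x_2]
w = np.array([0.5, 2.0, -1.0])   # [w_0 (intercept), w_1, w_2]

# The prediction is the linear combination sum_k w_k * x_k
y_hat = np.dot(w, x)
print(y_hat)  # 0.5*1 + 2.0*2 - 1.0*3 = 1.5
```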

# Linear Regression

As a first example, let's begin with a simple linear regression, which consists of minimizing the sum of squared errors. The error is defined as the difference between the true observed value and the value predicted by our model. Mathematically, this is expressed as

$latex \min_{\mathbf{w}} \sum_{i} \left( y_i - \sum_{k} w_k x_{i,k} \right)^2$
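Before moving to scikit-learn, note that this minimization has a well-known closed-form solution, the normal equations $latex \mathbf{w} = (X^T X)^{-1} X^T \mathbf{y}$. A minimal sketch (the data generation mirrors the example below, the seed is my own choice):

```python
import numpy as np

rng = np.random.RandomState(0)
x_1 = np.linspace(0, 10, 11)
X = np.column_stack([np.ones_like(x_1), x_1])   # columns [x_0, x_1]
y = 4*x_1 + 1 + rng.normal(0, 5, len(x_1))

# Solve the normal equations (X^T X) w = X^T y
w = np.linalg.solve(X.T @ X, X.T @ y)

# The residual sum of squares is what the minimization drives down
rss = np.sum((y - X @ w)**2)
print(w, rss)
```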

To illustrate this simple example, let's use the awesome scikit-learn library, and especially the `sklearn.linear_model` package:

```python
# General
import numpy as np
import pandas as pd
# Charting
from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource
from bokeh.io import output_notebook
# Regression
from sklearn.linear_model import LinearRegression

# For this example, I use output to notebook as I use Jupyter
output_notebook()

# Define data
x_1 = np.linspace(0, 10, 11)
x_0 = np.ones(len(x_1))
y = 4*x_1 + 1 + np.random.normal(0, 5, len(x_1))
data = pd.DataFrame(np.column_stack([x_0, x_1, y]), columns=['x_0', 'x_1', 'y'])

# Define the model, explicitly saying that our data already contains an intercept term
reg = LinearRegression(fit_intercept=False)

# Fit the data
reg.fit(data[['x_0', 'x_1']], data['y'])

# Evaluate the model on the given x
y_pred = reg.predict(data[['x_0', 'x_1']])

# Plot comparison
p = figure()
p.circle(x_1, y, legend='observation')
p.line(x_1, y_pred, legend='model', color='red')
show(p)
```

The model we use here is quite simple, it is just a line. The fit seems quite good, with fitted coefficients of $latex w_0=-0.87998$ and $latex w_1=4.54914$, but the error is not null (mean squared error = 15.57 in my example). In machine learning, a common way to reduce the residual error is to replace the model by a slightly more complex one. In our case, we simply fitted a polynomial of degree 1. What if we increase the polynomial degree? For example, let's increase the degree up to 10:

```python
import numpy as np
import pandas as pd
from bokeh.plotting import figure, show
from bokeh.layouts import gridplot
from sklearn.linear_model import LinearRegression

np.random.seed(10)
x_1 = np.linspace(0, 10, 11)
x_0 = np.ones(len(x_1))
y = 4*x_1 + 1 + np.random.normal(0, 5, len(x_1))
data = pd.DataFrame(np.column_stack([x_0, x_1, y]), columns=['x_0', 'x_1', 'y'])

# Add the higher-degree monomials x^2 ... x^12 as extra columns
predictors = ['x_0', 'x_1']
predictors.extend(['x_{}'.format(i) for i in range(2, 13)])
for i in range(2, 13):
    data['x_{}'.format(i)] = data['x_1']**i

# Keep the residual error and fitted coefficients of each degree
col = ['rss', 'intercept'] + ['coef_x_{}'.format(i) for i in range(1, 13)]
ind = ['degree_{}'.format(d) for d in [1, 2, 4, 6, 8, 10]]
coef_matrix = pd.DataFrame(index=ind, columns=col)

graphs = []
models = []
idx = 0
for degree in [1, 2, 4, 6, 8, 10]:
    X = data[predictors[0:degree+1]]
    reg = LinearRegression(fit_intercept=False)
    reg.fit(X, data['y'])
    y_pred = reg.predict(X)
    p = figure(width=300, height=300)
    p.circle(x_1, y, legend='observation')
    p.line(x_1, y_pred, legend='model', color='red')
    p.xaxis.axis_label = 'x'
    p.yaxis.axis_label = 'y'
    p.legend.location = "top_left"
    p.title.text = 'degree {}'.format(degree)
    rss = sum((y_pred - data['y'])**2)
    res = [rss, reg.intercept_]
    res.extend(reg.coef_)
    coef_matrix.iloc[idx, 0:len(res)] = res
    graphs.append(p)
    models.append(reg)
    idx += 1

grid = gridplot([[graphs[0], graphs[1]], [graphs[2], graphs[3]],
                 [graphs[4], graphs[5]]])
show(grid)
```

As we can see, the more we increase the polynomial degree of the model, the more we reduce the residual error. However, and it is particularly evident in this example, a reduction of the error is not necessarily the sign of a better model. Indeed, if we use a high-degree polynomial as our model, the error tends towards zero (and actually reaches it here, since the number of polynomial coefficients equals the number of observations). But if we add an extra observation, our model will surely exhibit a high residual error.

```python
new_x = 15
data_x = np.array([new_x**i for i in range(0, 13)])
new_y = 4*new_x + 1 + np.random.normal(0, 5, 1)
idx = 0
for degree in [1, 2, 4, 6, 8, 10]:
    y_pred = models[idx].predict(data_x[0:degree+1].reshape(1, -1))
    rss = (y_pred - new_y)**2
    idx += 1
    print("The residual error for X={} is {}".format(new_x, rss))
```

and the result is

```
The residual error for X=15 is [ 57.50465854]
The residual error for X=15 is [ 540.59208041]
The residual error for X=15 is [ 23711.11027247]
The residual error for X=15 is [ 192880.65211092]
The residual error for X=15 is [ 3.55927302e+09]
The residual error for X=15 is [ 4.83622098e+12]
```

As we can see in this example, adding an observation at x=15 leads to an error that increases with the polynomial degree. This behavior is known as overfitting: the model fits the training data very well but tends to perform poorly on new data. We say that it suffers from high variance.

To overcome this problem, we need to choose the right polynomial degree. We could, for example, split our dataset into two parts, a train set and a test set. The best model would then be the one with the least residual error on the test set. However, there is a clever method to limit the overfitting phenomenon: regularization.
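The split-based selection sketched above can be written in a few lines (the degree grid, split ratio and seed are my own illustrative choices):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
x = np.linspace(0, 10, 30)
y = 4*x + 1 + rng.normal(0, 5, len(x))

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=0)

best_degree, best_err = None, np.inf
for degree in range(1, 9):
    # Vandermonde matrix: columns are x^degree, ..., x, 1
    X_train = np.vander(x_train, degree + 1)
    X_test = np.vander(x_test, degree + 1)
    reg = LinearRegression(fit_intercept=False).fit(X_train, y_train)
    # Keep the degree with the smallest error on the held-out test set
    err = np.mean((reg.predict(X_test) - y_test)**2)
    if err < best_err:
        best_degree, best_err = degree, err

print(best_degree, best_err)
```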

# Regularization

Regularization consists of adding a penalty to a model, with the goal of preventing overfitting. It comes from the observation that when the degree of the polynomial increases (to take our first example), the weights of the monomials also increase. Therefore, to overcome overfitting, we penalize monomials with high weights. The minimization problem now becomes

$latex \min_{\mathbf{w}} \sum_{i} \left( y_i - \sum_{k} w_k x_{i,k} \right)^2 + \alpha \left\| \mathbf{w} \right\|$

where $latex \left\|.\right\|$ is typically the L1 or L2 norm, and $latex \alpha$ is a hyper-parameter that can be tuned to adjust the penalty strength (0 means no penalty, i.e. an unregularized model). The two most widely used regularization methods are L1 and L2 regularization, also called lasso and ridge regression.
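Just to fix ideas, here is what the two penalized objectives compute for a given weight vector (a toy sketch of my own, not scikit-learn's internals, which also rescale the terms):

```python
import numpy as np

def penalized_loss(X, y, w, alpha, norm='l2'):
    """Sum of squared errors plus alpha times a norm of the weights."""
    rss = np.sum((y - X @ w)**2)
    if norm == 'l1':
        penalty = np.sum(np.abs(w))   # lasso penalty
    else:
        penalty = np.sum(w**2)        # ridge penalty (squared L2 norm)
    return rss + alpha * penalty

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
w = np.array([1.0, 2.0])   # this w fits the data exactly, so rss = 0
print(penalized_loss(X, y, w, alpha=0.1, norm='l1'))  # 0 + 0.1*(1+2) = 0.3
print(penalized_loss(X, y, w, alpha=0.1, norm='l2'))  # 0 + 0.1*(1+4) = 0.5
```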

# Lasso

To quote the scikit-learn documentation:

> Lasso is useful in some contexts due to its tendency to prefer solutions with fewer parameter values, effectively reducing the number of variables upon which the given solution is dependent. For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. Under certain conditions, it can recover the exact set of non-zero weights.

```python
# General
import numpy as np
import pandas as pd
# Charting
from bokeh.charts import show, Scatter
from bokeh.models.glyphs import Line as Line_glyph
from bokeh.models import ColumnDataSource
from bokeh.layouts import gridplot
from bokeh.io import output_notebook
# Regression
from sklearn.linear_model import Lasso, LassoCV

output_notebook()

def generate_plot(data, title=None):
    source = ColumnDataSource(data=data)
    p = Scatter(data, x='x', y='y', xlabel='x', ylabel='y',
                width=400, height=400, title=title)
    line = Line_glyph(x='x', y='y_pred', line_color='blue')
    p.add_glyph(source, line)
    return p

def lasso_regression(data, predictors, alpha=None):
    # Fit the model: a plain Lasso when alpha is given, LassoCV otherwise
    if alpha is not None and predictors is not None:
        model = Lasso(alpha=alpha, normalize=True, max_iter=1e8)
    else:
        model = LassoCV(eps=0.001, n_alphas=10, fit_intercept=True,
                        normalize=True, precompute='auto', max_iter=1e8,
                        tol=0.0001, copy_X=True, cv=10, verbose=False,
                        n_jobs=8, positive=False, random_state=None)
    model.fit(data[predictors], data['y'])
    y_pred = model.predict(data[predictors])

    # Plot the results
    data_pred = data
    data_pred['y_pred'] = y_pred
    title = '{:.2e}'.format(alpha) if alpha is not None else None
    graph = generate_plot(data_pred, title=title)
    return graph, model

# Build data
np.random.seed(100)
x = np.array([i*np.pi/180 for i in range(0, 240, 4)])
y = np.cos(x) + np.random.normal(0, 0.15, len(x))
y_pred = np.cos(x)
data = pd.DataFrame(np.column_stack([x, y, y_pred]), columns=['x', 'y', 'y_pred'])
for i in range(2, 16):
    data['x_{}'.format(i)] = data['x']**i

# Initialize predictors to all 15 powers of x
predictors = ['x']
predictors.extend(['x_{}'.format(i) for i in range(2, 16)])

# Define the alpha values to test
alpha_lasso = [1e-8, 1e-5, 1e-4, 1e-3, 1e-2, 1]

# Iterate over the 6 alpha values
graphs = []
for i in range(6):
    graph, _ = lasso_regression(data, predictors, alpha_lasso[i])
    graphs.append(graph)

grid = gridplot([[graphs[0], graphs[1]], [graphs[2], graphs[3]],
                 [graphs[4], graphs[5]]])
show(grid)
```

We clearly see the effect of regularization. When we increase the penalty, we strongly limit the weights of the coefficients, up to keeping only the intercept. But how do we choose the right parameter? Here again, we need to look at the residual error computed on a set that is not the training set. Such a method is known as validation. The principle is to split the data set into three parts, say 80% for training, 10% for validation and 10% for test. The model is trained on the training set, the validation set is used to choose the hyper-parameter, and the test set is finally used to estimate the true error of the model.

However, on small data sets this approach is not efficient, as it limits the amount of data available for training. For such data sets, we can apply cross-validation instead. We split the data set into two parts, one for training and one for test. The training is then repeated several times, each time leaving out k samples of the training set: if the training set contains N samples, we perform N/k regressions, each on N-k samples, and compute the validation error on the k samples left out. The final validation error is the mean of all these per-round validation errors.
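The cross-validation loop described above can be sketched with scikit-learn's `KFold` (the fold count, model and data here are my own illustrative choices):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.RandomState(0)
x = np.linspace(0, 4, 40)
X = np.column_stack([x, x**2, x**3])
y = np.cos(x) + rng.normal(0, 0.15, len(x))

errors = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Lasso(alpha=1e-3, max_iter=100000)
    model.fit(X[train_idx], y[train_idx])
    # Validation error of this round, computed on the samples left out
    errors.append(np.mean((model.predict(X[val_idx]) - y[val_idx])**2))

# The final validation error is the mean over all rounds
print(np.mean(errors))
```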

In the scikit-learn library, there is a class that implements this approach and finds the optimal hyper-parameter: `LassoCV`. We then re-use the preceding code sample, omitting the `alpha` parameter to force the use of the `LassoCV` model:

```python
graph, model = lasso_regression(data, predictors)
graph.title.text = 'alpha: {:.2e}'.format(model.alpha_)
show(graph)
```

According to the `LassoCV` model, the optimal hyper-parameter is $latex \alpha = 4.16 \times 10^{-4}$.

# Ridge Regression

Ridge regression is quite similar to lasso, and differs only in the order of the norm used in the regularization term: lasso uses a norm of order 1 (L1), whereas ridge regression uses a norm of order 2 (L2). With L2 regularization, all the resulting weights of the model remain non-null, but some may end up with very small values so that their influence on the predicted value is low. Lasso, on the contrary, imposes sparsity on the model by setting some weights exactly to zero, which makes the model easier to interpret. A notable property of ridge regression is that it tends to equalize the weights of highly-correlated variables, whereas lasso will tend to pick one of them and discard the other.
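This contrast is easy to reproduce on two almost perfectly correlated variables (a small sketch with data of my own choosing):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.01, n)   # x2 is almost a copy of x1
X = np.column_stack([x1, x2])
y = 3*x1 + rng.normal(0, 0.1, n)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)
print(lasso.coef_)   # lasso tends to drive one of the two weights to zero
print(ridge.coef_)   # ridge tends to share the weight between the two
```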

The implementation of our example is really similar to the preceding one with lasso. All we have to do is replace `Lasso` and `LassoCV` with `Ridge` and `RidgeCV`!

```python
# General
import numpy as np
import pandas as pd
# Charting
from bokeh.charts import show, Scatter
from bokeh.models.glyphs import Line as Line_glyph
from bokeh.models import ColumnDataSource
from bokeh.layouts import gridplot
from bokeh.io import output_notebook
# Regression
from sklearn.linear_model import Ridge, RidgeCV

output_notebook()

def generate_plot(data, title=None):
    source = ColumnDataSource(data=data)
    p = Scatter(data, x='x', y='y', xlabel='x', ylabel='y',
                width=400, height=400, title=title)
    line = Line_glyph(x='x', y='y_pred', line_color='blue')
    p.add_glyph(source, line)
    return p

def ridge_regression(data, predictors, alpha=None):
    # Fit the model: a plain Ridge when alpha is given, RidgeCV otherwise
    if alpha is not None and predictors is not None:
        model = Ridge(alpha=alpha, normalize=True, max_iter=1e8)
    else:
        model = RidgeCV(alphas=np.linspace(1e-4, 1e-3, 1000),
                        fit_intercept=True, normalize=True)
    model.fit(data[predictors], data['y'])
    y_pred = model.predict(data[predictors])

    # Plot the results
    data_pred = data
    data_pred['y_pred'] = y_pred
    title = '{:.2e}'.format(alpha) if alpha is not None else None
    graph = generate_plot(data_pred, title=title)
    return graph, model

# Build data
np.random.seed(100)
x = np.array([i*np.pi/180 for i in range(0, 240, 4)])
y = np.cos(x) + np.random.normal(0, 0.15, len(x))
y_pred = np.cos(x)
data = pd.DataFrame(np.column_stack([x, y, y_pred]), columns=['x', 'y', 'y_pred'])
for i in range(2, 16):
    data['x_{}'.format(i)] = data['x']**i

# Initialize predictors to all 15 powers of x
predictors = ['x']
predictors.extend(['x_{}'.format(i) for i in range(2, 16)])

# Define the alpha values to test
alpha_ridge = [1e-8, 1e-5, 1e-4, 1e-3, 1e-2, 1]

# Iterate over the 6 alpha values
graphs = []
for i in range(6):
    graph, _ = ridge_regression(data, predictors, alpha_ridge[i])
    graphs.append(graph)

grid = gridplot([[graphs[0], graphs[1]], [graphs[2], graphs[3]],
                 [graphs[4], graphs[5]]])
show(grid)
```

With cross-validation (calling `ridge_regression` without the `alpha` argument, so that `RidgeCV` is used, as we did with `LassoCV`), the resulting model is slightly different from the one obtained with lasso regularization. If we now look at the weight coefficients:

| Degree | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| L1 | -0.569 | -0.101 | 0 | 0 | 0 | 5.813e-04 | 0 | 0 |
| L2 | -0.292e-01 | -0.225e-01 | -0.011e-02 | 4.006e-03 | 1.526e-03 | 3.389e-04 | 5.571e-05 | 6.083e-06 |

| Degree | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|
| L1 | 0 | 0 | 0 | 0 | 0 | 0 | -2.725e-10 |
| L2 | -6.798e-08 | -2.72e-07 | -9.600e-08 | -2.353e-08 | -4.425e-09 | -5.443e-10 | 2.311e-11 |

# Conclusion

In this article, we saw two regularization techniques and the importance of always using regularization when we try to fit a model. We also saw that both techniques, although quite similar, give very different results. Indeed, whereas the ridge technique keeps all the explanatory variables, lasso results in a sparse model which is often easier to interpret. However, lasso performs less well in the case of highly correlated variables, as it tends to produce high sparsity in the result. And that is exactly what we saw in our example when we tried to fit a cosine with a polynomial: the variables are highly correlated, so the resulting model has many zero-valued weight coefficients. Along with ridge and lasso, Elastic Net is another useful technique which combines both L1 and L2 regularization. It allows learning a sparse model while keeping the properties of ridge.
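A minimal sketch of Elastic Net on the same kind of polynomial features (the `alpha` and `l1_ratio` values here are arbitrary choices of mine, not tuned):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
x = np.linspace(0, 4, 60)
X = np.column_stack([x**i for i in range(1, 6)])   # powers x ... x^5
y = np.cos(x) + rng.normal(0, 0.15, len(x))

# l1_ratio balances the two penalties: 1.0 is pure lasso, 0.0 pure ridge
model = ElasticNet(alpha=1e-3, l1_ratio=0.5, max_iter=100000)
model.fit(X, y)
print(model.coef_)
```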

# End note

We only briefly touched on the subject, but as always in machine learning, we need several datasets to train the model, validate it and test it. We discussed this a little in this post, and we quickly saw how to deal with small datasets using cross-validation. This could be the subject of a next post.