Analyzing model capacity

When we talk about model capacity, the discussion usually boils down to two cases: underfitting and overfitting. This is closely related to the bias-variance trade-off: an underfitting model has high bias and low variance (it is not flexible enough, or too general), while an overfitting model has low bias and high variance (it is too detailed).

In their workshop at PyCon 2015 in Montréal, Andreas Mueller and Kyle Kastner showed one method to analyze model capacity: compare the training curve with the validation curve:

When the learning curves have converged to a poor (high) error, we have an underfitting model.

When the learning curves have not yet converged with our full training set, that is, the validation error remains well above the training error, it indicates an overfit model.
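One way to produce these curves is scikit-learn's `learning_curve` helper. The sketch below uses a synthetic regression dataset and an SVR model purely as illustrative assumptions; any estimator and dataset would do:

```python
# A minimal sketch of comparing training and validation learning curves
# with scikit-learn's learning_curve, on a synthetic regression task.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import learning_curve
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

train_sizes, train_scores, val_scores = learning_curve(
    SVR(kernel="rbf"), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

# Mean R^2 across folds at each training-set size. Curves that converge
# to a low score suggest underfitting; a persistent large gap between
# the two suggests overfitting.
print(train_scores.mean(axis=1))
print(val_scores.mean(axis=1))
```

Plotting both mean-score arrays against `train_sizes` gives the usual learning-curve picture described above.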

If our algorithm is underfitting, the following actions might help:

  • Add more features. It may help to bring in additional information as new features. For example, when predicting housing prices, features such as the neighborhood the house is in, the year it was built, and the size of the lot give the model new dimensions along which to differentiate houses. Adding these features to the training and test sets can improve the fit.
  • Use a more sophisticated model. Adding complexity to the model can help improve the fit. For an SVR fit, this can be accomplished by increasing the kernel complexity (generally linear << poly << rbf). Each learning technique has its own methods of adding complexity.
  • Use fewer samples. Though this will not improve the fit itself, an underfitting algorithm can attain nearly the same error with a smaller training sample. For algorithms which are computationally expensive, reducing the training sample size can lead to very large improvements in speed.
  • Decrease regularization. Regularization is a technique used to impose simplicity in some machine learning models, by adding a penalty term that depends on the characteristics of the parameters. If a model is underfitting, decreasing the regularization can lead to better results.
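Two of these remedies can be tried directly on an SVR: moving to a more expressive kernel, and decreasing regularization (in scikit-learn's SVR, a larger `C` means a weaker regularization penalty). The dataset and parameter values below are illustrative assumptions, not a recipe:

```python
# A sketch of two underfitting remedies for SVR: increasing kernel
# complexity and decreasing regularization (larger C in scikit-learn).
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    for C in (0.1, 1.0, 10.0):
        # Mean cross-validated R^2 for this kernel/regularization combo.
        score = cross_val_score(SVR(kernel=kernel, C=C), X, y, cv=5).mean()
        print(f"kernel={kernel:6s} C={C:5.1f} mean R^2={score:.3f}")
```

Watching how the validation score moves as the kernel and `C` change shows whether the model was capacity-limited to begin with.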

If our algorithm shows signs of overfitting, the following actions might help:

  • Use fewer features. Using a feature selection technique may be useful, and decrease the overfitting of the estimator.
  • Use a simpler model. Model complexity and overfitting go hand-in-hand. For example, models like random forests tend to overfit much more than linear models and SVMs.
  • Use more training samples. Adding training samples can reduce the effect of overfitting.
  • Increase regularization. Regularization is designed to prevent overfitting, so increasing it can lead to better results for overfitting models.
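Two of these remedies can likewise be combined in one pipeline: selecting fewer features with `SelectKBest`, and increasing regularization via Ridge regression's `alpha`. The pipeline and parameter values here are illustrative assumptions:

```python
# A sketch of two overfitting remedies: feature selection (SelectKBest)
# and stronger regularization (larger alpha in Ridge regression).
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Many noisy features and few samples: a setting prone to overfitting.
X, y = make_regression(n_samples=60, n_features=100, n_informative=5,
                       noise=10.0, random_state=0)

model = make_pipeline(
    SelectKBest(f_regression, k=10),  # keep only the 10 strongest features
    Ridge(alpha=10.0),                # larger alpha = stronger regularization
)
print(cross_val_score(model, X, y, cv=5).mean())
```

Fitting the feature selector inside the pipeline (rather than before the split) keeps the cross-validation estimate honest, since the selection is re-learned on each training fold.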
