Overfitting is a problem with sophisticated non-linear learning algorithms like gradient boosting.
In this post you will discover how to use early stopping to limit overfitting with XGBoost in Python.
By the end you will know:
- About early stopping as an approach to reducing overfitting of training data.
- How to monitor the performance of an XGBoost model during training and plot the learning curve.
- How to use early stopping to halt the training of an XGBoost model at an optimal epoch (see the sketch after this list).
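To make these ideas concrete, below is a minimal sketch of monitoring a validation metric, stopping training early, and plotting the learning curve. It uses XGBoost's scikit-learn wrapper; the synthetic dataset, the logloss metric, and all parameter values are illustrative assumptions, not taken from this post. Note that in older XGBoost releases, `eval_metric` and `early_stopping_rounds` are passed to `fit()` rather than the constructor.

```python
# A minimal sketch of early stopping with XGBoost (illustrative data and
# parameters; adjust for your own dataset and XGBoost version).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import matplotlib.pyplot as plt

# Illustrative synthetic binary classification data
X, y = make_classification(n_samples=1000, n_features=20, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=7)

# Monitor log loss on a held-out set and stop when it fails to improve
# for 10 consecutive boosting rounds (XGBoost >= 1.6 constructor API)
model = XGBClassifier(
    n_estimators=500,
    eval_metric="logloss",
    early_stopping_rounds=10,
)
model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)

# Plot the learning curve from the recorded evaluation results
results = model.evals_result()
logloss = results["validation_0"]["logloss"]
plt.plot(range(len(logloss)), logloss, label="test")
plt.xlabel("epoch")
plt.ylabel("log loss")
plt.legend()
plt.show()
```

When early stopping fires, the model records the best iteration, so predictions automatically use the best-performing number of trees rather than the full 500.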