The XGBoost library for gradient boosting is designed for efficient multi-core parallel processing, allowing it to make use of all of the CPU cores in your system during training.
Here you will discover the parallel processing capabilities of the XGBoost library.
By the end, you will know how to configure XGBoost to take advantage of all of the cores on your system when training a model.
Next: Tune the Number and Size of Decision Trees with XGBoost