Starting with v0.5.0, it's possible to train GBT models on the GPU, thanks to the underlying CUDA.jl package providing nimble tools for writing kernels. Speedups are modest compared to those observed with XGBoost's `gpu_hist` method, likely because optimization opportunities remain in the CPU-GPU data transfers and in the kernels themselves.
A model can be trained on GPU using `fit_evotree_gpu(params1, X, Y)` instead of the usual `fit_evotree(params1, X, Y)`.
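A minimal sketch of GPU training might look like the following. The data, the `EvoTreeRegressor` hyperparameter values, and the variable names are illustrative assumptions, not taken from the source; a CUDA-capable GPU and a working CUDA.jl installation are required.

```julia
using EvoTrees

# Illustrative toy data: 1_000 observations, 10 features
X = rand(1_000, 10)
Y = rand(1_000)

# Hyperparameters shown here are assumed values for demonstration only
params1 = EvoTreeRegressor(nrounds = 100, max_depth = 5, eta = 0.05)

# Train on GPU (vs. fit_evotree(params1, X, Y) for CPU training)
model = fit_evotree_gpu(params1, X, Y)
```

The trained model can then be used for prediction as with a CPU-trained model.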