Below are the key changes coming with the upcoming v0.17 release.
It's yet to be registered in General and is for now only available through the main branch, so changes could still be made if some design choices turn out to be problematic.
The general objective of the API changes was to improve alignment with the MLJ (and potentially LearnAPI) scope for the model constructor/learner and the `fit` method, as well as to improve consistency with NeuroTreeModels.
Breaking changes:
Model constructors (`EvoTreeRegressor`, `EvoTreeClassifier`…) now include the following arguments:

- `metric`: the evaluation metric to be tracked
- `early_stopping_rounds`
- `device`: either `:cpu` or `:gpu`
Example:

```julia
config = EvoTreeRegressor(; loss=:mse, metric=:mae, early_stopping_rounds=10, device=:gpu)
```
Deprecation of `fit_evotree` in favor of `fit`, imported from MLJModelInterface.
Note that `fit_evotree` now results in a call to `fit`.
The following legacy kwargs of `fit_evotree` will be ignored (see the migration sketch below):

- `metric`
- `return_logger`
- `early_stopping_rounds`
- `device`
```julia
m = fit_evotree(config, dtrain; target_name="y", feature_names=["x1", "x2"]) # old
m = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"])         # new
```
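These options now belong on the model constructor. A minimal migration sketch, assuming the `config` and `dtrain` from the examples above plus an evaluation table `deval` (used again in the logger example below):

```julia
# old: tracking options passed as fit_evotree kwargs (now ignored)
m = fit_evotree(config, dtrain; deval, target_name="y",
    metric=:mae, early_stopping_rounds=10, device=:gpu)

# new: tracking options set on the constructor, then a plain fit call
config = EvoTreeRegressor(; loss=:mse, metric=:mae, early_stopping_rounds=10, device=:gpu)
m = fit(config, dtrain; deval, target_name="y")
```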
Renaming of the kwargs that identify variables in the Tables / DataFrames based internal API, which were previously kwargs of `fit_evotree`:

- `fnames` => `feature_names`
- `w_name` => `weight_names`

```julia
m = fit_evotree(config, dtrain; target_name="y", feature_names=["x1", "x2"])
```
The `logger`, which tracks metrics on eval data through the iterations, is now automatically included in a fitted model's `info` field:

```julia
m = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"], deval)
logger = m.info[:logger]
```
Changes related to losses:

- `L1`/`l1` loss is no longer supported. Use `loss=:mae` in `EvoTreeRegressor` instead.
- Constructors are no longer parametric: `EvoTreeRegressor{L<:ModelType}` => `EvoTreeRegressor`. This shouldn't affect the user experience.
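A minimal sketch of both changes (the comments indicate the expected behavior):

```julia
config = EvoTreeRegressor(; loss=:mae)  # replaces the former loss=:l1
typeof(config)                          # EvoTreeRegressor, no longer parametric (EvoTreeRegressor{L})
```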
Fixes and improvements to GPU:

- Models trained through MLJ now support the `:gpu` device (passed through the constructor like `EvoTreeRegressor`, as shown above).
- Inference is now properly dispatched to `:gpu` when using: `m(dtrain; device=:gpu)`
- Both `:mae` and `:quantile` losses are now supported on GPU (`device=:gpu`).
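To illustrate GPU training through MLJ, here is a minimal sketch; it assumes a CUDA-capable device (depending on the setup, loading CUDA may also be required) and uses MLJ's `make_regression` generator only to have data to fit:

```julia
using MLJ, EvoTrees

X, y = make_regression(1_000, 5)                    # synthetic regression data
model = EvoTreeRegressor(; loss=:mse, device=:gpu)  # device is now set on the constructor
mach = machine(model, X, y)
fit!(mach)                                          # training runs on the GPU
```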