Hello Folks:
Code snippets adapted from @ablaom at the Alan Turing Institute.
Using Pluto.jl on Windows with Julia v1.6.x.
I am evaluating an EvoTrees.jl model with MLJ.
First, I load and instantiate the gradient tree boosting model:
Booster = @load EvoTreeRegressor  # load the model type
booster = Booster()               # instantiate with default hyperparameters
Second, I wrap the model to make it self-iterating:
itr_boost = IteratedModel(model=booster,
                          resampling=Holdout(fraction_train=0.8),
                          ...)  # remaining keyword arguments elided here
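For completeness, the elided keyword arguments are the iteration controls and training measure, roughly as in the tutorial I am adapting (the exact values in my notebook may differ slightly):

itr_boost = IteratedModel(model=booster,
                          resampling=Holdout(fraction_train=0.8),
                          # illustrative iteration controls:
                          controls=[Step(2),            # add 2 iterations per step
                                    NumberSinceBest(3), # stop 3 steps after best loss
                                    NumberLimit(300)],  # hard cap on total iterations
                          measure=l1,
                          retrain=true)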
Then I combine it with the categorical feature encoder in a pipeline:
pipe = @pipeline ContinuousEncoder itr_boost
Next, I define the hyperparameter range (the nested field name is the one @pipeline generates for the iterated model):
max_depth_range = range(pipe,
                        :(deterministic_iterated_model.model.max_depth),
                        lower=1,
                        upper=30)
I wrap the pipeline in an optimization container to make it self-tuning:
self_tuning_pipe = MLJ.TunedModel(model=pipe,
                                  tuning=RandomSearch(),
                                  ranges=max_depth_range,
                                  resampling=CV(nfolds=3, rng=456),
                                  measure=l1,
                                  acceleration=CPUThreads(),
                                  n=5)
I bind the data with:
X, y = @load_reduced_ames
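(For context, the reduced Ames table mixes continuous and categorical columns, which is why the pipeline needs the ContinuousEncoder; this can be checked with MLJ's re-exported schema function:)

schema(X)  # shows each column's scientific type (Continuous, Multiclass, etc.)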
Then I wrap self_tuning_pipe in a machine:
EVO = machine(self_tuning_pipe, X, y)
Lastly, when I attempt to evaluate the wrapped self-tuning pipe with
MLJ.evaluate!(EVO,
              measures=[l1, l2],
              resampling=CV(nfolds=5, rng=123),
              acceleration=CPUThreads(),
              verbosity=2)
the call fails with the following error:
Distributed.ProcessExitedException(10)
I have tried decreasing the number of folds and making other parameter adjustments, but I hit the same issue: the processes exit and my Pluto session terminates.
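For reference, one representative variant I tried (illustrative only; the exact settings varied across attempts) was:

MLJ.evaluate!(EVO,
              measures=[l1],                      # single measure
              resampling=CV(nfolds=2, rng=123),   # fewer folds
              acceleration=CPUThreads(),
              verbosity=2)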
Is this a parameter issue, a performance issue, or something else? How would you approach a solution?
Thanks,