I have a composite model with a parameter (alpha) that generates a probability distribution; the features and samples are then weighted according to this distribution. I also want the samples to be drawn according to the same parameter, and I want to do all of this with a self-tuning model.
I’ve implemented this as a learning network as in this example in the MLJ docs. The structure is:
X,y → transformers → regressor → invert transform
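Schematically, the network is built like the docs example. This is only a sketch of my setup; Standardizer, UnivariateBoxCoxTransformer and RidgeRegressor stand in for my actual transformers and regressor:

using MLJ
RidgeRegressor = @load RidgeRegressor pkg=MLJLinearModels

Xs = source(X)
ys = source(y)

stand = machine(Standardizer(), Xs)
W = transform(stand, Xs)              # transform the features

box = machine(UnivariateBoxCoxTransformer(), ys)
z = transform(box, ys)                # transform the target

ridge = machine(RidgeRegressor(), W, z)
ẑ = predict(ridge, W)                 # predict on the transformed scale
ŷ = inverse_transform(box, ẑ)         # invert the target transform

fit!(ŷ)                               # trains all machines in the network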
The composite model has a parameter alpha that generates the weights for the transform. For the self-tuning, I also have a ResamplingStrategy that depends on the same alpha. I've tried embedding the resampler as a field in the model and passing it to TunedModel:
# LS <: ResamplingStrategy
r = range(lkrr_model, :(LS.alpha), lower=1, upper=10, scale=:log);
self_tuning_regressor = TunedModel(model=lkrr_model,
                                   tuning=Grid(resolution=5),
                                   resampling=lkrr_model.LS,
                                   repeats=2,
                                   range=r,
                                   measure=rms);
tuned_kregressor = machine(self_tuning_regressor,K,y)
MLJ.fit!(tuned_kregressor)
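For reference, the resampler looks schematically like this (a simplified sketch; LeverageSampler and its fields are stand-ins for my actual type, and the real alpha-dependent sampling is replaced by a placeholder split):

import MLJBase

mutable struct LeverageSampler <: MLJBase.ResamplingStrategy
    alpha::Float64    # the same alpha that generates the model's weights
    nfolds::Int
end

function MLJBase.train_test_pairs(LS::LeverageSampler, rows)
    # in my real code, rows are subsampled with probabilities derived
    # from LS.alpha; here just a placeholder partition
    train, test = MLJBase.partition(rows, 0.7, shuffle=true)
    return [(train, test)]
end

An instance of this strategy is what lkrr_model holds in its LS field.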
However, every time alpha is changed, the change is not reflected in the resampler. I guess this is because the tuning routine evaluates a clone of the model with the changed parameter. I can alter the resampler's parameters at each call of train_test_pairs, and the values are saved just fine, but I want the resampler to be linked to the model so that I can autotune.
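To illustrate what I mean (assuming my guess about cloning is right):

# mutating the strategy by hand works, and the value sticks:
ls = LeverageSampler(2.0, 5)
ls.alpha = 3.0
MLJBase.train_test_pairs(ls, 1:100)    # sees alpha = 3.0

# ...but tuning appears to evaluate a copy of the model, so the strategy
# object passed as resampling= never sees the clone's new alpha:
clone = deepcopy(lkrr_model)
clone.LS.alpha = 5.0                   # lkrr_model.LS.alpha is unchanged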
I have also tried setting the probabilities as weights and passing them around with a machine. But still, when calling train_test_pairs(ls, rows, X, y, w), the X, y and w are the ones associated with the self-tuning machine (tuned_kregressor), prior to any transformation. If I could access the data after the transformation, then it would be OK.
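For concreteness, the weighted method I'm calling looks schematically like this (again a simplified sketch; the sampling body is a placeholder):

using StatsBase

function MLJBase.train_test_pairs(LS::LeverageSampler, rows, X, y, w)
    # X, y and w arrive as the raw data bound to tuned_kregressor,
    # not the data after the network's transformations
    ntrain = floor(Int, 0.7 * length(rows))
    train = sample(rows, weights(w[rows]), ntrain; replace=false)
    test = setdiff(rows, train)
    return [(train, test)]
end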
How do I auto-tune the resampler, or simply access the model's fields from the resampler?