Freeze model parameters with FluxTraining.jl

Cheers,

Training a model with FluxTraining.jl requires building a Learner, and the optimizer is declared when the Learner is constructed. For instance:

using Flux, FluxTraining

opt = Flux.Adam(eta)                              # eta is the learning rate
learner = Learner(model, lossfn, optimizer=opt)   # the optimizer is fixed at construction
epoch!(learner, TrainingPhase(), trainset)
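
A full run usually just repeats that call once per epoch, optionally with a validation pass in between; a minimal sketch, assuming a validset iterator exists alongside trainset:

for _ in 1:10
    epoch!(learner, TrainingPhase(), trainset)
    epoch!(learner, ValidationPhase(), validset)
end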

Since the optimizer state is not one of the Learner's arguments, as opposed to the new explicit-state syntax of Flux.train!, I wonder whether freezing model parameters as below would work:

opt = Flux.Adam(eta)                   # opt is passed to the Learner
opt_state = Flux.setup(opt, model)     # opt_state is NOT passed to the Learner
Flux.freeze!(opt_state.encoder)        # freeze the encoder's parameters
learner = Learner(model, lossfn, optimizer=opt)
epoch!(learner, TrainingPhase(), trainset)
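
For reference, this is how freezing behaves with the explicit-state Flux.train! syntax mentioned above, where the optimizer state is an argument; a minimal, self-contained sketch, assuming a hypothetical AutoEncoder model with an encoder field (the struct, lossfn and data here are stand-ins, not part of my actual setup):

using Flux

# Hypothetical stand-in model with an `encoder` field, mirroring `opt_state.encoder` above
struct AutoEncoder
    encoder
    decoder
end
Flux.@functor AutoEncoder
(m::AutoEncoder)(x) = m.decoder(m.encoder(x))

model = AutoEncoder(Dense(4 => 2), Dense(2 => 4))
lossfn(m, x) = Flux.Losses.mse(m(x), x)

opt_state = Flux.setup(Flux.Adam(1e-3), model)  # explicit optimiser state, passed to train!
Flux.freeze!(opt_state.encoder)                 # encoder parameters are excluded from updates

data = [(rand(Float32, 4, 8),) for _ in 1:10]   # dummy batches
Flux.train!(lossfn, model, data, opt_state)     # only the decoder is updated

The open question is whether the same freeze! call has any effect when FluxTraining builds the optimizer state internally from the opt passed to the Learner.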

Thanks in advance for any advice/clarification.
