Cheers. I’m using FluxTraining.jl. I wonder whether there is some kind of warmup option that would allow training for n epochs without early stopping. If such an option is not implemented, any hint on how to do it at the user-code level? Thanks in advance.
Yes. You should be able to use the `EarlyStopping(criteria...; kwargs...)` constructor (ref. https://fluxml.ai/FluxTraining.jl/dev/references/FluxTraining.EarlyStopping?id=sourcefiles/FluxTraining/src/callbacks/earlystopping.jl&focus=1) with the `Warmup` criterion from EarlyStopping.jl (https://github.com/JuliaAI/EarlyStopping.jl).
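Something along these lines might work (a minimal, untested sketch: the model, data, loss, and epoch counts are placeholders, and the exact `Warmup` signature, keyword vs. positional `n`, should be double-checked against the EarlyStopping.jl docs):

```julia
using Flux, FluxTraining
import EarlyStopping: Patience, Warmup

# Toy model and data, just to make the sketch self-contained (placeholders).
xs, ys = rand(Float32, 4, 64), rand(Float32, 1, 64)
trainiter = Flux.DataLoader((xs, ys), batchsize = 16)
validiter = Flux.DataLoader((xs, ys), batchsize = 16)
model = Chain(Dense(4 => 8, relu), Dense(8 => 1))

# Patience(5): stop after 5 consecutive increases of the validation loss.
# Warmup(...): ignore the wrapped criterion for the first n loss updates.
# NOTE: the signature here is assumed; EarlyStopping.jl may take n
# positionally instead, so check its docs.
cb = FluxTraining.EarlyStopping(Warmup(Patience(5); n = 10))

learner = Learner(model, Flux.Losses.mse; callbacks = [cb])
fit!(learner, 100, (trainiter, validiter))
```

If the callback feeds the criterion one validation loss per epoch, `n = 10` should correspond to roughly 10 warmup epochs before early stopping can trigger.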