Is there any simple example code for optimizing the parameters of a neural network (e.g. one from Flux.jl) using non-gradient algorithms such as simulated annealing or genetic algorithms?
Here’s an unconventional one: if you use Lux with Optimization.jl, you can just swap in a gradient-free algorithm, like the ones wrapped by Optimization.jl's solver packages (e.g. `NelderMead` and `SAMIN` from OptimizationOptimJL, or the genetic algorithm `GA` from OptimizationEvolutionary).
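A minimal sketch of what that can look like, assuming a toy regression task; the network size, the fake data, and the choice of `NelderMead` are illustrative, not from the original post:

```julia
using Lux, Optimization, OptimizationOptimJL, ComponentArrays, Random

rng = Random.default_rng()

# Tiny network: 2 inputs -> 8 hidden -> 1 output
model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))
ps, st = Lux.setup(rng, model)
ps = ComponentArray(ps)  # flatten the parameters into one vector for the optimizer

# Toy regression data: learn y = x1 * x2 (purely illustrative)
x = rand(rng, Float32, 2, 64)
y = reshape(x[1, :] .* x[2, :], 1, :)

# The loss only needs to evaluate the model; no gradients are required
function loss(θ, _)
    ŷ, _ = model(x, θ, st)
    return sum(abs2, ŷ .- y) / size(y, 2)
end

optf = OptimizationFunction(loss)  # no AD backend needed for derivative-free solvers
prob = OptimizationProblem(optf, ps)

# NelderMead is gradient-free; SAMIN() (simulated annealing) also works,
# but it additionally requires box constraints (lb/ub) on the problem
sol = solve(prob, NelderMead(); maxiters = 1000)
@show sol.objective
```

Because the initial parameters were a `ComponentArray`, `sol.u` keeps the same layered structure and can be passed straight back into the model. To try a genetic algorithm instead, load OptimizationEvolutionary and call `solve(prob, GA())` with the same problem.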