- I am trying to apply the great features of `Flux` to a likelihood-minimization problem. In ML language, the losses for all training data points are summed up, so there is just a single function `LLH()` to be minimized. Is there an easy hack to the `train!` function that can help me?
using Flux
using Flux.Tracker
# toy data: 1000 samples from a standard normal
const data = [randn() for _ in 1:1000]
μ = param(0.5)
σ = param(1.1)
model(x) = 1/(sqrt(2π)*σ)*exp(-(x-μ)^2/(2*σ^2))
LLH() = sum(-log.(model.(data)))
# computing the gradient also works:
gs = Tracker.gradient(LLH, Params([μ,σ]))
gs[μ], gs[σ]
# now I need a training loop that runs until convergence
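For concreteness, here is a minimal sketch of the kind of loop I mean, continuing from the snippet above. I am not sure the tracked scalars can be updated in place, so in the sketch μ and σ are stored as one-element tracked arrays; the learning rate, step cap, and tolerance are placeholder values, and `fit_llh!` is just a name I made up:

```julia
using Flux, Flux.Tracker
using Flux.Tracker: update!

# one-element tracked arrays instead of tracked scalars,
# so `update!` has an array to mutate in place
μ = param([0.5])
σ = param([1.1])
model(x) = 1/(sqrt(2π)*σ[1]) * exp(-(x - μ[1])^2/(2 * σ[1]^2))
LLH() = sum(-log.(model.(data)))

# hypothetical helper: plain gradient-descent steps until LLH stops changing
function fit_llh!(ps; η = 1e-4, maxsteps = 100_000, tol = 1e-9)
    prev = Inf
    for step in 1:maxsteps
        gs = Tracker.gradient(LLH, ps)
        for p in ps
            update!(p, -η * gs[p])                    # in-place parameter update
        end
        cur = Tracker.data(LLH())
        abs(prev - cur) < tol && return (step, cur)   # crude convergence check
        prev = cur
    end
    return (maxsteps, Tracker.data(LLH()))
end

fit_llh!(Params([μ, σ]))
Tracker.data(μ[1]), Tracker.data(σ[1])    # fitted values
```

This is of course just hand-rolled gradient descent; what I am really after is whether `train!` (or something like the `NLopt` route below) can replace it.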
- I very much like the `NLopt` minimizer with the `LD_LBFGS` algorithm. Any suggestions on how to interface to it? (a customary `apply` function?)
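Roughly, this is the kind of glue I am picturing, using NLopt.jl's functional API (`Opt`, `min_objective!`, `xtol_rel!`, `optimize`) and Flux's Tracker for the gradient; the helper names `gauss`, `nll`, `nll_nlopt` and the tolerance are placeholders of mine:

```julia
using NLopt
using Flux.Tracker

# the same model as above, written as a plain function of a
# parameter vector θ = [μ, σ] so it can be handed to NLopt
gauss(x, μ, σ) = 1/(sqrt(2π)*σ) * exp(-(x - μ)^2/(2 * σ^2))
nll(θ) = -sum(log.(gauss.(data, θ[1], θ[2])))

# NLopt-style objective: fill `grad` in place when requested and
# return the objective value; the gradient comes from Tracker's AD
function nll_nlopt(θ::Vector, grad::Vector)
    if length(grad) > 0
        g = Tracker.gradient(nll, θ)[1]
        grad .= Tracker.data(g)
    end
    return nll(θ)
end

opt = Opt(:LD_LBFGS, 2)          # two free parameters: μ and σ
xtol_rel!(opt, 1e-8)             # placeholder tolerance
min_objective!(opt, nll_nlopt)

# start from the same initial values as in the Flux snippet;
# note that nothing here keeps σ positive, so reparametrising
# in log(σ) may be safer
minf, minx, ret = NLopt.optimize(opt, [0.5, 1.1])
```

Is wrapping the tracked model in a plain θ-vector function like this the intended way, or is there a nicer hook?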
Many thanks.