Sure - ValueShapes has direct support for constant/non-constant parameters:
```julia
using ValueShapes, Parameters, LinearAlgebra, Optim

vs = NamedTupleShape(
    a = ScalarShape{Real}(),
    b = ConstValueShape([1,2,3,4]),  # b is fixed and contributes no degrees of freedom
    c = ArrayShape{Real}(3)
)

function f(v::NamedTuple)
    @unpack a, b, c = v
    (a-2)^2 + norm(b.-3)^2 + norm(c.-4)^2
end

# Some tests:
totalndof(vs) == 4                  # only a (1 dof) and c (3 dof) are free
x_guess = zeros(totalndof(vs))
v_guess = vs(x_guess)[]             # reshape the flat vector into a NamedTuple
(vs >> f)(x_guess)

optresult = let vs = vs
    Optim.optimize(vs >> f, x_guess, LBFGS(), autodiff=:forward)
end

vs(Optim.minimizer(optresult))[] == (
    a = 2,
    b = [1, 2, 3, 4],
    c = [4, 4, 4]
)
```
So this is using ForwardDiff (which only supports flat vectors) to optimize a function defined on NamedTuples, while keeping parameter `b` constant. This also reduces the dimensionality of the problem: the flat real vectors `x` have length 4 instead of 8.
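To make that concrete, here is a small sketch (values chosen arbitrarily) of how `vs` maps a length-4 flat vector to the full NamedTuple:

```julia
# One entry for a, three for c; b is injected as the constant [1,2,3,4]:
x = [0.5, 1.0, 2.0, 3.0]
vs(x)[]   # == (a = 0.5, b = [1, 2, 3, 4], c = [1.0, 2.0, 3.0])
```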
If you want random starting points and have a prior distribution for the components of `v`, using `NamedTupleDist` makes this very natural:
```julia
using ValueShapes, Parameters, LinearAlgebra, Distributions, Optim

prior = NamedTupleDist(
    a = Normal(),
    b = ConstValueDist([1,2,3,4]),  # constant "parameter", as a degenerate distribution
    c = MvNormal(float(Diagonal([2,4,3])))
)

vs = varshape(prior)

function f(v::NamedTuple)
    @unpack a, b, c = v
    (a-2)^2 + norm(b.-3)^2 + norm(c.-4)^2
end

# Some tests:
totalndof(vs) == 4
x_guess = rand(unshaped(prior))   # random flat starting point (length 4)
(vs >> f)(x_guess)
v_guess = rand(prior)             # random shaped starting point
f(v_guess)

optresult = let vs = vs
    Optim.optimize(vs >> f, x_guess, LBFGS(), autodiff=:forward)
end
```
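As in the first example, the flat minimizer can be reshaped back into a NamedTuple with the same pattern:

```julia
vs(Optim.minimizer(optresult))[]   # ≈ (a = 2, b = [1, 2, 3, 4], c = [4, 4, 4])
```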
I plan to replace `vs >> f` with `unshaped(f, vs)`; it'll be clearer.
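Conceptually, `vs >> f` just precomposes `f` with the shape; a rough sketch (not the actual implementation):

```julia
# Reshape the flat vector into a NamedTuple, then apply f:
g = x -> f(vs(x)[])
g(x_guess) == (vs >> f)(x_guess)
```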
BAT.jl is able to do variate transformations using such priors, so that the optimizer can run in an unbounded space even if parameters have bounds (as specified by their prior distributions). I'm planning to spin that off as a separate package sometime soonish.
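A hypothetical sketch of that idea (not BAT.jl's actual API): map a bounded parameter to all of ℝ through its prior's CDF followed by the standard-normal quantile, so the optimizer never has to see the bounds:

```julia
using Distributions

d = Uniform(0, 2)                                  # a bounded prior
to_unbounded(x) = quantile(Normal(), cdf(d, x))    # (0, 2) -> ℝ
to_bounded(y)   = quantile(d, cdf(Normal(), y))    # ℝ -> (0, 2), the inverse
# An optimizer can now minimize over y ∈ ℝ without violating the bounds:
h(y) = (to_bounded(y) - 1.5)^2
```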