Hi, there are good examples of converting NamedTuples to Dicts and vice versa, but for e.g. Optim’s `optimize` I would like to go from a single vector to a NamedTuple. I was wondering if this case has a proven solution?
I came up with:
function vec2nt(v::Vector, nt::T) where {T <: NamedTuple}
    # compute the index range each field of nt occupies in v
    pv = Vector{UnitRange{Int}}(undef, length(nt))
    start = 1
    for (indx, key) in enumerate(keys(nt))
        len = length(nt[key])
        pv[indx] = start:(start + len - 1)
        start += len
    end
    # slice v by those ranges and rebuild a NamedTuple with the same keys
    t = [v[r] for r in pv]
    return NamedTuple{keys(nt)}(Tuple(t))
end
v = [6, 7, 8, 9, 10, 11, 12]
nt = (a = [1, 2], b = [3, 4, 5], c = [6, 7])
vec2nt(v, nt)   # (a = [6, 7], b = [8, 9, 10], c = [11, 12])
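For the opposite direction (NamedTuple of arrays back to a flat vector), a minimal sketch; `nt2vec` is just a hypothetical helper name:

```julia
# flatten a NamedTuple of arrays back into a single vector
# (the inverse of vec2nt above, with fields concatenated in key order)
nt2vec(nt::NamedTuple) = reduce(vcat, values(nt))

nt = (a = [1, 2], b = [3, 4, 5], c = [6, 7])
nt2vec(nt)   # [1, 2, 3, 4, 5, 6, 7]
```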
Tamas, you probably recognized the context of my question. I’m looking into ways of passing the problem, p(theta), to Optim’s `optimize` to compute the maximum a posteriori (MAP) estimate. Slowly I’m unraveling the steps in the DynamicHMC ‘recipe’ I seem to be using over and over, and understanding the `as` statement is on my list. Very neat.
It’s probably not the ideal interface. I got really carried away with the possibility of nested transformations (e.g. a vector of NamedTuples of heterogeneous elements), but now I am of the opinion that in practice, a NamedTuple of Arrays and scalars would do just fine (basically isomorphic to what is handled by, say, Stan). One would just specify the dimensions, e.g.
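Something along these lines (a sketch assuming TransformVariables.jl’s `as(Array, dims...)` syntax; the particular dimensions, which sum to 13, are purely illustrative):

```julia
using TransformVariables

# identity transformations for arrays of fixed size; total dimension 2 + 4 + 7 = 13
t = as((a = as(Array, 2), b = as(Array, 4), c = as(Array, 7)))
dimension(t)                        # 13
θ = transform(t, collect(1.0:13.0))
# θ.a, θ.b, θ.c are vectors of length 2, 4 and 7
```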
that would just transform 13-element vectors to NamedTuples like above.
Also, I plan to separate the vector deconstruction and the actual nonlinear transformation interface, and add SArrays etc. But I need to think more about this. Suggestions welcome!
I noticed that when I try to use p(theta) in Optim’s `optimize`, I indeed have to play tricks with σ (i.e. square it to prevent it from going below 0), e.g. in m12.6d1.jl.
But that’s probably what you mean by your last sentence above. The transformation component can be directly specified in
P = TransformedLogDensity(problem_transformation(p), p)
There may be other reasons to use an intermediate function, but in simple cases, couldn’t it be a direct as()?
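In that case the positivity constraint on σ would live in the transformation itself rather than inside p, along these lines (a sketch assuming the TransformVariables.jl / LogDensityProblems.jl API; `p` here is a toy log density, not the one from the thread):

```julia
using TransformVariables, LogDensityProblems

# toy log density on a NamedTuple (μ, σ); σ > 0 is guaranteed by the
# transformation below, so no squaring tricks are needed inside p
p(θ) = -0.5 * abs2(θ.μ / θ.σ) - log(θ.σ)

t = as((μ = asℝ, σ = asℝ₊))   # σ is mapped from ℝ via exp, so it stays positive
P = TransformedLogDensity(t, p)
# an optimizer can now work on an unconstrained 2-element vector
```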