Vector to NamedTuple?

Hi, there are good examples of converting NamedTuples to Dicts and vice versa, but for e.g. Optim’s `optimize` I would like to go from a single vector to a NamedTuple. I was wondering if this case has a proven solution?

I came up with:

```julia
function vec2nt(v::Vector, nt::T) where {T <: NamedTuple}
    # Compute the index range each field occupies in v, from the field lengths.
    pv = Vector{UnitRange{Int64}}(undef, length(nt))
    indx = 0; start = 1
    for key in keys(nt)
        indx += 1
        pv[indx] = start:start - 1 + length(nt[key])
        start += length(nt[key])
    end
    # Slice v accordingly and rebuild a NamedTuple with the same keys.
    t = [v[pv[i]] for i in 1:length(pv)]
    NamedTuple{keys(nt)}(Tuple(t))
end

v = [6, 7, 8, 9, 10, 11, 12]
nt = (a = [1, 2], b = [3, 4, 5], c = [6, 7])

vec2nt(v, nt)
```

```julia
julia> v = [6,7,8,9,10,11,12];

julia> nt = (a=[1,2], b=[3,4,5], c=[6,7]);

julia> NamedTuple{keys(nt)}(map.(i -> v[i], values(nt)))
(a = [6, 7], b = [8, 9, 10], c = [11, 12])
```

Which has the added benefit of supporting non-consecutive and repeated indices:

```julia
julia> nt = (a=[1,7], b=[5,4,3], c=[1,2,1,2]);

julia> NamedTuple{keys(nt)}(map.(i -> v[i], values(nt)))
(a = [6, 12], b = [10, 9, 8], c = [6, 7, 6, 7])
```

Thank you. Close, but I should have provided more realistic inputs:

```julia
nt = (β = [1.0, 0.25], α = rand(10), s = [0.2])
v = vcat([1.0, 0.25], rand(10), [0.2])
NamedTuple{keys(nt)}(map.(i -> v[i], values(nt)))
```

Your example does point to another solution though (using a NamedTuple with indices)!
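For instance, one could write out an index NamedTuple by hand for the shapes above and reuse the map trick (a sketch; the `idx` layout is just the concatenation order of `β`, `α`, `s`):

```julia
# Index NamedTuple matching the shapes (β = 2, α = 10, s = 1),
# then slice the flat vector with it.
v = vcat([1.0, 0.25], rand(10), [0.2])
idx = (β = 1:2, α = 3:12, s = 13:13)
NamedTuple{keys(idx)}(map(r -> v[r], values(idx)))
```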

```julia
julia> using TransformVariables

julia> trans = as((β = as(Array, 2), α = as(Array, 10), s = as(Array, 1)));

julia> nt = (β = [1.0, 0.25], α = rand(10), s = [0.2])
(β = [1.0, 0.25], α = [0.158498, 0.171848, 0.846386, 0.474491, 0.0255008, 0.681615, 0.695278, 0.704238, 0.895584, 0.368178], s = [0.2])

julia> y = inverse(trans, nt)
13-element Array{Float64,1}:
 1.0
 0.25
 0.15849757451847424
 0.17184825252678304
 0.8463861675797599
 0.4744913375760429
 0.025500775432728107
 0.681615427076824
 0.695277914515098
 0.7042375146704107
 0.8955843266165946
 0.3681783642201166
 0.2

julia> trans(y)
(β = [1.0, 0.25], α = [0.158498, 0.171848, 0.846386, 0.474491, 0.0255008, 0.681615, 0.695278, 0.704238, 0.895584, 0.368178], s = [0.2])
```

You can use `foldl`:

```julia
julia> nt = (β = [1.0, 0.25], α = rand(10), s = [0.2])
(β = [1.0, 0.25], α = [0.750847, 0.722269, 0.117531, 0.690413, 0.804141, 0.539423, 0.150192, 0.705363, 0.975111, 0.365696], s = [0.2])

julia> (v = vcat([1.0, 0.25], rand(10), [0.2]))'
1.0  0.25  0.372726  0.534158  0.408041  0.859988  0.099716  0.175659  0.459855  0.0935148  0.377641  0.924534  0.2

julia> NamedTuple{keys(nt)}(foldl(((a, i), b) -> (k = i + length(b);
           ((a..., v[i:k-1]), k)), values(nt); init = ((), 1))[1])
(β = [1.0, 0.25], α = [0.372726, 0.534158, 0.408041, 0.859988, 0.099716, 0.175659, 0.459855, 0.0935148, 0.377641, 0.924534], s = [0.2])
```
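For readers unpacking the one-liner: the `foldl` threads a pair of (slices collected so far, next start index) through the fields of the template. Written as an explicit loop (a sketch; `vec_to_nt` is just an illustrative name):

```julia
# Equivalent to the foldl one-liner: carry the collected slices (`a` above)
# and the next start index (`i` above) through the fields of the template.
function vec_to_nt(v::AbstractVector, nt::NamedTuple)
    acc = ()
    i = 1
    for b in values(nt)
        k = i + length(b)
        acc = (acc..., v[i:k-1])
        i = k
    end
    return NamedTuple{keys(nt)}(acc)
end

vec_to_nt([6, 7, 8, 9, 10, 11, 12], (a = [1, 2], b = [3, 4, 5], c = [6, 7]))
# (a = [6, 7], b = [8, 9, 10], c = [11, 12])
```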

Thank you Max and Tamas, both are nice solutions!

Tamas, you probably recognized the context of my question. I’m looking into ways of passing the problem, p(theta), to Optim’s `optimize` to compute the maximum a posteriori estimate. Slowly I’m unraveling the steps in the DynamicHMC ‘recipe’ I seem to be using over and over. And understanding the `as` statement is on my list. Very neat.

It’s probably not the ideal interface. I got really carried away with the possibility of nested transformations (e.g. a vector of `NamedTuple`s of heterogeneous elements), but now I am of the opinion that in practice, a `NamedTuple` of `Array`s and scalars would do just fine (basically isomorphic to what is handled by, say, Stan). One would just specify the dimensions, e.g.

```julia
make_transformation((β = 2, α = 10, s = 1)) # hypothetical
```

that would just transform 13-element vectors to `NamedTuple`s like above.
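A sketch of how such a hypothetical `make_transformation` could be implemented with plain index ranges (not part of TransformVariables; names and interface are assumptions):

```julia
# Hypothetical dimensions-only interface: from (β = 2, α = 10, s = 1),
# build a function mapping 13-element vectors to NamedTuples of arrays.
function make_transformation(dims::NamedTuple)
    ranges = UnitRange{Int}[]
    stop = 0
    for n in values(dims)
        push!(ranges, stop+1:stop+n)   # each field gets the next n slots
        stop += n
    end
    idx = NamedTuple{keys(dims)}(Tuple(ranges))
    return v -> NamedTuple{keys(dims)}(map(r -> v[r], values(idx)))
end

trans = make_transformation((β = 2, α = 10, s = 1))
trans(collect(1.0:13.0))
# (β = [1.0, 2.0], α = [3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0], s = [13.0])
```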

Also, I plan to separate the vector deconstruction and the actual nonlinear transformation interface, and add `SArray`s etc. But I need to think more about this. Suggestions welcome!

Thank you Tamas! Not sure I can come up with actionable suggestions, but maybe it helps to explain what I found difficult to grasp. In:

```julia
problem_transformation(p::m_12_06d_model) =
    as((β = as(Array, size(p.X, 2)), α = as(Array, p.N_societies), σ = asℝ₊))
```

the σ component is a domain prescription, while in the usage above

```julia
make_transformation((β = 2, α = 10, s = 1)) # hypothetical
```

it is more like a direct mapping.

I noticed that when I try to use p(theta) in `Optim.optimize` I indeed have to play tricks with σ (i.e. square s to prevent it from going below 0), e.g. in m12.6d1.jl.
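The usual workaround, and essentially what `asℝ₊` does internally, is to let the optimizer work on an unconstrained variable and map it through `exp`. A minimal sketch (the `neg_log_posterior` below is a stand-in objective, not the model from the thread):

```julia
# Stand-in objective; in practice this would be the negative log posterior.
neg_log_posterior(θ) = sum(abs2, θ.β) + sum(abs2, θ.α) + log(θ.σ)^2

# Optimize over log σ so the optimizer can never propose σ ≤ 0;
# exp maps the unconstrained 13th coordinate to the positive reals.
function objective(v::AbstractVector)
    θ = (β = v[1:2], α = v[3:12], σ = exp(v[13]))
    return neg_log_posterior(θ)
end

objective(zeros(13))  # σ = exp(0) = 1 here, so this evaluates to 0.0
```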

But that’s probably what you mean by your last sentence above. The transformation component can be directly specified in

```julia
P = TransformedLogDensity(problem_transformation(p), p)
```

There may be other reasons to use an intermediate function, but in simple cases, couldn’t it be a direct `as(...)` call?