So I am trying out the awesome DynamicHMC.jl package and as such need to wrap my head around using ContinuousTransformations.jl to make sure my priors are handled correctly.
My current issue is how to combine transformations conveniently. What I want: for my P = 10 element long vector of \beta's, the first must be positive, but the rest are unbounded. The last parameter is a variance, so it must be positive as well.
Currently I can’t figure out how to do this. I have:
θ_transform = TransformationTuple(ArrayTransformation(bridge(ℝ, ℝ), P), bridge(ℝ, ℝ⁺))
This is okay, but it doesn't constrain the first element of the array to be positive. If I try to splice in one more transformation like the last one, parsing the arguments becomes tricky … Any tips?
The full code this is embedded in is at: https://github.com/gabrielgellner/LearnHMC/blob/master/src/test.jl
If I understand the example correctly, you need something like:
using ContinuousTransformations

θ_transform = TransformationTuple(bridge(ℝ, ℝ⁺),                    # β₁ > 0
                                  ArrayTransformation(IDENTITY, 9), # remaining β's, unbounded
                                  bridge(ℝ, ℝ⁺),                    # σ > 0
                                  bridge(ℝ, ℝ⁺))                    # ν > 0

function (prob::EcoRegProblem)(θ)
    β₁, βrest, σ, ν = θ
    β = vcat(β₁, βrest)
    ...
end
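For reference, a minimal sketch of how the pieces compose. The randn draw and the dimension count are my own illustration, and I am assuming a TransformationTuple can be applied directly to a vector of reals and returns the tuple of transformed values (check the package docs):

# 1 (β₁) + 9 (βrest) + 1 (σ) + 1 (ν) = 12 unconstrained reals in total
x = randn(12)                      # a point in the unconstrained space ℝ¹²
β₁, βrest, σ, ν = θ_transform(x)   # assumed callable on a vector
β = vcat(β₁, βrest)                # recombine into the full P = 10 coefficient vector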
There is no API for combining already transformed elements, as that can be done easily in user code like above. An API rewrite is due once v0.7 comes out, as named tuples will make everything super convenient. If you want other features, suggestions are welcome (as issues, preferably).
Thank you so much. Now I just need to figure out why the sampler is doing so badly! I really like the packages and will file issues as I find them. With named tuples, will it be easier to make combined versions like the above? Though the vcat-ing is easy, it is a bit annoying to have to do it every time.
NamedTuples will just make it easier to extract variables from transformations from \mathbb{R}^n that return a tuple of values, as the user won't have to remember positions. Concatenation functionality is not planned, as it is just available in Base. ContinuousTransformations.jl helps you with transformations and derivatives, which are not relevant for vcat. I should probably add an example though.
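For instance (purely hypothetical: the named-tuple return below is an imagined post-v0.7 API, not something the package provides today):

# today: positional destructuring, the order has to be remembered
β₁, βrest, σ, ν = θ_transform(x)

# imagined NamedTuple version: fields extracted by name, order irrelevant
θ = θ_transform(x)   # hypothetically (β₁ = …, βrest = …, σ = …, ν = …)
θ.σ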
I could clean up one of the linear regression examples I have been messing with and do a PR for the examples repo, if that is of use / interest.
Can you explain why \beta_1 > 0? My concern would be having mass near 0 and the transformations not handling that well (they don't actually allow 0; \mathbb{R} is mapped to open intervals).
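To make the open-interval point concrete (assuming the ℝ → ℝ⁺ bridge is exp-like, which is the usual choice):

# exp maps ℝ onto the open interval (0, ∞): exp(x) > 0 for every finite x,
# so the transformed parameter can approach 0 but never equal it.
# Mass near 0 pushes x toward -∞ in the unconstrained space, where
# things underflow — exactly the region a sampler handles badly:
exp(-700.0)    # ≈ 9.9e-305, still positive
exp(-800.0)    # 0.0 in Float64 (underflow)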
I was just quickly trying to port over this tutorial: Jupyter Notebook Viewer
Though I imagine the issue is simply that I didn't represent the model correctly. I can see that the "incorrect" model I am using doesn't actually constrain \beta_1, so I will fix that up, but I will need the constraint when I do the "correct" model.
I would recommend something from the Bayesian Data Analysis book by Gelman et al., or the Gelman & Hill book; both are listed in the package docs. Or some model from the Stan examples.
That said, the setup I recommended seems to be an analogue of the Stan solution you linked.
Yeah, the way you solve it is very close to the Stan modelling language, which is nice. This tutorial is linked from the Stan webpage, and it is nice to try to get slightly more complex models working. I have some simple ones running along nicely, but I want to get something uglier like this working to see whether I can use this for my real model, which is a beast.