Some questions about the structure of Turing models

Both `iid` and `For` should be fine for general-purpose use. `For` is especially flexible: it can take an `Int`, a `Base.Generator`, or an `Array`. In the last case it acts like a `MappedArray`.
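
For concreteness, here is a rough sketch of what the do-block form of `For` might look like under that description (the exact call signature is my assumption, not a spec; I use a `Beta` prior so the `Bernoulli` parameter stays in [0, 1]):

    using Soss

    m = @model N begin
        λ ~ Beta(1, 1)
        # Int argument: one Bernoulli per index 1:N.
        # Per the description above, an Array or a Base.Generator
        # could be passed in place of N.
        y ~ For(N) do i
            Bernoulli(λ)
        end
    end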

My biggest concern at the moment is that this flexibility comes from some redundancy in the methods; we should be able to clean that up a bit. And there’s probably still some room for performance improvement.


Hi

I see that there are other replies later on, but let me just say one thing.

As a Computer Scientist, I feel that while a functional notation is nice and useful, in general “moving closer to mathematical notation” is a Bad Thing™.

The notion of `Module` in Mathematica™ was a source of several problems in early versions of the tool; the result of a mathematician doing “language design”.

In the example you wrote, you have a free variable/identifier

y[i] ~ Bernoulli(lambda)

which immediately raises the “global” red flag to the eye of the programming language designer. The same happens in your “more complex example”.

So, I’d say no. Your “closer to mathematical notation” examples are actually more confusing for the regular guy who wants to use Turing. That’d be me.

I don’t see a free variable here. In Turing, `y` is bound as an argument to the model, and `i` is bound as the loop index. The only global I see here is `Bernoulli`.
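
For concreteness, a minimal sketch of the loop form I have in mind (the prior is swapped to `Beta` here purely so the `Bernoulli` parameter stays in [0, 1]):

    using Turing

    # `y` is bound as the model argument and `i` as the loop index;
    # the only global referenced is the Bernoulli distribution itself.
    @model function test(y)
        λ ~ Beta(1, 1)
        for i in eachindex(y)
            y[i] ~ Bernoulli(λ)
        end
    end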

I can’t help feeling you’re playing both sides here. A minute ago you were a programming language designer, and now a “regular guy”.

I think you’re not giving the Turing team nearly enough credit. Anyone building a PPL is a computer scientist and a programming language designer (the latter at least in the EDSL sense). I’d encourage you to consider this in your critique.


Hi

The `i` is not bound in the proposed example in Turing. I was responding to @trappmartin, and his example was:

@model function test(y)
    λ ~ Exponential(α)
    y[i] ~ Bernoulli(λ)
end

I am giving the Turing team all the credit possible. I macroexpanded the `@model` macro and saw the sophistication of Turing’s guts (and Julia’s). That was how I finally understood what was actually going on.
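
For anyone who wants to do the same, a quick sketch of one way to look at the expansion, using Julia’s built-in `@macroexpand` (the model body here is just a stand-in):

    using Turing

    # Show the code the @model macro generates for a small model.
    expanded = @macroexpand @model function demo(y)
        λ ~ Beta(1, 1)
        for i in eachindex(y)
            y[i] ~ Bernoulli(λ)
        end
    end
    println(expanded)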

If I may elaborate on the matter (and I am not a Julia wizard), the example would work much better as

@model function test(y)
    λ ~ Exponential(α)
    y[] ~ Bernoulli(λ)
end

This is what I suggested. I don’t see why it should be more difficult to parse and “compile”, although, as I said, I may be missing something.

Having said that, I was unaware of Soss until this thread. I will have a look at it as well. Using the pipe operator is nice, and it works “declaratively” enough for my, admittedly difficult, tastes.


I see, thanks for clarifying.

It seemed to me the proposed `y[i] ~ Bernoulli(λ)` was just brainstorming, with notation similar to some tensor libraries like Tullio.jl. I think this could be made to work in Turing, because `y` is provided as an argument, so its dimensionality is fixed.

In this case the components of `y` are iid, but in a regression model this won’t be the case. It could be a bit confusing to have indices in one case and not the other. Also, the notation you suggest hides the shape of `y`, and could easily be confused with getting the value of a `Ref`.
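
To illustrate the non-iid case, a minimal sketch (with made-up variable names) of a regression model where the index genuinely matters, since each `y[i]` depends on the matching `x[i]`:

    using Turing

    # Each observation has its own mean α + β * x[i], so the
    # components of y are not iid and the index carries information.
    @model function regression(x, y)
        α ~ Normal(0, 1)
        β ~ Normal(0, 1)
        σ ~ truncated(Normal(0, 1), 0, Inf)
        for i in eachindex(y)
            y[i] ~ Normal(α + β * x[i], σ)
        end
    end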

One point that I haven’t seen mentioned here is that Turing’s model-declaration syntax more or less follows that used in BUGS/JAGS and Stan. That doesn’t mean it’s the best or only way, but it doesn’t come from nowhere; similar modeling languages go back 20+ years in Bayesian computing.
