Improving performance of item response model in Turing.jl

Yes, this is the same model. I am aware of the LazyArray “trick”, but unfortunately it does not help for this particular model. In the thread I linked in the initial post, a possible regression was discussed (#122), but that should affect all models using LazyArray, no? So I just followed what @dlakelan described and summed everything manually using @addlogprob!.
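For reference, the manually summed version (m3 in the benchmarks below) looks roughly like this; the model name is just illustrative and the code I actually ran may differ in details, but the idea is to replace the vectorised likelihood with a single @addlogprob! call:

using Turing, LinearAlgebra

@model function irt_manual(y, i, p; max_i=maximum(i), max_p=maximum(p))
    theta ~ MvNormal(I(max_p))   # person abilities
    beta ~ MvNormal(I(max_i))    # item difficulties
    # accumulate the Bernoulli log-likelihood with one manual sum
    @addlogprob! sum(logpdf.(BernoulliLogit.(theta[p] - beta[i]), y))
end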

Here is the benchmark comparing the @addlogprob! and LazyArray versions of the model (I had to reduce to P = 100 because of runtime):

using Turing, LazyArrays, LinearAlgebra

@model function irt_lazy(y, i, p; max_i=maximum(i), max_p=maximum(p))
    theta ~ MvNormal(I(max_p))   # person abilities
    beta ~ MvNormal(I(max_i))    # item difficulties
    y ~ arraydist(LazyArray(@~ BernoulliLogit.(theta[p] - beta[i])))
end
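For completeness, the benchmark setup was along these lines (apart from P = 100 and the 120 parameters implied by init_params, the data sizes and sampler settings below are placeholders rather than the exact values I used):

using Turing, Distributions

# Placeholder data: 100 persons × 20 items (120 parameters in total),
# responses in long format with one row per person–item pair.
P, n_items = 100, 20
p = repeat(1:P, inner=n_items)        # person index per observation
i = repeat(1:n_items, outer=P)        # item index per observation
y = rand(Bernoulli(0.5), length(p))   # simulated responses

m_lazy = irt_lazy(y, i, p)
alg = NUTS()          # sampler (assumed)
n_samples = 1_000     # number of draws (assumed)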

@benchmark sample($m3, $alg, $n_samples, init_params=$zeros(120))  # @addlogprob! version
BenchmarkTools.Trial: 1 sample with 1 evaluation.
 Single result which took 10.272 s (0.72% GC) to evaluate,
 with a memory estimate of 1.06 GiB, over 4902514 allocations.

@benchmark sample($m_lazy, $alg, $n_samples, init_params=$zeros(120))  # LazyArray version
BenchmarkTools.Trial: 1 sample with 1 evaluation.
 Single result which took 350.161 s (0.02% GC) to evaluate,
 with a memory estimate of 1.24 GiB, over 5181966 allocations.

So the LazyArray version takes roughly 34 times longer than the @addlogprob! version (350 s vs. 10.3 s).
