Hello everyone,

I’m implementing the demand estimation from Berry, Levinsohn, and Pakes (1995) for my research, and it would be great to get some help with it. The code works, but it’s slow, and I’m looking to speed it up.

I have about 6,600 markets, and the contraction mapping has to be run for each market separately. There are about 2–20 products per market, and convergence takes anywhere between 40 and 200 iterations per market, so inverting the shares for a single parameter vector takes a while. I’d appreciate any suggestions for speeding up each iteration.
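
Concretely, the per-market fixed point is the standard BLP update δ ← δ + log(s_obs) − log(s_pred(δ)). Here is a stripped-down, plain-array sketch of what each market’s inner loop does (`predicted_shares` and `solve_delta` are toy helper names, not my actual data structures):

```
# One market: J products, ns simulation draws.
# delta: J-vector of mean utilities; mu: J x ns matrix of individual deviations.
function predicted_shares(delta, mu)
    u = exp.(delta .+ mu)                # J x ns exponentiated utilities
    s = u ./ (1 .+ sum(u, dims=1))       # logit choice probabilities per draw
    return vec(sum(s, dims=2)) ./ size(mu, 2)  # average over draws
end

function solve_delta(s_obs, mu; tol=1e-12, maxiter=1000)
    delta = log.(s_obs) .- log(1 - sum(s_obs))  # plain-logit starting values
    for _ in 1:maxiter
        new_delta = delta .+ log.(s_obs) .- log.(predicted_shares(delta, mu))
        maximum(abs.(new_delta .- delta)) < tol && return new_delta
        delta = new_delta
    end
    return delta
end
```

In my actual code the deviations `mu` come from `X2_mkt * betai'` for the market’s products.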

Here’s my code:

```
ns = size(draws.v1, 1)  # number of simulation draws

# Random-coefficient deviations for each draw: ns x 2 matrix
function get_params(sigma, draws)
    return [sigma[1] .* draws.v1 sigma[2] .* draws.v2]
end

# Logit choice probabilities for one draw (column of exponentiated utilities)
function calc_indiv_util(c)
    return c ./ (1 + sum(c))
end

# Model-predicted market shares, averaged over the simulation draws
function calc_predicted_shares(betai, delta, mkt, ns)
    X2_mkt = [mkt.var1 mkt.var2]
    u1 = delta .+ X2_mkt * betai'  # J x ns; broadcasting replaces repeat()
    exputils = exp.(u1)
    p_ijt = mapslices(calc_indiv_util, exputils, dims=1)
    return vec(sum(p_ijt, dims=2)) ./ ns  # return a vector, not a J x 1 matrix
end

function contraction_mapping(betai, ns, data, market_ids, num_mkts, maxtol, maxiter)
    tuning_par = 1.0
    delta_container = Vector{Vector{Float64}}(undef, num_mkts)  # concretely typed, not Vector{Any}
    Threads.@threads for i in 1:num_mkts
        mkt = data[data.market_ids .== i, :]
        delta = log.(mkt.s_jt ./ mkt.s_0jt)  # plain-logit starting values
        iter = 0
        tol = 1e6
        while tol > maxtol && iter < maxiter
            new_delta = delta + tuning_par .* (log.(mkt.s_jt) - log.(calc_predicted_shares(betai, delta, mkt, ns)))
            tol = maximum(abs.(new_delta .- delta))  # was abs(maximum(...)), which is wrong when all differences are negative
            delta = new_delta
            iter += 1
        end
        delta_container[i] = delta
    end
    return reduce(vcat, delta_container)
end

betai = get_params(sigma, draws)
delta = contraction_mapping(betai, ns, data, market_ids, num_mkts, maxtol, maxiter)
```
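
One part I suspect is costly: `data[data.market_ids .== i, :]` inside the threaded loop scans the whole dataset once per market, i.e. about 6,600 full passes. A sketch of what I’m considering instead, with a hypothetical `group_rows` helper that builds the per-market row indices in a single pass:

```
# Group row indices by market id in one pass over the data
# (hypothetical helper; `market_ids` is the id column as a vector).
function group_rows(market_ids)
    groups = Dict{eltype(market_ids), Vector{Int}}()
    for (row, id) in pairs(market_ids)
        push!(get!(groups, id, Int[]), row)
    end
    return groups
end
```

The loop body would then slice with `data[groups[i], :]` instead of re-filtering every iteration.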

EDIT: There is now a MWE in the replies.

I also realize a lot of people might not be familiar with the reasoning behind the algorithm itself, so my question is asking a lot - but any general advice on the iterations, function calls, or variable types you see here would be helpful. I’d think a decent number of economists use BLP (1995) and/or contraction mappings in their work, so the answers should be useful to a wide audience. I’d also prefer to use my own code rather than a package implementing BLP written by someone else, although I’m happy to take a look at one and pick up ideas.