I’ve got a constrained optimization situation where restarting NLopt from multiple initial points tends to be a good idea. I’d like to simply use @threads to split the restarts over several threads, but I don’t know whether this is safe. I haven’t actually tried it yet, but thought maybe someone would be able to address this. @stevengj or @mlubin perhaps?
Well, I did try running it in multiple threads from different initial points, and in fact it didn’t work: it segfaulted. So I guess the answer is no, unless there’s a specific methodology that makes it safe.
starting optimization... 1
starting optimization... 3
starting optimization... 5
starting optimization... 7
starting optimization... 9
finished with value: -0.1660709144581413, due to ROUNDOFF_LIMITED
starting optimization... 2
finished with value: 0.16607091445814082, due to ROUNDOFF_LIMITED
starting optimization... 6
finished with value: -0.16608205405870535, due to ROUNDOFF_LIMITED
starting optimization... 4
finished with value: -0.16607091445814082, due to ROUNDOFF_LIMITED
starting optimization... 8
finished with value: -0.16607091445814082, due to ROUNDOFF_LIMITED
starting optimization... 10
finished with value: 0.16229485881648792, due to ROUNDOFF_LIMITED
signal (11): Segmentation fault
Compare this with running serially:
starting optimization... 1
finished with value: 0.3555415144855803, due to ROUNDOFF_LIMITED
starting optimization... 2
finished with value: 0.328531673996751, due to ROUNDOFF_LIMITED
starting optimization... 3
....
Are you creating independent problems for each task? I would find it strange if there were any interaction between them.
I create a single “problem”, then an array of initial values, then call
initv = initval[i]
optimize!(opt,initv)
which might be the problem right there, as it’s presumably mutating the shared opt problem.
I’m trying it with an array of opt problems now.
Yes, you’ll probably need to create a separate problem for each call.
OK, it did work once I used a vector of opt problems. This is fantastic, as it’s cutting the time for my 10 optimizations by a lot!
The general pattern is:
optprob = [...]
init = [...]
result = [...]
Threads.@threads for i in 1:10
    # ...some setup
    result[i] = optimize!(optprob[i], init[i])
end
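To make that pattern concrete, here is a self-contained sketch using the NLopt.jl API. The quadratic objective, the :LD_LBFGS algorithm, and the choice of initial points are placeholders, not the original poster’s problem; the essential part is that each iteration gets its own Opt object, since optimizing mutates the object’s internal state.

```julia
using NLopt

# Simple 1-D objective with an analytic gradient; minimum at x = 3.
# The objective itself must also be safe to call from multiple threads.
function myfunc(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 2 * (x[1] - 3.0)
    end
    return (x[1] - 3.0)^2
end

n = 10
inits = [[Float64(i)] for i in 1:n]        # one initial point per restart
results = Vector{Float64}(undef, n)

# One independent Opt object per restart; never share a single Opt
# across threads.
opts = map(1:n) do _
    opt = Opt(:LD_LBFGS, 1)
    min_objective!(opt, myfunc)
    xtol_rel!(opt, 1e-8)
    opt
end

Threads.@threads for i in 1:n
    minf, minx, ret = optimize(opts[i], inits[i])
    results[i] = minf
end
```

All restarts here converge to the same minimum, so the payoff is purely the wall-clock speedup; with a multimodal objective you would instead keep the best of the n results.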
Yes, it should be thread-safe if you have a different optimization object for each thread. (For the most part; there might be problems with individual algorithms in NLopt, and I’m not 100% sure without looking through them again in detail.)