I know that the first run of a JuMP model takes longer because of compilation, but I was expecting roughly the same running time after that. However, as can be seen below, the "variance" is quite high.
Does anyone know why this is happening?
Thanks a lot!
You can look at the result of `solve_time(model)` to see whether the difference is time spent in Julia or in Ipopt.
Ultimately though, you’ll need to profile to see where the time is being spent.
Two useful tools for this are:
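To make the `solve_time` suggestion concrete, here is a minimal sketch (the toy model is a stand-in for yours; it assumes JuMP.jl and Ipopt.jl are installed):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@objective(model, Min, (x - 2)^2)

# Total wall-clock time as seen from Julia ...
total = @elapsed optimize!(model)
# ... versus the time the solver itself reports.
in_solver = solve_time(model)
println("total = $(total)s, in Ipopt = $(in_solver)s, ",
        "outside the solver = $(total - in_solver)s")
```

If the gap between the two numbers is small, the variation is coming from Ipopt itself rather than from model construction on the Julia side.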
Computing times can be (heavily!) impacted by other tasks running on the same machine.
I’ve seen speedups/slowdowns as high as 6x just from having other programs running.
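One way to make this kind of noise visible is to look at the full timing distribution rather than a single run. A sketch, assuming BenchmarkTools.jl is installed (the summed expression is just a placeholder workload):

```julia
using BenchmarkTools

# @benchmark runs the expression many times and records every sample,
# so interference from other processes shows up as spread in the results.
b = @benchmark sum(rand(10_000))
println("min = ", minimum(b.times), " ns, max = ", maximum(b.times), " ns")
```

A large min-to-max spread on a trivial workload is a strong hint that the machine, not the code, is the source of the variance.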
In addition to odow’s comments, I would recommend avoiding other running tasks as much as possible. This is best done by requesting a full node on a computing cluster, if you have access to one. If you’re running on your laptop or desktop machine: stop other applications, and try to use the machine as little as possible while your program is running. That might help reduce the variance.
Are these runs starting from random initial points? If so, that’s likely the cause of the variation.
Right, with 8 cores on my laptop I would not have expected interference, given that the only other tasks were possibly some web browsing. But thanks for the comment; I was not aware of how severe that effect can be.
No; if they were, it would be obvious why the running times differ.
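For anyone landing on this thread whose runs *do* use random starts: JuMP lets you pin the initial point so every run begins from the same place. A minimal sketch (the variable and value are illustrative):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, x)
# Fix the solver's starting point so repeated runs are comparable.
set_start_value(x, 1.0)
start_value(x)  # query the start value back
```

With identical start values, any remaining run-to-run variation must come from compilation, machine load, or the solver itself.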