I am working on a problem in medical imaging where I have to solve an optimization problem at every voxel (read: pixel) that is not background. The optimization problem is relatively simple: a system of three nonlinear (essentially polynomial) equations in three unknowns, where the values are bounded within certain (fairly large) ranges. The problem is that there are over one million voxels to solve for.
Previously I was using Python's SciPy implementation of least squares to solve the problem at every voxel; however, even with parallelization this takes hours. I was hoping to speed things up by switching to Julia, where for loops carry much less overhead.
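For reference, this is roughly what the current per-voxel SciPy approach looks like. The residual function and bounds below are placeholders I made up, not the actual imaging model; the point is just that `scipy.optimize.least_squares` enforces box bounds natively.

```python
# Sketch of the per-voxel bounded least-squares solve (hypothetical model).
import numpy as np
from scipy.optimize import least_squares

def residuals(p, intensity):
    # Placeholder system of three polynomial equations in three unknowns.
    x, y, z = p
    return [x**2 + y - intensity,
            y * z - 2.0 * intensity,
            x + z**2 - 0.5 * intensity]

lb, ub = [0.0, 0.0, 0.0], [10.0, 10.0, 10.0]  # example box bounds
sol = least_squares(residuals, x0=[1.0, 1.0, 1.0],
                    bounds=(lb, ub), args=(3.0,))
print(sol.x)  # iterates are kept inside [lb, ub] by the trust-region method
```

Calling this once per voxel from Python is where the hours go, hence the interest in moving the loop to Julia.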
I looked in two directions: JuMP and NLsolve. With JuMP, updating the constraints needed to solve the system appears somewhat awkward; it looks like I would use the `fix` function to change constant values (corresponding to voxel intensities) in the constraints. It appears there would be considerable overhead in this, as well as in the call to the external solver. I may be completely wrong, though.
The other direction was NLsolve, but NLsolve's `mcpsolve` does not enforce constraints (see here). It is really fast, but the values stray outside the bounds in my equations, resulting in nonsense, unfortunately.
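One workaround I have seen for fast unconstrained solvers that ignore bounds is to reparameterize each unknown through a sigmoid, so that any real-valued iterate maps back into the box. This is only a sketch of the idea with a made-up residual, using `scipy.optimize.root` to stand in for an unconstrained Newton-type solver; the same transformation could be applied around `mcpsolve` or `nlsolve`.

```python
# Enforce box bounds via reparameterization: solve in unconstrained u,
# map u -> p inside [lo, hi] with a sigmoid. Residuals are placeholders.
import numpy as np
from scipy.optimize import root

lo, hi = np.zeros(3), np.full(3, 10.0)  # example box bounds

def to_bounded(u):
    # Map any u in R^3 into the box [lo, hi]; bounds hold by construction.
    return lo + (hi - lo) / (1.0 + np.exp(-u))

def residuals(p):
    x, y, z = p
    return [x**2 + y - 3.0, y * z - 6.0, x + z**2 - 1.5]

sol = root(lambda u: residuals(to_bounded(u)), x0=np.zeros(3))
p = to_bounded(sol.x)  # guaranteed inside the bounds regardless of u
```

The trade-off is that the transformation can make the Jacobian poorly conditioned near the bounds, so I am not sure whether this is the right fit here.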
Does anyone have any ideas about a good way to formulate the problem so that there is a reasonably computationally efficient solution? Or is this all totally misguided?
Thanks for any help!