Scipy using PyCall in Julia

I have a moderately large system of non-linear equations that I want to solve using scipy.optimize from Julia. The problem is that I store the equations in a vector before passing them to the solver, and PyCall doesn’t accept that. For example, these methods both work:

using PyCall
@pyimport scipy.optimize as so

function F(x)
    f1 = 1 - x[1] - x[2]
    f2 = 8 - x[1] - 3*x[2]
    return f1, f2
end

x0 = [1,1]
x = so.fsolve(F, x0)

function G(x)
    f = [1 - x[1] - x[2],
         8 - x[1] - 3*x[2]]
    return f
end

x0 = [1,1]
x = so.fsolve(G, x0)

However these do not:

function H(x)
    f[1] = 1 - x[1] - x[2]
    f[2] = 8 - x[1] - 3*x[2]
    return f
end

x0 = [1,1]
x = so.fsolve(H, x0)

function P(x)
    f[1] = 1 - x[1] - x[2]
    f[2] = 8 - x[1] - 3*x[2]
    return f[1], f[2]
end

x0 = [1,1]
x = so.fsolve(P, x0)

Because of the nature of the problem, I don’t think it’s feasible to avoid a loop. Is there any way to return the vector in a way that fsolve can accept?
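For what it’s worth, the failing versions never allocate `f` inside the function, so (assuming `f` is not defined as a global elsewhere) Julia raises an error before PyCall is even involved; the fix would be something like `f = similar(x)` at the top of `H`. Since the solver being driven here is scipy’s, the same pattern can be sketched directly on the Python side — a toy sketch of the allocate-inside-the-callback idea, not the thread’s actual code:

```python
import numpy as np
from scipy.optimize import fsolve

def H(x):
    f = np.zeros(2)             # allocate the residual vector inside the function
    f[0] = 1 - x[0] - x[1]
    f[1] = 8 - x[0] - 3 * x[1]
    return f

x0 = [1.0, 1.0]
x = fsolve(H, x0)
# the linear system 1 - x1 - x2 = 0, 8 - x1 - 3*x2 = 0 has solution (-2.5, 3.5)
```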

I’m fairly new to Julia, and I’m translating some MATLAB files in order to get a sense of the language. I have tried NLsolve, but the system doesn’t converge even when I give the solution I obtained from MATLAB as the initial guess.

You have now cross-posted in 3 places, even after the answer was already given.

What was the NLSolve code?

Yes, sorry about that. I posted before you gave me the answer.

I was thinking about copying and pasting your answer in case others have the same question, but I wasn’t sure if that was polite, even with credit given.

Should I delete the questions?


NLsolve wasn’t converging with my full code, not with the example I posted. There is a chance that I made a typo while translating, but it must be something very minor, because the iterations get close without converging.

That’s fine. I forgot that new users have a delay. I just saw that this was posted about 10 minutes after I had already posted the answer… in two places 🙂

I would check for a typo, or try a different algorithm. NLsolve defaults to a trust-region algorithm, so try the Newton algorithm instead. If two separate algorithms diverge when started right next to the solution, that’s almost certainly user error. If only one diverges from a start right next to the solution, then it’s a bug that should be reported.
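The same cross-check can be done on the scipy side with `scipy.optimize.root`, which exposes several independent algorithms: if two of them agree from the same start, the problem is more likely in the equations than in the solver. A small sketch on the toy system from earlier in the thread (the method names here are scipy’s, not NLsolve’s):

```python
import numpy as np
from scipy.optimize import root

def F(x):
    return [1 - x[0] - x[1], 8 - x[0] - 3 * x[1]]

x0 = [1.0, 1.0]
sol_hybr = root(F, x0, method='hybr')  # modified Powell hybrid method
sol_lm   = root(F, x0, method='lm')    # Levenberg-Marquardt
# for a well-posed system, both converge to the same root
```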

I think I found the issue. Both NLsolve (with both algorithms) and scipy’s fsolve at some point try a negative value for some of the variables that are raised to an exponent other than 2, and Julia throws an error. This happens even if I use mcpsolve(f!, [lower bounds], [upper bounds], x0).
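For context (a quick illustration, not code from the thread): a negative base raised to a fractional exponent is exactly the kind of evaluation that behaves differently across these environments. Julia throws a DomainError, numpy silently produces nan, and plain Python floats promote to a complex result, which is close to MATLAB’s behaviour:

```python
import numpy as np

# numpy: the invalid operation yields nan rather than an exception
with np.errstate(invalid='ignore'):
    v = np.float64(-8.0) ** (1.0 / 3.0)
print(np.isnan(v))             # True

# plain Python floats promote to a complex result, similar to MATLAB
w = (-8.0) ** (1.0 / 3.0)
print(isinstance(w, complex))  # True
```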

Because of the nature of the problem, I change the parameters of the model multiple times, and sometimes MATLAB does this too; however, it doesn’t throw an error, and the results are complex numbers. The thing is that I want a real solution, and I imposed a break in MATLAB in case this happens, so I don’t see it as a weakness of the solver; instead, I wish MATLAB’s fsolve had the same behaviour, or at least a built-in option for it.

However, I find it strange that, even starting so close to the solution, the solvers try a negative value after a few iterations (though some variables do hover around zero), so I need to triple-check for typos.

I’m not sure if this is relevant, but I have an fsolve inside an fsolve, because I want the solution of one system to satisfy some constraints that are too complicated to use in a typical constrained maximization problem (this is standard practice in my area, so I didn’t come up with it). The outer fsolve tries values that need to converge to the solution; these serve as parameter values for a nested fsolve, whose solution in turn affects the value of the outer function. Both fsolves need to converge in order to find the solution.
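That nested structure can be sketched with a made-up toy system (all equations here are hypothetical, purely to show the outer solver driving an inner one):

```python
import numpy as np
from scipy.optimize import fsolve

def inner(y, p):
    # hypothetical inner system: y + p*y - 1 = 0, i.e. y = 1 / (1 + p)
    return y + p * y - 1

def outer(p):
    y = fsolve(inner, 1.0, args=(p,))[0]  # solve the inner system at this p
    return y - 0.5                        # outer condition: y(p) = 0.5

p_star = fsolve(outer, 1.0)[0]
# y = 1/(1+p) = 0.5 holds at p = 1
```

Each outer iteration triggers a full inner solve, so divergence in either layer (for example from an inner DomainError) makes the whole scheme fail, which matches what was described above.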