and I get this warning: Warning: Problem status INFEASIBLE; solution may be inaccurate.
and the solution [NaN, NaN, NaN].

But when I solve the unconstrained problem with A\b I get a very good fit to the experimental data, and if I plot the curve it agrees with the MATLAB results. It is just that the coefficients I get would be considered physically impossible (that's why the constraints).

I have tried to understand how to use Optim.jl or LsqFit.jl for this problem, but I cannot find the right syntax to apply them to my case.

If anyone has a solution or could explain to me how to apply it to my problem, it would really help.

Hello @lrnv and thank you for your reply. Sorry, I mistyped the condition when I wrote the post; even with the conditions correctly set I get the same warning. I think I may be able to get an easier solution using another method, maybe?

Could you give us a minimal working example? At the moment I do not have the values of A0,…,A4, and therefore I cannot run your code to try to understand what happened.

Just for good measure I can also get some fit (although maybe not a particularly good one) when generating data from your “true” fit and adding constraints:

It looks from your file like the values in A and b are rather large (~1e7). I bet that this is the reason Convex.jl is confused. Can you try again, but dividing both A and b by, say, mean(abs.(b)) before handing them to the solver? Does that help?
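For concreteness, the kind of rescaling I mean looks like this (a sketch with random stand-in data of a similar magnitude, since I don't have your A.txt/b.txt; dividing both sides by the same positive scalar leaves the least-squares minimiser unchanged):

```julia
using LinearAlgebra, Statistics

# Stand-in data of roughly the magnitude reported (~1e7);
# in the real problem these would come from readdlm("A.txt") / readdlm("b.txt").
A = 1e7 .* rand(100, 3)
b = 1e7 .* rand(100)

# Divide both sides by the same scalar so the solver sees numbers of order 1.
s  = mean(abs.(b))
As = A ./ s
bs = b ./ s

# The unconstrained minimisers agree up to floating-point error.
x1 = A \ b
x2 = As \ bs
```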

Here is a variant with Optim.jl, where I adapted the example from here to your data:

using DelimitedFiles, Optim, LinearAlgebra
A = readdlm("A.txt")
b = readdlm("b.txt")
f(x) = 0.5 * norm(A*x-b)^2
function grad_f!(g, x)
    g .= A' * (A*x - b)
    return g
end
function hess_f!(h, x)
    h .= A' * A
    return h
end
# a point within the constraints
x0 = [6.0, 0.6, 16.0]
df = TwiceDifferentiable(f, grad_f!, hess_f!, x0)
lx = [5.0, 0.5, 15.0]
ux = [10.0, 1.0, 25.0]
dfc = TwiceDifferentiableConstraints(lx, ux)
res = optimize(df, dfc, x0, IPNewton())
println(res.minimizer) # still at x0 with your data :/

(I hope I did not make a mistake in the gradient and Hessian, but I am relatively sure they are correct.)

This might be a starting point, but it also does not find a reasonable minimiser, since it never moves away from the initial point (which I chose arbitrarily within the constraints). So maybe the idea by @lrnv to rescale your problem is correct; the range (0 to 1.2e8) seems quite large currently, especially compared to the constraints.

Of course, wildly differing scales within the same problem can sometimes cause conditioning problems (although these won’t be solved by simply re-scaling the problem) and there can always be overflow/underflow hazards. Those concerns aside, I wouldn’t expect most solvers to show significant sensitivity to the mere scaling of a problem. The algorithms are the same at any scale, they just might need to adjust their internal scales and initialization for things like constraints accordingly.
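Just to illustrate that point with random stand-in data (the real A isn't available here): dividing the whole problem by a scalar changes all singular values by the same factor, so the condition number, and hence any genuine conditioning problem, is untouched; only the magnitude-related hazards go away:

```julia
using LinearAlgebra

A = 1e7 .* rand(50, 3)  # stand-in for the large-magnitude data
s = 1e7

# All singular values are divided by s, so their ratio
# (the condition number) is unchanged; only the magnitudes shrink.
c_before = cond(A)
c_after  = cond(A ./ s)
```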

If scaling fixes your issues, great. But in that case you might consider opening an issue with the solver. It should be able to handle the scales in question. If nothing else, it should be able to re-scale the problem itself (although there’s likely something better it could do).

This should be norm(A*x-b)^2 (at which point you might just do sum(abs2, A*x-b)). (EDIT: I also left out the factor of 1/2 up front; add that too.) As written, f isn't everywhere differentiable, and the gradient and Hessian are wrong. That could be part of the problem.
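A quick check with random stand-in data (since I don't have the real A and b): the two forms of the corrected objective agree, and a finite-difference quotient matches the gradient A'(A*x - b):

```julia
using LinearAlgebra

A = rand(50, 3)
b = rand(50)
x = rand(3)

f_norm(x) = 0.5 * norm(A*x - b)^2     # corrected objective
f_sum(x)  = 0.5 * sum(abs2, A*x - b)  # same value, without the intermediate sqrt
g(x)      = A' * (A*x - b)            # gradient of the corrected objective

# Central finite difference in the first coordinate as a sanity check.
h  = 1e-6
e1 = [1.0, 0.0, 0.0]
fd = (f_sum(x + h*e1) - f_sum(x - h*e1)) / (2h)
```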

Thanks for noticing; there was also a 1/2 missing up front. I am sure the gradient and Hessian are correct for the fixed f.

Concerning the scales: with such a large range, up to 1e8, we are getting into territory where machine precision matters. So that might be a problem, but not one a solver should fix internally, so I do think we are in the area of hazards.
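To put a number on that: the spacing between adjacent Float64 values grows with magnitude, and at ~1e8 it is already around 1e-8, so absolute tolerances much tighter than that are meaningless at the original scale:

```julia
# Spacing of Float64 values at different magnitudes.
println(eps(1.0))    # 2.220446049250313e-16
println(eps(1.2e8))  # 1.4901161193847656e-8
```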

With the right cost f in Optim.jl (see fixed code above), the solver actually moves a bit from

x0 = [6.0, 0.6, 16.0] # with f(x0) being 3.959441234211307e17

Hi @kellertuer, thank you very much for your help!

This does work! I still think there is something weird with the constraints here, because the solution changes with lx[1]: the solver always lands on the lowest allowed value for x[1]. That does not convince me, because the other results I already have are rather close to 10.

I will review the data processing anyway to check that there are no issues with my data that could make this happen.

Well, @mikmoore wrote that my gradient and Hessian were wrong. I think they are right now that f is fixed, but maybe he has some more details on why they might still be wrong.

But let me know whether your data processing also changes something there.

After that fix to the function value, all the derivatives look correct now.

I notice that you aren’t setting a solution tolerance – this is probably something you should do for most solves you care about. You’ll have to look into the docs for the syntax, but it looks like you might be looking for something like an absolute error on x of 1e-5 (or 1e-10, whatever you want so long as it’s notably bigger than machine precision on the final values of x – in this case probably don’t try much smaller than 1e-12). I don’t know whether you care about the tolerance on f.
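With Optim.jl, tolerances go into Optim.Options. Here is a sketch reusing the setup from the earlier post, with random stand-in data and x_abstol/f_abstol values that are just examples, not recommendations:

```julia
using Optim, LinearAlgebra

# Random stand-in data instead of readdlm("A.txt") / readdlm("b.txt").
A = rand(100, 3)
b = rand(100)

f(x)          = 0.5 * sum(abs2, A*x - b)
grad_f!(g, x) = (g .= A' * (A*x - b))
hess_f!(h, x) = (h .= A' * A)

x0  = [6.0, 0.6, 16.0]
df  = TwiceDifferentiable(f, grad_f!, hess_f!, x0)
dfc = TwiceDifferentiableConstraints([5.0, 0.5, 15.0], [10.0, 1.0, 25.0])

# Explicit tolerances; the exact values here are only illustrative.
opts = Optim.Options(x_abstol = 1e-10, f_abstol = 1e-12)
res  = optimize(df, dfc, x0, IPNewton(), opts)
```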

You should also experiment with scaling A and b to be smaller, as others suggested. The solver ought to be agnostic to this, but that doesn't mean it is.

I generated some fake data, compared the results of your proposal with A\b, and it works! So thank you!
So there is definitely something wrong with my data; I just need to figure out what now.

Although the solver does something a bit weird when I set a column of A to 0, but I guess that is normal?

Sorry to revive this post; I just wanted to mention an alternative way to solve the problem using GModelFit.jl v0.1.1:

using DelimitedFiles, GModelFit
A = readdlm("A.txt")
b = readdlm("b.txt")
# Create domain and model
dom = Domain(length(b))
model = Model(dom,
              GModelFit.FCompv(x -> A*x - b,  # mathematical model: Ax - b
                               [7, 1, 20]))   # initial guess values
# Set constraints
model[:main].p1.low = 5
model[:main].p1.high = 10
model[:main].p2.low = 0.5
model[:main].p2.high = 1
model[:main].p3.low = 15
model[:main].p3.high = 25
# Create a dummy data set with zeros
dummydata = Measures(dom, fill(0., length(dom)), 1.)
# Fit model (i.e. find the x such that Ax-b ~ 0 using nonlinear least-squares minimization)
best, fitstats = fit(model, dummydata)
# Print best fit values
println(best[:main].p1.val)
println(best[:main].p2.val)
println(best[:main].p3.val)