Is there any way to emulate MATLAB's lsqlin in Julia?

Hi,

I am trying to solve a box constrained least squares problem.

I have the classic Ax = b problem where x are the parameters I want to find, and A and b are full of experimental data.

I have tried:

using Convex, SCS
b = A0
A = hcat(A1,A3,A4)
x = Variable(3)

# constraints 1
add_constraint!(x, x[1] ≤ +10) 
add_constraint!(x, x[1] ≥ +5)
# constraints 2
add_constraint!(x, x[2] ≤ +1) 
add_constraint!(x, x[2] ≥ +0.5)
# constraints 3
add_constraint!(x, x[3] ≤ +25) 
add_constraint!(x, x[3] ≥ +15)

problem = minimize(sumsquares(A*x-b))
solve!(problem, SCS.Optimizer)

where A and b are from GitHub - Boladecrema/Datafiles

and I get this warning: Warning: Problem status INFEASIBLE; solution may be inaccurate.
and the solution [NaN, NaN, NaN].

But when I solve the unconstrained problem A\b I get a very good fit to the experimental data, and if I plot the curve it agrees with the MATLAB results; it is just that the coefficients I get would be considered physically impossible. (That's why the constraints.)
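For reference, the unconstrained version is just the backslash solve:

x_unconstrained = A \ b   # fits the data well, but the coefficients land outside the physical ranges above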

I have tried to understand how to use Optim.jl or LsqFit.jl for this problem, but I cannot figure out the syntax to apply either of them to it.

If anyone has a solution, or could explain to me how to apply one of these packages to my problem, it would really help.

Thank you

Edit: Sorry, I misread your post.

Your problem is indeed infeasible:

add_constraint!(x, x[3] ≤ +15) 
add_constraint!(x, x[3] ≥ +25)

x[3] cannot be smaller than 15 and greater than 25 at the same time. I bet that swapping the two constraints is what you wanted. The following will probably work:

add_constraint!(x, x[3] ≤ +25) 
add_constraint!(x, x[3] ≥ +15)

Hello @lrnv, and thank you for your reply. Sorry, I miswrote the condition when I typed the post; even with the conditions correctly set I get the same warning. I think I may be able to get an easier solution using another method, maybe?

Could you give us a minimal working example? For the moment I do not have the values of A0,…A4 and therefore I cannot run your code to try to understand what happened.


No it does not, as I cannot use this to reproduce the matrix locally :wink:

On my machine, the following works and solves correctly:

using Convex, SCS

A = randn((1501,3))
b = randn(1501)
x = Variable(3)

# constraints 1
add_constraint!(x, x[1] ≤ +10) 
add_constraint!(x, x[1] ≥ +5)
# constraints 2
add_constraint!(x, x[2] ≤ +1) 
add_constraint!(x, x[2] ≥ +0.5)
# constraints 3
add_constraint!(x, x[3] ≤ +25) 
add_constraint!(x, x[3] ≥ +15)

problem = minimize(sumsquares(A*x-b))
solve!(problem, SCS.Optimizer)

And I am pretty sure it should work for whatever A, b you pass in. So I don't know what happened in your case, I'm sorry.

Thank you, that is what I imagined. Your code works for me too. :frowning:

Do you know where I can find documentation about the syntax for this kind of problem in LsqFit or similar?

From the information in the README I cannot work it out.
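My best guess from the README would be something like the following (untested, and I am not sure the model signature and bound keywords are right):

using LsqFit

# LsqFit expects a model of the form model(xdata, p); A already contains
# the predictors, so xdata is only a placeholder here
model(xdata, p) = A * p

p0 = [7.0, 0.75, 20.0]   # initial guess inside the bounds
lb = [5.0, 0.5, 15.0]    # lower bounds
ub = [10.0, 1.0, 25.0]   # upper bounds

fit = curve_fit(model, 1:length(b), vec(b), p0; lower = lb, upper = ub)
fit.param                # fitted coefficients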

Just for good measure I can also get some fit (although maybe not a particularly good one) when generating data from your “true” fit and adding constraints:

julia> A = rand(1501, 3);

julia> x̄ = [-9.127, 0.955, 19.499];

julia> b = A*x̄;

julia> using Convex, SCS

julia> x = Variable(3);

julia> add_constraint!(x, x[1] ≤ +10);

julia> add_constraint!(x, x[1] ≥ +5);

julia> add_constraint!(x, x[2] ≤ +1);

julia> add_constraint!(x, x[2] ≥ +0.5);

julia> add_constraint!(x, x[3] ≤ +25);

julia> add_constraint!(x, x[3] ≥ +15);

julia> problem = minimize(sumsquares(A*x-b));

julia> solve!(problem, SCS.Optimizer);
------------------------------------------------------------------
               SCS v3.2.1 - Splitting Conic Solver
        (c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 6, constraints m: 1513
cones:    z: primal zero / dual free vars: 1
          l: linear vars: 7
          q: soc vars: 1505, qsize: 2
settings: eps_abs: 1.0e-004, eps_rel: 1.0e-004, eps_infeas: 1.0e-007
          alpha: 1.50, scale: 1.00e-001, adaptive_scale: 1
          max_iters: 100000, normalize: 1, rho_x: 1.00e-006
          acceleration_lookback: 10, acceleration_interval: 10
lin-sys:  sparse-direct-amd-qdldl
          nnz(A): 4515, nnz(P): 0

(...)

------------------------------------------------------------------
status:  solved (inaccurate - reached max_iters)
timings: total: 4.60e+000s = setup: 5.64e-003s + solve: 4.59e+000s
         lin-sys: 2.52e+000s, cones: 4.28e-001s, accel: 5.34e-001s
------------------------------------------------------------------
objective = 23871.672650 (inaccurate)
------------------------------------------------------------------
┌ Warning: Problem status ALMOST_OPTIMAL; solution may be inaccurate.
└ @ Convex C:\Users\ngudat\.julia\packages\Convex\ukggP\src\solution.jl:342

julia> evaluate(x)
3-element Vector{Float64}:
  7.553048090840104
 -0.001695272361063642
 14.328590811335083

If you do want to share your data you can just write it out into a txt file with

julia> using DelimitedFiles

julia> writedlm("A.txt", A)

and then throw it onto GitHub or some other publicly accessible place.

Thank you, here are the files with the matrix A and the vector b. GitHub - Boladecrema/Datafiles

@nilshg thank you for your help. It does work for me too; it seems like it only fails with my data.

It looks from your files like the values in A and b are rather large (~1e7). I bet that this is the reason Convex.jl is confused. Can you try again, but dividing both A and b by, say, mean(abs.(b)) before handing them to the solver? Does that help?
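Something along these lines (untested):

using Statistics   # for mean

s  = mean(abs.(b))   # a representative scale of the data
As = A ./ s
bs = b ./ s

problem = minimize(sumsquares(As * x - bs))
solve!(problem, SCS.Optimizer)
# the minimizer is the same as for the unscaled problem, only the objective
# is divided by s^2, so the box constraints on x stay as they are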


Here is a variant with Optim.jl, where I adapted the example from here to your data:

using DelimitedFiles, Optim, LinearAlgebra
A = readdlm("A.txt")
b = readdlm("b.txt")

f(x) = 0.5 * norm(A*x-b)^2

function grad_f!(g, x)
    g .= A'*(A*x-b)
    return g
end

function hess_f!(h, x)
    h .= A'A
    return h
end

# a point within the constraints
x0 = [6.0, 0.6, 16.0]

df = TwiceDifferentiable(f, grad_f!, hess_f!, x0)

lx = [5.0, 0.5, 15.0]
ux = [10.0, 1.0, 25.0]

dfc = TwiceDifferentiableConstraints(lx, ux)
res = optimize(df, dfc, x0, IPNewton())
println(res.minimizer) # still at x0 with your data :/

(I hope I did not make a mistake in gradient and Hessian, but I am relatively sure they are correct).

This might be a starting point, but it also does not find a reasonable minimiser, since it never moves away from the initial point (which I randomly chose within the constraints). So maybe @lrnv's idea to rescale your problem is right; the range (0 – 1.2e8) seems quite large currently, especially compared to the constraints.

edit: fixed f.


Of course, wildly differing scales within the same problem can sometimes cause conditioning problems (although these won’t be solved by simply re-scaling the problem) and there can always be overflow/underflow hazards. Those concerns aside, I wouldn’t expect most solvers to show significant sensitivity to the mere scaling of a problem. The algorithms are the same at any scale, they just might need to adjust their internal scales and initialization for things like constraints accordingly.

If scaling fixes your issues, great. But in that case you might consider opening an issue with the solver. It should be able to handle the scales in question. If nothing else, it should be able to re-scale the problem itself (although there’s likely something better it could do).

This should be norm(A*x-b)^2 (at which point you might just do sum(abs2, A*x-b)) (EDIT: I forgot the scale of 1/2 out front – add that too). As written, f isn't everywhere differentiable and the gradient and Hessian are wrong. That could be part of the problem.
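i.e. something like:

f(x) = 0.5 * sum(abs2, A*x - b)   # smooth everywhere, with gradient A'*(A*x - b) and Hessian A'A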


Thanks for noticing; there was also a 1/2 missing up front. I am sure the gradient and Hessian are correct for the fixed f.

Concerning the scales – well, with values up to 1e8 (and hence objective values around 1e17), we are getting into the region where machine precision starts to matter. So that might be a problem, but not something a solver should fix internally, so I do think we are in the area of hazards.
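Just to illustrate the scale:

eps(3.96e17)   # spacing between adjacent Float64 values around f(x0) ≈ 4e17
# returns 64.0, so changes in f smaller than that are not even representable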

With the right cost f in Optim.jl (see fixed code above), the solver actually moves a bit from

x0 = [6.0, 0.6, 16.0] # with f(x0) being 3.959441234211307e17

to

julia> res.minimizer
[5.0000000000028555, 0.9204942748779907, 18.913210439100894]

julia> f(res.minimizer)
7.662276116069474e14

Hi @kellertuer, thank you very much for your help!

This does work! I still think there is something weird with the constraints here, because the solution always sticks to lx[1] – whatever lower bound I set, that is the value I get back for x[1] – and that does not convince me, because the other results I already have are rather close to 10.

I will review the data processing anyway to check that there are no issues with my data that could make this happen.
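I will probably start with some basic sanity checks, e.g.:

using LinearAlgebra

any(isnan, A), any(isnan, b)   # missing or garbled entries?
extrema(A), extrema(b)         # do the ranges/units look right?
cond(A)                        # conditioning of the least-squares problem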


Well, @mikmoore wrote that my gradient and Hessian are wrong – I think they should be right now that f is fixed, but maybe he has some more details on why they might still be wrong.

But let me know whether also your data processing changes something there.


After that fix to the function value, all the derivatives look correct now.

I notice that you aren’t setting a solution tolerance – this is probably something you should do for most solves you care about. You’ll have to look into the docs for the syntax, but it looks like you might be looking for something like an absolute error on x of 1e-5 (or 1e-10, whatever you want so long as it’s notably bigger than machine precision on the final values of x – in this case probably don’t try much smaller than 1e-12). I don’t know whether you care about the tolerance on f.
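I haven't run it, but from the Optim.jl docs I would expect it to look roughly like:

# pass explicit tolerances (and an iteration budget) to the constrained solve
opts = Optim.Options(x_abstol = 1e-10, iterations = 10_000)
res  = optimize(df, dfc, x0, IPNewton(), opts)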

You should also experiment with scaling A and b to be smaller, as others suggested. The solver ought to be agnostic to this, but that doesn't mean it is.


Sure, tolerances are something one could play with. The default might just be something that "works reasonably in most cases".

Hi! @kellertuer

I generated some fake data and compared the results of your proposal and A\b, and it works! So thank you!
So there is definitely something wrong with my data; I just need to figure out what now :smiling_face_with_tear: .

The solver does something a bit weird when I set a column of A to 0, though – I guess that is normal?

Setting a whole column to zero is strange, yes, since that basically excludes the variable x[i] – a row, on the other hand, should be fine.

Instead of setting that column to zero, I would rather shorten A by that column, as well as x (and the constraints).
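Something like (sketch, not tested):

# keep only variables 1 and 3, dropping the excluded one
keep   = [1, 3]
A_red  = A[:, keep]                 # use A_red in f, grad_f! and hess_f! as well
x0_red = x0[keep]
lx_red, ux_red = lx[keep], ux[keep]
dfc_red = TwiceDifferentiableConstraints(lx_red, ux_red)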

Thanks! Yes, I did do that! I was just testing different cases.

Sorry to revive this post; I just wanted to mention an alternative way to solve the problem using GModelFit.jl v0.1.1:

using DelimitedFiles, GModelFit
A = readdlm("A.txt")
b = readdlm("b.txt")

# Create domain and model
dom = Domain(length(b))
model = Model(dom,
              GModelFit.FCompv(x -> A*x - b, # mathematical model: Ax - b
                               [7, 1, 20]))  # initial guess values

# Set constraints
model[:main].p1.low  = 5
model[:main].p1.high = 10

model[:main].p2.low  = 0.5
model[:main].p2.high = 1

model[:main].p3.low  = 15
model[:main].p3.high = 25

# Create a dummy data set with zeros
dummydata = Measures(dom, fill(0., length(dom)), 1.)

# Fit model (i.e. find the x such that Ax-b ~ 0 using nonlinear least-squares minimization)
best, fitstats = fit(model, dummydata)

# Print best fit values
println(best[:main].p1.val)
println(best[:main].p2.val)
println(best[:main].p3.val)