Suppose we would like to solve the following problem:

$$\min_x \ \lVert Ax - b \rVert_2^2 \quad \text{s.t.} \quad x \ge 0$$

using OptimizationOptimJL.
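For context: without the nonnegativity bound this would be ordinary least squares, which Julia solves directly with the `\` operator; it is the bound that makes an iterative solver necessary. A minimal illustration:

```julia
A, b = rand(10, 100), rand(10)
x_ls = A \ b           # unconstrained least-squares (minimum-norm) solution
any(x_ls .< 0)         # typically true, so the bound really is active
```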

One way to write it would be as follows:

```julia
using Optimization, OptimizationOptimJL

# Objective: squared residual norm ‖Ax − b‖².
# `p` carries the problem data (A, b).
function obj_f(x, p)
    A = p[1]
    b = p[2]
    return sum((A * x - b) .^ 2)
end

function solve_problem(A, b)
    # Gradient via ForwardDiff; the box constraints enforce x ≥ 0.
    optf = Optimization.OptimizationFunction(obj_f, Optimization.AutoForwardDiff())
    prob = Optimization.OptimizationProblem(optf, ones(size(A, 2)), (A, b),
                                            lb = zeros(size(A, 2)),
                                            ub = Inf * ones(size(A, 2)))
    x = OptimizationOptimJL.solve(prob, OptimizationOptimJL.LBFGS(),
                                  maxiters = 5000, maxtime = 100)
    return x
end
```

The code above works, but it has a big drawback: the first time you run it, it takes forever.
For example, on my machine, running the command:

`@time solve_problem(rand(10,100),rand(10))`

gives, for the first run:

`95.392393 seconds (7.30 M allocations: 638.607 MiB, 0.29% gc time, 99.61% compilation time)`

and, for the second run:

`1.916750 seconds (334.38 k allocations: 819.706 MiB, 1.86% gc time)`

Is there anything that can be done to mitigate this delay?

Any further performance tips on the code above would be more than welcome.

I'm aware that JuMP or another package would probably be more suitable for the job, but this is to be implemented in a package where OptimizationOptimJL is already a dependency for other reasons, so I'd like to avoid adding further dependencies if possible.
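One direction I wondered about: would a precompile workload inside the package help? A minimal sketch of what I mean, assuming PrecompileTools.jl would be acceptable as a (very small) extra dependency:

```julia
# Hypothetical: at the top level of the package module, run a tiny
# instance of the problem at precompile time so the compiled code is
# cached together with the package.
using PrecompileTools

@setup_workload begin
    A = rand(2, 3)    # tiny dummy data, just to trigger compilation
    b = rand(2)
    @compile_workload begin
        solve_problem(A, b)
    end
end
```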