Optimizing Large-Scale Problems: One Large Problem vs. Multiple Smaller Ones

Hello :grinning:,
While reading this NonlinearSolve.jl example, I was wondering whether it is better to define a large-scale problem as one large problem (f2) or as several small problems (f1).

In the code below, the f2 version is much slower, presumably because the Vector version of myfun is not allocation-free.

Is that the only reason, and is there a recommended practice in this situation (one large problem vs. several smaller ones)?

using NonlinearSolve, BenchmarkTools

const Nt = 100;
levels = 1.5 .* rand(Nt);
out = zeros(Nt);
# Residual for x * sin(x) = lv, scalar and vector versions
myfun(x::Number, lv::Number) = x * sin(x) - lv
myfun(x::Vector, lv::Vector) = x .* sin.(x) .- lv  # broadcast allocates a fresh array on every call

# f1: solve Nt independent scalar problems, one per level
function f1(out, levels, u0)
    for i in 1:Nt
        out[i] = solve(
            NonlinearProblem{false}(NonlinearFunction{false}(myfun), u0, levels[i]),
            SimpleNewtonRaphson()).u
    end
end

# f2: solve one Nt-dimensional problem in a single call
function f2(levels, u0)
    solve(
        NonlinearProblem{false}(NonlinearFunction{false}(myfun), u0, levels),
        SimpleNewtonRaphson()).u
end

@btime f1($out, $levels, 1.0)   # interpolate globals so the benchmark measures the solves
@btime f2($levels, ones(Nt))

Thanks,
fdekerm

Pretty much. You should use SimpleNonlinearSolve.jl with static arrays if the problem is small; then it should be fully non-allocating.
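
A minimal sketch of that suggestion, assuming a small fixed-size system (the names levels_s and u0_s are made up for illustration): with SVectors from StaticArrays.jl, the out-of-place broadcast residual returns a stack-allocated SVector, so SimpleNewtonRaphson can iterate without touching the heap.

using SimpleNonlinearSolve, StaticArrays

# Same residual as above; broadcasting over SVectors yields an SVector
myfun(x, lv) = x .* sin.(x) .- lv

levels_s = @SVector [0.3, 0.8, 1.2]   # hypothetical small problem
u0_s = @SVector ones(3)

prob = NonlinearProblem{false}(myfun, u0_s, levels_s)
sol = solve(prob, SimpleNewtonRaphson())
sol.u   # SVector{3, Float64}, computed without heap allocations

For something the size of Nt = 100, static arrays become very expensive to compile, which is presumably why the advice is scoped to small problems.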
