Hi, I’m in an optimization situation where my objective function expects a vector input, but the problem is over a single variable, so I’d like to use Brent’s algorithm. To make the function compatible, I simply wrap it as `x -> test_vector([x])` in the MWE below, which seems to add a lot of allocations. Is there a way to avoid this? (In my use case, I have something that looks like `test_vector`, not `test_scalar`.) See the MWE below:
```julia
using BenchmarkTools
using Optim

# Scalar objective: Brent's method can call this directly.
test_scalar(x::Real) = x^2

function optims_scalar()
    ress = 0.0
    for i in 1:10
        ress += Optim.optimize(test_scalar, 0.0, 1.0, Brent()).minimizer
    end
    return ress
end

# Vector objective: the wrapper allocates a fresh 1-element Vector per call.
test_vector(x::AbstractVector) = @inbounds x[1]^2

function optims_vector()
    ress = 0.0
    for i in 1:10
        ress += Optim.optimize(x -> test_vector([x]), 0.0, 1.0, Brent()).minimizer[1]
    end
    return ress
end

function run_tests()
    display(@benchmark optims_scalar())
    display(@benchmark optims_vector())
end

run_tests()
```
This gives (first `optims_scalar`, then `optims_vector`):
```text
BenchmarkTools.Trial: 10000 samples with 1 evaluation per sample.
 Range (min … max):  12.000 μs … 49.500 μs  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     12.125 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   12.383 μs ±  1.590 μs  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █▇▆▄▄▃▁▂▃▂                                                  ▂
  ████████████▇▇▇▆▅▆▅▆▅▅▃▆▄▄▃▄▅▅▄▅▅▄▅▃▄▆▁▅▄▅▅▄▃▄▄▁▃▆▅▅▅▅▅▆▅▆▇ █
  12 μs        Histogram: log(frequency) by time      17.4 μs <

 Memory estimate: 1.72 KiB, allocs estimate: 20.

BenchmarkTools.Trial: 10000 samples with 1 evaluation per sample.
 Range (min … max):  17.416 μs … 32.571 ms  ┊ GC (min … max): 0.00% … 99.88%
 Time  (median):     18.334 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   22.684 μs ± 326.642 μs ┊ GC (mean ± σ): 16.69% ±  2.19%

  ▇█ ▁ ▄▂
  ▅██▇▇█▅▄▃▂██▆▅▇▄▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▂ ▃
  17.4 μs       Histogram: frequency by time          27 μs <

 Memory estimate: 47.97 KiB, allocs estimate: 760.
```
FWIW, I’m on Julia v1.10.4 for compatibility reasons.
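One workaround I’ve been considering (untested against Optim’s internals, so I’d appreciate a sanity check): close over a single preallocated 1-element buffer and mutate it in place, instead of building a fresh `Vector` on every objective evaluation. The helper name `make_scalar_objective` is just something I made up for this sketch:

```julia
# Same vector objective as in the MWE above.
test_vector(x::AbstractVector) = @inbounds x[1]^2

# Wrap a vector-input objective as a scalar-input one, reusing one buffer.
# `buf` is allocated once here; each call only writes into it.
function make_scalar_objective(f)
    buf = Vector{Float64}(undef, 1)
    return x -> begin
        buf[1] = x
        f(buf)
    end
end

obj = make_scalar_objective(test_vector)
obj(0.5)  # == 0.25, without allocating a new vector per call
```

With Optim this would then be `Optim.optimize(make_scalar_objective(test_vector), 0.0, 1.0, Brent())`, though I haven’t checked whether Brent’s method still allocates elsewhere. Note this assumes the objective is only ever called from one place at a time (the shared buffer is not safe under threading).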