Hi, I’m wondering if there’s a better way to get the gradient of a function like this one:
```julia
using NonNegLeastSquares, DifferentiationInterface
import FiniteDiff

# Reshape the flat parameter vector into a basis matrix, solve the
# non-negative least-squares problem, and return the squared residual.
function f(aa, b)
    A = reshape(aa, size(b, 1), :)
    x = nonneg_lsq(A, b)
    return sum(abs2, b .- A * x)
end

A = rand(5, 3)
aa = vec(A)
B = rand(5, 10)

gradient(aa -> f(aa, B), AutoFiniteDiff(), aa)
```
This works with FiniteDiff, but not with other backends (e.g. Dual-based ones like ForwardDiff), due to Dual/Float64 conversion issues.
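For concreteness, this is the kind of call that fails for me (assuming `AutoForwardDiff()` as the Dual-based backend; the exact error message depends on the backend):

```julia
import ForwardDiff

# Errors with a Dual/Float64 conversion issue, presumably because
# nonneg_lsq works with plain Float64 arrays internally.
gradient(aa -> f(aa, B), AutoForwardDiff(), aa)
```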
My objective is to optimize a set of non-negative basis vectors (i.e. `A`) as part of a more complex optimization model. For instance, the real function would be something like:
```julia
# `complicated_computation` maps the actual optimization variables u
# to the flattened non-negative basis matrix.
function f1(u, b)
    aa = complicated_computation(u)
    return f(aa, b)
end
```
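To illustrate the end goal, here is a sketch with a hypothetical stand-in for the complicated computation (just an illustrative placeholder, not the real model):

```julia
# Hypothetical stand-in: any smooth map from the parameters u
# to the flattened 5x3 basis matrix vec(A).
complicated_computation(u) = exp.(u)

u = randn(15)  # 15 = 5 * 3 entries of A
gradient(u -> f1(u, B), AutoFiniteDiff(), u)
```

Ideally I could differentiate this whole pipeline with a faster backend than finite differences.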