Hi,
I am trying to use the FiniteDiff package to compute the gradient of a function that takes an N-vector as input and returns a scalar. I need to write the gradient (an N-vector) into a pre-allocated vector supplied by an optimization solver.
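For concreteness, here is a minimal sketch of my setup (`f`, `N`, `x`, and `g` are placeholders standing in for my actual objective and the solver's buffer):

```julia
using FiniteDiff

f(x) = sum(abs2, x)   # stand-in for my actual scalar-valued objective
N = 4
x = rand(N)
g = zeros(N)          # pre-allocated 1D gradient buffer handed over by the solver
```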
However, the FiniteDiff.finite_difference_jacobian! function will only write to a pre-allocated 1×N two-dimensional array as the Jacobian matrix. How can I wrap the 1D gradient vector supplied by the solver in a 1×N array, so that finite_difference_jacobian! sees a reference to a pre-allocated Jacobian matrix of the correct size, without any new allocation or copying?
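What I am hoping for is something like the sketch below. In Julia, `reshape` on an `Array` returns an array that shares the underlying memory, so writes through the reshaped 1×N matrix land directly in the original vector; whether finite_difference_jacobian! will then fill it without further allocation is exactly what I am unsure about. (The in-place wrapper `f!` at the end is my guess at the required function form, not something I have verified.)

```julia
# 1×N Matrix over the same memory as g: reshape of an Array copies no data
J = reshape(g, 1, N)

J[1, 2] = 3.5          # writes through J...
@assert g[2] == 3.5    # ...are visible in g, so no copy-back would be needed

# intended use (untested): pass J as the pre-allocated Jacobian so that
# the gradient ends up directly in g
# f!(fx, x) = (fx[1] = f(x); fx)
# FiniteDiff.finite_difference_jacobian!(J, f!, x)
```

(`transpose(g)` would also give a 1×N wrapper without copying the data, but I assume a plain `Matrix` from `reshape` is more likely to be accepted as the Jacobian argument.)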