Hello,
Given an expensive function \boldsymbol{G} : \mathbb{R}^n \to \mathbb{R}^m whose values are known at samples \boldsymbol{x}^1, \ldots, \boldsymbol{x}^N, I would like to compute the Jacobian and Hessian at each sample, or at least at the mean of the samples \overline{\boldsymbol{x}}, without any new evaluations of \boldsymbol{G}.
The package https://github.com/Mattriks/CoupledFields.jl could be a solution. @Mattriks, can you give me more background on the techniques it uses to construct these gradients?
I would like to get your feedback on the following technique that relies on smooth delta functions:
Let's start with the continuous form. By the sifting property of the Dirac delta,

\boldsymbol{G}(\overline{\boldsymbol{x}}) = \int_{\mathbb{R}^n} \boldsymbol{G}(\boldsymbol{x}) \, \delta(\boldsymbol{x} - \overline{\boldsymbol{x}}) \, \mathrm{d}\boldsymbol{x}.
Thus we can get the Jacobian at \overline{\boldsymbol{x}} by differentiating under the integral sign (the minus sign comes from the chain rule applied to \boldsymbol{x} - \overline{\boldsymbol{x}}):

J_{ji}(\overline{\boldsymbol{x}}) = \frac{\partial G_j}{\partial \overline{x}_i} = -\int_{\mathbb{R}^n} G_j(\boldsymbol{x}) \, \frac{\partial \delta}{\partial z_i}\Big|_{\boldsymbol{z} = \boldsymbol{x} - \overline{\boldsymbol{x}}} \, \mathrm{d}\boldsymbol{x}.
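Following the same pattern, the Hessian of each component G_j should come out as (the sign is + again because two derivatives land on the delta):

H^{(j)}_{ik}(\overline{\boldsymbol{x}}) = \frac{\partial^2 G_j}{\partial \overline{x}_i \, \partial \overline{x}_k} = \int_{\mathbb{R}^n} G_j(\boldsymbol{x}) \, \frac{\partial^2 \delta}{\partial z_i \, \partial z_k}\Big|_{\boldsymbol{z} = \boldsymbol{x} - \overline{\boldsymbol{x}}} \, \mathrm{d}\boldsymbol{x}.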
We can replace the Dirac delta function by a smoothed version \delta_h with compact support.
The function \delta(\boldsymbol{z}) can be factorized as a product of one-dimensional delta functions, \delta(\boldsymbol{z}) = \prod_{i=1}^{n} \delta(z_i) (this also holds for \delta_{h}). Therefore we can easily construct the derivative kernels \delta^{(1)}_i(\boldsymbol{z}) = \partial \delta(\boldsymbol{z}) / \partial z_i and \delta^{(2)}_{ij}(\boldsymbol{z}) = \partial^2 \delta(\boldsymbol{z}) / \partial z_i \partial z_j.
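As a concrete choice (my example; I don't know which kernels CoupledFields.jl uses): since the Hessian needs two derivatives, the one-dimensional kernel should be at least C^2 at the edge of its support. A triweight-style kernel satisfies this:

\delta_h(z) = \frac{35}{32h}\left(1 - (z/h)^2\right)^3 \text{ for } |z| \le h, \quad 0 \text{ otherwise},

\delta_h^{(1)}(z) = -\frac{105}{16h^2} \, \frac{z}{h} \left(1 - (z/h)^2\right)^2 \text{ for } |z| \le h,

and the multivariate derivative kernels follow from the product rule, e.g. \delta^{(1)}_{h,i}(\boldsymbol{z}) = \delta_h^{(1)}(z_i) \prod_{k \neq i} \delta_h(z_k).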
For our discrete samples, we approximate the integral by a weighted sum over the samples:

J_{ji}(\overline{\boldsymbol{x}}) \approx -\sum_{k=1}^{N} w_k \, G_j(\boldsymbol{x}^k) \, \delta^{(1)}_{h,i}(\boldsymbol{x}^k - \overline{\boldsymbol{x}}),

and analogously for the Hessian with \delta^{(2)}_{h,ij} (with a plus sign). The quadrature weights w_k depend on how the samples are distributed (e.g. w_k = V/N for N samples spread roughly uniformly over a volume V), and h is set according to the spread of the samples about \overline{\boldsymbol{x}}.
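To make this concrete, here is a minimal Julia sketch of the Jacobian estimator under the assumptions above; the triweight kernel, the single uniform quadrature weight w, and all function names are mine, not an existing API:

```julia
using Statistics

# 1D triweight kernel and its first derivative (compact support |z| <= h)
δh(z, h)  = abs(z) <= h ? 35 / (32h) * (1 - (z / h)^2)^3 : 0.0
δh′(z, h) = abs(z) <= h ? -105 / (16h^2) * (z / h) * (1 - (z / h)^2)^2 : 0.0

# Multivariate derivative kernel: ∂δ_h(z)/∂z_i = δh′(z_i) * ∏_{k≠i} δh(z_k)
function δh1(z::AbstractVector, i::Integer, h::Real)
    p = δh′(z[i], h)
    for k in eachindex(z)
        k == i || (p *= δh(z[k], h))
    end
    return p
end

# Jacobian estimate J_ji ≈ -Σ_k w G_j(x^k) δ^{(1)}_{h,i}(x^k - x̄).
# X is n×N (samples as columns), Y is m×N (the known values G(x^k)).
# A Hessian estimate would be analogous, using second-derivative kernels.
function jacobian_estimate(X::AbstractMatrix, Y::AbstractMatrix,
                           x̄::AbstractVector, h::Real; w::Real = 1.0)
    n, N = size(X)
    J = zeros(size(Y, 1), n)
    for k in 1:N
        z = X[:, k] .- x̄
        all(abs.(z) .<= h) || continue   # sample outside the kernel support
        for i in 1:n
            J[:, i] .-= w .* Y[:, k] .* δh1(z, i, h)
        end
    end
    return J
end

# Usage sketch: samples uniform on [-2, 2]^2, so w = V/N is an exact quadrature weight
G(x) = [x[1]^2 + x[2], sin(x[2])]
X = 4 .* rand(2, 200_000) .- 2
Y = reduce(hcat, map(G, eachcol(X)))
x̄ = vec(mean(X, dims = 2))
J = jacobian_estimate(X, Y, x̄, 0.4; w = 16 / size(X, 2))
# compare with the exact Jacobian at x̄: [2x̄[1] 1; 0 cos(x̄[2])]
```

Because the kernel has compact support, only samples within h of \overline{\boldsymbol{x}} in every coordinate contribute, so h trades smoothing bias (large h) against noise from having too few samples inside the support (small h).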