Q: Optimization.jl | Passing fixed parameters to an analytic derivative [gradient] for univariate [multivariate] optimization?

The Optimization.jl documentation states: "Defining gradients can be done in two ways. One way is to manually provide a gradient definition in the OptimizationFunction constructor. However, the more convenient way to obtain gradients is to provide an AD backend type."

As I have already worked out the analytic gradient using Symbolics.jl, I would like to supply it directly rather than rely on automatic differentiation.

From reading the OptimizationFunction constructor specification, it is far from clear to me how to do this.

So, in the following MWE, how does one replace Optimization.AutoForwardDiff() in the OptimizationFunction call with the analytic gradient grad? (A sketch of my best guess follows the MWE.)

# test_scalar_Optimization.jl

using Optimization
using OptimizationOptimJL
using ForwardDiff

function f(x, p)
    f_x = (x[1] - p[1])^2
    return f_x
end

# Analytic gradient, written in place: fills G with df/dx at x for parameters p
function grad(G, x, p)
    G[1] = 2.0 * (x[1] - p[1])
    return nothing
end

begin
    x0 = zeros(1)
    p = [1.0]

    opt_f = OptimizationFunction(f, Optimization.AutoForwardDiff())
    opt_prob = OptimizationProblem(opt_f, x0, p)
    sol = solve(opt_prob, Optim.BFGS())
end
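
For what it's worth, my best guess, going by the keyword arguments listed for the OptimizationFunction constructor, is something like the following. This is untested and assumes the constructor accepts a grad keyword taking the in-place (G, x, p) signature, with no AD backend then needed:

# Untested guess: supply the analytic gradient via the grad keyword instead of an AD backend
opt_f_analytic = OptimizationFunction(f; grad = grad)
opt_prob_analytic = OptimizationProblem(opt_f_analytic, x0, p)
sol_analytic = solve(opt_prob_analytic, Optim.BFGS())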

Using the AutoModelingToolkit backend is equivalent to using Symbolics.jl for symbolic gradients, since ModelingToolkit.jl is built on Symbolics.jl.
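
If you want to try it, here is a minimal sketch of the usage, assuming Optimization.AutoModelingToolkit() is accepted as the AD backend here and that ModelingToolkit is installed and loaded (I have not run this on your example):

using Optimization
using OptimizationOptimJL
using ModelingToolkit

# Same objective f as in the MWE above; the gradient is generated symbolically.
opt_f_mtk = OptimizationFunction(f, Optimization.AutoModelingToolkit())
opt_prob_mtk = OptimizationProblem(opt_f_mtk, zeros(1), [1.0])
sol_mtk = solve(opt_prob_mtk, Optim.BFGS())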

Thank you for your suggestion.
I will read the AutoModelingToolkit documentation and give it a go.