JuMP: use AD within a JuMP model to define a derivative-based constraint

I have a JuMP (v0.18.5) model with nonlinear constraints, and I’m trying to encode a new constraint which depends on the Jacobian of the nonlinear constraints.

As an example, consider the simpler case of solving the linear system Ax = -b in a weird way: by finding a feasible point of a problem with the nonlinear constraint f(x) := \tfrac{1}{2} x^{\top} A x + b^{\top} x \le 10 and a second nonlinear constraint \nabla_x f(x) = 0, where \nabla_x f(x) comes from JuMP's AD (here via the MathProgBase NLPEvaluator interface; ForwardDiff comes up in the question at the end).
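To spell out why this recovers the linear solve: A is symmetric, so \nabla_x f(x) = A x + b, and the stationarity constraint \nabla_x f(x) = 0 is exactly A x = -b; the test at the end checks that the solver lands on x = -A^{-1} b. Here's the full script: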

using JuMP
using Ipopt
using MathProgBase
using ForwardDiff
using LinearAlgebra
using SparseArrays   ## for spzeros
using Test

## f(x) = 1/2 x'Ax + b'x
## -----------------------------------------------------------------------------
A = [10.0 1.0; 1.0 9.0]
b = [2.0; 3.0]
tol = 1e-4
m = Model(solver=IpoptSolver())
@variable(m, x[1:2])
@variable(m, zeta[1:2])
setvalue(x, [0.0; 0.0])
@NLobjective(m, Min, sum(zeta[i]^2 for i = 1:2))
@NLconstraint(m, constraint,
    0.5 * sum(A[i, j] * x[i] * x[j] for i in eachindex(x), j in eachindex(x)) +
    sum(b[i] * x[i] for i in eachindex(x)) <= 10)

## AD constraint: evaluate the Jacobian of the nonlinear constraint at the
## current point through the MathProgBase evaluator interface
d = JuMP.NLPEvaluator(m)
MathProgBase.initialize(d, [:Jac])
J = spzeros(1, 2)                       ## one constraint, two variables
ij_jac = MathProgBase.jac_structure(d)  ## sparsity pattern: (rows, cols)
jac_vals = fill(NaN, length(ij_jac[1]))
MathProgBase.eval_jac_g(d, jac_vals, getvalue(x))
for (r, c, v) in zip(ij_jac[1], ij_jac[2], jac_vals)  ## cols from ij_jac[2], not ij_jac[1]
    J[r, c] = v
end
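## sanity check: at the start point x = [0, 0], the Jacobian row of the single
## constraint is the gradient ∇f(0) = b
@test vec(Array(J)) ≈ b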
for i = 1:2
    @NLconstraint(m, J[1,i] == zeta[i])
end

## solve and test
solve(m)
@test norm(getvalue(m[:x]) + A\b) <= tol

It works for me in this case, but I'm wondering whether there's a better way of encoding the AD constraint. In particular, in another JuMP model I have a function which computes x' * A * x + b' * x (a stand-in for a more complex expression…), but I don't have a nice closed-form expression for its gradient. Is the suggested approach to

  1. move the expression for x' * A * x + b' * x into its own function,
  2. use ForwardDiff to compute its gradient/Jacobian/Hessian, and
  3. register (?) the derivatives in my JuMP model,

or is there a better way? For concreteness, a rough sketch of what I mean by steps 1–3 is below.
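Here's the (untested) sketch, reusing A and b from above in a fresh model. The names f_raw, grad1, and grad2 are mine, and I'm assuming JuMP.register with autodiff=true is the right mechanism for user-defined functions:

f_raw(z) = 0.5 * dot(z, A * z) + dot(b, z)   ## step 1: plain Julia function

## step 2: gradient components via ForwardDiff (recomputing the full gradient
## for each component is wasteful, but it keeps the sketch short)
grad1(x1, x2) = ForwardDiff.gradient(f_raw, [x1, x2])[1]
grad2(x1, x2) = ForwardDiff.gradient(f_raw, [x1, x2])[2]

## step 3: register the derivatives and use them in nonlinear constraints
m2 = Model(solver=IpoptSolver())
@variable(m2, y[1:2])
JuMP.register(m2, :grad1, 2, grad1, autodiff=true)
JuMP.register(m2, :grad2, 2, grad2, autodiff=true)
@NLconstraint(m2, grad1(y[1], y[2]) == 0)
@NLconstraint(m2, grad2(y[1], y[2]) == 0)
solve(m2)   ## should land on y = -A \ b, as before

Registering one function per gradient component feels clunky, which is part of why I'm asking. Thanks!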