How to map JuMP VariableRef to NLPModelMeta indices?

I am developing a large-scale bilevel problem (1M+ variables), and as a first step I am extracting the constraint coefficient matrix of the lower-level linear program using NLPModelMeta (see this post). Here is a simple example:

using JuMP
using NLPModels
using NLPModelsJuMP

model = Model()

@variable(model, x₂)  # intentionally defining x₂ first
@variable(model, x₁ ≥ 0)

@objective(model, Min, 2x₁ + 4x₂)

@constraint(model, x₁ + 4x₂ ≥ 3)
@constraint(model, 3x₁ + 2x₂ == 14)

nlp = MathOptNLPModel(model)
x = zeros(nlp.meta.nvar)
A = Matrix(jac(nlp, x))
# ... more code deriving constraint coefficients and bounds vectors to create KKT conditions
julia> A
2×2 Array{Float64,2}:
 2.0  3.0
 4.0  1.0

In this example the entries of A and the constraints indicate that x = [x₂; x₁], where x₂ is in the first entry. From this I assume that MathOptNLPModel builds the metadata according to the order in which variables were added to the JuMP model (please correct me if I’m wrong!).
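One way to sanity-check this (assuming all_variables returns variables in creation order, which I believe it does) is:

vars = all_variables(model)  # here returns [x₂, x₁], matching the column order of A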

My question is this: is there a way to track or map JuMP VariableRefs to the correct indices in the MathOptNLPModel metadata (other than tracking the order of variable definitions and their sizes)? For example, the metadata already includes helpful indices such as “indices of equality constraints” in NLPModelMeta.jfix. Is it possible to get the indices of x₂?
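For instance, for the model above nlp.meta.jfix should give [1], since the first row of A corresponds to the equality constraint:

nlp.meta.jfix  # indices of equality constraints; here [1]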

(Note that for this trivial example it is easy to determine which metadata indices map to the model variables. However, my case will contain millions of variables, some of them vectors of ~10,000 entries.)

I believe we follow the order of all_variables(model). all_variables belongs to either JuMP or MOI.
@amontoison, can you confirm?

Yes, you’re right, Abel. @nlaws, if you want to know the mapping between VariableRefs and indices in a MathOptNLPModel, you just need this:
vr = all_variables(model)
indices = [v.index.value for v in vr]  # column of each variable in the NLPModel