# How can I get specific values out of JuMP's variables?

I want to use my own function as the objective function and optimize it using JuMP.jl.
In this function, I use JuMP variables internally as Vector{Float64} to make the calculation more efficient.

However, the type of JuMP variable is VariableRef, and if it cannot be converted to Float64, an error occurs.
How can I retrieve the values?

For example, I assume the following sample program.

```julia
using JuMP
import Enzyme
import Ipopt
import Test

function f(x)
    vec = Vector{Float64}(undef, length(x))
    for i in eachindex(vec)
        vec[i] = x[i]
    end
    return (1 - vec[1])^2 + 100 * (vec[2] - vec[1]^2)^2
end

"""
enzyme_derivatives(f::Function) -> Function

Return a tuple of functions that evaluate the gradient and Hessian of `f` using
Enzyme.jl.
"""
function enzyme_derivatives(f::Function)

function ∇f(g::AbstractVector{T}, x::Vararg{T,N}) where {T, N}
g .= Enzyme.autodiff(Enzyme.Reverse, f, Enzyme.Active.(x))[1]
return
end

return ∇f
end

function enzyme_rosenbrock()
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, x[1:2])
    @operator(model, op_rosenbrock, 2, f, enzyme_derivatives(f))
    @objective(model, Min, op_rosenbrock(x))
    optimize!(model)
    Test.@test is_solved_and_feasible(model)
    return value.(x)
end

s = enzyme_rosenbrock()
display(s)
```

This is a slightly modified version of an example from the JuMP.jl tutorials.

However, when executed, the following error occurs:

```text
ERROR: MethodError: Cannot `convert` an object of type VariableRef to an object of type Float64

Closest candidates are:
  convert(::Type{T}, ::T) where T
   @ Base Base.jl:84
  convert(::Type{T}, ::CartesianIndex{1}) where T<:Number
   @ Base multidimensional.jl:135
  convert(::Type{T}, ::AbstractChar) where T<:Number
   @ Base char.jl:185
  ...

Stacktrace:
  [1] setindex!(A::Vector{Float64}, x::VariableRef, i1::Int64)
    @ Base ./array.jl:1021
  [2] f
    @ ~/my_program/GitHub/JuliaOptOS/tests/JuMP/enzyme_sample_approx_hessian2.jl:10 [inlined]
  [3] NonlinearOperator
    @ ~/.julia/packages/JuMP/7rBNn/src/nlp_expr.jl:893 [inlined]
  [4] macro expansion
    @ ~/.julia/packages/MutableArithmetics/SXYDN/src/rewrite.jl:340 [inlined]
  [5] macro expansion
    @ ~/.julia/packages/JuMP/7rBNn/src/macros.jl:257 [inlined]
  [6] macro expansion
    @ ~/.julia/packages/JuMP/7rBNn/src/macros/@objective.jl:66 [inlined]
  [7] enzyme_rosenbrock()
    @ Main ~/my_program/GitHub/JuliaOptOS/tests/JuMP/enzyme_sample_approx_hessian2.jl:36
  [8] top-level scope
    @ ~/my_program/GitHub/JuliaOptOS/tests/JuMP/enzyme_sample_approx_hessian2.jl:42
```

Hi @Topo!

Why do you want to do that? Was there a part of the documentation that seemed to suggest it? Since JuMP variables are not `Float64`, this will lead to errors instead of making your code more efficient.

Thank you.

This is because I want to use, as the objective function, complex functions that were originally written for other purposes.

So to apply such a function to JuMP, I would have to re-create it from scratch.

Reading the source, it appears that `VariableRef` has the following structure.
I assume the actual numbers must be stored somewhere, and if they can be extracted somehow, the problem will be solved.

```julia
struct GenericVariableRef{T} <: AbstractVariableRef
    model::GenericModel{T}
    index::MOI.VariableIndex
end
```

Unfortunately not: the whole point is that your function should handle generic `VariableRef` objects instead of trying to turn them into `Float64`. For all intents and purposes, these objects behave just like numbers, so the only challenge is to write sufficiently generic code.
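To illustrate what "sufficiently generic" can look like: the `f` in the original post only fails because it pins the intermediate buffer to `Vector{Float64}`. A hypothetical rewrite (the name `f_generic` is mine, not from the thread) that takes the element type from the input works unchanged on plain numbers:

```julia
# Hypothetical generic rewrite of the poster's `f`: the buffer's element
# type follows the input instead of being hard-coded to Float64, so the
# same code accepts Float64 values and symbolic types like VariableRef.
function f_generic(x)
    vec = Vector{eltype(x)}(undef, length(x))
    for i in eachindex(vec)
        vec[i] = x[i]
    end
    return (1 - vec[1])^2 + 100 * (vec[2] - vec[1]^2)^2
end

# On plain numbers it behaves exactly like the original Rosenbrock function:
println(f_generic([1.0, 1.0]))  # 0.0, the minimum
println(f_generic([0.0, 0.0]))  # 1.0
```

With JuMP variables, every operation used here (`-`, `*`, `^`) is defined for `VariableRef`, so calling such a function should build a JuMP expression instead of throwing the `convert` error.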

Do you have access to said code? Can you give an example?

I understand.
Unfortunately, it seems difficult to make my program work with JuMP.

However, since I am currently able to support NLopt.jl and other libraries, I will use those instead.

Sorry, but the function is not currently available to the public.
It is large and complex because it uses solutions of partial differential equations.