NL Model Issue during forward diff

I’m still new to Julia and JuMP, so this might be an easy one.

Setup: I have some complex business logic that takes a DataFrame of planned changes and returns a DataFrame of projected output and error variables. I want to wrap this in an optimization.

using JuMP, Ipopt, DataFrames

model = Model(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 3))

# nPotential (16 here) and max_val are defined earlier in the script
@variable(model, max_val >= plan[1:nPotential] >= 0.0)

function ssePlan(plan...)
    global _plan, p  # globals only so the inputs can be inspected after a call
    p = collect(plan)
    _plan = DataFrame(:ID => planTable.ID, :new_plan => p)
    _planAggTable = evalPlan(currentPlan, planTable = _plan)  # business logic
    e1 = _planAggTable.E1
    e2 = _planAggTable.E2
    sse = e1' * e1 + e2' * e2  # sum of squared errors
    return sse
end

# Sanity check: one plain-Float64 evaluation with the current plan
_temp = ssePlan(planTable.new_plan...)

register(model, :ssePlanFn, nPotential, ssePlan; autodiff = true)

@NLobjective(model, Min, ssePlanFn(plan...))

optimize!(model)

evalPlan() contains the business logic; it returns the aggregate table whose E1 and E2 columns hold the error terms being minimized.
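For a self-contained picture, here is a hypothetical stand-in with the same interface (the body is a placeholder; only the shape matches the code above — a planTable keyword argument and a returned table with E1 and E2 columns):

# Hypothetical stand-in for the real business logic. It only mirrors the
# interface used above; the real evalPlan is far more involved.
function evalPlan(currentPlan; planTable)
    e = planTable.new_plan .- currentPlan  # placeholder "error" terms
    return DataFrame(:E1 => e, :E2 => 0.5 .* e)
end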

I get the following error on the last line:

MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#107#109"{typeof(ssePlan)},Float64},Float64,8})

Closest candidates are:
Float64(::Real, !Matched::RoundingMode) where T<:AbstractFloat at rounding.jl:200
Float64(::T) where T<:Number at boot.jl:718
Float64(!Matched::Int8) at float.jl:60

The stack trace shows this occurring deep in the business logic.
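The error is easy to reproduce outside JuMP: ForwardDiff.Dual is a Real, but there is no method for converting a Dual down to a Float64, so anything that forces one into Float64 storage fails exactly like this:

using ForwardDiff

d = ForwardDiff.Dual(0.0, 1.0)  # value 0.0 with one partial derivative

Float64(d)       # MethodError: no method matching Float64(::ForwardDiff.Dual{...})

col = Float64[]
push!(col, d)    # same MethodError, raised via convert(Float64, d)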

I put the global statement into the objective function so I could see what was being passed in.

julia> _plan
16×2 DataFrame
│ Row │ ID     │ new_plan                                                                                                │
│     │ Int64  │ ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#107#109"{typeof(ssePlan)},Float64},Float64,8}                │
├─────┼────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 1   │ -11100 │ Dual{ForwardDiff.Tag{JuMP.var"#107#109"{typeof(ssePlan)},Float64}}(0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0) │
⋮
│ 15  │ 82     │ Dual{ForwardDiff.Tag{JuMP.var"#107#109"{typeof(ssePlan)},Float64}}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0) │
│ 16  │ 83     │ Dual{ForwardDiff.Tag{JuMP.var"#107#109"{typeof(ssePlan)},Float64}}(0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0) │

I’m confused. I thought the values being passed to the function through the model variables plan[1:16] would be plain Float64 scalars.

What am I doing wrong here?
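For context (this became clear while digging): the Duals themselves are expected. With autodiff=true, JuMP uses ForwardDiff for gradients, and ForwardDiff works by evaluating the registered function on Dual numbers that carry partial derivatives alongside the value. A minimal sketch of the same mechanism, independent of JuMP:

using ForwardDiff

f(x...) = sum(abs2, x)  # toy stand-in for ssePlan

# ForwardDiff calls f with Dual arguments to get value and gradient together:
ForwardDiff.gradient(x -> f(x...), [1.0, 2.0, 3.0])  # -> [2.0, 4.0, 6.0]

Any Float64-typed storage inside the function will therefore fail during the gradient sweep, which is exactly what was happening here.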

Solved it myself after some digging.

The Duals flowing through the business logic are harmless as long as nothing forces them into Float64 storage. The culprit was a @byrow! block from DataFramesMeta deep inside evalPlan: a new column was being explicitly created as Array{Float64}, and writing a Dual into it triggered the failing conversion. Changing the element type to Any fixed it:

        @byrow! aggTable begin
            # @newcol cash::Array{Float64}   <-- old code: forced Dual -> Float64
            @newcol cash::Array{Any}         # Duals can now pass through
            # ... rest of the row-wise logic ...
        end
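One follow-up note on the fix: Array{Any} works, but it discards type information. A type-stable alternative (a sketch, assuming the column can be created outside the @byrow! block) is to key the column's element type off the incoming plan values, so it holds Float64 and Dual alike:

# Sketch: create the column with the element type of the plan values, so it
# stores Float64 during plain calls and Dual during the gradient sweep.
T = eltype(_plan.new_plan)               # Float64 normally, Dual under autodiff
aggTable.cash = zeros(T, nrow(aggTable))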