The function to optimize in JuMP

The problem is bullet number 3. See e.g. https://github.com/JuliaDiff/ForwardDiff.jl/issues/136#issuecomment-237941790

I just want to be sure I really understand this error, because I’ve seen it in another context now. Adding println(typeof(...)) statements tells me that each element of y is Float64 after it is modified, and that the initial vector is an Array{Float64,2}. So I’m not really assigning the same name to data of different types in the way you warned me about, right? It’s the dual numbers required for autodiff that turn those elements of y into a different type, one that somehow could not be converted back to Float64 and caused the conversion error.

Is that because Julia needs to convert the vector y to have the same number of dimensions as the scalars alpha and beta for the autodiff?

If your Array{Float64,N} is supposed to hold some dual numbers at any point during the autodiff, then that will give you the same error. When this happens, you need to go back and see which arrays the dual numbers will be touching and initialize those using typeof(some_arg) as the element type of the array instead of some specific number type like Float64 or Int. Type stability will make your code (a lot) faster, but it’s actually a separate issue from the source of this error, so I probably should have made that point clearer.
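To make that concrete, here is a minimal sketch (the function names and the squaring loop are illustrative assumptions, not code from this thread) showing both the failing pattern and the fix:

```julia
using ForwardDiff

# Fails under autodiff: y's element type is pinned to Float64, but
# ForwardDiff.gradient calls f_bad with a vector of Dual numbers,
# and a Dual cannot be converted to Float64 on assignment.
function f_bad(x)
    y = zeros(Float64, length(x))
    for i in eachindex(x)
        y[i] = x[i]^2
    end
    return sum(y)
end

# Works: y's element type follows the input, so it can hold Duals
# during differentiation and plain Float64s otherwise.
function f_good(x)
    y = zeros(eltype(x), length(x))   # or: similar(x)
    for i in eachindex(x)
        y[i] = x[i]^2
    end
    return sum(y)
end

ForwardDiff.gradient(f_good, [1.0, 2.0])   # returns [2.0, 4.0]
# ForwardDiff.gradient(f_bad, [1.0, 2.0])  # throws a conversion MethodError
```

Here eltype(x) is the natural choice because the array is seeded directly from the input vector; typeof(some_arg) works the same way when the array instead depends on a scalar argument.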

This sentence was unclear to me: “When this happens, you need to go back and see which arrays the dual numbers will be touching and initialize those using typeof(some_arg) as the element type of the array instead of some specific number type…”

So if some matrix M will be perturbed, directly or indirectly, by dual numbers, then M needs to be initialized with the same element type as the parameters that determine the values of its elements? At least if those parameters are variables that the function is maximized over, and are therefore perturbed by dual numbers?

Correct, that’s what I meant!
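For concreteness, here is a minimal sketch of that pattern (the objective, the 2x2 matrix, and the names alpha and beta are illustrative assumptions, not code from this thread):

```julia
using ForwardDiff

# M's entries are determined by the variables being optimized over, so M's
# element type must follow those variables rather than being pinned to Float64.
function objective(p)
    alpha, beta = p[1], p[2]
    T = typeof(alpha)    # a Dual type during autodiff, Float64 otherwise
    M = zeros(T, 2, 2)   # zeros(Float64, 2, 2) here would trigger the conversion error
    M[1, 1] = alpha^2
    M[1, 2] = alpha * beta
    M[2, 1] = alpha * beta
    M[2, 2] = beta^2
    return sum(M)        # equals (alpha + beta)^2
end

ForwardDiff.gradient(objective, [1.0, 2.0])   # returns [6.0, 6.0]
```

Inside ForwardDiff.gradient, alpha and beta arrive as Dual numbers, so T is a Dual type and M can hold the perturbed entries; when you call objective on plain Float64s, T is Float64 and nothing changes.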