"InexactError" when calling Optim

I’m writing a program to perform parameter estimation on a system of ODEs, and I keep getting a weird “InexactError” that I’ve spent hours trying, unsuccessfully, to figure out. Here is the call to the optimizer that produces the error:

df = TwiceDifferentiable(objective, x_init, autodiff = :forward)
inner_optimizer = GradientDescent()
res = Optim.optimize(df, LBs_scaled, UBs_scaled, x_init,
                     Fminbox(inner_optimizer), autodiff = :forward,
                     Optim.Options(show_trace = true, allow_f_increases = true))

And here is the stack trace:

InexactError: Int64(40.98658290664029)


Stacktrace:
 [1] Int64 at .\float.jl:710 [inlined]
 [2] convert(::Type{Int64}, ::Float64) at .\number.jl:7
 [3] setproperty!(::Optim.BarrierWrapper{OnceDifferentiable{Int64,Array{Float64,1},Array{Real,1}},Optim.BoxBarrier{Array{Float64,1},Array{Float64,1}},Int64,Int64,Array{Float64,1}}, ::Symbol, ::Float64) at .\Base.jl:34
 [4] value_gradient!!(::Optim.BarrierWrapper{OnceDifferentiable{Int64,Array{Float64,1},Array{Real,1}},Optim.BoxBarrier{Array{Float64,1},Array{Float64,1}},Int64,Int64,Array{Float64,1}}, ::Array{Real,1}) at C:\Users\Michael\.julia\packages\Optim\CK6Dn\src\multivariate\solvers\constrained\fminbox.jl:76
 [5] initial_state(::GradientDescent{LineSearches.InitialPrevious{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Optim.InverseDiagonal,Optim.var"#64#65"{Array{Float64,1},Array{Float64,1},Fminbox{GradientDescent{LineSearches.InitialPrevious{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Optim.var"#11#13"},Float64,Optim.var"#47#49"},Optim.BarrierWrapper{OnceDifferentiable{Int64,Array{Float64,1},Array{Real,1}},Optim.BoxBarrier{Array{Float64,1},Array{Float64,1}},Int64,Int64,Array{Float64,1}}}}, ::Optim.Options{Float64,Nothing}, ::Optim.BarrierWrapper{OnceDifferentiable{Int64,Array{Float64,1},Array{Real,1}},Optim.BoxBarrier{Array{Float64,1},Array{Float64,1}},Int64,Int64,Array{Float64,1}}, ::Array{Real,1}) at C:\Users\Michael\.julia\packages\Optim\CK6Dn\src\multivariate\solvers\first_order\gradient_descent.jl:57
 [6] optimize(::OnceDifferentiable{Int64,Array{Float64,1},Array{Real,1}}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Real,1}, ::Fminbox{GradientDescent{LineSearches.InitialPrevious{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Optim.var"#11#13"},Float64,Optim.var"#47#49"}, ::Optim.Options{Float64,Nothing}) at C:\Users\Michael\.julia\packages\Optim\CK6Dn\src\multivariate\solvers\constrained\fminbox.jl:322
 [7] optimize(::TwiceDifferentiable{Int64,Array{Float64,1},Array{Float64,2},Array{Real,1}}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Real,1}, ::Fminbox{GradientDescent{LineSearches.InitialPrevious{Float64},LineSearches.HagerZhang{Float64,Base.RefValue{Bool}},Nothing,Optim.var"#11#13"},Float64,Optim.var"#47#49"}, ::Optim.Options{Float64,Nothing}; inplace::Bool, autodiff::Symbol) at C:\Users\Michael\.julia\packages\Optim\CK6Dn\src\multivariate\solvers\constrained\fminbox.jl:269
 [8] ModelFit(; ODE_model::Function, ODE_vars::Array{String,1}, vars_to_fit::Array{String,1}, paramIC_dict::Dict{Symbol,Any}, data_dict::Dict{Symbol,String}, f_calc_ICs::typeof(f_ICs), N0::Int64, date_range::Array{String,1}, plot_until::String, date_format::String, norm::typeof(mean_square_error), norm_scale::Int64, integrator_options::Dict{Symbol,Any}, optimizer_options::Dict{Symbol,Any}) at .\In[635]:260
 [9] top-level scope at In[636]:10
 [10] include_string(::Function, ::Module, ::String, ::String) at .\loading.jl:1091
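
For context on the error itself: an InexactError means a value is being converted to a type that can’t represent it exactly. Frames [1] and [2] above show a Float64 with a fractional part being converted to Int64, which can be reproduced on its own:

julia> Int64(40.98658290664029)
ERROR: InexactError: Int64(40.98658290664029)

So it looks like Optim is trying to store my Float64 objective value in an Int64-typed slot (frame [3]).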

My objective function makes a call to DifferentialEquations.solve(), and I noticed that I get a very similar “InexactError” message when I pass a tuple of Int64s for the time span (t_span).
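
To illustrate that side observation with a toy problem (hypothetical, not my actual model):

using DifferentialEquations

decay(u, p, t) = -p * u                          # made-up ODE, just for illustration

#prob = ODEProblem(decay, 1.0, (0, 10), 0.5)     # Int64 time span: gave me a similar InexactError
prob = ODEProblem(decay, 1.0, (0.0, 10.0), 0.5)  # Float64 time span works fine
sol = solve(prob, Tsit5())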

I also wondered whether this error might be related to the one I was getting in this post, https://discourse.julialang.org/t/forwarddiff-no-method-matching-error/53311, where I learned that the objective or “target” function must be generic enough to accept numbers of type Real, or arrays of Reals (see ForwardDiff’s limitations). However, even when I remove the autodiff = :forward argument and switch to a gradient-free optimizer, I still get the same error, so perhaps it’s not related…
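
For reference, the lesson from that thread, in sketch form (made-up objective, not my real one):

using ForwardDiff

f_strict(x::Vector{Float64}) = sum(x .^ 2)    # too restrictive: ForwardDiff passes Duals, not Float64
f_generic(x::AbstractVector) = sum(x .^ 2)    # generic: works for Float64 and Dual alike

ForwardDiff.gradient(f_generic, [1.0, 2.0])   # returns [2.0, 4.0]
#ForwardDiff.gradient(f_strict, [1.0, 2.0])   # MethodError: no method matching f_strict(::Vector{<:Dual})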

The full code is rather long (250+ lines), but here is the objective function itself (note that it uses variables defined in an outer scope that isn’t shown):

function objective(x::Vector)

    #Unpack the scaled params & IC ratios being optimized
    params_opt_scaled = x[1:num_params_opt]
    IC_ratios_opt_scaled = x[num_params_opt+1:end]

    #Scale the optimized quantities back to their original sizes
    params_opt = params_opt_scaled .* param_UBs
    IC_ratios_opt = IC_ratios_opt_scaled .* IC_UBs

    #Create vectors to store ODE_params and ICs/IC_ratios
    ODE_params = Array{Real}(undef, num_params)
    ICs_and_IC_ratios = Array{Real}(undef, num_ICs_and_IC_ratios)

    #Populate ODE_params
    ODE_params[param_opt_indices] = params_opt
    ODE_params[param_fix_indices] = params_fix

    #Populate ICs_and_IC_ratios
    ICs_and_IC_ratios[IC_opt_indices] = IC_ratios_opt
    ICs_and_IC_ratios[IC_fix_indices] = ICs_fix

    #Calculate the ICs from the IC ratios
    ODE_ICs = f_calc_ICs(ICs_and_IC_ratios)

    #Scale the initial conditions
    ODE_ICs_scaled = ODE_ICs ./ N0

    #Now we solve the ODE system. Multiplying by 1.0 converts the time span to Float64.
    t_span = 1.0 * [t_fit[1], t_fit[end]]

    #Solve the ODE system and get the solution for t in t_fit
    ODE_prob = ODEProblem(ODE_model, ODE_ICs_scaled, t_span, ODE_params)
    sol = solve(ODE_prob, Tsit5(), reltol = 1e-10, abstol = 1e-10, saveat = t_fit)

    sol = DataFrame(sol', ODE_vars)

    #Average the chosen norm over the fitted variables
    norm_sum = 0.0
    for var in vars_to_fit
        var_pred = sol[:, var]
        var_obs = data_obs_fit_scaled[:, var]
        norm_sum = norm_sum + norm(var_pred, var_obs)
    end

    mean_norm = norm_sum / length(vars_to_fit)
    return mean_norm
end

The call to the optimizer follows (code shown at the top of the post).

This error has been driving me crazy, so any help or suggestions would be greatly appreciated!

That OnceDifferentiable{Int64,Array{Float64,1},Array{Real,1}} in the stack trace, and similar, looks fishy; I am assuming that the Int64 is incorrectly typed and should be Float64 for that parameter.
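
One quick diagnostic (just a sketch, reusing your variable names): inspect the type parameters right after construction, before Fminbox ever runs:

df = TwiceDifferentiable(objective, x_init, autodiff = :forward)
typeof(df)    # the first type parameter should be Float64, not Int64

If it is already Int64 at this point, the problem is upstream of Fminbox, presumably in how the stored objective value is seeded from objective and x_init.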

Definitely worth reporting as an issue with an MWE.

Hmm, that does look strange. In the code you linked, it appears that the first argument of the OnceDifferentiable and TwiceDifferentiable constructors is the objective function (f), so how could Int64 possibly appear there?

I inserted some code into objective to print the types of a number of variables, and here is what I got:

The type of 'x_init' is Array{Real,1}
The type of 'fixed_args' is Array{Real,1}
The type of 'ODE_params' is Array{Real,1}
The type of 'ICs_and_IC_ratios' is Array{Real,1}
The type of 'ODE_ICs' is Array{Float64,1}
The type of 'ODE_ICs_scaled' is Array{Float64,1}
The type of 't_span' is Array{Float64,1}
The type of 'sol' is ODESolution{Float64,2,Array{Array{Float64,1},1},Nothing,Nothing,Array{Float64,1},Array{Array{Array{Float64,1},1},1},ODEProblem{Array{Float64,1},Tuple{Float64,Float64},true,Array{Real,1},ODEFunction{true,typeof(One_Age_Model),UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},Tsit5,OrdinaryDiffEq.InterpolationData{ODEFunction{true,typeof(One_Age_Model),UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float64,1},1},Array{Float64,1},Array{Array{Array{Float64,1},1},1},OrdinaryDiffEq.Tsit5Cache{Array{Float64,1},Array{Float64,1},Array{Float64,1},OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64}}},DiffEqBase.DEStats}
The type of 'sol' is DataFrame
The type of 'var_pred' is Array{Float64,1}
The type of 'var_obs' is Array{Float64,1}
The type of 'norm_sum' is Float64
The type of 'mean_norm' is Float64
The type of 'objective(x_init)' is Float64

(Note: the type of sol was printed twice, once before and once after converting it into a DataFrame.) Does this shed any light on the problem? Should all of the Float64s be changed to Reals where possible?
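
One change I’m considering (a sketch only; I haven’t confirmed it fixes anything): making x_init concretely typed rather than Array{Real,1}, and deriving the buffer element types in objective from the input instead of hard-coding Real:

x_init = float.(x_init)    # concrete Vector{Float64} instead of Array{Real,1}

#inside objective: derive T from x (Float64 normally, ForwardDiff.Dual under autodiff)
T = eltype(x)
ODE_params = Vector{T}(undef, num_params)
ICs_and_IC_ratios = Vector{T}(undef, num_ICs_and_IC_ratios)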

I can try to come up with a MWE, but it’ll take me some time…

Update: it seems to be working now! Unfortunately, I still don’t know what the problem was that I fixed… I will post an update here if I figure out what was causing the issue.