ODEFunction throwing "ERROR: LoadError: mul_float: types of a and b must match"

What does the following error mean: “ERROR: LoadError: mul_float: types of a and b must match”?
I am running code that repeatedly calls a function that creates an ODE problem and solves it. The function is called in a for loop; at each call, only one parameter of the ODE changes.
I get the error most of the time I run the code. The unusual thing is that it is raised at different iterations of the for loop, usually after 10 or so successful executions of the function, but sometimes at the first or second iteration.
I would appreciate any help understanding what is raising the error and how to fix it.
Please find below the abridged error trace

ERROR: LoadError: mul_float: types of a and b must match
Stacktrace:
[1] *
@ ./float.jl:332 [inlined]
[2] macro expansion
@ ~/.julia/packages/SymbolicUtils/9iQGH/src/code.jl:325 [inlined]
[3] macro expansion
@ ~/.julia/packages/Symbolics/Kybuv/src/build_function.jl:331 [inlined]
[4] macro expansion
@ ~/.julia/packages/SymbolicUtils/9iQGH/src/code.jl:283 [inlined]
[5] macro expansion
@ ~/.julia/packages/RuntimeGeneratedFunctions/3SZ1T/src/RuntimeGeneratedFunctions.jl:124 [inlined]
[6] macro expansion
@ ./none:0 [inlined]
[7] generated_callfunc
@ ./none:0 [inlined]
[8] (::RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(Symbol("##out#2420"), Symbol("##arg#2418"), Symbol("##arg#2419"), :t), ModelingToolkit.var"#_RGF_ModTag", ModelingToolkit.var"#_RGF_ModTag", (0x24c64a68, 0xc35b875a, 0x4240b981, 0x23f2fd06, 0xd99ac7f8)})(::Vector{Float64}, ::Vector{Float64}, ::Vector{Float64}, ::Float64)
@ RuntimeGeneratedFunctions ~/.julia/packages/RuntimeGeneratedFunctions/3SZ1T/src/RuntimeGeneratedFunctions.jl:112
[9] f
@ ~/.julia/packages/ModelingToolkit/tBDYj/src/systems/diffeqs/abstractodesystem.jl:168 [inlined]
[10] ODEFunction
@ ~/.julia/packages/SciMLBase/XuLdB/src/scimlfunctions.jl:334 [inlined]
[11] initialize!(integrator::OrdinaryDiffEq.ODEIntegrator{ […]
@ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/vxMSM/src/perform_step/low_order_rk_perform_step.jl:623
[12] initialize!(integrator::OrdinaryDiffEq.ODEIntegrator{ […]
@ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/vxMSM/src/perform_step/composite_perform_step.jl:39
[13] __init(prob::ODEProblem{Vector{Float64}, […]
@ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/vxMSM/src/solve.jl:433
[14] __solve(::ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, ModelingToolkit.var"#f#148"{RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(Symbol("##arg#2418"), Symbol("##arg#2419"), :t), ModelingToolkit.var"#_RGF_ModTag", ModelingToolkit.var"#_RGF_ModTag", (0x035432df, 0xab964ebf, 0xc3bbf1d1, 0x5aec0fd7, 0x3e286ce7)}, RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(Symbol("##out#2420"), Symbol("##arg#2418"), Symbol("##arg#2419"), :t), ModelingToolkit.var"#_RGF_ModTag", ModelingToolkit.var"#_RGF_ModTag", (0x24c64a68, 0xc35b875a, 0x4240b981, 0x23f2fd06, 0xd99ac7f8)}}, LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Vector{Symbol}, Symbol, ModelingToolkit.var"#120#generated_observed#155"{Bool, ODESystem, Dict{Any, Any}}, Nothing}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, ::CompositeAlgorithm{Tuple{Tsit5, TRBDF2{0, true, DefaultLinSolve, NLNewton{Rational{Int64}, Rational{Int64}, Rational{Int64}}, DataType}}, AutoSwitch{Tsit5, TRBDF2{0, true, DefaultLinSolve, NLNewton{Rational{Int64}, Rational{Int64}, Rational{Int64}}, DataType}, Rational{Int64}, Int64}}; kwargs::Base.Iterators.Pairs{Symbol, Vector{Float64}, Tuple{Symbol}, NamedTuple{(:saveat,), Tuple{Vector{Float64}}}})
@ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/vxMSM/src/solve.jl:4
[15] #solve_call#56
@ ~/.julia/packages/DiffEqBase/U3Zj7/src/solve.jl:61 [inlined]
[16] #solve_up#58
@ ~/.julia/packages/DiffEqBase/U3Zj7/src/solve.jl:82 [inlined]
[17] #solve#57
@ ~/.julia/packages/DiffEqBase/U3Zj7/src/solve.jl:70 [inlined]
[18] simFFFB(N::Int64, r::Reference{var"#refSin#5"{Vector{Float64}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}}, ref::ODESystem, pid::ODESystem, lp::ODESystem, numIndex::Int64, W0::Matrix{Float64})

Can you share your example?

Hi ChrisRackauckas,
thanks for the response. I don’t really have a minimal example, but you can find the whole code here: https://github.com/adrianaprotondo/CerebellumLearning/blob/d6234c1e0324880ac8cdd38be777755f771dc98a/scripts/lossVsSize.jl
in this GitHub project:
https://github.com/adrianaprotondo/CerebellumLearning.git

It looks like you’re mixing Float64 and Float32: what if you make that all consistent?
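For anyone hitting the same error: a minimal sketch of the kind of mix that can trigger it, and the fix being suggested. The variable names below are hypothetical (not taken from the linked script); the point is that the state vector and parameter vector should share one element type before the problem is built.

```julia
# Hypothetical example of a Float32/Float64 mix slipping into an ODE setup.
u0_mixed = Float32[1.0, 0.0]   # state vector accidentally constructed as Float32
p_mixed  = [0.5, 2.0]          # parameter literals default to Float64

# Promote everything to a single shared element type up front:
T  = promote_type(eltype(u0_mixed), eltype(p_mixed))  # Float64 here
u0 = T.(u0_mixed)
p  = T.(p_mixed)

@assert eltype(u0) == eltype(p) == Float64
```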

Thanks for the suggestion! I changed all the array definitions to Float64 and still get the same error. Do you have any other suggestions?
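If it helps to rule the type mix in or out, a quick diagnostic before each solve in the loop could confirm whether every numeric input really ended up with one element type. The values below are hypothetical stand-ins for the real `u0`, `p`, and `tspan` handed to `ODEProblem`:

```julia
# Stand-in values; in the real loop these would be the arrays and tuple
# actually passed to ODEProblem. If any eltype differs, the mix is still there.
u0    = [1.0, 0.0]
p     = [0.5]
tspan = (0.0, 10.0)
@assert eltype(u0) == eltype(p) == typeof(first(tspan))
```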

I can’t say I’ll have time to dig into your example, but maybe @shashi can take a look?