Deprecated Modules in Julia

Hello,

I am trying to run the following lines and I am getting huge error messages:

using DifferentialEquations, Flux, DiffEqFlux, Optim, SciMLSensitivity, Plots, OrdinaryDiffEq, Zygote, StaticArrays, LinearAlgebra, BenchmarkTools, PaddedViews, LaTeXStrings, PGFPlotsX, PlotThemes, ApproxFun
Ωp_nn = FastChain(FastDense(1,32), FastDense(32,32,tanh), FastDense(32,32,tanh), FastDense(32,2))
θ_nn = initial_params(Ωp_nn);
optimized_sol_nn = DiffEqFlux.sciml_train(p -> cost_adjoint_nn(p, 0.08), θ_nn, RADAM(0.003), maxiters = 1000)

For FastChain, this is the error message:

┌ Warning: FastChain is being deprecated in favor of Lux.jl. Lux.jl uses functions with explicit parameters f(u,p) like FastChain, but is fully featured and documented machine learning library. See the Lux.jl documentation for more details.
└ @ DiffEqFlux ~/.julia/packages/DiffEqFlux/jHIee/src/fast_layers.jl:9

For sciml_train, this is the message:

MethodError: no method matching default_relstep(::Nothing, ::Type{ComplexF64})
Closest candidates are:
default_relstep(::Type, ::Any) at ~/.julia/packages/FiniteDiff/40JnL/src/epsilons.jl:25
default_relstep(::Val{fdtype}, ::Type{T}) where {fdtype, T<:Number} at ~/.julia/packages/FiniteDiff/40JnL/src/epsilons.jl:26

Stacktrace:
[1] finite_difference_jacobian!(J::Matrix{ComplexF64}, f::Function, x::Matrix{ComplexF64}, fdtype::Nothing, returntype::Type, f_in::Nothing) (repeats 2 times)
@ FiniteDiff ~/.julia/packages/FiniteDiff/40JnL/src/jacobians.jl:298
[2] jacobian!(J::Matrix{ComplexF64}, f::Function, x::Matrix{ComplexF64}, fx::Nothing, alg::InterpolatingAdjoint{0, false, Val{:central}, Bool}, jac_config::Nothing)
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/Wb65g/src/derivative_wrappers.jl:150
[3] _vecjacobian!(dλ::SubArray{ComplexF64, 1, Vector{ComplexF64}, Tuple{UnitRange{Int64}}, true}, y::Matrix{ComplexF64}, λ::SubArray{ComplexF64, 1, Vector{ComplexF64}, Tuple{UnitRange{Int64}}, true}, p::Vector{ComplexF64}, t::Float64, S::SciMLSensitivity.ODEInterpolatingAdjointSensitivityFunction{SciMLSensitivity.AdjointDiffCache{SciMLBase.UDerivativeWrapper{ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Float64, Vector{ComplexF64}}, SciMLSensitivity.ParamGradientWrapper{ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Float64, Matrix{ComplexF64}}, Nothing, Matrix{ComplexF64}, Matrix{ComplexF64}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Base.OneTo{Int64}, UnitRange{Int64}, UniformScaling{Bool}}, InterpolatingAdjoint{0, false, Val{:central}, Bool}, Matrix{ComplexF64}, ODESolution{ComplexF64, 3, Vector{Matrix{ComplexF64}}, Nothing, Nothing, Vector{Float64}, Vector{Vector{Matrix{ComplexF64}}}, ODEProblem{Matrix{ComplexF64}, Tuple{Float64, Float64}, false, Vector{ComplexF64}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, BS5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, OrdinaryDiffEq.InterpolationData{ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Vector{Matrix{ComplexF64}}, Vector{Float64}, Vector{Vector{Matrix{ComplexF64}}}, OrdinaryDiffEq.BS5ConstantCache{Float64, Float64}}, DiffEqBase.DEStats, Nothing}, Nothing, ODEProblem{Matrix{ComplexF64}, Tuple{Float64, Float64}, false, Vector{ComplexF64}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}}, isautojacvec::Bool, dgrad::SubArray{ComplexF64, 1, Vector{ComplexF64}, Tuple{UnitRange{Int64}}, true}, dy::Nothing, W::Nothing)
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/Wb65g/src/derivative_wrappers.jl:262
[4] vecjacobian!(dλ::SubArray{ComplexF64, 1, Vector{ComplexF64}, Tuple{UnitRange{Int64}}, true}, y::Matrix{ComplexF64}, λ::SubArray{ComplexF64, 1, Vector{ComplexF64}, Tuple{UnitRange{Int64}}, true}, p::Vector{ComplexF64}, t::Float64, S::SciMLSensitivity.ODEInterpolatingAdjointSensitivityFunction{SciMLSensitivity.AdjointDiffCache{SciMLBase.UDerivativeWrapper{ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Float64, Vector{ComplexF64}}, SciMLSensitivity.ParamGradientWrapper{ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Float64, Matrix{ComplexF64}}, Nothing, Matrix{ComplexF64}, Matrix{ComplexF64}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Base.OneTo{Int64}, UnitRange{Int64}, UniformScaling{Bool}}, InterpolatingAdjoint{0, false, Val{:central}, Bool}, Matrix{ComplexF64}, ODESolution{ComplexF64, 3, Vector{Matrix{ComplexF64}}, Nothing, Nothing, Vector{Float64}, Vector{Vector{Matrix{ComplexF64}}}, ODEProblem{Matrix{ComplexF64}, Tuple{Float64, Float64}, false, Vector{ComplexF64}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, BS5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, OrdinaryDiffEq.InterpolationData{ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Vector{Matrix{ComplexF64}}, Vector{Float64}, Vector{Vector{Matrix{ComplexF64}}}, OrdinaryDiffEq.BS5ConstantCache{Float64, Float64}}, DiffEqBase.DEStats, Nothing}, Nothing, ODEProblem{Matrix{ComplexF64}, Tuple{Float64, Float64}, false, Vector{ComplexF64}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(schrodinger_nn), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}}; dgrad::SubArray{ComplexF64, 1, Vector{ComplexF64}, Tuple{UnitRange{Int64}}, true}, dy::Nothing, W::Nothing)
@ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/Wb65g/src/derivative_wrappers.jl:224

I understand that for FastChain I need to change the package, but what is the problem with sciml_train?
Does anyone know how to resolve this error?
Thanks in advance

I tried to run your example in a clean environment using Julia 1.8.5 on Linux:

mkdir Train
cd Train
julia --project="."

then in Julia:

]
add DifferentialEquations, Flux, DiffEqFlux, Optim, SciMLSensitivity, Plots, OrdinaryDiffEq, Zygote, StaticArrays, LinearAlgebra, BenchmarkTools, PaddedViews, LaTeXStrings, PGFPlotsX, PlotThemes, ApproxFun

and then executing your code, which I put in the file train.jl.

I get a different error:

julia> include("train.jl")
┌ Warning: FastChain is being deprecated in favor of Lux.jl. Lux.jl uses functions with explicit parameters f(u,p) like FastChain, but is fully featured and documented machine learning library. See the Lux.jl documentation for more details.
└ @ DiffEqFlux ~/.julia/packages/DiffEqFlux/jHIee/src/fast_layers.jl:9
┌ Warning: sciml_train is being deprecated in favor of direct usage of Optimization.jl. Please consult the Optimization.jl documentation for more details. Optimization.jl's PolyOpt solver is the polyalgorithm of sciml_train
└ @ DiffEqFlux ~/.julia/packages/DiffEqFlux/jHIee/src/train.jl:6
ERROR: LoadError: UndefVarError: cost_adjoint_nn not defined

To be honest, I do not see where you defined the function cost_adjoint_nn.

Hey, sorry, my bad.

Here is the entire code:

using DifferentialEquations, Flux, DiffEqFlux, Optim, SciMLSensitivity, Plots, OrdinaryDiffEq, Zygote, StaticArrays, LinearAlgebra, BenchmarkTools, PaddedViews, LaTeXStrings, PGFPlotsX, PlotThemes, ApproxFun
pgfplotsx();
Plots.PGFPlotsXBackend();
const σ0 = Hermitian(Complex{Float64}[1 0; 0 1]);
const σx = Hermitian(Complex{Float64}[0 1; 1 0]);
const σy = Hermitian(Complex{Float64}[0 -im; im 0]);
const σz = Hermitian(Complex{Float64}[1 0; 0 -1]);
Ωp_nn = FastChain(FastDense(1,32), FastDense(32,32,tanh), FastDense(32,32,tanh), FastDense(32,2))
θ_nn = initial_params(Ωp_nn);
const β = 2π*0.2;

const tol = 1e-7;

const Hϵ = Hermitian(Complex{Float64}[1 0; 0 -1;]);


const T = 6.0;
tspan = (0.0, T);

const θ = π/2;
const Utarget = cos(θ/2)*σ0 + sin(θ/2)*im*σz;
    
const steepness = 20*T;
smooth_square_envelope(t) = coth(steepness/4)*( tanh(steepness*t/(4*T)) - tanh(steepness*(t-T)/(4*T)) ) - 1;

function schrodinger_nn(u, p, t)
    @views @inbounds U = u[1:2,1:2];
    @views @inbounds ℰ = u[3:4,1:2];
    envelope = smooth_square_envelope(t);
    nn_output = Ωp_nn([t/T],p)
    @inbounds Ω = envelope*( nn_output[1]*sin(nn_output[2]) )
    @inbounds H = Hermitian([β Ω; Ω -β]);
    local dℰ = Hermitian(U'*Hϵ*U);
    return [-im*H*U; dℰ; dℰ*ℰ - ℰ*dℰ]; # 1/2 of (dℰ*ℰ - ℰ*dℰ)/2 is in cost_adjoint_nn
end

const u0 = Complex{Float64}[1 0; 0 1; 0 0; 0 0; 0 0; 0 0];

ode_nn = ODEProblem(schrodinger_nn, u0, tspan, θ);

function callback(p, cost)
    return cost < 1e-7
end


function cost_adjoint_nn(p, w=1.0)
    ode_sol = solve(ode_nn, BS5(), p=Complex{Float64}.(p), abstol=tol, reltol=tol)
    usol = last(ode_sol)
    @views @inbounds Ugate = usol[1:2,1:2];
    @views @inbounds ℰ = usol[3:4, 1:2];
    @views @inbounds ℰ2 = usol[5:6, 1:2];

    loss = abs(1.0 - abs(tr(Ugate*Utarget')/2)^2) + w^2*(norm(ℰ)/2)^2 + 4*w^4*(norm(ℰ2)/4)^2

    return loss
end
optimized_sol_nn = DiffEqFlux.sciml_train(p -> cost_adjoint_nn(p, 0.08), θ_nn, RADAM(0.003), maxiters = 1000)
optimized_sol_nn2 = DiffEqFlux.sciml_train(p -> cost_adjoint_nn(p, 0.08), optimized_sol_nn.minimizer, BFGS(initial_stepnorm=0.001), maxiters = 1000, allow_f_increases = true)
nn_solution = optimized_sol_nn2.minimizer;
function Ωsol(t,p)
    envelope = smooth_square_envelope(t);
    nn_output = Ωp_nn([t/T],p);
    @inbounds Ω = envelope*nn_output[1]*sin(nn_output[2])
    return Ω
end

const fontsize = 24;
theme(:default)

# β = J/4 and Ω = g μB B_x/2, so g μB B_x/J == Ω/(2β)
Plots.plot(τ -> Ωsol(τ / (4*β/(2π)), nn_solution)/(2*β), 0, T*(4*β)/(2π), label = L"g\mu_B B_x(t)/J", xlabel = L"t J/h", xtickfont=font(fontsize), ytickfont=font(fontsize), guidefont=font(fontsize), legendfont=font(fontsize), lw=2, palette = :tab10, legend = (0.19,0.99))
Plots.savefig("dqd-pulse-2nd-order.pdf")

Ok, now I get the error:

julia> include("train.jl")
┌ Warning: FastChain is being deprecated in favor of Lux.jl. Lux.jl uses functions with explicit parameters f(u,p) like FastChain, but is fully featured and documented machine learning library. See the Lux.jl documentation for more details.
└ @ DiffEqFlux ~/.julia/packages/DiffEqFlux/jHIee/src/fast_layers.jl:9
┌ Warning: sciml_train is being deprecated in favor of direct usage of Optimization.jl. Please consult the Optimization.jl documentation for more details. Optimization.jl's PolyOpt solver is the polyalgorithm of sciml_train
└ @ DiffEqFlux ~/.julia/packages/DiffEqFlux/jHIee/src/train.jl:6
┌ Warning: Reverse-Mode AD VJP choices all failed. Falling back to numerical VJPs
└ @ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/Wb65g/src/concrete_solve.jl:92
ERROR: LoadError: MethodError: no method matching default_relstep(::Nothing, ::Type{ComplexF64})

To me it looks as if you are trying to apply a method that is only defined for real values to a complex value…

You might also look at this warning:

Warning: Reverse-Mode AD VJP choices all failed. Falling back to numerical VJPs

and try to fix it…
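
If it helps, the usual workaround when the sensitivity machinery chokes on complex numbers is to rewrite the complex ODE as a real ODE of twice the size, so the adjoint and the finite-difference fallback only ever see real quantities. A minimal sketch of the pattern (a toy equation, not your Schrödinger system):

using OrdinaryDiffEq

# du/dt = -im*ω*u with u = x + im*y becomes dx/dt = ω*y, dy/dt = -ω*x
function f_real(u, p, t)
    x, y = u
    ω = p[1]
    return [ω * y, -ω * x]
end

# real state, real parameters: nothing complex for the sensitivity code to trip over
prob = ODEProblem(f_real, [1.0, 0.0], (0.0, 1.0), [2π])
sol = solve(prob, Tsit5(), abstol = 1e-8, reltol = 1e-8)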

Ok, I have figured out the first error. I basically changed the line as follows:

Ωp_nn = Lux.Chain(Lux.Dense(1,32), Lux.Dense(32,32,tanh), Lux.Dense(32,32,tanh), Lux.Dense(32,2))
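
With Lux, the parameters are then created explicitly via Lux.setup instead of initial_params — a minimal sketch of the pattern (assuming Random and ComponentArrays are also available):

using Lux, Random, ComponentArrays

rng = Random.default_rng()
Ωp_nn = Lux.Chain(Lux.Dense(1,32), Lux.Dense(32,32,tanh), Lux.Dense(32,32,tanh), Lux.Dense(32,2))
ps, st = Lux.setup(rng, Ωp_nn)   # explicit parameters and layer state
θ_nn = ComponentArray(ps)        # flat parameter vector for the optimizer
y, _ = Ωp_nn([0.5], θ_nn, st)    # Lux models are called as (x, ps, st)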

However, for the second error I am still struggling.

These were deprecated so long ago (1-2 years ago?) that the release without the deprecations will be coming out soon. If you wrote the code a while back, you can of course continue to use the old versions by instantiating the manifest.
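
Concretely, instantiating the manifest looks like this — a sketch, assuming you still have the Project.toml/Manifest.toml from when the code worked:

using Pkg
Pkg.activate(".")   # in the directory containing the old Manifest.toml
Pkg.instantiate()   # installs exactly the versions recorded in the Manifest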

To update, replace FastChain with Lux and sciml_train with Optimization.jl. The tutorials all use this flow, so I would recommend checking out one of the tutorials, such as the ones in the SciML documentation.
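
As a rough sketch of what the Optimization.jl replacement for the two sciml_train calls above could look like (cost_adjoint_nn and θ_nn are from the code above; Optimisers.Adam stands in for RADAM, and I have not run this against your full example):

using Optimization, OptimizationOptimisers, OptimizationOptimJL

# wrap the loss; the second argument holds fixed hyperparameters, unused here
optf = OptimizationFunction((p, _) -> cost_adjoint_nn(p, 0.08), Optimization.AutoZygote())
prob = OptimizationProblem(optf, θ_nn)
sol = solve(prob, Optimisers.Adam(0.003), maxiters = 1000)

# refine from the previous minimizer with BFGS
prob2 = OptimizationProblem(optf, sol.u)
sol2 = solve(prob2, Optim.BFGS(initial_stepnorm = 0.001), maxiters = 1000, allow_f_increases = true)

Note that with the Lux version of the network, cost_adjoint_nn would also need the layer state st from Lux.setup threaded through.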