Error relating to Zygote

The error is not in the ODE definition; it is in my main script. At the top of the script I have some include statements.
The full top-level script:

using DiffEqFlux, Flux, Optim, DifferentialEquations, LinearAlgebra, OrdinaryDiffEq, DelimitedFiles
using Optimization, OptimizationOptimisers, OptimizationOptimJL
using Zygote
using Plots
using DataFrames, CSV, YAML
using BSON: @save, @load
using Dates
#using PyPlot

# Need a global function
include("rude_functions.jl")
include("rude_impl.jl")

dct_params = Dict()
# There are 8 protocols for each value of ω0 and γ0. So if you have 4 pairs (ω0, γ0),
# you will have 8 protocols for each pair. (That is not what Sachin's problem proposes).
# So perhaps one has to generalize how the protocols are stored in the main program to run
# a wider variety of tests. Ask questions, Alex.
dct_params[:ω0] = [0.1, 1.0]
dct_params[:ω0] = [0.1]
dct_params[:γ0] = [0.1]
# Only the last assignment to each key takes effect; the values below match the original rude.jl
dct_params[:ω0] = [1.0] # original rude.jl
dct_params[:γ0] = [1.0] # original rude.jl

# Create a Dictionary with all parameters
dct = Dict()
# Set to a lower number to run faster and perhaps get less accurate results
dct[:maxiters] = 200  # 200 was the number in the original code
# Set to 12 once the program works. More cycles, more oscillations
dct[:Ncycles] = 12.  # set to 1 while debugging
dct[:ω] = 1f0
# set to 1 to run faster. Set to 8 for more accurate results
dct[:nb_protocols] = 8
dct[:skip_factor] = 50
dct[:dct_giesekus] = Dict()
gie = dct[:dct_giesekus]
gie[:η0] = 1
gie[:α] = 0.8 # a change to this value propagates correctly to dct[:dct_giesekus][:α]
gie[:τ] = 1.0
# Pay attention to references (Julia's version of pointers, but not a memory address). I am working with dictionaries of dictionaries
print(dct[:dct_giesekus][:α])
println(keys(dct)|>collect)

dct[:dct_NN] = Dict()
dNN = dct[:dct_NN]
dNN[:nb_hid_layers] = 2
dNN[:in_layer] = 9
dNN[:out_layer] = 9
dNN[:hid_layer] = 32
dNN[:act] = tanh
# ============== END DICTIONARY DEFINITIONS ==================

# Not a good idea to leave global variables lying around
# Write the dictionaries to a database (a sketch of saving them to disk follows the loop below)
dicts = []
run=0
for o in dct_params[:ω0]
    for g in dct_params[:γ0]
        global run += 1
        dct[:datetime] = now()
        dct[:run] = run
        #dct[:ω] = o   # changed on 2023-02-25_11:12. Will warnings appear? Yes, they appear (if both dct[:ω] and dct[:T] are changed at the same time).
        #dct[:ω0] = o
        dct[:γ] = g
        #dct[:T] = (2. * π / o) * dct[:Ncycles]  # changed on 2023-02-25_11:12. Will warnings appear? (change the denominator to o and comment out dct[:ω])
        #dct[:T] = (2. * π / dct[:ω0]) * dct[:Ncycles]
        figure = single_run(dct)
        # deepcopy makes sure that the saved result is distinct from dct.
        # Without it, the dictionaries pushed onto the list would all reference the same object and end up identical.
        push!(dicts, deepcopy(dct))
        fig_file_name = "plot_" * "run$(dct[:run]).pdf"
        savefig(figure, fig_file_name)
    end
end
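
The loop above only collects the run dictionaries in dicts; a minimal sketch of actually writing them to disk afterwards (not part of the original script, and assuming the dictionary contents are serializable; entries holding functions such as dct[:dct_NN][:act] may need to be stripped first) could be:

# Hypothetical persistence step using BSON, which is already imported above.
fname = "runs_" * Dates.format(now(), "yyyy-mm-dd_HHMM") * ".bson"
@save fname dicts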

I downloaded code (rude.jl) from GitHub and, after a few changes, was able to run it. But it was all in a single file. To run several cases overnight without paying the Julia startup cost each time, I did what I do in Python: create a high-level script, put the variables in dictionaries (I have similar issues with NamedTuples), and call a function single_run, passing the global variables to it through a dictionary. Yes, dictionaries are inefficient, but they are not used in the time-consuming sections of the code.
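
For reference, single_run lives in rude_impl.jl (not shown); schematically it just unpacks the dictionary and returns a figure. A hypothetical outline, not the actual implementation:

# Hypothetical outline of the pattern: read everything from the dictionary,
# build and train the UODE, and return a figure for the caller to save.
function single_run(dct)
    ω   = dct[:ω]
    γ   = dct[:γ]
    gie = dct[:dct_giesekus]   # η0, α, τ of the Giesekus model
    dNN = dct[:dct_NN]         # network architecture
    # ... set up the UODE, train for dct[:maxiters] iterations, plot ...
    return plot()              # a Plots figure, saved by the caller
end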

I am solving a UODE in which the neural network is a tensor-basis network: the NN generates the coefficients of the tensor basis, and their weighted sum is a term in the differential equation (a schematic of that sum follows the snippet below). As I wrote in a previous post, in a different section of the code I have the lines:

function xxx(...)
    # Run the integrity basis through a neural network
    model_inputs = [λ1;λ2;λ3;λ4;λ5;λ6;λ7;λ8;λ9]
    g1,g2,g3,g4,g5,g6,g7,g8,g9 = model_univ(model_inputs, model_weights)  # model_univ not found
    return 0., 0., 0., 0., 0., 0.   # Generates warning. 
end
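
To make the summation concrete, the tensor-basis term is (schematically) a linear combination of nine basis tensors weighted by the network outputs. A self-contained sketch with stand-in values (the real basis tensors are built elsewhere in the code):

# Schematic illustration only: g stands in for the nine network outputs g1..g9,
# T for the nine basis tensors; their weighted sum is the term that enters the UODE.
g = rand(9)                        # stand-in for g1,...,g9 from model_univ
T = [rand(3, 3) for _ in 1:9]      # stand-in for the nine basis tensors
F = sum(g[i] * T[i] for i in 1:9)  # tensor-basis term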

If I comment out the call to the neural network model_univ, the warning disappears. With the call in place, the warning is there. Even if there are errors in the neural network, how can that impact anything, given that g1, g2, g3, ... are not used?
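
A minimal isolation check, outside the ODE and outside Zygote, would be something like the following (assuming model_weights was created with initial_params(model_univ)):

# Hypothetical check: evaluate the FastChain once in plain forward mode to see
# whether the "model_univ not found" problem already occurs without Zygote involved.
test_input = rand(Float32, dct[:dct_NN][:in_layer])
model_univ(test_input, model_weights)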

If the network is using dual numbers, then of course the error is in the neural model, which is defined as follows using Flux (even though Lux is recommended, Flux should still work):

    # hid and act here correspond to dNN[:hid_layer] and dNN[:act] defined above
    hid = dNN[:hid_layer]
    act = dNN[:act]
    model_univ = FastChain(FastDense(dNN[:in_layer], hid, act),
                        FastDense(hid, hid, act),
                        FastDense(hid, dNN[:out_layer]))
    dct[:model_univ] = model_univ

You can see how the dictionary is used.
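
For comparison, since Lux is the recommended path in newer versions of DiffEqFlux, the equivalent architecture would look roughly like the sketch below (an assumption on my part; the script above stays with Flux/FastChain):

using Lux, Random

# Sketch of the same 9 -> 32 -> 32 -> 9 architecture in Lux. Lux models are
# called as model(x, ps, st), with parameters and state coming from Lux.setup.
model_lux = Lux.Chain(Lux.Dense(dNN[:in_layer], dNN[:hid_layer], dNN[:act]),
                      Lux.Dense(dNN[:hid_layer], dNN[:hid_layer], dNN[:act]),
                      Lux.Dense(dNN[:hid_layer], dNN[:out_layer]))
ps, st = Lux.setup(Random.default_rng(), model_lux)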