First of all, I’m quite new to Julia, so I apologize if the question is too naïve; it is probably a product of my limited understanding of how Julia handles this type of algorithm. I’m currently using Julia version 1.10.1. My problem is the following: I need to estimate six parameters (c1, c2, c3, c4, c5, c6) of an ODE representing the aerodynamic model of a wind turbine, and I’ve decided to do it via Bayesian inference using physics-informed neural networks (PINNs) with the NeuralPDE.jl package. I have not been able to find clear examples of how to do this in the presence of exogenous inputs (Tg, θ, vr) taken from data. I wrote the ODE in out-of-place form, as the documentation describes, and I seem to be able to solve it correctly. However, when I try to solve the ODE with a Bayesian PINN for parameter estimation, I get an error that I’m unable to understand. I suspect the problem lies in the way I handle the exogenous inputs inside the wind_turbine(u, p, t) function, but I’m not sure how to fix it. Any insight or guidance on how to solve this issue, or any other tips on Bayesian inference with PINNs in Julia, would be greatly appreciated. Thank you in advance!
See the code below:
using NeuralPDE, Lux, Plots, OrdinaryDiffEq, Distributions, Random, XLSX
# Define wind turbine aerodynamic model with exogenous inputs Tg, θ and vr
function wind_turbine(u, p, t)
    # Index corresponding to current time step
    idx = Int(floor((t - tspan[1]) / dt)) + 1
    # Extract current inputs from data
    Tg_t = Tg_data[idx]
    θ_t = θ_data[idx]
    vr_t = vr_data[idx]
    # Extract model parameters
    c1, c2, c3, c4, c5, c6 = p
    # Extract current state
    wr = u
    # Define model constants
    μd = 0.05
    Rr = 120.998
    ρ = 1.225
    Ar = π * Rr^2
    Jr = 321699000
    Jg = 3.777e6
    # Calculate tip-speed ratio λ
    λ = wr * Rr / vr_t
    # Calculate 1/λi (numerical approx. for Cp(θ,λ))
    λi_inv = 1 / (λ + 0.08 * θ_t) - 0.035 / (θ_t^3 + 1)
    # Calculate Cp (numerical approx. for Cp(θ,λ))
    Cp = c1 * (c2 * λi_inv - c3 * θ_t - c4) * exp(-c5 * λi_inv) + c6 * λ
    # Evaluate differential equation for rotor speed
    dwr = ((1 - μd) * 1 / (2 * wr) * ρ * Ar * vr_t^3 * Cp - Tg_t) / (Jr + Jg)
    return dwr
end
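For clarity, written out, the ODE that wind_turbine implements is

$$\dot{\omega}_r = \frac{(1-\mu_d)\,\frac{1}{2\omega_r}\,\rho A_r v_r^3\, C_p(\lambda,\theta) - T_g}{J_r + J_g}, \qquad C_p = c_1\!\left(\frac{c_2}{\lambda_i} - c_3\theta - c_4\right)e^{-c_5/\lambda_i} + c_6\lambda, \qquad \frac{1}{\lambda_i} = \frac{1}{\lambda + 0.08\,\theta} - \frac{0.035}{\theta^3 + 1},$$

with $\lambda = \omega_r R_r / v_r$, and the exogenous inputs $T_g$, $\theta$ and $v_r$ looked up from the data at the index corresponding to $t$.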
# Define initial-value problem.
wr0 = 0.8246
Tg0 = 19503192.0
vr0 = 13.9445
theta0 = 0.1743
u0 = wr0 # Initial state
p = [0.5176, 116, 0.4, 5, 21, 0.0068] # Model parameters (not suitable for this specific wind turbine, but my best guess for now)
tspan = (0.0, 5.0)
# Function to load data from an Excel sheet
function load_data(filename)
    data = XLSX.readxlsx(filename)["Ark1"]  # Assuming the data is in Sheet1 (Ark1)
    wr = data["A"][1:501]  # Assuming column A contains angular speed data; take the first 501 samples
    Tg = data["B"][1:501]  # Assuming column B contains generator torque data; take the first 501 samples
    θ = data["C"][1:501]   # Assuming column C contains pitch angle data; take the first 501 samples
    vr = data["D"][1:501]  # Assuming column D contains wind speed data; take the first 501 samples
    return wr, Tg, θ, vr
end
# Load data from Excel sheet
filename = "...\\data_14ms_cropped.xlsx" # write corresponding path
wr_data, Tg_data, θ_data, vr_data = load_data(filename)
t_data = range(tspan[1], tspan[2], length=501) # Time points
dataset = [wr_data, Tg_data, θ_data, vr_data, t_data]
# Convert dataset into Vector{Vector{Float64}} (just in case)
dataset = [Float64.(vec) for vec in dataset]
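As a quick sanity check on my side (nothing NeuralPDE requires as far as I know, just to rule out type or shape issues), I verify that every series ends up as a Vector{Float64} with the same length as the time grid:
# Optional sanity check: every series should now be a Vector{Float64} with 501 entries
@assert all(v -> v isa Vector{Float64}, dataset)
@assert all(v -> length(v) == length(t_data), dataset)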
# Define ODE problem
prob_model = ODEProblem(wind_turbine, u0, tspan, p)
# Solve the ODE using stiffness detection and auto-switching algorithm
dt = 0.01 # Sampling time
sol_model = solve(prob_model, AutoTsit5(Rosenbrock23()), saveat=dt)
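Since the index lookup inside wind_turbine assumes the inputs are sampled on exactly this grid, I also check that the saved solution and the data share the same 501 time points (again just a quick check of my own):
# Optional: saveat = 0.01 over (0.0, 5.0) should give the same 501 points as t_data
@assert length(sol_model.t) == length(t_data) == 501
@assert sol_model.t ≈ collect(t_data)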
# Plot the trajectory and compare it to the data (as expected, they don't resemble each other, since the real wind turbine parameters are unknown)
plot(sol_model, xlabel="Time", ylabel="Rotor Speed", label="Rotor speed (model)")
plot!(t_data, wr_data, label="Rotor speed (data)")
plot!(legend=:topright)
# Define BNN architecture with adjusted input size
rng = Random.default_rng()
Random.seed!(rng, 0)
n = 15
chain = Lux.Chain(
    Lux.Dense(1, n, Lux.σ),
    Lux.Dense(n, n, Lux.σ),
    Lux.Dense(n, n, Lux.σ),
    Lux.Dense(n, 6)
)
ps, st = Lux.setup(rng, chain) |> Lux.f64
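To make sure the network itself evaluates before wiring it into BNNODE (just my own check, using the standard Lux call model(x, ps, st); the name y_test is only for this check), I run a single forward pass on a dummy time input; with this architecture the output is a 6-element vector:
y_test, _ = chain([0.0], ps, st)  # a single scalar time input, shaped as a length-1 vector
@show size(y_test)                # (6,) because of the Dense(n, 6) output layer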
# Define Bayesian PINN solver
alg = BNNODE(chain;
    dataset=dataset,
    draw_samples=1000,
    l2std=[0.1],
    phystd=[0.1],
    priorsNNw=(0.0, 0.05),
    param=[Normal(0.5176, 0.5), Normal(116, 50), Normal(0.4, 0.4), Normal(5, 4), Normal(21, 10), Normal(0.0068, 0.004)],
    progress=true)  # I have to think carefully about the arguments in this function
# Solve the ODE using Bayesian neural network
sol_lux_pestim = solve(prob_model, alg; saveat = dt)
The error message after executing the last line is the following:
┌ Error: Exception while generating log record in module NeuralPDE at C:\Users\FX03NI\.julia\packages\NeuralPDE\Xp1OF\src\advancedHMC_MCMC.jl:510
│ exception =
│ MethodError: no method matching +(::Vector{Float64}, ::Float64)
│ For element-wise addition, use broadcasting with dot syntax: array .+ scalar
│
│ Closest candidates are:
│ +(::Any, ::Any, ::Any, ::Any...)
│ @ Base operators.jl:587
│ +(::ChainRulesCore.NoTangent, ::Any)
│ @ ChainRulesCore C:\Users\FX03NI\.julia\packages\ChainRulesCore\zgT0R\src\tangent_arithmetic.jl:59
│ +(::Any, ::ChainRulesCore.NoTangent)
│ @ ChainRulesCore C:\Users\FX03NI\.julia\packages\ChainRulesCore\zgT0R\src\tangent_arithmetic.jl:60
│ ...
│
│ Stacktrace:
│ [1] wind_turbine(u::Vector{Float64}, p::Vector{Float64}, t::Float64)
│ @ Main .\In[20]:28
│ [2] (::ODEFunction{false, SciMLBase.AutoSpecialize, typeof(wind_turbine), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing})(::Vector{Float64}, ::Vararg{Any})
│ @ SciMLBase C:\Users\FX03NI\.julia\packages\SciMLBase\NjslX\src\scimlfunctions.jl:2184
│ [3] (::NeuralPDE.var"#402#405"{ODEFunction{false, SciMLBase.AutoSpecialize, typeof(wind_turbine), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, Vector{Float64}, Matrix{Float64}})(i::Int64)
│ @ NeuralPDE .\none:0
│ [4] iterate
│ @ .\generator.jl:47 [inlined]
│ [5] collect(itr::Base.Generator{UnitRange{Int64}, NeuralPDE.var"#402#405"{ODEFunction{false, SciMLBase.AutoSpecialize, typeof(wind_turbine), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, Vector{Float64}, Matrix{Float64}}})
│ @ Base .\array.jl:834
│ [6] innerdiff(Tar::NeuralPDE.LogTargetDensity{Chain{@NamedTuple{layer_1::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_4::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}, layer_4::@NamedTuple{}}, GridTraining{Float64}, @NamedTuple{layer_1::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_2::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_3::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_4::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}}, Vector{ContinuousDistribution}, Vector{Vector{Float64}}}, f::ODEFunction{false, SciMLBase.AutoSpecialize, typeof(wind_turbine), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, autodiff::Bool, t::Vector{Float64}, θ::Vector{Float64}, ode_params::Vector{Float64})
│ @ NeuralPDE C:\Users\FX03NI\.julia\packages\NeuralPDE\Xp1OF\src\advancedHMC_MCMC.jl:237
│ [7] getlogpdf(strategy::GridTraining{Float64}, Tar::NeuralPDE.LogTargetDensity{Chain{@NamedTuple{layer_1::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_4::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}, layer_4::@NamedTuple{}}, GridTraining{Float64}, @NamedTuple{layer_1::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_2::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_3::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_4::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}}, Vector{ContinuousDistribution}, Vector{Vector{Float64}}}, f::Function, autodiff::Bool, tspan::Tuple{Float64, Float64}, ode_params::Vector{Float64}, θ::Vector{Float64})
│ @ NeuralPDE C:\Users\FX03NI\.julia\packages\NeuralPDE\Xp1OF\src\advancedHMC_MCMC.jl:150
│ [8] physloglikelihood(Tar::NeuralPDE.LogTargetDensity{Chain{@NamedTuple{layer_1::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_4::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}, layer_4::@NamedTuple{}}, GridTraining{Float64}, @NamedTuple{layer_1::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_2::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_3::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}, layer_4::@NamedTuple{weight::Matrix{Float32}, bias::Matrix{Float32}}}, Vector{ContinuousDistribution}, Vector{Vector{Float64}}}, θ::Vector{Float64})
│ @ NeuralPDE C:\Users\FX03NI\.julia\packages\NeuralPDE\Xp1OF\src\advancedHMC_MCMC.jl:137
│ [9] macro expansion
│ @ .\logging.jl:373 [inlined]
│ [10] ahmc_bayesian_pinn_ode(prob::ODEProblem{Float64, Tuple{Float64, Float64}, false, Vector{Float64}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(wind_turbine), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}, chain::Chain{@NamedTuple{layer_1::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_4::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}; strategy::Type{GridTraining}, dataset::Vector{Vector{Float64}}, init_params::Nothing, draw_samples::Int64, physdt::Float64, l2std::Vector{Float64}, phystd::Vector{Float64}, priorsNNw::Tuple{Float64, Float64}, param::Vector{Normal{Float64}}, nchains::Int64, autodiff::Bool, Kernel::Type, Adaptorkwargs::@NamedTuple{Adaptor::UnionAll, Metric::UnionAll, targetacceptancerate::Float64}, Integratorkwargs::@NamedTuple{Integrator::UnionAll}, MCMCkwargs::@NamedTuple{n_leapfrog::Int64}, progress::Bool, verbose::Bool)
│ @ NeuralPDE C:\Users\FX03NI\.julia\packages\NeuralPDE\Xp1OF\src\advancedHMC_MCMC.jl:510
│ [11] __solve(::ODEProblem{Float64, Tuple{Float64, Float64}, false, Vector{Float64}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(wind_turbine), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}, ::BNNODE{Chain{@NamedTuple{layer_1::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_4::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, UnionAll, @NamedTuple{Integrator::UnionAll}, @NamedTuple{Adaptor::UnionAll, Metric::UnionAll, targetacceptancerate::Float64}, @NamedTuple{n_leapfrog::Int64}, Nothing, Nothing, Vector{Normal{Float64}}, Vector{Vector{Float64}}}; dt::Nothing, timeseries_errors::Bool, save_everystep::Bool, adaptive::Bool, abstol::Float32, reltol::Float32, verbose::Bool, saveat::Float64, maxiters::Nothing, numensemble::Int64)
│ @ NeuralPDE C:\Users\FX03NI\.julia\packages\NeuralPDE\Xp1OF\src\BPINN_ode.jl:197
│ [12] __solve
│ @ C:\Users\FX03NI\.julia\packages\NeuralPDE\Xp1OF\src\BPINN_ode.jl:171 [inlined]
│ [13] #solve_call#44
│ @ C:\Users\FX03NI\.julia\packages\DiffEqBase\O8cUq\src\solve.jl:612 [inlined]
│ [14] solve_call
│ @ C:\Users\FX03NI\.julia\packages\DiffEqBase\O8cUq\src\solve.jl:569 [inlined]
│ [15] solve_up(prob::ODEProblem{Float64, Tuple{Float64, Float64}, false, Vector{Float64}, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(wind_turbine), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}, sensealg::Nothing, u0::Float64, p::Vector{Float64}, args::BNNODE{Chain{@NamedTuple{layer_1::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_2::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_3::Dense{true, typeof(sigmoid_fast), typeof(glorot_uniform), typeof(zeros32)}, layer_4::Dense{true, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}, Nothing}, UnionAll, @NamedTuple{Integrator::UnionAll}, @NamedTuple{Adaptor::UnionAll, Metric::UnionAll, targetacceptancerate::Float64}, @NamedTuple{n_leapfrog::Int64}, Nothing, Nothing, Vector{Normal{Float64}}, Vector{Vector{Float64}}}; kwargs::@Kwargs{saveat::Float64})
│ @ DiffEqBase C:\Users\FX03NI\.julia\packages\DiffEqBase\O8cUq\src\solve.jl:1080
│ [16] solve_up
│ @ C:\Users\FX03NI\.julia\packages\DiffEqBase\O8cUq\src\solve.jl:1066 [inlined]
│ [17] #solve#51
│ @ C:\Users\FX03NI\.julia\packages\DiffEqBase\O8cUq\src\solve.jl:1003 [inlined]
│ [18] top-level scope
│ @ In[43]:2
│ [19] eval
│ @ .\boot.jl:385 [inlined]
│ [20] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
│ @ Base .\loading.jl:2076
│ [21] softscope_include_string(m::Module, code::String, filename::String)
│ @ SoftGlobalScope C:\Users\FX03NI\.julia\packages\SoftGlobalScope\u4UzH\src\SoftGlobalScope.jl:65
│ [22] execute_request(socket::ZMQ.Socket, msg::IJulia.Msg)
│ @ IJulia C:\Users\FX03NI\.julia\packages\IJulia\Vo51o\src\execute_request.jl:67
│ [23] #invokelatest#2
│ @ .\essentials.jl:892 [inlined]
│ [24] invokelatest
│ @ .\essentials.jl:889 [inlined]
│ [25] eventloop(socket::ZMQ.Socket)
│ @ IJulia C:\Users\FX03NI\.julia\packages\IJulia\Vo51o\src\eventloop.jl:8
│ [26] (::IJulia.var"#15#18")()
│ @ IJulia C:\Users\FX03NI\.julia\packages\IJulia\Vo51o\src\eventloop.jl:38
└ @ NeuralPDE C:\Users\FX03NI\.julia\packages\NeuralPDE\Xp1OF\src\advancedHMC_MCMC.jl:510
┌ Info: Current Prior Log-likelihood :
└ priorweights(ℓπ, initial_θ) = -15096.924089369746
┌ Info: Current MSE against dataset Log-likelihood :
└ L2LossData(ℓπ, initial_θ) = -95277.57901401068