Parameter optimization with PINNs for an RC model

I’m trying to optimize the parameters of the first state of an RC model using NeuralPDE, but I get an error. What am I doing wrong?


using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, OrdinaryDiffEq,
      Plots
import ModelingToolkit: Interval, infimum, supremum
using DifferentialEquations, Plots, Flux, Optim, DiffEqFlux, DataInterpolations, Random, ComponentArrays, Lux
using OptimizationOptimisers, OptimizationNLopt
rng = Random.default_rng()
using CSV
using DataFrames
using Statistics: mean
using OptimizationPolyalgorithms
using DiffEqFlux: group_ranges
#exogenous variables
df = CSV.read("/Users/marco/OneDrive/Desktop/Tesi/Julia_files/measured_data_ex_00.csv", DataFrame)
df = repeat(df, outer = 10)

registered_gas_flow = df[:, 2]
Registered_Temperature = df[:, 3]
# average the raw measurements over non-overlapping 60-sample windows
sec_Temp = [mean(Registered_Temperature[i:(i + 59)]) for i in 1:60:(length(Registered_Temperature) - 59)]
sec_gas = [mean(registered_gas_flow[i:(i + 59)]) for i in 1:60:(length(registered_gas_flow) - 59)]


tspan = (0.0f0, 3000.0f0)
datasize = 3000
tsteps = range(tspan[1], tspan[2], length = datasize)

# interpolate the averaged measurements over the simulation time grid
Ext_temperature = LinearInterpolation(sec_Temp, tsteps);
power = LinearInterpolation(sec_gas, tsteps);

function ext_Temp(t)
    return Ext_temperature(t)
end

function ext_power(t)
    return power(t)
end


# defining the problem: Rv and Ci are the unknown parameters to estimate
@parameters t, Rv, Ci
@variables x(..)
Dt = Differential(t)

eqs = [Dt(x(t)) ~ 1 / (Rv * Ci) * (ext_Temp(t) - x(t)) + ext_power(t) / Ci]

bcs = [x(0) ~ 20.0]
domains = [t ∈ Interval(0.0, 3000.0)]
dt = 0.01

input_ = length(domains)
n = 8
chain1 = Lux.Chain(Lux.Dense(input_, n, Lux.σ), Lux.Dense(n, n, Lux.σ), Lux.Dense(n, n, Lux.σ),
                   Lux.Dense(n, 1))

# Solve the RC ODE with OrdinaryDiffEq.jl (Rv = 10000 and Ci = 10 hard-coded) to
# generate the reference data used in the parameter-estimation loss below.
function RC!(du, u, p, t)
    du[1] = 1 / (10000 * 10) * (ext_Temp(t) - u[1]) + ext_power(t) / 10
end

u0 = [20.0]
tspan = (0.0, 3000.0)
prob = ODEProblem(RC!, u0, tspan)
sol = solve(prob, Tsit5(), dt = 0.1)

ts = [infimum(d.domain):dt:supremum(d.domain) for d in domains][1]
function getData(sol)
    us = hcat(sol(ts).u...)
    ts_ = hcat(sol(ts).t...)
    return [us, ts_]
end
data = getData(sol)

(u_, t_) = data
len = length(data[2])

#Loss term based on the data that we have to optimize the parameters

depvars = [:x]
function additional_loss(phi, θ, p)
    return sum(sum(abs2, phi[i](t_, θ[depvars[i]]) .- u_[[i], :]) / len for i in 1:length(depvars))
end
# PhysicsInformedNN transforms the ModelingToolkit PDESystem into an OptimizationProblem
# using the physics-informed neural network (PINN) methodology.
discretization = NeuralPDE.PhysicsInformedNN([chain1],
                                             NeuralPDE.GridTraining(dt), param_estim = true,
                                             additional_loss = additional_loss)
@named pde_system = PDESystem(eqs, bcs, domains, [t], [x(t)], [Rv, Ci],
                              defaults = Dict([p => 1.0 for p in [Rv, Ci]]))
prob = NeuralPDE.discretize(pde_system, discretization)
callback = function (p, l)
    println("Current loss is: $l")
    return false
end
res = Optimization.solve(prob, BFGS(); callback = callback, maxiters = 5000)
p_ = res.u[(end - 1):end]  # the last two entries of the flat parameter vector are the estimates of [Rv, Ci]


minimizers = [res.u.depvar[depvars[i]] for i in 1:1]
ts = [infimum(d.domain):(dt / 10):supremum(d.domain) for d in domains][1]
u_predict = [[discretization.phi[i]([t], minimizers[i])[1] for t in ts] for i in 1:1]
plot(sol)
plot!(ts, u_predict, label = ["x(t)"])
println(p_)

Just some tips to make it more likely that you receive answers to your questions.

It would be helpful if you:

  • Defined the acronyms you use; what is an “RC model”?
  • Placed the post in an appropriate category, so that people who are able to help you will find it.
  • Said what the actual error is; no one can reproduce it, since you are reading a file from your local machine.
  • Tried to minimize the example; you have posted a ton of code, which is not very inviting to read through for anyone who wants to help.

Apologies. An RC (resistance-capacitance) model of a house is a simplified model used to represent its energy consumption behavior; it is built from two main components, a thermal resistance and a thermal capacitance.
In the example I have measured data for the house temperature and heating power, which enter my ODE as exogenous variables. My goal is to use PINNs to estimate the parameters of this ODE driven by those external inputs.
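Concretely, the single-state equation (the eqs line in the code above) is

dx/dt = (ext_Temp(t) - x(t)) / (Rv * Ci) + ext_power(t) / Ci

with Rv (resistance) and Ci (capacitance) as the parameters to be estimated.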
This is the error I get:


ERROR: Compiling Tuple{typeof(RuntimeGeneratedFunctions.generated_callfunc), RuntimeGeneratedFunctions.RuntimeGeneratedFunction{(:cord, Symbol("##θ#293"), :phi, :derivative, :integral, :u, :p), NeuralPDE.var"#_RGF_ModTag", NeuralPDE.var"#_RGF_ModTag", (0xd2e13aa8, 0x792206f7, 0xb326e346, 0xdfb20b7b, 0x919c7ed6), Expr}, Matrix{Float64}, ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(depvar = ViewAxis(1:169, Axis(x = ViewAxis(1:169, Axis(layer_1 = ViewAxis(1:16, Axis(weight = ViewAxis(1:8, ShapedAxis((8, 1), NamedTuple())), bias = ViewAxis(9:16, ShapedAxis((8, 1), NamedTuple())))), layer_2 = ViewAxis(17:88, Axis(weight = ViewAxis(1:64, ShapedAxis((8, 8), NamedTuple())), bias = ViewAxis(65:72, ShapedAxis((8, 1), NamedTuple())))), layer_3 = ViewAxis(89:160, Axis(weight = ViewAxis(1:64, ShapedAxis((8, 8), NamedTuple())), bias = ViewAxis(65:72, ShapedAxis((8, 1), NamedTuple())))), layer_4 = ViewAxis(161:169, Axis(weight = ViewAxis(1:8, ShapedAxis((1, 8), NamedTuple())), bias = ViewAxis(9:9, ShapedAxis((1, 1), NamedTuple())))))),)), p = 170:171)}}}, Vector{NeuralPDE.Phi{Lux.Chain{NamedTuple{(:layer_1, :layer_2, :layer_3, :layer_4), Tuple{Lux.Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(sigmoid_fast), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}, Lux.Dense{true, typeof(identity), typeof(Lux.glorot_uniform), typeof(Lux.zeros32)}}}}, NamedTuple{(:layer_1, :layer_2, :layer_3, :layer_4), NTuple{4, NamedTuple{(), Tuple{}}}}}}, typeof(NeuralPDE.numeric_derivative), NeuralPDE.var"#269#276"{NeuralPDE.var"#269#270#277"{typeof(NeuralPDE.numeric_derivative)}, Dict{Symbol, Int64}, Dict{Symbol, Int64}, GridTraining{Float64}}, NeuralPDE.var"#12#13", Vector{Float64}}: MethodError: no method matching size(::LinearInterpolation{Vector{Float64}, StepRangeLen{Float32, Float64, Float64, Int64}, true, Float64})
The applicable method may be too new: running in world age 33674, while current world is 34669.

Closest candidates are:
  size(::DataInterpolations.AbstractInterpolation{true}) (method too new to be called from this world context.)
   @ DataInterpolations C:\Users\marco\.julia\packages\DataInterpolations\8HIzM\src\DataInterpolations.jl:8
  size(::DataInterpolations.AbstractInterpolation) (method too new to be called from this world context.)
   @ DataInterpolations C:\Users\marco\.julia\packages\DataInterpolations\8HIzM\src\DataInterpolations.jl:7
  size(::AbstractArray{T, N}, ::Any) where {T, N}
   @ Base abstractarray.jl:42
  ...


The problem is definitely in how I add my exogenous variables to the equation. I obtain each variable at time point t by interpolating between the time steps and the values read from the measurement file.
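For reference, the interpolations enter the symbolic equation only through the plain Julia functions ext_Temp and ext_power; I have not registered them with ModelingToolkit. If registration is what is missing, I assume it would look something like

@register_symbolic ext_Temp(t)
@register_symbolic ext_power(t)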

It’s an ODE, so did you try NNODE like the docs show?

NNODE only supports ODEs written in the out-of-place form, i.e. du = f(u, p, t), not the in-place form f!(du, u, p, t), and my model is defined in-place.
Can you suggest other methods to define an ODE with exogenous variables and use PINNs for prediction?

Can you just change the definition? That seems like a very small change to make it work?

Or can you isolate the world age issue?

https://docs.sciml.ai/NeuralPDE/stable/tutorials/param_estim/

Okay, I’ll try to change the definition. For the parameter optimization I was following this link, which uses PINNs to optimize ODE parameters; is there an error in my function definition?
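If I understand the out-of-place requirement correctly, the change would be something like this sketch (same hard-coded parameter values as in my RC! above):

RC(u, p, t) = [1 / (10000 * 10) * (ext_Temp(t) - u[1]) + ext_power(t) / 10]
prob = ODEProblem(RC, u0, tspan)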

Hi, I faced the exact same error and the solution was to change the order of package imports at the beginning of the file. I have absolutely no idea why, but it magically solved the issue for me.

🤔 say more?

Here is a MWE:

using Pkg
Pkg.activate(mktempdir())
Pkg.add([(; name = "Interpolations", version = "v0.14.7"), 
         (; name = "Lux", version = "v0.5.6"),
         (; name = "NeuralPDE", version = "v5.9.0"),
         (; name = "OptimizationOptimisers", version = "v0.1.6")])
using NeuralPDE
using Interpolations
using Lux, OptimizationOptimisers

itp = LinearInterpolation(0:1, 0:1)
@register_symbolic (itp::typeof(itp))(x)
NeuralPDE.dottable_(itp::typeof(itp)) = true

@parameters x
@variables f(..)
eq = f(x) ~ itp(x)
bc = f(0) ~ 0
@named pde_system = PDESystem(eq, [bc], [x ∈ (0, 1)], [x], [f(x)])

discretization = PhysicsInformedNN(Dense(1, 1), GridTraining(0.1))
prob = discretize(pde_system, discretization)
solve(prob, Adam(); maxiters = 10)

This causes the same error as in the first post. After restarting Julia, putting using Interpolations before using NeuralPDE solves the issue.
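That is, in a fresh session the working order is

using Interpolations   # load before NeuralPDE
using NeuralPDE
using Lux, OptimizationOptimisers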

I get an error in both cases using your code.
@chrisrackauckas I always have to qualify the library names explicitly. For example, for the interpolation I have to write Interpolations.LinearInterpolation. Why?

After reinitializing the REPL, it no longer gives the error.

Does reversing the order of your imports solve the issue for you?

using OptimizationPolyalgorithms, DataFrames, CSV, OptimizationNLopt,
    OptimizationOptimisers, Lux, ComponentArrays, Random,
    DataInterpolations, DiffEqFlux, Optim, Flux, DifferentialEquations,
    Plots, OrdinaryDiffEq, OptimizationOptimJL,
    Optimization, ModelingToolkit, Lux, NeuralPDE
import ModelingToolkit: Interval, infimum, supremum
using Statistics: mean
using DiffEqFlux: group_ranges

We have added parameter estimation to NNODE, which should make this straightforward:
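A rough sketch of what that could look like for this RC model, based on the NNODE parameter-estimation tutorial (the param_estim / additional_loss keywords, the shapes expected by phi, and the initial guess for [Rv, Ci] are assumptions to check against the current docs):

# out-of-place RC model with unknown parameters p = [Rv, Ci]; ext_Temp / ext_power as defined above
RC(u, p, t) = [(ext_Temp(t) - u[1]) / (p[1] * p[2]) + ext_power(t) / p[2]]
prob = ODEProblem(RC, [20.0], (0.0, 3000.0), [1000.0, 1.0])  # [1000.0, 1.0] is a hypothetical initial guess

chain = Lux.Chain(Lux.Dense(1, 8, Lux.σ), Lux.Dense(8, 8, Lux.σ), Lux.Dense(8, 1))

# data-fitting loss on top of the ODE residual; t_ and u_ are the reference data from earlier
additional_loss(phi, θ) = sum(abs2, phi(t_, θ) .- u_) / length(t_)

alg = NNODE(chain, OptimizationOptimisers.Adam(0.01); param_estim = true,
            additional_loss = additional_loss)
sol = solve(prob, alg; maxiters = 5000)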