Non-converging chains and slow runtime in Bayesian parameter estimation for wind turbine system

Hello all,

I’m quite new to Turing and Bayesian inference in general, and I’m trying to estimate parameters for the power coefficient Cp in a wind turbine aerodynamic model. The parametric expression for Cp has 6 parameters, c1, c2, c3, c4, c5, c6, of which the first is not estimated since it only scales c2, c3, c4 and is therefore redundant. The model has 4 states (wr, Tg, β, wr_err) and is excited by an exogenous input vr, introduced into the model as a spline interpolation vr_interp(t). Since the states have very different magnitudes and I wasn’t sure how that would affect the algorithm, I first compute the operating point of the system and scale each variable in the data by its respective operating-point value. I also scale the states in the model, and I plot the data and the simulated states for comparison.



Then I define the Turing @model. I set the prior for σ as a truncated Normal with standard deviation 0.03, since acceptable (scaled) trajectories should not diverge much more than that from the (scaled) data, and I also set priors for the parameters c2, c3, c4, c5, c6 for now. To test the algorithm, I cheat a bit and center the Normal priors on the values that I know yield a fairly accurate trajectory, which are also the values used in the model simulation shown in the previous graphs.

Finally, I sample 2 chains with NUTS(0.3) and 1000 samples per chain. Issues:

  • Sampling takes a really long time for a model with just 4 states and 5 parameters: even with relatively narrow priors and NUTS(0.3), it takes around 8 hours. If I increase the NUTS target acceptance rate, it takes days.
  • The chains do not converge, as can be seen in the attached pictures.



Any help on how to solve these issues would be greatly appreciated. My code can be seen below:

using Turing
using DifferentialEquations

using StatsPlots

using LinearAlgebra

using XLSX

using Random
Random.seed!(14);

# Load data from Excel sheet
function load_data(filename)
    data = XLSX.readxlsx(filename)["Ark1"]  # Assuming your data is in Sheet1
    wr = data["A"][1:4000]  # Assuming column A contains angular speed data (rad/s)
    Tg = data["B"][1:4000]  # Assuming column B contains generator torque data (Nm)
    #β = data["C"]  # Assuming column C contains pitch angle data (rad)
    β = data["F"][1:4000]  # Assuming column F contains pitch angle data (degrees)
    vr = data["D"][1:4000]  # Assuming column D contains wind speed data (m/s)
    return wr, Tg, β, vr
end

# Load data from Excel sheet
filename = "…\\data_14ms_4000.xlsx" 
wr_data, Tg_data, β_data, vr_data = load_data(filename)

# Any to Float64 (needed for interpolation)
wr_data = map(Float64, wr_data)
Tg_data = map(Float64, Tg_data)
β_data = map(Float64, β_data)
vr_data = map(Float64, vr_data)
all_data = hcat(wr_data, Tg_data, β_data)'

tspan = (30.0, 229.95) # 2000->129.95, 4000->229.95, 12000 -> 629.95
t_data = range(tspan[1], tspan[2], length=4000)  # Time points

using DataInterpolations
vr_interp = BSplineInterpolation(vr_data[1:end], t_data[1:end], 2, :ArcLen, :Average)

# **********************************
# COMPUTE EQUILIBRIUM POINT TO SCALE VARIABLES
# **********************************

using NLsolve

# Define constants
μd = 0.05
Rr = 120.998
ρ = 1.225
Ar = π * Rr^2
Jr = 321699000
Jg = 3.777e6
c1 = 0.1661
c2 = 115.98
c3 = 1.4138
c4 = 5.6336
c5 = 16.2881
c6 = 0.0410
tau_Tg = 0.05
tau_β = 0.01
wr_rated = 0.7917
Kp_Tg = 2.2476e7
Ki_Tg = 0
Tg_off = 2.05e7
Kp_β = 30.2670
Ki_β = 7.9968

vr_eq = 14

function f(x)
    [((1 - μd) * 1 / (2 * x[1]) * ρ * Ar * vr_eq^3 * (c1 * (c2 * (1 / (x[1] * Rr / vr_eq + 0.08 * x[3]) - 0.035 / (x[3]^3 + 1)) - c3 * x[3] - c4) * exp(-c5 * (1 / (x[1] * Rr / vr_eq + 0.08 * x[3]) - 0.035 / (x[3]^3 + 1))) + c6 * x[1] * Rr / vr_eq) - x[2]) / (Jr + Jg),
        (-Kp_Tg * (x[1] - wr_rated) + Tg_off - x[2]) / tau_Tg,
        (Kp_β * (x[1] - wr_rated) + Ki_β * x[4] - x[3]) / tau_β,
        x[1] - wr_rated]
end

op = nlsolve(f, [0.79, 2.05e7, 9, 0])
wr_op = op.zero[1]
Tg_op = op.zero[2]
β_op = op.zero[3]
wr_err_op = op.zero[4]
op_points = [wr_op, Tg_op, β_op, wr_err_op] # Equilibrium points for linear system with constant exogenous input 'vr'

# Scaling parameters
wr_scale = wr_op
Tg_scale = Tg_op
β_scale = β_op
wr_err_scale = wr_err_op

wr_data_scaled = wr_data/wr_scale
Tg_data_scaled = Tg_data/Tg_scale
β_data_scaled = β_data/β_scale

all_data_scaled = hcat(wr_data_scaled, Tg_data_scaled, β_data_scaled)'

# **********************************
# WIND TURBINE MODEL
# **********************************

# Define constants
μd = 0.05
Rr = 120.998
ρ = 1.225
Ar = π * Rr^2
Jr = 321699000
Jg = 3.777e6
c1 = 0.1661
c2 = 115.98
c3 = 1.4138
c4 = 5.6336
c5 = 16.2881
c6 = 0.0410
tau_Tg = 0.05
tau_β = 0.01
wr_rated = 0.7917
Kp_Tg = 2.2476e7
Ki_Tg = 0
Tg_off = 2.05e7
Kp_β = 30.2670
Ki_β = 7.9968

# Define wind turbine model with exogenous inputs
function wind_turbine(du, u, p, t)

    # Parameters
    c2, c3, c4, c5, c6 = p
    
    # States (rel)
    wr = u[1]
    Tg = u[2]
    β = u[3]
    wr_err = u[4]

    # States (abs)
    wr_abs = wr * wr_scale
    Tg_abs = Tg * Tg_scale
    β_abs = β * β_scale
    wr_err_abs = wr_err * wr_err_scale
    
    # Controller
    Tg_ref_abs = -Kp_Tg * (wr_abs - wr_rated) + Tg_off
    β_ref_abs = Kp_β * (wr_abs - wr_rated) + Ki_β * wr_err_abs
    
    # Calculate tip-speed ratio λ
    λ = wr_abs * Rr / vr_interp(t)
    
     # Calculate 1/λi (numerical approx. for Cp(θ, λ))
    λi_inv = 1 / (λ + 0.08 * β_abs) - 0.035 / (β_abs^3 + 1)
    
    # Calculate Cp (numerical approx. for Cp(θ, λ))
    Cp = c1 * (c2 * λi_inv - c3 * β_abs - c4) * exp(-c5 * λi_inv) + c6 * λ

    # Calculate aerodynamic torque
    Tr = 1 / (2 * wr_abs) * ρ * Ar * vr_interp(t)^3 * Cp # max(0, Cp)
    
    # State derivatives (rel)
    du[1] = ((1 - μd) * Tr - Tg_abs) / (Jr + Jg) / wr_scale
    du[2] = (Tg_ref_abs - Tg_abs) / tau_Tg / Tg_scale
    du[3] = (β_ref_abs - β_abs) / tau_β / β_scale
    du[4] = (wr_abs - wr_rated) / wr_err_scale
    
end

# Define initial-value problem.
wr0_scaled = wr_data_scaled[1]
Tg0_scaled = Tg_data_scaled[1]
β0_scaled = β_data_scaled[1]
wr_err0_scaled = wr0_scaled

vr0 = vr_data[1]

u0_scaled = [wr0_scaled, Tg0_scaled, β0_scaled, wr_err0_scaled]

p = [c2, c3, c4, c5, c6]  # Model parameters


prob_model = ODEProblem(wind_turbine, u0_scaled, tspan, p)

dt = 0.05  # Sampling time
sol_model = solve(prob_model, Tsit5(), saveat=dt)

t_sim = sol_model.t
u_sim = sol_model.u

wr_sim = [inner_vector[1] for inner_vector in u_sim]
Tg_sim = [inner_vector[2] for inner_vector in u_sim]
β_sim = [inner_vector[3] for inner_vector in u_sim]
wr_err_sim = [inner_vector[4] for inner_vector in u_sim]

# Compare data and model simulation
plot(t_sim, wr_sim, xlabel="Time", ylabel="Rotor Speed", label="Rotor speed (model)")
plot!(t_data, wr_data_scaled, label="Rotor speed (data)")
plot!(legend=:topright)

plot(t_sim, Tg_sim, xlabel="Time", ylabel="Generator torque", label="Generator torque (model)")
plot!(t_data, Tg_data_scaled, label="Generator torque (data)")
plot!(legend=:topright)

plot(t_sim, β_sim, xlabel="Time", ylabel="Pitch angle", label="Pitch angle (model)")
plot!(t_data, β_data_scaled, label="Pitch angle (data)")
plot!(legend=:topright)

@model function fitlv(data, prob)
    
    # Priors for variances
    σ ~ truncated(Normal(0, 0.03); lower=0, upper=Inf)

    # Priors for parameters    
    c2 ~ truncated(Normal(115.98, 1); lower=113, upper=118)
    c3 ~ truncated(Normal(1.4138, 0.5); lower=0, upper=3)
    c4 ~ truncated(Normal(5.6336, 0.5); lower=3, upper=7)
    c5 ~ truncated(Normal(16.2881, 2); lower=12, upper=20)
    c6 ~ truncated(Normal(0.04, 0.02); lower=0, upper=0.08)

    # Simulate wind turbine model. 
    p = [c2,c3,c4,c5,c6]
    predicted = solve(prob_model, Tsit5(); p=p, saveat=dt, save_idxs=[1, 2, 3])

    # Observations.
    for i in 1:length(predicted)
        data[:, i] ~ MvNormal(predicted[i], σ^2 * I)
    end

    return nothing
end


model = fitlv(all_data_scaled, prob_model)


chain = sample(model, NUTS(0.3), MCMCSerial(), 1000, 2; progress=true)

I attach the Excel file with the data below for reproducibility:

I’m on my phone so I’m not sure exactly what’s up in your code. I see you define a bunch of global constants. These should all be declared const, otherwise their use will massively slow things down (non-const globals are not type stable). Alternatively, define a struct to contain them and pass the struct to your functions.
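For illustration, here is a minimal sketch of the struct approach; the name TurbineConstants and the field selection are mine, not from the thread. The idea is just to bundle the fixed constants and pass them through the ODE’s p argument instead of reading globals:

Base.@kwdef struct TurbineConstants
    μd::Float64 = 0.05
    Rr::Float64 = 120.998
    ρ::Float64  = 1.225
    Jr::Float64 = 321699000.0
    Jg::Float64 = 3.777e6
end

# Bundle the fixed constants together with the sampled coefficients in `p`,
# e.g. p = (consts = TurbineConstants(), c = [c2, c3, c4, c5, c6]),
# and inside wind_turbine read them as p.consts.Rr, p.c[1], ... so the
# right-hand side never touches non-const globals.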

Also, you seem to use truncated normals, and if I remember correctly these don’t work well with NUTS and autodiff. I can’t remember why; I just remember using them and getting really slow results.

Yup, this does indeed seem to be a scope issue, as @dlakelan suggested. After moving everything into a function, the ETA for drawing 1000 samples with a 0.8 target acceptance rate is now 1 hour and 47 minutes. (The actual time is probably much longer, though.) I think some optimization should be possible. It seems that the ODE in question can get super stiff, which slows things down:

  • Initialize the chain from a set of parameters where the ODE is not too stiff
  • Set the priors to regions where the ODE is not stiff

Another performance tip: for low-dimensional and potentially complicated posteriors like this one, slice samplers can do wonders. However, the stiffness problem needs to be dealt with first.

Here are some quick experiments that I ran:

using SliceSampling
chain = sample(model, externalsampler(HitAndRun(SliceSteppingOut(0.1; max_stepping_out=10, max_proposals=10000))), 1000; initial_params=[0.3, 115, 1.4, 5.6, 16., 0.04])

This takes 15 minutes on my laptop:

Chains MCMC chain (1000×7×1 Array{Float64, 3}):

Iterations        = 1:1:1000
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 855.71 seconds
Compute duration  = 855.71 seconds
parameters        = σ, c2, c3, c4, c5, c6
internals         = lp

Summary Statistics
  parameters       mean       std      mcse   ess_bulk   ess_tail      rhat   ess_per_sec 
      Symbol    Float64   Float64   Float64    Float64    Float64   Float64       Float64 

           σ     0.0281    0.0240    0.0048     9.0598    12.2549    1.1381        0.0106
          c2   115.8862    0.0889    0.0266     3.5583    16.5640    1.4919        0.0042
          c3     1.5218    0.0390    0.0187     3.1257    34.2529    1.5938        0.0037
          c4     5.1490    0.0536    0.0059    11.4974    42.2204    1.1077        0.0134
          c5    16.7553    0.1769    0.0952     2.9830    31.6336    1.6609        0.0035
          c6     0.0426    0.0006    0.0003     3.3771    40.7817    1.4960        0.0039

Quantiles
  parameters       2.5%      25.0%      50.0%      75.0%      97.5% 
      Symbol    Float64    Float64    Float64    Float64    Float64 

           σ     0.0240     0.0243     0.0244     0.0245     0.0606
          c2   115.7811   115.8516   115.8828   115.9416   115.9978
          c3     1.4737     1.5035     1.5216     1.5494     1.5768
          c4     5.0835     5.1330     5.1486     5.1713     5.2165
          c5    16.4814    16.6593    16.7528    16.8896    17.0366
          c6     0.0417     0.0422     0.0425     0.0431     0.0436

Thank you @dlakelan and @Red-Portal for the responses. I haven’t been able to get sample to work when I put all the constants in a struct and pass it to the functions, so I gave up and just declared them as const, as can be seen in the new version of the code. Hopefully that is OK.

Regarding the stiffness issue, I don’t think there’s much more I can do with the priors; they already seem quite narrow to me. But indeed, changing the parameters c2, c3, c4, c5, c6 has a considerable effect on the system output, and certain combinations of these values can make the system unstable.
Probably a stupid question due to my lack of understanding of what’s going on, but I’m really confused about why sample expects 6 parameters when I’m only estimating 5. At first, I thought you might have adjusted the code to also estimate c1, but when I ran my code with just the 5 parameters, I encountered the following error:

BoundsError: attempt to access 5-element Vector{Float64} at index [6:6]

Stacktrace:
  [1] throw_boundserror(A::Vector{Float64}, I::Tuple{UnitRange{Int64}})
    @ Base .\abstractarray.jl:737
  [2] checkbounds
    @ .\abstractarray.jl:702 [inlined]
  [3] getindex
    @ .\array.jl:973 [inlined]
  [4] macro expansion
    @ C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\varinfo.jl:0 [inlined]
  [5] newmetadata(metadata::@NamedTuple{c2::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c2, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c2, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c3::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c3, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c3, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c4::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c4, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c4, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c5::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c5, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c5, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c6::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c6, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c6, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, σ::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:σ, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}, ::Val{()}, x::Vector{Float64})
    @ DynamicPPL C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\varinfo.jl:158
  [6] VarInfo
    @ C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\varinfo.jl:116 [inlined]
  [7] unflatten
    @ C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\varinfo.jl:137 [inlined]
  [8] unflatten(vi::DynamicPPL.TypedVarInfo{@NamedTuple{c2::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c2, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c2, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c3::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c3, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c3, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c4::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c4, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c4, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c5::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c5, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c5, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c6::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c6, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c6, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, σ::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:σ, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}, Float64}, x::Vector{Float64})
    @ DynamicPPL C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\varinfo.jl:134
  [9] step(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, sampler_wrapper::DynamicPPL.Sampler{Turing.Inference.ExternalSampler{HitAndRun{SliceSteppingOut{Float64}}, AutoForwardDiff{nothing, Nothing}, true}}; initial_state::Nothing, initial_params::Vector{Float64}, kwargs::@Kwargs{})
    @ Turing.Inference C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\abstractmcmc.jl:56
 [10] step
    @ C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\abstractmcmc.jl:36 [inlined]
 [11] macro expansion
    @ C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\sample.jl:130 [inlined]
 [12] macro expansion
    @ C:\Users\FX03NI\.julia\packages\ProgressLogging\6KXlp\src\ProgressLogging.jl:328 [inlined]
 [13] (::AbstractMCMC.var"#22#23"{Bool, String, Nothing, Int64, Int64, Nothing, @Kwargs{initial_params::Vector{Float64}}, TaskLocalRNG, DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{Turing.Inference.ExternalSampler{HitAndRun{SliceSteppingOut{Float64}}, AutoForwardDiff{nothing, Nothing}, true}}, Int64, Int64})()
    @ AbstractMCMC C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\logging.jl:12
 [14] with_logstate(f::Function, logstate::Any)
    @ Base.CoreLogging .\logging.jl:515
 [15] with_logger
    @ .\logging.jl:627 [inlined]
 [16] with_progresslogger(f::Function, _module::Module, logger::Logging.ConsoleLogger)
    @ AbstractMCMC C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\logging.jl:36
 [17] macro expansion
    @ C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\logging.jl:11 [inlined]
 [18] mcmcsample(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{Turing.Inference.ExternalSampler{HitAndRun{SliceSteppingOut{Float64}}, AutoForwardDiff{nothing, Nothing}, true}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, initial_state::Nothing, kwargs::@Kwargs{initial_params::Vector{Float64}})
    @ AbstractMCMC C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\sample.jl:120
 [19] sample(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{Turing.Inference.ExternalSampler{HitAndRun{SliceSteppingOut{Float64}}, AutoForwardDiff{nothing, Nothing}, true}}, N::Int64; chain_type::Type, resume_from::Nothing, initial_state::Nothing, kwargs::@Kwargs{initial_params::Vector{Float64}})
    @ DynamicPPL C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\sampler.jl:93
 [20] sample
    @ C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\sampler.jl:83 [inlined]
 [21] #sample#4
    @ C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\Inference.jl:263 [inlined]
 [22] sample
    @ C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\Inference.jl:256 [inlined]
 [23] #sample#3
    @ C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\Inference.jl:253 [inlined]
 [24] top-level scope
    @ In[16]:2

So I’ve been unable to replicate those results for now.
This is my current code:

using Turing
using DifferentialEquations
using StatsPlots
using LinearAlgebra
using XLSX
using Random
using DataInterpolations
using NLsolve

Random.seed!(14)

# Load data from Excel sheet
filename = "...\\data_14ms.xlsx" 
data = XLSX.readxlsx(filename)["Ark1"]  # The data is in Ark1 "Sheet1"
wr_data = data["A"][1:4000]  # Column A contains angular speed data (rad/s)
Tg_data = data["B"][1:4000]  # Column B contains generator torque data (Nm)
#β_data = data["C"]  # Column C contains pitch angle data (rad)
β_data = data["F"][1:4000]  # Column F contains pitch angle data (degrees)
vr_data = data["D"][1:4000]  # Column D contains wind speed data (m/s)

# Convert to Float64 (needed for interpolation)
wr_data = map(Float64, wr_data)
Tg_data = map(Float64, Tg_data)
β_data = map(Float64, β_data)
vr_data = map(Float64, vr_data)
all_data = hcat(wr_data, Tg_data, β_data)'

tspan = (30.0, 229.95) # 2000->129.95, 4000->229.95, 12000 -> 629.95
t_data = range(tspan[1], tspan[2], length=4000)  # Time points

# Interpolation exogenous input
vr_interp = BSplineInterpolation(vr_data[1:end], t_data[1:end], 2, :ArcLen, :Average)

const μd = 0.05
const Rr = 120.998
const ρ = 1.225
const Ar = π * Rr^2
const Jr = 321699000
const Jg = 3.777e6
const c1_init = 0.1661
const c2_init = 115.98
const c3_init = 1.4138
const c4_init = 5.6336
const c5_init = 16.2881
const c6_init = 0.0410
const tau_Tg = 0.05
const tau_β = 0.01
const wr_rated = 0.7917
const Kp_Tg = 2.2476e7
const Ki_Tg = 0
const Tg_off = 2.05e7
const Kp_β = 30.2670
const Ki_β = 7.9968
const vr_eq = 14

# Function to compute equilibrium point
function f(x)
    [((1 - μd) * 1 / (2 * x[1]) * ρ * Ar * vr_eq^3 * (c1_init * (c2_init * (1 / (x[1] * Rr / vr_eq + 0.08 * x[3]) - 0.035 / (x[3]^3 + 1)) - c3_init * x[3] - c4_init) * exp(-c5_init * (1 / (x[1] * Rr / vr_eq + 0.08 * x[3]) - 0.035 / (x[3]^3 + 1))) + c6_init * x[1] * Rr / vr_eq) - x[2]) / (Jr + Jg),
        (-Kp_Tg * (x[1] - wr_rated) + Tg_off - x[2]) / tau_Tg,
        (Kp_β * (x[1] - wr_rated) + Ki_β * x[4] - x[3]) / tau_β,
        x[1] - wr_rated]
end

op = nlsolve(f, [0.79, 2.05e7, 9, 0])
wr_op, Tg_op, β_op, wr_err_op = op.zero

# Scaling parameters based on equilibrium
const wr_scale = wr_op
const Tg_scale = Tg_op
const β_scale = β_op
const wr_err_scale = wr_err_op

# Scale data
wr_data_scaled = wr_data / wr_scale
Tg_data_scaled = Tg_data / Tg_scale
β_data_scaled = β_data / β_scale
all_data_scaled = hcat(wr_data_scaled, Tg_data_scaled, β_data_scaled)'

# Define wind turbine model function
function wind_turbine(du, u, p, t)

    # Parameters
    c2, c3, c4, c5, c6 = p
    
    # States (relative)
    wr = u[1]
    Tg = u[2]
    β = u[3]
    wr_err = u[4]

    # Absolute states
    wr_abs = wr * wr_scale
    Tg_abs = Tg * Tg_scale
    β_abs = β * β_scale
    wr_err_abs = wr_err * wr_err_scale

    # Control signals
    Tg_ref_abs = -Kp_Tg * (wr_abs - wr_rated) + Tg_off
    β_ref_abs = Kp_β * (wr_abs - wr_rated) + Ki_β * wr_err_abs

    # Tip-speed ratio λ
    λ = wr_abs * Rr / vr_interp(t)

    # Inverse λi and Cp calculation
    λi_inv = 1 / (λ + 0.08 * β_abs) - 0.035 / (β_abs^3 + 1)
    Cp = c1_init * (c2 * λi_inv - c3 * β_abs - c4) * exp(-c5 * λi_inv) + c6 * λ

    # Aerodynamic torque Tr
    Tr = 1 / (2 * wr_abs) * ρ * Ar * vr_interp(t)^3 * Cp

    # State derivatives
    du[1] = ((1 - μd) * Tr - Tg_abs) / (Jr + Jg) / wr_scale
    du[2] = (Tg_ref_abs - Tg_abs) / tau_Tg / Tg_scale
    du[3] = (β_ref_abs - β_abs) / tau_β / β_scale
    du[4] = (wr_abs - wr_rated) / wr_err_scale
end

# Define initial conditions
wr0_scaled = wr_data_scaled[1]
Tg0_scaled = Tg_data_scaled[1]
β0_scaled = β_data_scaled[1]
wr_err0_scaled = wr0_scaled
u0_scaled = [wr0_scaled, Tg0_scaled, β0_scaled, wr_err0_scaled]

# Initial parameters
p_init = [c2_init, c3_init, c4_init, c5_init, c6_init]

# Define the ODE problem
prob_template = ODEProblem(wind_turbine, u0_scaled, tspan, p_init)

dt = 0.05  # Sampling time
sol_init = solve(prob_template, Tsit5(), saveat=dt) #AutoTsit5(Rosenbrock23())
t_sim = sol_init.t
u_sim = sol_init.u

wr_sim = [inner_vector[1] for inner_vector in u_sim]
Tg_sim = [inner_vector[2] for inner_vector in u_sim]
β_sim = [inner_vector[3] for inner_vector in u_sim]
wr_err_sim = [inner_vector[4] for inner_vector in u_sim]

plot(t_sim, wr_sim, xlabel="Time", ylabel="Rotor Speed", label="Rotor speed (model)")
plot!(t_data, wr_data_scaled, label="Rotor speed (data)")
plot!(legend=:topright)

plot(t_sim, Tg_sim, xlabel="Time", ylabel="Generator torque", label="Generator torque (model)")
plot!(t_data, Tg_data_scaled, label="Generator torque (data)")
plot!(legend=:topright)

plot(t_sim, β_sim, xlabel="Time", ylabel="Pitch angle", label="Pitch angle (model)")
plot!(t_data, β_data_scaled, label="Pitch angle (data)")
plot!(legend=:topright)

# Bayesian model function using Turing.jl
@model function fit_wt(data, prob_template)

    # Priors for the parameters to be inferred
    c2 ~ truncated(Normal(115.98, 1); lower=113, upper=118)
    c3 ~ truncated(Normal(1.4138, 0.5); lower=0.1, upper=3)
    c4 ~ truncated(Normal(5.6336, 0.5); lower=3, upper=7)
    c5 ~ truncated(Normal(16.2881, 2); lower=12, upper=20)
    c6 ~ truncated(Normal(0.04, 0.02); lower=0.01, upper=0.08)

    # Standard deviation
    σ ~ truncated(Normal(0, 0.03); lower=0, upper=Inf)

    # Parameters
    p = [c2, c3, c4, c5, c6]

    # Solve the ODE with the current parameters
    predicted = solve(prob_template, Tsit5(); p=p, saveat=0.05, save_idxs=[1, 2, 3])

    # Observations
    for i in 1:length(predicted)
        data[:, i] ~ MvNormal(predicted[i], σ^2 * I)
    end
end

# Define the Bayesian inference model
model = fit_wt(all_data_scaled, prob_template)

using SliceSampling
# Run the Bayesian inference model
chain = sample(model, externalsampler(HitAndRun(SliceSteppingOut(0.1; max_stepping_out=10, max_proposals=10000))), 1000; initial_params=p_init)

It’s c2, c3, c4, c5, c6, and σ. Therefore 6, no?

This is actually not about the narrowness of the prior, but about where it puts most of its mass. However narrow it is, if it is centered around a stiff set of parameters, the equation will be stiff no matter what.
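A quick way to see this is a prior-predictive check: draw a few parameter sets from the priors and watch the solver’s return code and wall time. This is just a sketch using the priors and prob_template from the code above, not something run in this thread:

# Prior-predictive check: how often, and how slowly, does the ODE solve under the priors?
for _ in 1:20
    c = [rand(truncated(Normal(115.98, 1); lower=113, upper=118)),
         rand(truncated(Normal(1.4138, 0.5); lower=0.1, upper=3)),
         rand(truncated(Normal(5.6336, 0.5); lower=3, upper=7)),
         rand(truncated(Normal(16.2881, 2); lower=12, upper=20)),
         rand(truncated(Normal(0.04, 0.02); lower=0.01, upper=0.08))]
    t = @elapsed sol = solve(prob_template, Tsit5(); p=c, saveat=0.05)
    println("retcode = ", sol.retcode, ",  solve time = ", round(t; digits=3), " s")
end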

Damn of course, I forgot about σ, thank you.

I see, but I initialized the chain using parameters derived from a least-squares estimation (so I’m cheating). These parameters, c2_init, c3_init, c4_init, c5_init, c6_init in the code, produce state trajectories that are about as good as they can get, so they seem like the optimal initialization. I also set the priors as truncated normal distributions centered around these values. Given this setup, I don’t know how to improve the priors further. With these priors and initial values, the following last line:

using SliceSampling
# Run the Bayesian inference model
chain = sample(model, externalsampler(HitAndRun(SliceSteppingOut(0.1; max_stepping_out=10, max_proposals=10000))), 1000; initial_params=[0.03, c2_init, c3_init, c4_init, c5_init, c5_init], progress=true)

along with the code that I attached in my previous response gives me the following error:

Exceeded maximum number of proposal 10000.
Here are possible causes:
- The model might be broken or pathologic.
- There might be a bug in the sampler.

Stacktrace:
  [1] error(::String, ::String, ::String, ::String)
    @ Base .\error.jl:44
  [2] exceeded_max_prop(max_prop::Int64)
    @ SliceSampling C:\Users\FX03NI\.julia\packages\SliceSampling\yzbyK\src\SliceSampling.jl:70
  [3] slice_sampling_univariate(rng::TaskLocalRNG, alg::SliceSteppingOut{Float64}, model::SliceSampling.HitAndRunTarget{LogDensityProblemsADForwardDiffExt.ForwardDiffLogDensity{LogDensityFunction{DynamicPPL.TypedVarInfo{@NamedTuple{σ::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:σ, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c2::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c2, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c2, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c3::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c3, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c3, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c4::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c4, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c4, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c5::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c5, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c5, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c6::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c6, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c6, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}, Float64}, DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.DefaultContext}, ForwardDiff.Chunk{6}, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, ForwardDiff.GradientConfig{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 6, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 6}}}}, Vector{Float64}}, ℓπ::Float64, θ::Float64)
    @ SliceSampling C:\Users\FX03NI\.julia\packages\SliceSampling\yzbyK\src\univariate\univariate.jl:28
  [4] step(rng::TaskLocalRNG, model::AbstractMCMC.LogDensityModel{LogDensityProblemsADForwardDiffExt.ForwardDiffLogDensity{LogDensityFunction{DynamicPPL.TypedVarInfo{@NamedTuple{σ::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:σ, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c2::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c2, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c2, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c3::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c3, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c3, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c4::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c4, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c4, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c5::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c5, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c5, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, c6::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:c6, typeof(identity)}, Int64}, Vector{Truncated{Normal{Float64}, Continuous, Float64, Float64, Float64}}, Vector{AbstractPPL.VarName{:c6, typeof(identity)}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}, Float64}, DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.DefaultContext}, ForwardDiff.Chunk{6}, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, ForwardDiff.GradientConfig{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 6, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 6}}}}}, sampler::HitAndRun{SliceSteppingOut{Float64}}, state::SliceSampling.HitAndRunState{SliceSampling.Transition{Vector{Float64}, Float64, @NamedTuple{}}}; kwargs::@Kwargs{initial_params::Vector{Float64}})
    @ SliceSampling C:\Users\FX03NI\.julia\packages\SliceSampling\yzbyK\src\multivariate\hitandrun.jl:68
  [5] step
    @ C:\Users\FX03NI\.julia\packages\SliceSampling\yzbyK\src\multivariate\hitandrun.jl:52 [inlined]
  [6] #step#108
    @ C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\abstractmcmc.jl:90 [inlined]
  [7] macro expansion
    @ C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\sample.jl:176 [inlined]
  [8] macro expansion
    @ C:\Users\FX03NI\.julia\packages\ProgressLogging\6KXlp\src\ProgressLogging.jl:328 [inlined]
  [9] (::AbstractMCMC.var"#22#23"{Bool, String, Nothing, Int64, Int64, Nothing, @Kwargs{initial_params::Vector{Float64}}, TaskLocalRNG, DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{Turing.Inference.ExternalSampler{HitAndRun{SliceSteppingOut{Float64}}, AutoForwardDiff{nothing, Nothing}, true}}, Int64, Int64})()
    @ AbstractMCMC C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\logging.jl:12
 [10] with_logstate(f::Function, logstate::Any)
    @ Base.CoreLogging .\logging.jl:515
 [11] with_logger
    @ .\logging.jl:627 [inlined]
 [12] with_progresslogger(f::Function, _module::Module, logger::Logging.ConsoleLogger)
    @ AbstractMCMC C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\logging.jl:36
 [13] macro expansion
    @ C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\logging.jl:11 [inlined]
 [14] mcmcsample(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{Turing.Inference.ExternalSampler{HitAndRun{SliceSteppingOut{Float64}}, AutoForwardDiff{nothing, Nothing}, true}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, initial_state::Nothing, kwargs::@Kwargs{initial_params::Vector{Float64}})
    @ AbstractMCMC C:\Users\FX03NI\.julia\packages\AbstractMCMC\YrmkI\src\sample.jl:120
 [15] sample(rng::TaskLocalRNG, model::DynamicPPL.Model{typeof(fit_wt), (:data, :prob_template), (), (), Tuple{Adjoint{Float64, Matrix{Float64}}, ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, Vector{Float64}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(wind_turbine), UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{Turing.Inference.ExternalSampler{HitAndRun{SliceSteppingOut{Float64}}, AutoForwardDiff{nothing, Nothing}, true}}, N::Int64; chain_type::Type, resume_from::Nothing, initial_state::Nothing, kwargs::@Kwargs{initial_params::Vector{Float64}, progress::Bool})
    @ DynamicPPL C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\sampler.jl:93
 [16] sample
    @ C:\Users\FX03NI\.julia\packages\DynamicPPL\E4kDs\src\sampler.jl:83 [inlined]
 [17] #sample#4
    @ C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\Inference.jl:263 [inlined]
 [18] sample
    @ C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\Inference.jl:256 [inlined]
 [19] #sample#3
    @ C:\Users\FX03NI\.julia\packages\Turing\cH3wV\src\mcmc\Inference.jl:253 [inlined]
 [20] top-level scope
    @ In[15]:2

So I couldn’t get the results and performance you showed in your response :sweat_smile:

Okay, first of all, for the error in SliceSampling, you can increase max_proposals. That parameter just sets an upper bound on the number of rejections allowed; this is necessary because, on challenging models like the one here, slice sampling can get stuck. The fact that you are getting more than 10000 rejections reflects that something is wrong with the model.
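For example (the larger budget of 1_000_000 below is an arbitrary illustrative value, not a recommendation from this thread):

using SliceSampling
# Same slice sampler as above, but with a much larger rejection budget; pass it to sample as before.
sampler = externalsampler(HitAndRun(SliceSteppingOut(0.1; max_stepping_out=10, max_proposals=1_000_000)))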

Could you perhaps try other integrators and see if they work better?
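As a minimal sketch of what that could look like inside fit_wt, one option is the stiffness-switching AutoTsit5(Rosenbrock23()) combination already hinted at in the commented-out line of the code above; the retcode guard with Turing.@addlogprob! is a common extra safeguard against parameter draws where the solve fails, and is an addition for illustration rather than something from this thread:

    # Inside the @model: solve with an integrator that switches to a stiff method when needed,
    # and reject draws whose solve does not succeed instead of scoring a truncated trajectory.
    predicted = solve(prob_template, AutoTsit5(Rosenbrock23()); p=p, saveat=0.05, save_idxs=[1, 2, 3])
    if !SciMLBase.successful_retcode(predicted)
        Turing.@addlogprob! -Inf
        return nothing
    end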