2D stationary diffusion problem & (?) MethodOfLines

I’d like to model a mass exchanger which can be represented as diffusion in a fluid flow with a given stationary flow field and a given coordinate-dependent diffusion coefficient. 2D, rectangular domain, rectangular grid, steady state, all linear. Dirichlet BCs on part of the borders (inlets), Neumann otherwise.
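In equation form, the steady state I am after satisfies an advection-diffusion balance

$$\nabla \cdot \big( D(x, y)\, \nabla u \big) - \mathbf{v}(x, y) \cdot \nabla u = 0,$$

with $u$ prescribed at the inlets (Dirichlet) and zero normal flux on the remaining borders (Neumann).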

MethodOfLines.jl appeared to be what I need; however, the first attempt was unsuccessful: the example from the manual doesn’t work. I’ve reformulated the same toy problem as a SteadyStateProblem, which appears to work, and now I have several questions.

Code
using DifferentialEquations, ModelingToolkit, MethodOfLines, DomainSets, Plots

@parameters t x y
@variables u(..)
Dt = Differential(t)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

eq  = Dt(u(t, x, y)) ~ Dxx(u(t, x, y)) + Dyy(u(t, x, y))

bcs = [u(0, x, y) ~ x * y,
    u(t, 0, y) ~ x * y,
    u(t, 1, y) ~ x * y,
    u(t, x, 0) ~ x * y,
    u(t, x, 1) ~ x * y]

domains = [t ∈ Interval(0.0, 1.0),
    x ∈ Interval(0.0, 1.0),
    y ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem(eq, bcs, domains, [t, x, y], [u(t, x, y)])

g = 20 
dx = dy = 1/g

order = 2
discretization = MOLFiniteDifference([x => dx, y => dy], t; approx_order = order)

t0 = @elapsed prob = discretize(pdesys, discretization)

steadystateprob = SteadyStateProblem(prob)

t1 = @elapsed sol = solve(steadystateprob, DynamicSS(Tsit5()))
t2 = @elapsed sol = solve(steadystateprob, DynamicSS(Tsit5()))

s = reshape(copy(sol.u), g-1, g-1)
heatmap(s)

The solution is returned as a 1D array. It is not a problem to reshape it, but why?

The time to compute increases quite steeply with grid density. Most of the time is spent in the discretize step, and solve appears to spend most of its time compiling, not computing. For the 100x100 grid the timings (in seconds) are:

t0 = 925  # discretize
t1 = 181 # 1st solve
t2 = 0.358 # 2nd solve

The total time is about 6 min for 75x75 and 18.5 min for 100x100: it looks like the time is proportional to the 4th power of the grid side length, i.e. the 2nd power of the number of nodes (18.5/6 ≈ 3.1, while (100/75)^4 ≈ 3.2).

My system is a bit more complex than the toy example above - should I expect much longer compute time?

I’d like to test the model with different parameters, e.g. different diffusion coefficients. Would it be possible to reuse the discretization in some situations? My feeling is that a grid like 100x100 or 50x200 should be (just) enough for realistic modelling in my case, but I have little experience; it might do with less.

One alternative would be to write out the finite differences by hand (a bit tedious). Still another possibility is some finite element package (a lot to learn). Any more options? My purpose is to solve a specific technical problem while spending minimum time.

Just index the solution via the symbolic variable, i.e. sol[u], to get the shaped version. The reason is that if you have multiple dependent variables, this symbolic form can generate each one and knows its interpretation.

This is something being worked on by JuliaSimCompiler and symbolic array tooling.

Just do remake(prob, p = [myparameter => 1.0]) etc.
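For instance, a minimal sketch of the reuse pattern (myparameter here is just a placeholder name):

```julia
# Rebuild the discretized problem with a new parameter value and re-solve;
# the expensive discretize step is not repeated.
prob2 = remake(prob, p = [myparameter => 2.0])
sol2 = solve(SteadyStateProblem(prob2), DynamicSS(Tsit5()))
```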

Right now manual ODE definition code for PDEs would give you faster compile time, though we’re working to remove that gap.
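For a sense of what the manual route looks like, here is a minimal sketch for the plain 2D Laplacian on the interior grid (homogeneous Dirichlet boundaries assumed, so not your exact toy problem):

```julia
using OrdinaryDiffEq

# Hand-written method-of-lines RHS: 5-point Laplacian on the (g-1)×(g-1)
# interior grid; values outside the domain are taken as 0 (Dirichlet).
function laplacian!(du, u, p, t)
    h2 = p # grid spacing squared
    n, m = size(u)
    for j in 1:m, i in 1:n
        uw = i > 1 ? u[i-1, j] : 0.0
        ue = i < n ? u[i+1, j] : 0.0
        us = j > 1 ? u[i, j-1] : 0.0
        un = j < m ? u[i, j+1] : 0.0
        du[i, j] = (uw + ue + us + un - 4u[i, j]) / h2
    end
end

g = 100
h = 1 / g
u0 = rand(g - 1, g - 1)
prob = ODEProblem(laplacian!, u0, (0.0, 1.0), h^2)
sol = solve(prob, ROCK2()) # a stabilized explicit method suited to diffusion
```

No symbolic tracing happens here, so the setup cost stays essentially zero even for large grids.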

Finite elements do not make that much sense if you have a square domain, though.


Chris, first thanks a lot for your support!

Now, the following issues:

@parameters t x y
@variables u(..)
###
julia> sol[u]
ERROR: ArgumentError: invalid index: u⋆ of type Symbolics.CallWithMetadata{SymbolicUtils.FnType{Tuple, Real}, Base.ImmutableDict{DataType, Any}}

This one is actually just a minor inconvenience in my case.

I first tried to run the Tutorial example, which errored - I’ve opened an issue.

Then I tried to add parameters to my code, and got the same error.

My code

using DifferentialEquations, ModelingToolkit, MethodOfLines, DomainSets, Plots

@parameters t x y
@parameters Px, Py
@variables u(..)
Dt = Differential(t)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

eq = Dt(u(t, x, y)) ~ Dxx(u(t, x, y)) * (1 - Px + Px * y^2) + Dyy(u(t, x, y)) * (1 - Py + Py * x^2)

bcs = [u(0, x, y) ~ 1,
    u(t, 0, y) ~ 0,
    u(t, 1, y) ~ 0,
    u(t, x, 0) ~ x, # y = 0
    u(t, x, 1) ~ 1 - x, # y = 1
]

domains = [t ∈ Interval(0.0, 1.0),
    x ∈ Interval(0.0, 1.0),
    y ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem(eq, bcs, domains, [t, x, y], [u(t, x, y)], [Px=>0, Py=>0])

I hope it was some change in the syntax, not yet reflected in the manual, and not an actual bug.

sol[u(t,x,y)]

Something happened with parameters in the recent change. I need to handle that.


For parameters, the Tutorial example is working with MethodOfLines@0.11.0 and erroring with MethodOfLines@0.11.1. I’ve commented in the issue.

So now I’ve downgraded to 0.11.0, and it works. I hope now I can start to work on the actual model :slight_smile:

julia> sol[u(t,x,y)]
ERROR: ArgumentError: u(t, x, y) is neither an observed nor an unknown variable.

The next problem: Interfaces (“You may want to connect regions with differing dynamics”)

I tried to run the Tutorial example. As the example contains only a problem definition, I added the discretization and solving steps.

code
using DifferentialEquations, ModelingToolkit, MethodOfLines, DomainSets

@parameters t x1 x2
@variables c1(..)
@variables c2(..)
Dt = Differential(t)

Dx1 = Differential(x1)
Dxx1 = Dx1^2

Dx2 = Differential(x2)
Dxx2 = Dx2^2

D1(c) = 1 + c / 10
D2(c) = 1 / 10 + c / 10

eqs = [Dt(c1(t, x1)) ~ Dx1(D1(c1(t, x1)) * Dx1(c1(t, x1))),
    Dt(c2(t, x2)) ~ Dx2(D2(c2(t, x2)) * Dx2(c2(t, x2)))]

bcs = [c1(0, x1) ~ 1 + cospi(2 * x1),
    c2(0, x2) ~ 1 + cospi(2 * x2),
    Dx1(c1(t, 0)) ~ 0,
    c1(t, 0.5) ~ c2(t, 0.5), # Relevant interface boundary condition
    -D1(c1(t, 0.5)) * Dx1(c1(t, 0.5)) ~ -D2(c2(t, 0.5)) * Dx2(c2(t, 0.5)), # Higher order interface condition
    Dx2(c2(t, 1)) ~ 0]

domains = [t ∈ Interval(0.0, 0.15),
    x1 ∈ Interval(0.0, 0.5),
    x2 ∈ Interval(0.5, 1.0)]

@named pdesys = PDESystem(eqs, bcs, domains,
    [t, x1, x2], [c1(t, x1), c2(t, x2)])

# from this point my code
order = 2
discretization = MOLFiniteDifference([x1 => 0.05, x2 => 0.05], t; approx_order = order)

prob = MethodOfLines.discretize(pdesys, discretization);
sol = solve(prob, NewtonRaphson(), saveat = 0.1);

Getting a warning

prob = MethodOfLines.discretize(pdesys, discretization);
 Warning: The system contains interface boundaries, which are not compatible with system transformation. The system will not be transformed. Please post an issue if you need this feature.

and then an error on calling solve.

ERROR: MethodError: anyeltypedual(::ArrayPartition{Union{}, Tuple{}}) is ambiguous.

MethodOfLines@0.11.0
Anything wrong with my code, or with the Tutorial example?

We need to update some tutorials. See the discussion in MethodOfLines complains about my model... · Issue #249 · SciML/MethodOfLines.jl · GitHub

Now updated to MethodOfLines@0.11.3 (i.e. yesterday’s release) and adapted my code according to the updated manual - it works now :smiling_face:

Also got the Tutorial example with an interface working (I just had to change the algorithm in my code to Rosenbrock23).

Now I am trying to adapt that interface code to my 2D problem. The code below describes a simplified case of a heat/mass exchanger with counterflow and a semi-permeable wall in the middle.
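The membrane is modelled by Robin-type conditions on the two sides of the interface at y = 0.5 (my own formulation, matching the bcs in the code below):

$$\partial_{y_1} c_1 + c_1 = c_2, \qquad \partial_{y_2} c_2 + c_2 = c_1 \qquad \text{at } y_1 = y_2 = 0.5.$$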

Code counterflow exchanger
using DifferentialEquations, ModelingToolkit, MethodOfLines, DomainSets 

@parameters t x y1 y2
@variables c1(..)
@variables c2(..)

Dt = Differential(t)

Dx = Differential(x)
Dxx = Dx^2

Dy1 = Differential(y1)
Dyy1 = Dy1^2

Dy2 = Differential(y2)
Dyy2 = Dy2^2

li_vel(y) = 2 * (0.5 - y)

eq1  = Dt(c1(t, x, y1)) ~ Dxx(c1(t, x, y1)) + Dyy1(c1(t, x, y1)) - Dx(c1(t, x, y1))*li_vel(y1)
eq2  = Dt(c2(t, x, y2)) ~ Dxx(c2(t, x, y2)) + Dyy2(c2(t, x, y2)) - Dx(c2(t, x, y2))*li_vel(y2)

eqs = [eq1, eq2]

domains = [t ∈ Interval(0.0, 0.15),
    x ∈ Interval(0.0, 1.0),
    y1 ∈ Interval(0.0, 0.5),
    y2 ∈ Interval(0.5, 1.0)]

bcs = [
    c1(0, x, y1) ~ 0.5, # init
    c2(0, x, y2) ~ 0.5,
    c1(t, 0, y1) ~ 0, # flow in
    c2(t, 1, y2) ~ 1,
    Dx(c1(t, 1, y1)) ~ 0, # flow out
    Dx(c2(t, 0, y1)) ~ 0, 
    Dy1(c1(t, x, 0)) ~ 0, # bottom & top walls
    Dy2(c2(t, x, 1)) ~ 0,
    Dy1(c1(t, x, 0.5)) + c1(t, x, 0.5) ~ c2(t, x, 0.5), # membrane
    Dy2(c2(t, x, 0.5)) + c2(t, x, 0.5) ~ c1(t, x, 0.5), 
]

domains = [t ∈ Interval(0.0, 0.15),
    y1 ∈ Interval(0.0, 0.5),
    y2 ∈ Interval(0.5, 1.0)]

@named pdesys = PDESystem(eqs, bcs, domains,
    [t, x, y1, y2], [c1(t, x, y1), c2(t, x, y2)])

discretization = MOLFiniteDifference([x => 0.1, y1 => 0.1, y2 => 0.1], t)

prob = MethodOfLines.discretize(pdesys, discretization) 
# error on previous line
alg = Rosenbrock23() 
sol = solve(prob, alg, saveat = 0.1);

Unfortunately I get an error on MethodOfLines.discretize(pdesys, discretization):

ERROR: ArgumentError: invalid index: nothing of type Nothing
Stack trace
ERROR: ArgumentError: invalid index: nothing of type Nothing
Stacktrace:
  [1] to_index(i::Nothing)
    @ Base ./indices.jl:300
  [2] to_index(A::Vector{Symbolics.VarDomainPairing}, i::Nothing)
    @ Base ./indices.jl:277
  [3] _to_indices1(A::Vector{Symbolics.VarDomainPairing}, inds::Tuple{Base.OneTo{Int64}}, I1::Nothing)
    @ Base ./indices.jl:359
  [4] to_indices
    @ ./indices.jl:354 [inlined]
  [5] to_indices
    @ ./indices.jl:345 [inlined]
  [6] getindex
    @ ./abstractarray.jl:1291 [inlined]
  [7] (::PDEBase.var"#54#62"{Vector{Symbolics.VarDomainPairing}})(x::SymbolicUtils.BasicSymbolic{Real})
    @ PDEBase ~/.julia/packages/PDEBase/o7LGc/src/variable_map.jl:34
  [8] iterate
    @ ./generator.jl:47 [inlined]
  [9] collect_to!(dest::Vector{Pair{…}}, itr::Base.Generator{Vector{…}, PDEBase.var"#54#62"{…}}, offs::Int64, st::Int64)
    @ Base ./array.jl:892
 [10] collect_to_with_first!(dest::Vector{…}, v1::Pair{…}, itr::Base.Generator{…}, st::Int64)
    @ Base ./array.jl:870
 [11] _collect(c::Vector{…}, itr::Base.Generator{…}, ::Base.EltypeUnknown, isz::Base.HasShape{…})
    @ Base ./array.jl:864
 [12] collect_similar(cont::Vector{Any}, itr::Base.Generator{Vector{Any}, PDEBase.var"#54#62"{Vector{…}}})
    @ Base ./array.jl:763
 [13] map(f::Function, A::Vector{Any})
    @ Base ./abstractarray.jl:3285
 [14] PDEBase.VariableMap(pdesys::PDESystem, disc::MOLFiniteDifference{…}; replaced_vars::Dict{…})
    @ PDEBase ~/.julia/packages/PDEBase/o7LGc/src/variable_map.jl:33
 [15] symbolic_discretize(pdesys::PDESystem, discretization::MOLFiniteDifference{…})
    @ PDEBase ~/.julia/packages/PDEBase/o7LGc/src/symbolic_discretize.jl:17
 [16] discretize(pdesys::PDESystem, discretization::MOLFiniteDifference{…}; analytic::Nothing, kwargs::@Kwargs{})
    @ PDEBase ~/.julia/packages/PDEBase/o7LGc/src/discretization_state.jl:57
 [17] discretize(pdesys::PDESystem, discretization::MOLFiniteDifference{…})
    @ PDEBase ~/.julia/packages/PDEBase/o7LGc/src/discretization_state.jl:54
 [18] top-level scope
    @ ~/Julia/MembH2Otransport.jl/src/PDE/m20_2D_middle-interf_4discourse.jl:52
Some type information was truncated. Use `show(err)` to see complete types.

Any ideas?
The manual says

Note that if you want to use a higher order interface condition, this may not work if you have no simple condition of the form c1(t, 0.5) ~ c2(t, 0.5)

I do not have that kind of condition. However, if I add a simple condition,

bcs with a simple condition
bcs = [
    c1(0, x, y1) ~ 0.5, # init
    c2(0, x, y2) ~ 0.5,
    c1(t, 0, y1) ~ 0, # flow in
    c2(t, 1, y2) ~ 1,
    Dx(c1(t, 1, y1)) ~ 0, # flow out
    Dx(c2(t, 0, y1)) ~ 0, 
    Dy1(c1(t, x, 0)) ~ 0, # bottom & top walls
    Dy2(c2(t, x, 1)) ~ 0,
    Dy1(c1(t, x, 0.5)) + c1(t, x, 0.5) ~ c2(t, x, 0.5), # membrane
    # Dy2(c2(t, x, 0.5)) + c2(t, x, 0.5) ~ c1(t, x, 0.5), 
    c2(t, x, 0.5) ~ c1(t, x, 0.5), # simple condition
]

I still get the same error, so this is probably not the cause of the error, but rather the next problem.

Okay that’s an odd error :sweat_smile:. Open an issue on that.

Issue opened


@ChrisRackauckas

Dear all,

I am new to the community and use this entry point due to very harsh restrictions on newcomers. I see a huge benefit in using DiffEqGPU for my applications (luxdem.uni.lu), in particular to use the power of GPUs of different architectures. My application comes from thermodynamics and requires solving the one-dimensional, transient heat equation (a diffusion-type differential equation). I have already seen MoL; however, I am not able to use it due to conservation issues and working in different frames of reference. However, that is not the issue here. Thus, my ODE system comes from the discretisation of the one-dimensional heat equation. My application requires solving approx. n = 1e6 small and similar ODE systems coming from individual particles. Each small ODE system represents a particle and consists of approx. 10-100 unknowns with individual boundary conditions. I am not clear whether I should use
EnsembleGPUKernel for one big ODE system or EnsembleGPUArray to handle each small ODE system. From my understanding, EnsembleGPUKernel should be the method to apply, because

  • the function f evaluating the system (i.e. computing du) would be a loop over all the small ODE systems, whose transport properties depend on u (see the sketch after this list)

  • the boundary conditions for each small ODE system come from the parameters, defined in advance before solving

  • it is able to account for interactions between the small ODE systems
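To make that concrete, here is a minimal, purely hypothetical sketch of one such small per-particle system (1D heat equation, central differences on N nodes; the parameter names and boundary treatment are assumptions, not my real model):

```julia
using StaticArrays

# Hypothetical per-particle RHS: p = (α/h², T_left, T_right); the fixed end
# temperatures play the role of the individual boundary conditions.
function particle_heat(u::SVector{N, T}, p, t) where {N, T}
    a, Tl, Tr = p
    du = MVector{N, T}(undef)
    for i in 1:N
        uw = i > 1 ? u[i - 1] : Tl
        ue = i < N ? u[i + 1] : Tr
        du[i] = a * (uw - 2u[i] + ue)
    end
    return SVector{N, T}(du)
end
```

Each particle would then receive its own parameter tuple via prob_func, as in the ensemble example below.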

That would then require running EnsembleGPUKernel with one trajectory. I tried this with the following example taken from the homepage, with the trajectory number modified:

using DiffEqGPU, OrdinaryDiffEq, StaticArrays, CUDA

function lorenz(u, p, t)
    σ = p[1]
    ρ = p[2]
    β = p[3]
    du1 = σ * (u[2] - u[1])
    du2 = u[1] * (ρ - u[3]) - u[2]
    du3 = u[1] * u[2] - β * u[3]
    return SVector{3}(du1, du2, du3)
end

u0 = @SVector [1.0f0; 0.0f0; 0.0f0]
tspan = (0.0f0, 10.0f0)
p = @SVector [10.0f0, 28.0f0, 8 / 3.0f0]
prob = ODEProblem{false}(lorenz, u0, tspan, p)
prob_func = (prob, i, repeat) -> remake(prob, p = (@SVector rand(Float32, 3)) .* p)
monteprob = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)
sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend()), trajectories = 1)

gives me (very surprisingly):

julia> include("runODETestsFloat32GPUEnsembleGPUKernel.jl")
ERROR: LoadError: MethodError: Cannot convert an object of type GPUTsit5 to an object of type Vector{SVector{3, Float32}}
The function convert exists, but no method is defined for this combination of argument types.

Closest candidates are:
  convert(::Type{Array{T, N}}, ::SizedArray{S, T, N, N, Array{T, N}}) where {S, T, N}
    @ StaticArrays ~/.julia/packages/StaticArrays/DsPgf/src/SizedArray.jl:88
  convert(::Type{Array{T, N}}, ::SizedArray{S, T, N, M, TData} where {M, TData<:AbstractArray{T, M}}) where {T, S, N}
    @ StaticArrays ~/.julia/packages/StaticArrays/DsPgf/src/SizedArray.jl:82
  convert(::Type{Array{S, N}}, ::PooledArrays.PooledArray{T, R, N}) where {S, T, R, N}
    @ PooledArrays ~/.julia/packages/PooledArrays/Vy2X0/src/PooledArrays.jl:499

Stacktrace:
  [1] __init(prob::ODEProblem{…}, alg::CompositeAlgorithm{…}, timeseries_init::GPUTsit5, ts_init::Tuple{}, ks_init::Tuple{}; saveat::Tuple{}, tstops::Tuple{}, d_discontinuities::Tuple{}, save_idxs::Nothing, …, kwargs::@Kwargs{…})
    @ OrdinaryDiffEqCore ~/.julia/packages/OrdinaryDiffEqCore/GMkz9/src/solve.jl:334
  [2] __solve(prob::ODEProblem{…}, alg::CompositeAlgorithm{…}, args::GPUTsit5; kwargs::@Kwargs{…})
    @ OrdinaryDiffEqCore ~/.julia/packages/OrdinaryDiffEqCore/GMkz9/src/solve.jl:6
  [3] __solve(prob::ODEProblem{…}, ::Nothing, args::GPUTsit5; kwargs::@Kwargs{…})
    @ OrdinaryDiffEqDefault ~/.julia/packages/OrdinaryDiffEqDefault/MdlB6/src/default_alg.jl:48
  [4] __solve(prob::ODEProblem{…}, args::GPUTsit5; default_set::Bool, second_time::Bool, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/6sydR/src/solve.jl:758
  [5] __solve(prob::ODEProblem{…}, args::GPUTsit5)
    @ DiffEqBase ~/.julia/packages/DiffEqBase/6sydR/src/solve.jl:749
  [6] solve_call(_prob::ODEProblem{…}, args::GPUTsit5; merge_callbacks::Bool, kwargshandle::Nothing, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/6sydR/src/solve.jl:142
  [7] solve_call(_prob::ODEProblem{…}, args::GPUTsit5)
    @ DiffEqBase ~/.julia/packages/DiffEqBase/6sydR/src/solve.jl:109
  [8] solve_up(prob::ODEProblem{…}, sensealg::Nothing, u0::SVector{…}, p::SVector{…}, args::GPUTsit5; originator::SciMLBase.ChainRulesOriginator, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/6sydR/src/solve.jl:578
  [9] solve(prob::ODEProblem{…}, args::GPUTsit5; sensealg::Nothing, u0::Nothing, p::Nothing, wrap::Val{…}, kwargs::@Kwargs{})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/6sydR/src/solve.jl:545
 [10] solve(prob::ODEProblem{…}, args::GPUTsit5)
    @ DiffEqBase ~/.julia/packages/DiffEqBase/6sydR/src/solve.jl:535
 [11] batch_func(i::Int64, prob::EnsembleProblem{…}, alg::GPUTsit5; kwargs::@Kwargs{})
    @ SciMLBase ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:235
 [12] batch_func
    @ ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:222 [inlined]
 [13] #811
    @ ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:294 [inlined]
 [14] responsible_map
    @ ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:287 [inlined]
 [15] solve_batch(prob::EnsembleProblem{…}, alg::GPUTsit5, ::EnsembleSerial, II::UnitRange{…}, pmap_batch_size::Int64; kwargs::@Kwargs{})
    @ SciMLBase ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:293
 [16] solve_batch
    @ ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:292 [inlined]
 [17] macro expansion
    @ ./timing.jl:461 [inlined]
 [18] (::SciMLBase.var"#800#801"{Int64, Int64, Int64, @Kwargs{}, EnsembleProblem{…}, GPUTsit5, EnsembleSerial})()
    @ SciMLBase ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:188
 [19] with_logstate(f::SciMLBase.var"#800#801"{…}, logstate::Base.CoreLogging.LogState)
    @ Base.CoreLogging ./logging/logging.jl:540
 [20] with_logger
    @ ./logging/logging.jl:651 [inlined]
 [21] __solve(prob::EnsembleProblem{…}, alg::GPUTsit5, ensemblealg::EnsembleSerial; trajectories::Int64, batch_size::Int64, progress_aggregate::Bool, pmap_batch_size::Int64, kwargs::@Kwargs{})
    @ SciMLBase ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:172
 [22] __solve
    @ ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:164 [inlined]
 [23] __solve(ensembleprob::EnsembleProblem{…}, alg::GPUTsit5, ensemblealg::EnsembleGPUKernel{…}; trajectories::Int64, batch_size::Int64, unstable_check::Function, adaptive::Bool, kwargs::@Kwargs{})
    @ DiffEqGPU ~/.julia/packages/DiffEqGPU/JsBHq/src/solve.jl:10
 [24] __solve
    @ ~/.julia/packages/DiffEqGPU/JsBHq/src/solve.jl:1 [inlined]
 [25] #solve#825
    @ ~/.julia/packages/SciMLBase/hHrRi/src/ensemble/basic_ensemble_solve.jl:359 [inlined]
 [26] top-level scope
    @ ~/xdemExaScale/xdem/odeSolver/test/runODETestsFloat32GPUEnsembleGPUKernel.jl:19
 [27] include(mapexpr::Function, mod::Module, _path::String)
    @ Base ./Base.jl:307
 [28] top-level scope
    @ REPL[3]:1
in expression starting at /home/bernhard/xdemExaScale/xdem/odeSolver/test/runODETestsFloat32GPUEnsembleGPUKernel.jl:19
Some type information was truncated. Use `show(err)` to see complete types.

I hope I have made myself clear, and I very much appreciate your advice.

Best

Bernhard

Hi @bpeters, welcome to Julia discourse. I think this guide can help you format your posts better: Please read: make it easier to help you.

Thanks, will do.