Suggestions to wrap the inputs for a Diff Equation solver (types and structures)

Hello, I’m currently developing a package implementing a numerical solver for a certain kind of stochastic PDE (stochastic neural field equations) in 1- and 2-dimensional space, in a DifferentialEquations.jl fashion, i.e. with the user interface:

prob = probSNFE(inputs)
sol  = solveSNFE(prob)

And I’m having a hard time properly wrapping my inputs in order to maintain a fairly simple user interface like the one I’ve mentioned.
Let me be more specific: the equation has the form

$$\alpha\, dV(x,t) = \Big[I(x,t) - V(x,t) + \int_{\Omega} K(\|x-y\|)\, S\big(V(y,\, t - \|x-y\|/v)\big)\, dy\Big]\, dt + \epsilon\, dW(x,t),$$

where W(x,t) denotes spatially correlated noise (with correlation parameter ξ) and the functions I, K and S are given as inputs.

So, we can say that there are 3 types of inputs:

  1. The equation’s parameters:
    • α - Temporal decay
    • v - Propagation velocity
    • V0 - Initial condition
    • ϵ - Noise level
    • ξ - Correlation parameter
    • np - Number of trajectories
  2. The functions:
    • I, K and S
  3. The domain’s parameters:
    • L - Field’s length
    • N - Number of nodes to discretise space
    • T - Time span
    • n - Number of nodes to discretise time

My first approach was to wrap the equation’s and domain’s parameters in two tuples and define the three functions, in order to have the following:

using SNFE
prob = probSNFE(tupEqparam, tupDomain, I, K, S)
sol  = solveSNFE(prob,saveat)

This approach is a bit messy when it comes to declaring the parameters in the right order in the tuples, the probSNFE signature is a bit too long, and, most importantly, I don’t see any way to use multiple dispatch to distinguish the 1D from the 2D case.
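For concreteness, the tuples would look something like this (the names are hypothetical):

tupEqparam = (α, v, V0, ϵ, ξ, np) # order matters and nothing checks it
tupDomain  = (L, N, T, n)

prob = probSNFE(tupEqparam, tupDomain, I, K, S) # a wrong ordering compiles silently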

To overcome some of these issues, I’ve decided to export two structures (for 1D and 2D) and use them as inputs to probSNFE:

abstract type Problem end
abstract type OneDim <: Problem end
abstract type TwoDim <: Problem end
struct Input1D <: OneDim
    α::Float64  # temporal decay
    v::Float64  # propagation velocity
    # ... remaining equation and domain parameters
    I::Function # external input
    # ... K and S
end
struct Input2D <: TwoDim
    #...
end

function probSNFE(in1d::Input1D)
    # stuff
end

function probSNFE(in2d::Input2D)
    # stuff
end

So, the user has to do something like this:

using SNFE # load the package
I(x,t)=...
K(x)=...
S(V)=...
in1d = Input1D(α,v,thresh,V0,L,N,T,n,I,K,S)
prob = probSNFE(in1d)
sol  = solveSNFE(prob,saveat,[ϵ,ξ,np])

The only con I see here is that the user has to know the order of the inputs to create the structure in1d.
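For example (a hypothetical misuse), swapping two fields of the same type is accepted silently:

in1d_wrong = Input1D(v, α, thresh, V0, L, N, T, n, I, K, S) # compiles, but α and v are swapped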

Am I approaching this in a good (“Julian”) way? Is it bad for functions to be fields of a structure? Any suggestions to improve?

Thank you,
Tiago

Have you tried using Parameters.jl?

using Parameters
@with_kw struct A
   a :: Int64 = 1
   b :: Float64 = 3.0
end

a = A(a = 3) # sets a with default b
a = A(a = 3, b = 5.) # sets both
a = A(b = 1.0) # sets b with default a
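If you want to avoid the extra dependency, Base.@kwdef gives a similar keyword constructor (a minimal sketch; B is a throwaway name):

Base.@kwdef struct B
    a::Int64 = 1
    b::Float64 = 3.0
end

B(a = 3) # keyword construction, like @with_kw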

Yes, I thought about using Parameters.jl; the problem is that I have functions to pass as inputs too.
I didn’t try it myself, but the official documentation doesn’t contemplate that case; it only deals with plain parameters: floats, strings, etc.

So, I discarded the possibility of using Parameters.jl :confused:

It seems to work (actually, I am using it for exactly that right now):

julia> using Parameters

julia> @with_kw struct A
          x::Float64
          f::Function
       end
A

julia> a = A(x = 1.0,f = isequal)
A
  x: Float64 1.0
  f: isequal (function of type typeof(isequal))


julia> a.f(1,2)
false

julia> a.f(1,1)
true

Edit:

One thing to consider, however, is the possibility of organizing the parameters in a struct but passing the functions as closures. This is more typical in Julia:

struct Data
  a :: Float64
end
struct SolverParameters
  nsteps :: Int64
  tol :: Float64
end

f(x,data) = data.a*x^2

data = Data(1.0)
solver_parameters = SolverParameters(100,1e-4)
x0 = 10.

# `solver` stands for any generic routine; it only ever sees x -> f(x,data)
solver(x -> f(x,data), x0, solver_parameters)

That way the solver becomes completely agnostic to the data required to compute the functions.


Hi,

In addition, you should not do

struct A
    f::Function
end

but rather

struct A{Tf}
    f::Tf
end
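The reason is that Function is an abstract type, so a field annotated with it cannot be specialized on; with the type parameter, the concrete function type becomes part of the struct’s type. A minimal sketch (SlowA and FastA are throwaway names):

struct SlowA
    f::Function # abstract field: calls through a.f use dynamic dispatch
end

struct FastA{Tf}
    f::Tf # concrete parameter: the compiler specializes on f
end

slow = SlowA(sin) # typeof(slow) == SlowA, the function type is hidden
fast = FastA(sin) # typeof(fast) == FastA{typeof(sin)}, fully concrete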

I am sure that you know the following paper, but just in case…

It allows one to compute the delayed term with an FFT.
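The key point is that the spatial convolution becomes a pointwise product in Fourier space. A minimal 1D sketch of the non-delayed term (the kernel samples and the firing rate are placeholders):

using FFTW

N = 256
V = rand(N)     # field values on a periodic grid
Kv = rand(N)    # kernel sampled on the same grid
S(v) = tanh.(v) # placeholder firing-rate function

# circular convolution K * S(V) via the FFT: O(N log N) instead of O(N^2)
conv = real(ifft(fft(Kv) .* fft(S(V))))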

If you use implicit methods, you will need to call GMRES, for which DE is not completely ready yet (I opened an issue but can’t find it :frowning:).


Oh, that’s cool, Parameters.jl works with functions! I didn’t know that; what a nice surprise.

One thing to consider, however, is the possibility of organizing the parameters in a struct, but the functions as closures.

What you suggested here is a good option, thanks! But when you write solver(x -> f(x,data),...), won’t this make my code slower? When talking about performance tips, people always say to be careful with anonymous functions, but I don’t know if that applies when passing one as input to a function.
In my case I would have 3 functions to pass, so the signature would be something like this:

struct Input1D <: OneDim
    α::Float64
    v::Float64
    # ...
end
input1d = Input1D(α,v,thresh,V0,L,N,T,n)
prob1d = probSNFE(x -> I(x,data),x -> K(x,data),x -> S(x,data),input1d)
sol  = solveSNFE(prob1d)

The function signature will be a bit big, but I think it’s acceptable. In practical terms, x will be a vector in the 1D case, but in 2D two vectors must be used:

probSNFE((x,y) -> I(x,y,data),(x,y) -> K(x,y,data),x -> S(x,data),input2d)

I’m not used to this syntax, so it’s probably not well written, but the key idea is there.
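To check the pattern in isolation, here is a self-contained sketch of the 2D closures (the function bodies are placeholders):

struct Data2D
    a::Float64
end

# user-level functions take the data explicitly
I(x, y, d) = d.a * exp(-(x^2 + y^2))
K(x, y, d) = d.a * exp(-sqrt(x^2 + y^2))
S(V, d)    = 1 / (1 + exp(-d.a * V))

data = Data2D(2.0)

# the solver only ever sees closures with the expected arities
Ic = (x, y) -> I(x, y, data)
Kc = (x, y) -> K(x, y, data)
Sc = V -> S(V, data)

Ic(0.0, 0.0) # the closures capture `data`; probSNFE never needs to know about it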

Hello! Thanks for the tip about declaring a function in a structure. Does the compiler know in advance that the field f is a function?

It’s funny that you’ve mentioned this paper, because this is the algorithm (with some tweaks) that I’ve implemented in this solver! And it’s by far the fastest algorithm when dealing with delayed NFEs.

Not because of that. The user just has to be careful not to use non-constant global variables for the data, but that is all:

julia> function solver(f,x0)
         s = 0.
         for i in 1:length(x0)
           s += f(x0[i])
         end
         s
       end
solver (generic function with 1 method)

julia> f(x,a) = a*sin(x)
f (generic function with 1 method)

julia> a = 2. # bad, non-constant global
2.0

# Note the type-instabilities (Any)
julia> @code_warntype solver(x->f(x,a),x0)
Variables
  #self#::Core.Compiler.Const(solver, false)
  f::Core.Compiler.Const(var"#7#8"(), false)
  x0::Array{Float64,1}
  s::Any
  @_5::Union{Nothing, Tuple{Int64,Int64}}
  i::Int64

Body::Any
1 ─       (s = 0.0)
│   %2  = Main.length(x0)::Int64
│   %3  = (1:%2)::Core.Compiler.PartialStruct(UnitRange{Int64}, Any[Core.Compiler.Const(1, false), Int64])
│         (@_5 = Base.iterate(%3))
│   %5  = (@_5 === nothing)::Bool
│   %6  = Base.not_int(%5)::Bool
└──       goto #4 if not %6
2 ┄ %8  = @_5::Tuple{Int64,Int64}::Tuple{Int64,Int64}
│         (i = Core.getfield(%8, 1))
│   %10 = Core.getfield(%8, 2)::Int64
│   %11 = s::Any
│   %12 = Base.getindex(x0, i)::Float64
│   %13 = (f)(%12)::Any
│         (s = %11 + %13)
│         (@_5 = Base.iterate(%3, %10))
│   %16 = (@_5 === nothing)::Bool
│   %17 = Base.not_int(%16)::Bool
└──       goto #4 if not %17
3 ─       goto #2
4 ┄       return s

Declaring the data as constant is OK:

julia> const b = 2.  # Ok, constant
2.0

# no more type-instabilities:
julia> @code_warntype solver(x->f(x,b),x0)
Variables
  #self#::Core.Compiler.Const(solver, false)
  f::Core.Compiler.Const(var"#9#10"(), false)
  x0::Array{Float64,1}
  s::Float64
  @_5::Union{Nothing, Tuple{Int64,Int64}}
  i::Int64

Body::Float64
1 ─       (s = 0.0)
│   %2  = Main.length(x0)::Int64
│   %3  = (1:%2)::Core.Compiler.PartialStruct(UnitRange{Int64}, Any[Core.Compiler.Const(1, false), Int64])
│         (@_5 = Base.iterate(%3))
│   %5  = (@_5 === nothing)::Bool
│   %6  = Base.not_int(%5)::Bool
└──       goto #4 if not %6
2 ┄ %8  = @_5::Tuple{Int64,Int64}::Tuple{Int64,Int64}
│         (i = Core.getfield(%8, 1))
│   %10 = Core.getfield(%8, 2)::Int64
│   %11 = s::Float64
│   %12 = Base.getindex(x0, i)::Float64
│   %13 = (f)(%12)::Float64
│         (s = %11 + %13)
│         (@_5 = Base.iterate(%3, %10))
│   %16 = (@_5 === nothing)::Bool
│   %17 = Base.not_int(%16)::Bool
└──       goto #4 if not %17
3 ─       goto #2
4 ┄       return s

Or add a function barrier (in other words, everything that must be fast preferably lives inside a function):

# Good: add barrier so that the solver is called from inside a function, and then
# everything is type-stable:
julia> barrier(f,x0,a) = solver(x-> f(x,a),x0)
barrier (generic function with 1 method)

julia> @code_warntype barrier(f,x0,a)
Variables
  #self#::Core.Compiler.Const(barrier, false)
  f::Core.Compiler.Const(f, false)
  x0::Array{Float64,1}
  a::Float64
  #11::var"#11#12"{typeof(f),Float64}

Body::Float64
1 ─ %1 = Main.:(var"#11#12")::Core.Compiler.Const(var"#11#12", false)
│   %2 = Core.typeof(f)::Core.Compiler.Const(typeof(f), false)
│   %3 = Core.typeof(a)::Core.Compiler.Const(Float64, false)
│   %4 = Core.apply_type(%1, %2, %3)::Core.Compiler.Const(var"#11#12"{typeof(f),Float64}, false)
│        (#11 = %new(%4, f, a))
│   %6 = #11::var"#11#12"{typeof(f),Float64}
│   %7 = Main.solver(%6, x0)::Float64
└──      return %7


For this simple example you may not see a difference in benchmarks from the type instability caused by the non-constant global, but in general it will cause problems.
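If you want to see it in a measurement, a quick check with BenchmarkTools (x0 here is just a hypothetical input vector) would look like:

using BenchmarkTools

x0 = rand(1000)

@btime solver(x -> f(x, a), $x0) # the closure captures the non-constant global `a`
@btime barrier(f, $x0, 2.0)      # type-stable: the data is passed as an argument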


Nice example!


If you are careful about using an FFT plan to decrease allocations, this will be even faster. Plus, you can use it on the GPU without changing the code. I did this for a paper with 3D NFEs and bifurcation analysis.
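A minimal sketch of the plan pattern (sizes are placeholders): precompute the plan once, then apply it in place with mul! so the hot loop does not allocate:

using FFTW, LinearAlgebra

x = rand(256)    # real-valued field on the grid
p = plan_rfft(x) # precompute the real-to-complex plan once
x̂ = p * x        # one allocating call to create the output buffer
mul!(x̂, p, x)    # later transforms reuse x̂, with no allocations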


It is basically how you set up an ODEProblem

Oh thanks, that post enlightened me! I will try this approach with my case; I just have to write down these concepts and apply them to my solver.
I’ve never seen a function barrier used the way you wrote it, only for the core of computations like loops.
Thanks, appreciated!!

Yes, I’m using plans. Actually, I’m using real FFT plans, plan_rfft, since my data is real. Use it on the GPU? Sorry, I didn’t understand this one :confused:

Just out of curiosity, did you use (or implement) the paper that we’ve talked about?

Yes, I’m using plans. Actually, I’m using real FFT plans, plan_rfft, since my data is real. Use it on the GPU? Sorry, I didn’t understand this one :confused:

You can simulate the whole thing on the GPU; this gives a massive speedup.
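A minimal sketch of what that means in practice, assuming CUDA.jl and its CUFFT wrapper (sizes are placeholders):

using CUDA
using CUDA.CUFFT

xg = CUDA.rand(Float32, 256) # the field lives on the GPU
pg = plan_rfft(xg)           # a CUFFT plan through the same AbstractFFTs interface
x̂g = pg * xg                 # the solver code is unchanged apart from the array type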

Just for curiosity, did you used (or implemented) the paper that we’ve talked about?

No

Thanks! I’m now reading about running Julia on the GPU!!