Building large, modular models in Julia

Hi, I’ll soon start building a modelling (and simulation) framework. There will be several “types” of model components (think “heater”, “valve”), with different “subtypes” (think “coil heater”, “electrically actuated proportional valve”). An overall model will be built using several components, interacting with each other.

Everything will be “one way”, that is, according to the physical causality. So I do not need (nor want) equation-based modelling as provided e.g. by Modelica (and Modia.jl). The idea is to just assemble the overall model from all components and simulate it with DifferentialEquations.jl.

My question now is, what’s a good design pattern for such a thing in Julia? Besides the components, there needs to be a way of defining the overall model, i.e. listing the components and their interactions (especially which outputs of components are the inputs to other components). Then, there’s a simple topological sorting based on this graph to derive the execution order. When and how should this be done? And how, accordingly, is the overall model assembled from its components?

Probably there’s already some package (under development) for this? Or for some sub-tasks required for it? (BTW, what would be the most suitable graph library?)

Thanks for all your inputs!


Depends on what you want a “model” to do. E.g., if you need a function/callable for your purposes (examples are log-likelihoods, ODE definitions, etc.), you can just define the components/submodels as callable objects or as methods of a particular function, and then implement this for the composed model.
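As a minimal sketch of that idea (all the names here are made up for illustration, not from any package):

```julia
# Hypothetical sketch: components and the composed model as callable structs.
struct Gain          # a trivial "component type"
    k::Float64
end
(g::Gain)(u) = g.k * u

struct Offset        # another component type
    b::Float64
end
(o::Offset)(u) = u + o.b

# The composed model is itself callable and applies its parts in order.
struct Composed{A,B}
    first::A
    second::B
end
(m::Composed)(u) = m.second(m.first(u))

model = Composed(Gain(2.0), Offset(1.0))
model(3.0)  # 2.0 * 3.0 + 1.0 = 7.0
```

Since the component types are type parameters of `Composed`, the calls are devirtualised by dispatch; no dictionaries or runtime lookups are involved.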


BTW, what would be the most suitable graph library?

I would suggest that you check LightGraphs, and for a concrete application perhaps related to your problem, HybridSystems, which relies on LightGraphs to represent hybrid automata.
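A minimal LightGraphs sketch of the topological-sort part (the concrete graph here is made up for illustration):

```julia
using LightGraphs

# Components as vertices; an edge i -> j means "an output of
# component i is an input of component j".
g = SimpleDiGraph(3)
add_edge!(g, 1, 2)
add_edge!(g, 1, 3)
add_edge!(g, 2, 3)

# A valid execution order for the components:
order = topological_sort_by_dfs(g)  # here: [1, 2, 3]
```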

The goal would be to define an overall ODE. I guess the way to go is to do the topological sorting, indexing into and out of the overall state-vector, and actual “connecting” (of outputs of some components to inputs of others) at compile time by macros. One could then define connections in terms of component and signal names. The macro translates this into indices (and a graph for the toposort).

So essentially you want to build a Julia-based causal modeling tool like Simulink? I don’t think there’s really a package for it right now, but it would be an interesting (and difficult) project.


Yes, you could put it that way, at least as the basic idea. The front-end does not need to be visual; I prefer code anyway, and the framework will be used by me and probably one other guy (to actually build models, not to comfortably run simulations via some web front-end :wink: ). However, the back-end (or, more exactly, the separation of concerns) should be much cleaner than it is in Simulink, such that one can efficiently work with (and on) it.

My current idea is that the Julia code is the single source of truth. (A visual front-end would just visualise this truth, and interactions with this view would change the code, in turn updating the view.) The question now is what this code looks like. Starting from functions/methods of model components, one would have to set up the full model manually, including the execution order. Something like (u being the inputs, x the states, y the outputs):

# model function for a specific component type
mdl_function(::MyComponentType, dx, x, u, p, t) = ... 

u_const = ... # some outer input to the overall model
function full_model(dx, x, p, t)
    x_A = @view x[1:3]
    x_B = @view x[4]

    y_A = mdl_function(component_A, view(dx, 1:3), x_A,          u_const, p, t)
    y_B = mdl_function(component_B, view(dx,   4), x_B, (y_A[1], x_A[2]), p, t)
    return nothing
end

That’s OK when playing around, but when setting up many models and changing the signature of some components, it gets ugly. So it would be nice to just state what components belong to a model (or assembly), as well as the connections between the components:

full_model = @model component_A component_B
@connect (component_A, :y, 1) (component_B, 1)
@connect (component_A, :x, 2) (component_B, 2)

I guess the @connect macros would have to receive full_model as well, or lie inside the @model macro. But the basic idea is the same. One could also think about specifying connections by names or “paths” (e.g. "name of component A/y/name of output 1"), if names are given to the variables of a component… but that’s too detailed for now.

So, what’s the best way of setting this up? What macros, nested in which way, …? The “head”, e.g. @model, needs to get a list of all components and connections, in order to toposort for execution order and to generate the state-indexing and input-tuple expressions.

Probably macros are not the way to go here. One reason is that when macros are used, one has to define everything inline. That is, in the above example, I could not define the connections in some variable(s) and then invoke the macro with those variable(s) as argument(s), e.g.:

connection_1 = ( (component_A, :y, 1), (component_B, 1) )

Of course, one can do this with a function defining a closure. However, there’s been quite some discussion about closures (i.e. captured variables) not being efficient in Julia…? I quickly tried it on a simple example resembling my use case; the idea is to assemble two functions into one, both working on the same array:

# the two functions to assemble into one
function f1(out)
    out[1] = -1
    out[2] = -2
    return nothing
end

function f2(out)
    out[1] = -3
    return nothing
end

# manual solution
function fTot(out)
    f1(view(out, 1:2))
    f2(view(out, 3:3))
    return nothing
end

Using a macro or a closure:

# macro; used e.g. as  fMacro(out) = @assemble (f1, 2) (f2, 1)
macro assemble(f1Info, f2Info)
    n1 = f1Info.args[2]
    n2 = f2Info.args[2]

    # :out must be escaped so it refers to the caller's variable
    return quote
        $(esc(f1Info.args[1]))($(esc(view))($(esc(:out)), 1:$n1))
        $(esc(f2Info.args[1]))($(esc(view))($(esc(:out)), $(n1 + 1):$(n1 + n2)))
        return nothing
    end
end
# closure
function assemble(f1Info, f2Info)
    n1 = f1Info[2]
    n2 = f2Info[2]

    return function(out)
        f1Info[1](view(out, 1:n1))
        f2Info[1](view(out, n1+1:n1+n2))
        return nothing
    end
end
Benchmarking the three solutions (manual, macro, closure) yields 11ns for the former two, and above 16ns for the latter. That’s not a big difference in absolute runtime, I know. However, the generated code (@code_warntype) is twice as long for the latter. Any ideas on how to improve this?
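One workaround worth trying is to replace the closure with an explicit callable struct whose fields are concretely typed and whose slice sizes are type parameters, so nothing is captured in a box and everything is inferable. A hypothetical sketch (the `Assembled` name and the size-as-type-parameter encoding are my own choices, not from the thread):

```julia
# Callable struct instead of a closure: the assembled functions are
# concretely typed fields and the slice sizes are type parameters.
struct Assembled{F1,F2,N1,N2}
    f1::F1
    f2::F2
end

Assembled(f1, n1::Int, f2, n2::Int) =
    Assembled{typeof(f1), typeof(f2), n1, n2}(f1, f2)

function (a::Assembled{F1,F2,N1,N2})(out) where {F1,F2,N1,N2}
    a.f1(view(out, 1:N1))
    a.f2(view(out, N1+1:N1+N2))
    return nothing
end

g1(out) = (out[1] = -1; out[2] = -2; nothing)
g2(out) = (out[1] = -3; nothing)

fTot2 = Assembled(g1, 2, g2, 1)
out = zeros(3)
fTot2(out)  # out == [-1.0, -2.0, -3.0]
```

This is essentially what a closure lowers to anyway, but written out by hand it avoids the boxing that can occur with captured variables.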

We have some examples of how to achieve this modular scheme for optimization problems with JuMP.jl in PowerSystems.jl and PowerSimulations.jl. We developed a type system that exploits multiple dispatch, so we can call generic model-building functions.

I would be interested to see how to apply a similar scheme with DifferentialEquations.jl to get the same level of modularity.

We are building this with ModelingToolkit.jl.


Right now I use my own “framework”. It separates dynamic components (which calculate state derivatives) from algebraic ones (plain functions of their inputs, parameters and time). Then there are assemblies, which are constructed from arbitrarily many components (or other assemblies) and the connections between them. They expose only specified inputs and outputs (to keep the “interface” only as complex as necessary). Large/complex models can be constructed by nesting such assemblies.

The overall-model function is then built automatically by recursively flattening all assemblies (or “subsystems”) bottom-up, building the “path” of each component (e.g. for setting/getting its parameters or state/algebraic variables), building a dependency graph, and applying a topological sort to determine the execution order. (This is done twice: once for the dynamics, i.e. everything required to calculate all state derivatives, and once for all the algebraic variables, which can be calculated from the ODE solution afterwards.)

The overall-model code generation handles the indexing of state variables and resolves the connections (i.e. constructs the input tuples to the component functions). Right now, everything is passed around as tuples, since the lengths and types of the signals are known at model-build time. Combined with the in-place form of the model function for DiffEq, no allocations happen, so everything is really fast. One could even take the generated code for the overall model, write it to a source file, and compile it into the system image (or even an executable) using PackageCompiler to get statically compiled models/simulations (tried it, works).
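As a rough illustration of why tuple-typed component lists can run without allocations (a hypothetical sketch, not the actual framework code): since the tuple’s length and element types are known to the compiler, a recursive loop over it unrolls completely:

```julia
# Each entry is (component function, index range into the state vector).
# The recursion over the tuple unrolls at compile time.
run!(dx, x, ::Tuple{}, p, t) = nothing
function run!(dx, x, comps::Tuple, p, t)
    f, idx = first(comps)
    f(view(dx, idx), view(x, idx), p, t)
    return run!(dx, x, Base.tail(comps), p, t)
end

# usage: two first-order decay "components" sharing one state vector
decay!(dx, x, p, t) = (dx .= -p .* x; nothing)
comps = ((decay!, 1:3), (decay!, 4:4))
dx, x = zeros(4), ones(4)
run!(dx, x, comps, 2.0, 0.0)  # dx == [-2.0, -2.0, -2.0, -2.0]
```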

I also implemented some macros to make construction of components and assemblies easier. And at that point I noticed that I’m re-doing stuff that’s already there multiple times (JuMP, DiffEq, Modia, …) and will probably be implemented in a much more coherent and thought-out way in the upcoming ModelingToolkit. So I’d rather contribute there than contribute just another framework that does very similar things.

For example,

@dynamic :twoPT1 "two PT-1 elements in series" begin
    dx[1] = (u[1]-x[1])/tau[1]
    dx[2] = (x[1]-x[2])/tau[2]

    @parameter :tau "PT-1 time constants" [1.0, 5.0]
end

creates a dynamic component, and

@assemble :my_assembly "just an assembly" begin
    @components compA compB
    @parameter  :V̇ "constant volumetric flow (m³/s)" 1.3
    @connect (  u          -> compA[1:3], # u is the assembly's inputs, no indices means [:]
                p.V̇        -> compA[end], # parameters can be used as input to components
                compA      -> compB,
                compA[1]   -> y[1], # y is the assembly's outputs
                compB[2:3] -> y[2:3] )
end

constructs an assembly.


Is it also possible to sequentially compose systems with ModelingToolkit?

Just set up a parameter flip.

I do not know what you mean with a parameter flip. Could you elaborate?

Change the system in a callback by flipping a parameter.
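A minimal DifferentialEquations.jl sketch of such a parameter flip (the specific dynamics here are made up): the parameter `p[1]` selects the active “mode”, and a `DiscreteCallback` changes it mid-solve.

```julia
using DifferentialEquations

# Made-up dynamics: p[1] selects the active "mode" of the system.
f(u, p, t) = p[1] == 1 ? -u : -2u

# At t = 5, flip the parameter; the system changes from then on.
condition(u, t, integrator) = t == 5.0
affect!(integrator) = (integrator.p[1] = 2)
cb = DiscreteCallback(condition, affect!)

prob = ODEProblem(f, 1.0, (0.0, 10.0), [1])
# tstops makes sure the integrator steps exactly onto t = 5
sol = solve(prob, Tsit5(), callback = cb, tstops = [5.0])
```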