Is there a way to enforce (or encourage) strict rules, like those of ladder programming, in Julia?

I recently took a course on PLC programming and it taught me a lot about how to make a robust program.

I’m now using Julia to control some semi-dangerous equipment (a laser) and I’m running into issues that wouldn’t generally happen if I were using something simpler like ladder programming for PLCs.

I don’t know much about programming paradigms or how to self-enforce rules the way ladder logic does.
For example, when running a ladder program you have no guarantees about execution order, so everything must be designed in phases connected by particular bits of memory, and each bit of memory may only be set and reset by a single expression.

I’ve thought a bit about how to implement this but mostly I’ve just come up with code packed full of goto statements.
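
The closest I can picture is a scan-cycle structure like the rough sketch below, where each output bit is written in exactly one place per scan (the field names are just placeholders, and this is probably oversimplified):

# Very rough ladder-style scan: sample inputs, evaluate logic, write outputs.
Base.@kwdef mutable struct LaserState
    start_button::Bool = false
    interlock_ok::Bool = false
    laser_enable::Bool = false    # this "coil" is written only inside update!()
end

# Each output bit is set/reset by exactly one expression per scan.
update!(s::LaserState) = (s.laser_enable = s.start_button && s.interlock_ok; s)

function scan_loop!(read_inputs!, write_outputs!, s::LaserState)
    while true
        read_inputs!(s)       # phase 1: read hardware into memory
        update!(s)            # phase 2: evaluate the "rungs"
        write_outputs!(s)     # phase 3: drive the hardware
        sleep(0.01)           # crude fixed scan period
    end
end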

What programming paradigms / practices should be followed when using a high level language like Julia to control dangerous equipment? How can we enforce these restrictions?

1 Like

This is a complicated question and it’s not really clear whether you’re asking about PLC/low-level programming in particular, or about computation in general.

This is the first time I’ve heard about ladder logic; it seems like a really simple (appropriately, for a PLC) concurrent model of computation. If you’re interested in models of computation more generally, perhaps these pointers might be useful:

Regarding the tooling support in Julia, I can’t help, as I’m not informed. I believe some AlgebraicJulia packages might be relevant, not sure.

One Julia-specific bit of info is that (unbuffered) Channels are, I believe, inspired by Tony Hoare’s CSP, like in Golang. So one practice you could adopt to avoid low-level details is to always use an unbuffered channel instead of mutexes/locks or semaphores.
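
For instance, here is a minimal sketch (my own, not from any package) of the “one task owns the state” pattern, using unbuffered channels instead of a lock:

# Minimal sketch: one task owns the state; everything else talks to it
# over unbuffered channels instead of taking a lock.
requests = Channel{Symbol}(0)     # unbuffered: put! blocks until someone take!s
replies  = Channel{Int}(0)

owner = Threads.@spawn begin
    counter = 0                   # state owned by exactly one task
    for req in requests
        req === :increment && (counter += 1)
        req === :get && put!(replies, counter)
    end
end

put!(requests, :increment)
put!(requests, :get)
@show take!(replies)              # counter is now 1
close(requests)                   # ends the owner's loop
wait(owner)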

1 Like

Thanks for the links! I have lots of reading ahead of me.

I have actually been using Channels a lot in my project since lots of things have to happen concurrently anyway.
Splitting everything into its own task could be a good solution and would be similar to ladder.

But that brings up the question of overhead. Is it a good idea to give Julia 1000 tasks and let the scheduler handle it? Should I only use Channels or should I use wait and notify as well?

1 Like

Sorry, I don’t think I’m the best one to answer some of these questions. One thing that might be useful, in case you don’t know about it already, is the OhMyThreads.jl package, with a focus on data parallelism.
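
If I remember the API correctly, the basic entry point there is tmap; something like:

using OhMyThreads: tmap

# data-parallel map across threads; result order matches the input order
results = tmap(x -> x^2, 1:10)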

1 Like

I have no idea to what extent that (ReactiveToolkit.jl) is relevant, but in any case the video is impressive.

2 Likes

I’ve used Tasks a fair bit – my production code regularly runs hundreds of thousands of tasks. You’ll see a slight performance hit if you’re trying to create very optimized code, as well as a fair amount of memory allocation. But Tasks and Channels are a very nice way to enforce a highly concurrent model where most computation can run in any order. And if you have dependencies, the easiest way is usually to have a parent task start its child tasks.
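
For example, a generic sketch of the parent-starts-children pattern (child_work here is just a stand-in for real work):

# Express dependencies by having the parent start its child tasks.
# @sync blocks until every task spawned inside it has finished.
child_work(x) = (sleep(0.1); x^2)          # stand-in for real work

function parent_work(inputs)
    results = Vector{Int}(undef, length(inputs))
    @sync for (i, x) in enumerate(inputs)
        Threads.@spawn results[i] = child_work(x)
    end
    return results                          # children are guaranteed done here
end

parent_work(1:8)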

If you want extreme generality, you can actually use a Channel{Task}, and I use that in some places. But it’s less efficient, so Channels of more concrete types are usually better.

With Channels, you rarely need to use wait or notify in my experience.

Here is a relatively simple example using Tasks and Channels to make a thread-safe dict:

# Consumer: runs each submitted Task to completion, one at a time, in submission order.
# (Note: if a task throws, wait rethrows and the consumer stops.)
function task_consumer(channel::Channel{Task})::Nothing
    for task in channel
        schedule(task)
        wait(task)
    end
end

# All dict operations below are funneled through this single queue/consumer pair.
global main_thread_task_queue = Channel{Task}(task_consumer, Inf)

"""
  ChannelBasedDict{K, V}()
Yet another thread-safe dict. 
"""
struct ChannelBasedDict{K, V} <: AbstractDict{K, V}
    dict::Dict{K, V}
    channel::Channel{Task}

    function ChannelBasedDict{K, V}() where {K, V}
        channel = main_thread_task_queue
        dict = Dict{K, V}()
        return new{K, V}(dict, channel)
    end
end

function Base.setindex!(d::ChannelBasedDict{K, V}, v::V, k::K) where {K, V}
    task = @task d.dict[k] = v
    put!(d.channel, task)
    return task  # returns the queued Task (not the dict), so callers can wait on it
end

function Base.getindex(d::ChannelBasedDict{K, V}, k::K)::V where {K, V}
    task = @task return d.dict[k]
    put!(d.channel, task)
    return fetch(task)
end

function Base.haskey(d::ChannelBasedDict{K, V}, k::K)::Bool where {K, V}
    task = @task haskey(d.dict, k)
    put!(d.channel, task)
    return fetch(task)
end

function Base.get(d::ChannelBasedDict{K, V}, k::K, default::V)::V where {K, V}
    task = @task get(d.dict, k, default)
    put!(d.channel, task)
    return fetch(task)
end

# Like get!, but returns the queued Task immediately instead of blocking on the result.
function get_nonblocking!(f::Function, d::ChannelBasedDict{K, V}, k::K)::Task where {K, V}
    task = @task begin 
        if !haskey(d.dict, k)
            d.dict[k] = f()
        end
        return d.dict[k]
    end
    put!(d.channel, task)
    return task
end

function Base.get!(f::Function, d::ChannelBasedDict{K, V}, k::K)::V where {K, V}
    return fetch(get_nonblocking!(f, d, k))
end

function Base.length(d::ChannelBasedDict{K, V})::Int where {K, V}
    task = @task begin
        return length(d.dict)
    end
    put!(d.channel, task)
    return fetch(task)
end

function Base.iterate(d::ChannelBasedDict{K, V}) where {K, V}
    task = @task begin
        return iterate(d.dict)
    end
    put!(d.channel, task)
    return fetch(task)
end

function Base.iterate(d::ChannelBasedDict{K, V}, index) where {K, V}
    task = @task begin
        return iterate(d.dict, index)
    end
    put!(d.channel, task)
    return fetch(task)
end
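
Usage then looks like an ordinary Dict: every operation becomes a Task funneled through main_thread_task_queue and executed sequentially. A quick illustration, assuming the definitions above are loaded:

d = ChannelBasedDict{String, Int}()

# Safe to call from any thread: operations are queued and run one at a time.
Threads.@threads for i in 1:100
    d["key$i"] = i
end

@show length(d)            # 100
@show get(d, "key7", 0)    # 7
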
3 Likes

Task is a concrete type (isconcretetype(Task)), so I don’t really understand what you mean.

The size of a Task object was reduced considerably somewhat recently, at least it seems so from this PR :rocket:

1 Like

You’re right, I should have written “specific” instead of concrete.

Petri nets mentioned!

I think the PN infrastructure in AlgJulia is not currently up to these kinds of tasks. It has been used so far mostly to represent chemical reaction networks and generate ODEs or Markov chains based on the PN specification of the reaction network, although it would be straightforward to extend the tooling to this more classic PN role.

AlgebraicRewriting.jl (GitHub - AlgebraicJulia/AlgebraicRewriting.jl), which implements graph rewriting techniques (DPO, SPO, SqPO, etc.), might be ready for these models of computation. @kris-brown would likely know more.

3 Likes

AlgebraicRewriting is experimental software, but it provides a language for declaring rules (preconditions => postconditions) where these conditions can be instances of some in-memory database with a schema of your choosing (whereas, in my understanding of Petri nets, the pre/postconditions are just discrete collections of tokens). One can then apply these rules to in-memory databases of the same schema, and there are a variety of ways to schedule the events.

I’ve only tried using this framework in order to run little simulations, so it’d be interesting to see what it would mean to use this for controlling live equipment. Also, I’ll be working this summer on performance considerations, but for the time being it wouldn’t be appropriate to use for something very computationally intensive.

1 Like