[ANN] Signals.jl: Faster and more versatile reactive programming



Hi everyone, I am happy to announce Signals.jl, a fast, dynamic, functional reactive programming framework for Julia.
Install it with Pkg.add("Signals").

Inspiration for this package comes, obviously, from Reactive, which I use extensively… whose pitfalls (from my point of view) led me to try out my own implementation. This turned out to be a very hard task indeed, because:
a) Reactive is already abstract, simple and fast; it was hard to compete with its performance

b) It is not trivial at all to offer the full range of Reactive programming while still maintaining a simple data structure

c) I wanted to allow both pushing into the signal graph as well as pulling

d) I wanted to give the user easy control over async/non-async operations

e) tiny, seemingly equivalent mutations of the code caused dramatic changes in the compiled code and in performance

But after some long nights, hours of frustration and a few “eureka” moments, I prevailed and came out the other side with a deeper knowledge of Julia.

Signals.jl, while offering the same functionality as Reactive, differs in some key ways:

  • Dynamic: Signals are not typed; you can push an integer, then a Float64, then a string, and it blends nicely with Julia’s multiple dispatch

  • Push-Pull: you can either push a value into a Signal and propagate changes along the Signal graph, or you can set a value without any propagation and only pull the necessary changes from any other signal.

  • Syntax: the syntax is somewhat simplified: square brackets to set or query a value, round brackets to pull or push
    a value (see the documentation for more examples)

  • Signal graph: Signals.jl does not maintain any internal data structure other than the signals themselves

  • Eventloop: the event-loop in Signals.jl is dirt simple and handles world-age issues gracefully by restarting itself. As long as you don’t create Signals programmatically, you should encounter only a couple of event-loop restarts.

  • Performance: Signals.jl is between 2x and 4x faster than Reactive on my machine, on various benchmarks I made.

  • “Strict” and “soft” pushes: if you push into a signal several times before the event-loop processes the Signal graph, only the last update is considered; this is called a “soft” push and is the default behaviour. If you depend on every push running independently, you can change the behaviour of individual nodes to a “strict” push policy.

  • Non-Signal inputs: there is no restriction in Signals.jl on the type of input arguments that go into a Signal,
    just that input arguments which are Signals themselves get replaced by their values before the Signal action runs
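
A minimal sketch of the points above; the calls here are my paraphrase of the syntax as described (untyped signals, square brackets to set/query, round brackets to pull/push), so consult the docs for the authoritative API:

```julia
using Signals

A = Signal(1)            # signals are untyped
B = Signal(x -> 2x, A)   # derived signal; Signal arguments are replaced by their values

A[] = 3          # square brackets: set a value, no propagation
B()              # round brackets on a derived signal: pull, recomputing stale dependencies
A(4)             # round brackets with an argument: push, propagating through the graph
A[] = "hello"    # dynamic: the same signal can now hold a String
```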


Signals are pull-based; a pull is a synchronous operation: it starts and completes now.
Pulls are minimal: if a Signal has valid data stored in it, then no action takes place when pulling its value.
Pushes are achieved by running down the Signal graph and enqueuing pulls on terminal nodes.

Pulling is a 2-step operation; this gives the signal graph an opportunity to re-validate itself and thus implement
operators such as drop-repeats or filtering.

The Signal action is typed on its arguments; somehow this gave the best performance boost for code generation.

That’s it for now. Read the docs, give it a spin, point out more optimisations and I will be happy to integrate them.

Moreover, for those of you who care… the code was written so that it would be simple to follow the logic behind the implementation. Try mutating the code, for example making it type-stable instead of dynamic, or mutable instead of struct, and see for yourself how the compiled code changes.

For convenience there are benchmarks as part of Pkg.test("Signals"), so it is easy to see the impact of changes.



Super exciting!!! Looking forward to trying it out! Thanks!


Would love to see a spreadsheet built using Julia, like pyspread, as a result of this.


Yes, that would be interesting. I did a quick test on a simple use case for spreadsheets and this is what I found:

The limitation is that the signal graph is currently traversed
using recursion… this puts a limit on the maximal depth the signal graph can have.

For example if I have 1e6 signals derived from one signal as in

using Signals
using BenchmarkTools
A = Signal(1)
sigs = [Signal(+, A, i) for i in 1:1_000_000]   # one million signals derived directly from A

@benchmark A(10)

Then everything is good, and this benchmarks at around 250 ms on my machine (although creation time can be improved).

However, if the Signal graph were not flat but chained (think cumulative sum in Excel), then there is a limit:
anything deeper than 20,000 will probably hang the system or cause a StackOverflowError.
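
The chained case can be sketched like this (same API as the benchmark above; the exact depth at which it fails will vary by machine and stack size):

```julia
using Signals

A = Signal(1)
s = A
for i in 1:30_000        # deep chain: each signal depends only on the previous one
    s = Signal(+, s, 1)  # think of a cumulative-sum column in a spreadsheet
end

A(10)  # recursive traversal of a 30_000-deep chain may hang or StackOverflow
```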

There are some mutations to the internal design that could address that:
replace recursion with a deterministic graph traversal… it will be hard to do so while still maintaining the same…


Very cool! Just curious, what was the use case which forced you to implement this over just using Reactive?


Hi @shashi,
There wasn’t a specific blocker use case that led me into this …

I have an application where one event loop in another process handles object tracking, and the main process runs its own rendering event loop based on signals; in some cases, usually due to bad design or bugs, I would get “queue full” messages from Reactive.
This got me thinking about the soft-push/strict-push thing, and about pulling instead of pushing… and about the possibility of better optimisation, because running f(g(x)) gives the JIT a chance to optimise, whereas running g() and then f() doesn’t.

I think the best answer is that it just somehow fascinated me… this functional reactive programming thing… as it is natural to the way I construct algorithms and control systems… and I could envision it as the basis for a new way of debugging/developing where everything is a signal.

I liked the compactness and simplicity and abstractness of your code in Reactive, that too gave me inspiration to experiment with actually writing Reactive from scratch.

The whole process felt more like writing math than writing code.


Fair enough :slight_smile: I’d like to get your feedback on https://juliagizmos.github.io/Observables.jl/stable/. It’s a synchronous-only library closer to signals-and-slots.


This is getting to be a crowded area. There’s also:

It’d be interesting to compare all these. Maybe there’s a chance to consolidate.


@tshort, from what I am seeing in your code… and I may be wrong… first, it is not dynamic.

I think of signals as result of functions and in Julia functions do not have a fixed return type. Therefore I feel a dynamic signal
is more natural to the user.

In general I think types are there only to help package writers write code… they should rarely be visible to the user. The people in my company whom I try to “convert” to Julia frequently complain about “over-typing”.

Second, it is not async. That is usually not a hindrance, but there can easily be excessive intermediate computation of the signal graph.

For example, if the signal graph looks like:

A -> B
B -> C
A -> D
(C,D) -> E

then E will be updated twice on every push to A,
and the first update will be “illogical” because it mixes two states of the signal graph.
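
The glitch is easy to reproduce with a toy push-based propagator (this is plain Julia written for illustration, not the internals of any of the packages discussed):

```julia
# A minimal eager push-based node: setting a value immediately notifies
# every dependent, which is exactly what causes the double update of E
# in the diamond A→B→C, A→D, (C,D)→E.
mutable struct Node
    value
    listeners::Vector{Function}
end
Node(v) = Node(v, Function[])

function setvalue!(n::Node, v)
    n.value = v
    for f in n.listeners   # eagerly notify every dependent
        f()
    end
end

function derive(f, parents::Node...)
    n = Node(f((p.value for p in parents)...))
    for p in parents
        push!(p.listeners, () -> setvalue!(n, f((q.value for q in parents)...)))
    end
    return n
end

updates = Ref(0)
A = Node(1)
B = derive(x -> x + 1, A)
C = derive(x -> 2x, B)
D = derive(x -> x - 1, A)
E = derive((c, d) -> (updates[] += 1; c + d), C, D)

setvalue!(A, 10)
updates[]   # == 3: one initial compute, then TWO recomputes for a single push;
            # the first recompute mixed the new C (22) with the stale D (0)
```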


Likewise I would like to get your feedback on Signals.jl


This is an excellent point that generalizes outside this discussion, thanks for making it.


@shashi, from what I am seeing in Observables.jl, I like the notion of Signals as Refs… and the syntax of square brackets.
I too use those notions in Signals.jl.

However, Observables suffers from the same “problem” as ReactiveBasics of being push-based… there can be excessive computations in the signal graph, as well as inconsistencies.

Another thing: from the docstring of map!:

The second argument `o` must be an observable ref for
dispatch reasons.

This is why I chose to let go of the map! syntax and instead use the syntax

Signal(f, args...)

and remove the type restriction that exists for dispatch reasons.


@TsurHerman, your analysis of ReactiveBasics is generally right. I chose synchronous operation for code simplicity and for performance.

As a side nit, Julia functions generally do have a fixed return type.


Very interesting! One question: is your implementation thread-safe? Also, might this work with multiple processes, one day?


In the non-async mode, if you have several separate signal graphs (disjoint graphs), then you can push values to each of these graphs from different threads and everything works fine.

However, when the graphs are not disjoint, even if there were a guard on each individual signal action that made sure only a single thread runs an action at any given time,
there could still be inconsistencies in the signal graph, where a signal whose action is f(args...) has a valid value val with val != f(args...).

My vision for threading and parallelism is to somehow process signal actions in parallel, because it is easy to implement such a per-signal-action threading guard,
and the hierarchy of what needs to complete before what is also clear.
Then, in theory, we could have a multi-threaded event-loop, and writing code in signals would have an auto-task-level-parallelism effect.
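
A generic sketch of such a per-action guard (plain Julia, not Signals.jl internals): each action carries its own lock, so at most one thread runs a given action at a time while independent actions may run on different threads.

```julia
# Hypothetical per-action guard: the Action type and run_action! are
# illustrative names, not part of any package.
struct Action
    lock::ReentrantLock
    f::Function
end
Action(f) = Action(ReentrantLock(), f)

run_action!(a::Action, args...) = lock(a.lock) do
    a.f(args...)   # at most one thread inside this particular action at once
end

acts = [Action(x -> x^2) for _ in 1:4]
results = fetch.([Threads.@spawn run_action!(acts[i], i) for i in 1:4])
results   # [1, 4, 9, 16], computed by independently guarded actions
```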

But there are more subtleties … and it is not trivial …
I invite the bright minds of this forum to help out on this issue :slight_smile: .

I was thinking that if this works, then some parts of Base could benefit from task-level parallelism, mainly inference and compilation.


Thanks, sounds like an exciting plan!