How to perform ongoing background work with a real-time UI

Currently, there’s no way to mix throughput-oriented code (the long-running analysis) and latency-oriented code (the “real-time” user interface) using Julia’s multi-threaded scheduler. So, to answer the high-level question of the OP, roughly speaking, there are two ways to do it:

Approach 1: Use (multi-threaded) single-process Julia. Do the scheduling yourself.

This is applicable only if you make sure that every library you are going to use:

  1. is single-threaded,
  2. supports disabling its multi-threaded implementation, and/or
  3. provides a way to customize its scheduling policy (e.g., FoldsThreads.jl)

You can then use the approach taken by ThreadPools.jl (which is also implemented in FoldsThreads.TaskPoolEx) to suppress all the cleverness inside the Julia scheduler and manually assign the throughput-oriented code to the non-primary threads (aka “background threads”).
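For illustration, pinning work to a background thread with ThreadPools.jl might look like the following minimal sketch (it assumes ThreadPools.jl is installed and Julia was started with at least two threads, e.g. `julia -t 4`; `heavy_analysis` and `run_in_background` are hypothetical names, not from the post):

```julia
using ThreadPools

function run_in_background(heavy_analysis)
    # Pin the long-running work to thread 2, keeping the primary thread
    # (thread 1) free for latency-sensitive UI tasks.
    ThreadPools.@tspawnat 2 heavy_analysis()
end

# The UI loop on thread 1 can then poll the returned task:
#   task = run_in_background(() -> sum(rand(10^8)))
#   istaskdone(task)
```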

If the throughput-oriented code is reasonably simple, this is a decent approach. However, if you are planning to use various external libraries that implement parallel algorithms, this approach does not work.

Note also that inserting checkpoints (at which the computation can be canceled or paused) has to be done manually. If you want to support single-threaded use cases, you would need to think about the computation cost and make sure that the intervals between consecutive checkpoints do not exceed the latency budget you want to provide in the UI.
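A manual checkpoint might look like this sketch (all names here, `CANCELED`, `analysis_loop`, `work_chunk`, are hypothetical, not from the post):

```julia
# Cooperative cancellation flag, shared with the UI code.
const CANCELED = Threads.Atomic{Bool}(false)

function analysis_loop(chunks, work_chunk)
    for chunk in chunks
        work_chunk(chunk)   # keep each chunk small enough that the time
                            # between checkpoints stays within the UI's
                            # latency budget
        CANCELED[] && return   # checkpoint: cancellation
        yield()                # checkpoint: let UI tasks run (single-thread case)
    end
end
```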

Approach 2: Separate UI and compute in (at least) two processes.

To leverage the composable parallel programming infrastructure in Julia, a better approach may be to separate throughput-oriented and latency-oriented code into multiple processes (e.g., using Distributed or Dagger). This way, the Julia process(es) running throughput-oriented code can rely on the Julia scheduler to do the right thing. Pausing, canceling, or killing is reasonably straightforward because you can let the OS do it. Of course, communicating and sharing data between processes is harder, and the tools to support it are underdeveloped. But I think there are opportunities to make this better by using something like Arrow.jl.
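A sketch of the two-process setup with the Distributed standard library (the workload `heavy_analysis` is a made-up stand-in):

```julia
using Distributed

addprocs(1)  # a dedicated compute process; the current process runs the UI

# Define the throughput-oriented workload on all processes.
@everywhere heavy_analysis(xs) = sum(abs2, xs)   # hypothetical stand-in

data = rand(1_000)
fut = remotecall(heavy_analysis, first(workers()), data)
# ... the UI in this process stays responsive while the worker computes ...
result = fetch(fut)

# Canceling or killing is left to the OS / Distributed, e.g.:
#   rmprocs(first(workers()))  # kill the compute process outright
```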

Better language support for mixing throughput- and latency-oriented code?

There are various discussions and explorations around this. We may get better support for it in the future, but there’s no concrete plan yet.

This is an issue orthogonal to how to provide a uniform interface for cancellation, which is what Julio.jl is trying to solve.
