How to perform ongoing background work with a real-time UI

I’m trying to perform a long-running analysis with a real-time user interface. Ideally I can do this with or without multiple threads (it would obviously be more efficient with multiple threads), with the ability to easily pause or terminate the background analysis.

For example, my long-running analysis is counting to infinity and my UI is the Julia REPL.

I have this saved as file.jl:

x = Threads.Atomic{Int}(0)
task = Threads.@spawn while true
    Threads.atomic_add!(x, 1)
end
println(@doc Base.throwto)

But its behavior is unpredictable and depends on the thread count:

julia -t1 file.jl
julia -t3 file.jl
julia -t10 file.jl
No documentation found.

`Base.throwto` is a `Function`.

# 1 method for generic function "throwto":
[1] throwto(t::Task, exc) in Base at task.jl:785

What are some natural ways to achieve this ongoing background work + real time ui task with Julia?

I think you’re after Tasks · The Julia Language

One simple change: instead of using throwto, have the thread doing the work check each iteration whether it should stop. That avoids a lot of “cancellation” issues, about which there’s a lot written if you search Discourse or Zulip, or check out @tkf’s work.
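A minimal sketch of that flag-checking pattern, adapting the counter example from the question (the `should_stop` name is illustrative, not an established API):

```julia
# Cooperative cancellation: the worker checks a shared flag each iteration
# instead of being interrupted from the outside with throwto.
should_stop = Threads.Atomic{Bool}(false)
x = Threads.Atomic{Int}(0)

task = Threads.@spawn begin
    while !should_stop[]
        Threads.atomic_add!(x, 1)
        yield()  # let other tasks run; needed so this also works with -t1
    end
end

sleep(0.1)             # the "UI" does other things for a while
should_stop[] = true   # request a clean stop
wait(task)             # the worker observes the flag and exits
```

In real code you would yield less often (e.g., every N iterations) to keep throughput high; yielding every iteration is only for illustration.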

Currently, there’s no way to mix throughput-oriented code (the long-running analysis) and latency-oriented code (the “real-time” user interface) using Julia’s multi-threaded scheduler. So, to answer the high-level question of the OP, roughly speaking, there are two ways to do it:

Approach 1: Use (multi-threaded) single-process Julia. Do the scheduling yourself.

This is applicable only if you make sure that all the libraries you are going to use:

  1. are single-threaded,
  2. support suppressing their multi-threaded implementations, and/or
  3. provide a way to customize the scheduling policy (e.g., FoldsThreads.jl)

You can then use the approach taken by ThreadPools.jl (which is also implemented in FoldsThreads.TaskPoolEx) to suppress all the cleverness inside the Julia scheduler and manually assign the throughput-oriented code to the non-primary workers (aka “background threads”).

If the throughput-oriented code is reasonably simple, this is a decent approach. However, if you are planning to use various external libraries that implement parallel algorithms, this approach does not work.
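As a sketch of that pinning idea with ThreadPools.jl (assuming the package is installed and Julia was started with at least two threads):

```julia
# Keep thread 1 free for latency-sensitive work by pinning the
# throughput-oriented loop to a non-primary thread.
using ThreadPools  # external package; must be installed

x = Threads.Atomic{Int}(0)

# @tspawnat runs the task on the given thread id (here, thread 2),
# so the REPL/UI on thread 1 stays responsive.
task = ThreadPools.@tspawnat 2 begin
    for _ in 1:10^6
        Threads.atomic_add!(x, 1)
    end
end

wait(task)
x[]  # 1000000
```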

Note also that checkpoints (at which the computation can be canceled or paused) have to be inserted manually. If you want to support single-threaded use cases, you also need to think about the computation cost and make sure that the intervals between consecutive checkpoints do not exceed the minimum latency you want to provide in the UI.
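A sketch of what such manual checkpoints might look like (all names here are illustrative):

```julia
# Between bounded chunks of work, honor pause/cancel requests and yield
# so latency-sensitive tasks can run on the same thread.
const paused   = Threads.Atomic{Bool}(false)
const canceled = Threads.Atomic{Bool}(false)

function checkpoint()
    canceled[] && throw(InterruptException())  # cooperative cancel
    while paused[]
        sleep(0.05)                            # cooperative pause
    end
    yield()                                    # hand control to the scheduler
end

function analysis(n)
    x = 0
    for i in 1:n
        x += 1                                 # one unit of "work"
        # The interval between checkpoints bounds the UI latency:
        i % 10_000 == 0 && checkpoint()
    end
    return x
end

analysis(10^6)  # returns 1000000
```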

Approach 2: Separate UI and compute in (at least) two processes.

To leverage the composable parallel programming infrastructure in Julia, a better approach may be to separate throughput-oriented and latency-oriented code into multiple processes (e.g., using Distributed or Dagger). This way, the Julia process(es) running throughput-oriented code can rely on the Julia scheduler to do the right thing. Pausing, canceling, or killing is reasonably straightforward because you can let the OS do it. Of course, communicating and sharing data between processes is harder, and the tools to support it are underdeveloped. But I think there are opportunities to make this better by using something like Arrow.jl.
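A sketch of this approach with the stdlib Distributed (the channel and loop here are illustrative):

```julia
# Run the compute loop in a separate process; the UI process polls
# progress through a RemoteChannel.
using Distributed
addprocs(1)                       # one dedicated compute process

# Progress lives in a channel that both processes can reach.
progress = RemoteChannel(() -> Channel{Int}(1))
put!(progress, 0)

# Throughput-oriented loop on the worker process.
compute = @spawnat workers()[1] begin
    for _ in 1:1_000
        x = take!(progress)
        put!(progress, x + 1)     # publish updated progress
    end
end

wait(compute)
fetch(progress)                   # UI-side poll; reads without removing
```

Pausing or killing can then be delegated to the OS: for example, `rmprocs(workers())` terminates the compute process outright.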

Better language support for mixing throughput- and latency-oriented code?

There are various discussions and explorations around this. We may get better support for this in the future but there’s no concrete plan yet:

This is an issue orthogonal to how to provide the uniform interface for cancellation, which is what Julio.jl is trying to solve.


Thank you!

I’ll go with approach 2 and rely on the OS to do isolation and prioritization. This seems like the most reliable option you’ve mentioned.

For better language support, one possibility is to provide tasks with priorities. Each task would have a scheduling priority, and task creation would, by default, give the new task the same priority as the old, but could be changed to a higher/lower priority. If A spawns B at high priority, and then B spawns C at low priority, and C spawns D at low priority, the global priority ordering would be B, C, D, A.

In my case, I would simply spawn my infinite loop at low priority. This feels easier and more composable than approach 1; lower-overhead and more portable than approach 2; and more robust, high-level, and user-friendly than either PR, with room for future improvements to the internals.

As a programmer, I know that one task is more important than another, but I don’t know how many threads the machine will have, how many threads Julia will launch with, or whether my package will need to share resources with other long-running or low-latency programs upstream. I’d like to tell the compiler/scheduler/language exactly what I know best (task priority) and nothing more (scheduling).

Julia tasks do have a priority field, but it’s not exposed outside the C code and currently it’s not used in a meaningful way. This is actually good, since it lets users switch to, e.g., a work-stealing scheduler, which is more efficient for a wide class of programs but has no way to support priority, at least not naively.

But task priority is not the hard part. We need a preemptive task system to actually pause low-priority tasks and (re)schedule higher-priority ones. However, we can’t stop a task at an arbitrary point in the user code, because Julia’s GC cannot tell which objects are in use there (it has to wait for the user code to hit a safepoint), and the GC has to scan non-active tasks. Supporting this would require major surgery in the runtime.