How to run an HTTP.jl server in parallel, while doing computations in the foreground?

@HanD, this one was particularly hard to sell as a legitimate bug/issue, with quite a few what-ifs that needed addressing.

That's because many of the threading-related issues here on Discourse tell the same story: somebody who is not yet familiar enough with the relationship between @spawn and threads runs into problems by doing non-recommended things - so I think it was to be expected that people would initially try to explain my issue away based on that kind of experience.

I mean, at first even I was almost sure that I must be doing something wrong (I even hoped I was): so I was the first one who needed convincing (if you look at the number of edits, you can see that I initially started with the communication-between-tasks story, which eventually evolved into the issue you can see here).

But it seems there is hope - maybe it will be picked up soon and labeled as important enough to be addressed. One of the first answers (by @vchuravy) is the following:

Julia uses a libuv based event-loop under the hood. Processing certain things like Timers/IO depend on the event-loop being run regularly. (sleep uses a libuv timer under the hood.)

Looking at jl_process_events it seems like the event loop is only run from tid == 0 or when _threadedregion is set.

jl_enter_threaded_region is only called from threading_run (which is the base function for @threads).

I am unsure why we still have this mechanism instead of allowing any thread to run the libuv event loop.

This is not just another bug that shows up in some niche scenario. Although it is rarely experienced at the extreme level you hit, the responsiveness of the :interactive threadpool (when using Timer/IO) is heavily impacted (and the issue gets worse with the number of available threads and spawned tasks). The :default threadpool is impacted at least as much - but I am focusing on :interactive because that is where your fast yielders and/or short-lived tasks are running - and you want them to be responsive, not battling for a lock on the main thread. It is one of those scenarios where allocating more computational resources actually decreases the overall responsiveness of all tasks while keeping the number of tasks constant (and you just want them to sleep :slight_smile: ).
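To make the mechanism concrete, here is a minimal sketch (my own illustration, not code from the issue) of the symptom: a spawned task whose sleep is backed by a libuv Timer can be delayed well past its deadline while thread 1 spins without yielding, because the timer is only serviced when the event loop gets to run. The exact delay depends on the Julia version and thread configuration; run it with at least two threads (e.g. julia --threads=2) so the sleeper lands on another thread.

# Keep the main thread busy with pure CPU work, never yielding to the scheduler,
# so the libuv event loop gets no chance to run on thread 1.
function busy_wait(seconds)
    t0 = time()
    while time() - t0 < seconds
    end
end

# A task that only sleeps; sleep() is backed by a libuv Timer under the hood.
sleeper = Threads.@spawn begin
    t0 = time()
    sleep(1)
    time() - t0              # how long the 1-second sleep actually took
end

busy_wait(5)                 # keep the main thread busy for ~5 seconds
println("sleep(1) took ", round(fetch(sleeper); digits = 2), " s")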

I am very curious how the HTTP.jl benchmarks will look when run on a Julia version with this bug/issue fixed.


A quick update on this. I had another go at my original problem and found a workaround. By explicitly allocating threads to both the :interactive and the :default pool, and spawning the CPU-consuming work on a default-pool thread, I could get it to work as expected:

$ julia --threads=1,1
julia> using HTTP
julia> HTTP.serve!(Returns(HTTP.Response("ok")))
[ Info: Listening on: 127.0.0.1:8081, thread id: 2
# curl http://localhost:8081 -> ok
julia> fib(n) = n <= 2 ? 1 : fib(n - 1) + fib(n - 2)
julia> fetch(Threads.@spawn fib(47))
# curl http://localhost:8081 -> still responds, even while computing

Note the first and last lines of the above code block.
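For completeness, here is a small variant of the same workaround (my own addition, assuming Julia 1.9+, where Threads.@spawn accepts a threadpool argument) that makes the destination pool of the CPU-bound work explicit instead of relying on the default:

# assumes the server and fib from the block above are already defined
julia> Threads.nthreads(:interactive), Threads.nthreads(:default)  # both 1 with --threads=1,1
julia> fetch(Threads.@spawn :default fib(47))  # pin the CPU-bound work to the :default pool
# curl http://localhost:8081 -> still responds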