Say I’m running an HTTP.jl server that serves two endpoints, /one and /two.
Requests to /one start a compute-intensive task asynchronously and return a Response with a 200 status code. This task can be assumed to take a significant amount of time to execute.
/two, on the other hand, is an endpoint that doesn’t start a compute-intensive task; it is used for logging purposes (i.e., the client uses the response, which contains some of the server’s internal state, to perform updates/logging). This response can essentially be returned instantaneously.
using HTTP

const ROUTER = HTTP.Router()

# Handler functions for the two endpoints (sketched below).
HTTP.@register(ROUTER, "POST", "/one", start_compute_intensive_task)
HTTP.@register(ROUTER, "GET", "/two", get_logging_values)
Now, suppose a client makes a POST request to /one, followed by a GET request to /two.
The task started by start_compute_intensive_task essentially takes up all of the compute available to the server, which significantly delays the handling of any requests made to /two.
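If the computation is started with something like @async (as in the sketch above), I believe this happens because the task runs on the same thread as the server’s request handling, and a CPU-bound task never hits a yield point, so every other task on that thread is starved until it finishes. A minimal reproduction outside of HTTP.jl:

t1 = @async begin
    s = 0.0
    for i in 1:10^9
        s += sqrt(i)          # pure computation: no I/O, no yield points
    end
    println("compute task done")
end

t2 = @async println("quick task")   # only prints once t1 has finished

wait(t1); wait(t2)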
Is there any way to dedicate, say, one thread to handling requests made to /two?
Alternatively, is there some other way to ensure that requests to /two are always handled promptly, even when a compute-intensive task started by start_compute_intensive_task is already running?
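For instance, would something along these lines be a reasonable direction? This is only a sketch, assuming Julia is started with multiple threads (e.g. julia -t 4), with run_simulation again standing in for the real work:

function start_compute_intensive_task(req::HTTP.Request)
    # Move the heavy work off the thread that handles incoming requests.
    Threads.@spawn run_simulation()
    return HTTP.Response(200, "started")
end

My understanding is that Threads.@spawn can still schedule the task onto thread 1 (the thread running the server loop), which is why I’m asking whether a thread can be reserved for /two specifically.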