Performance Benchmark of HTTP.jl

Hi,
Does anyone have a proper performance benchmark of HTTP.jl? I have built the same service in 3 different frameworks: GinTonic (Go), HTTP.jl (Julia), and Flask-RESTful (Python). I thought I would get much better performance from Go than from Julia or Python, but it seems the Flask + Gunicorn based API still wins the race. Maybe the app is too simple to compare performance. I deployed them on Google Cloud Run and created a client in Go that sends concurrent POST requests and logs the request time for each request.

(benchmark results image)

From time to time I also got some server log entries in the Google Cloud logs, but I did not know what caused them for Julia or whether they indicate some sort of error.

When running with 2 processors and spawning a separate task per request, the Julia app was crashing at more than 1000 calls. This repository has all the apps, along with the analysis and data. I have removed the actual server endpoints, just to avoid any random requests.

I would try something like this:

```julia
function request_handler(req)
    # The do-block form of Channel receives the channel as its argument
    # and must put! the result into it; spawn=true runs it on a new thread.
    ch = Channel{HTTP.Response}(1; spawn=true) do ch
        obj = HTTP.handle(ROUTER, req)
        put!(ch, HTTP.Response(200, JSON3.write(obj)))
    end
    return take!(ch)
end
```

Granted, that is not going to keep a thread free the way your Worker.jl appears to be trying to do. You might try looking at:

to schedule tasks off the primary thread.

You might also want to use the HTTP.Stream interface of HTTP.jl so you can write the responses inside the spawned task. I believe HTTP.jl hasn’t been updated to run on multiple threads so all the responses are written by the primary thread.
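A minimal sketch of the stream-mode handler might look like the following (this assumes the HTTP.jl 0.9-era streaming API; `process` is a hypothetical stand-in for your actual endpoint logic):

```julia
using HTTP, JSON3

# Sketch: serve in stream mode so the handler decides when and where
# the response bytes get written. `process` is a placeholder, not a
# real HTTP.jl function.
HTTP.serve("0.0.0.0", 8080; stream=true) do stream::HTTP.Stream
    body = String(read(stream))   # read the full request body
    obj = process(body)           # hypothetical application logic
    HTTP.setstatus(stream, 200)
    HTTP.setheader(stream, "Content-Type" => "application/json")
    HTTP.startwrite(stream)
    write(stream, JSON3.write(obj))
end
```

If you want the work itself off the primary thread, you could compute `obj` via `fetch(Threads.@spawn process(body))` inside the handler, while the write calls still happen in the handler's task.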
