Just tested this, and it runs as fast as (if not slightly faster on average than) Python. I modified your example a little to save all the response bytes:
```julia
julia> function test(c::Config)
           # `req` is the raw HTTP GET request string from earlier in the thread
           host = Sockets.getaddrinfo("ipv4.download.thinkbroadband.com")
           data = asyncmap(1:c.ntask) do i
               map(1:c.nbatch) do j
                   s = TCPSocket()
                   Sockets.connect!(s, host, 80)
                   write(s, req)
                   read(s)  # read the full response until the server closes
               end
           end
           reduce(vcat, data)
       end
test (generic function with 1 method)

julia> @time test(Config(100#=ntask=#, 1#=nbatch=#));
  7.527014 seconds (326.37 k allocations: 4.853 GiB, 0.65% compilation time)
```
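For reference, the Python side I am timing against (the `get_items` coroutine in the IPython session below) is roughly the following raw-socket asyncio sketch. The exact code isn't reproduced here, so treat the paths and request details as assumptions on my part:

```python
import asyncio

HOST = "ipv4.download.thinkbroadband.com"

def build_request(path, host=HOST):
    # Minimal raw HTTP/1.1 GET; "Connection: close" lets read(-1) see EOF.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

async def fetch(path):
    reader, writer = await asyncio.open_connection(HOST, 80)
    writer.write(build_request(path))
    await writer.drain()
    data = await reader.read(-1)  # read until the server closes the socket
    writer.close()
    await writer.wait_closed()
    return data

async def get_items(paths):
    # One task per download, all in flight at once (like asyncmap above)
    return await asyncio.gather(*(fetch(p) for p in paths))

# e.g. asyncio.run(get_items(["/5MB.zip"] * 100))
```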
I should add that there is a Python library called uvloop, which provides an asyncio-compatible hook into the libuv event loop. I think I mentioned it earlier, but I tried it again while running these benchmarks and got the same results. This is probably another piece of evidence that the issue lies in HTTP.jl (and perhaps Downloads.jl too, since I am finding it to be just as slow, if not slower) and not in libuv:
```python
In [59]: %time cdatas = asyncio.run(get_items([i for i in range(100)]))
CPU times: user 2.55 s, sys: 1.34 s, total: 3.89 s
Wall time: 7.33 s

In [60]: import uvloop

In [61]: asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

In [62]: %time cdatas = asyncio.run(get_items([i for i in range(100)]))
CPU times: user 1.98 s, sys: 1.35 s, total: 3.34 s
Wall time: 7.35 s
```
For comparison with HTTP.jl:
```julia
julia> @time asyncmap(url -> HTTP.request("GET", "http://ipv4.download.thinkbroadband.com/20MB.zip", status_exception=false).body, 1:100);
 13.986817 seconds (420.91 k allocations: 3.919 GiB, 0.19% compilation time)
```
I think the next step would be to try this with an example that downloads different files over HTTPS, such as the workload I have been looking at so far.