I was wondering how the combination of the OS, libuv, and Julia handles the scenario where a TCP socket is constantly written to but never read from on the peer's side, so I came up with this little test script:
```julia
using Sockets

s = listen(1234)
try
    local c, cs
    @sync begin
        @async cs = accept(s)  # Only to open the connection; the socket is never read
        c = connect(1234)
    end
    try
        i = 0
        data = zeros(UInt8, 1000)
        while true
            write(c, data)  # Repeatedly write to the connection...
            i += 1
            println(i)      # ... and print some output so we can monitor progress.
        end
    finally
        close(cs)
        close(c)
    end
finally
    close(s)
end
```
The output suggests that `write(c, data)` fills up some buffers and then blocks indefinitely once they are full. I can see how that's a sensible default, but for a server handling multiple client connections it seems more appropriate to close the slow connection rather than risk one bad client taking down the whole server. I believe the recommended way to do this in Julia would be something like this:
```julia
using Sockets

s = listen(1234)
try
    local c, cs
    @sync begin
        @async cs = accept(s)
        c = connect(1234)
    end
    try
        i = 0
        data = zeros(UInt8, 1000)
        while true
            t = Timer(_ -> close(c), 1)  # Close the connection if `write` blocks for more than 1s
            write(c, data)
            close(t)
            i += 1
            println(i)
        end
    finally
        close(cs)
        close(c)
    end
finally
    close(s)
end
```
This works as intended on my Mac: once the buffer is full, the script waits for one more second and then closes the connection, which causes the `write()` to fail with an `IOError`. However, this does not seem to work on a Linux machine. After adding some more logging statements, I've concluded that on Linux the `close(c)` itself hangs indefinitely, and with it the `write()` of course also hangs forever. What's the recommended way to achieve the desired behaviour on Linux? In particular, can this be done at the Julia level, or do I need to go down to the libuv level to handle this scenario?
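For context, here is a minimal, self-contained sketch of the same Timer-plus-`close` pattern applied to a blocking *read* instead of a write (the port 2345 and the 1-second timeout are arbitrary choices for illustration, not from the server scenario above). A blocked read has no pending write queue to drain, so `close` returns promptly here on both platforms; this isolates the pattern itself from the Linux close-during-write hang:

```julia
using Sockets

s = listen(2345)
@async accept(s)             # open the connection, but the peer never sends anything
c = connect(2345)

t = Timer(_ -> close(c), 1)  # close the socket if `read` blocks for more than 1s
result = try
    read(c, UInt8)           # blocks: no data will ever arrive
    :read_ok
catch e
    # closing a socket mid-read surfaces as EOFError or IOError, depending on timing
    e isa Union{Base.IOError, EOFError} ? :timed_out : rethrow()
finally
    close(t)
    close(s)
end
println(result)
```

Against that baseline, the write case is the odd one out: the timer fires and calls `close(c)` just the same, but on Linux the close itself blocks.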