Start-up costs when using remotecall_fetch

Hello,

Is there a measure of the start-up costs when using functions such as pmap or remotecall_fetch? By start-up cost I mean the time elapsed between the instant the remotecall_fetch command is called and the instant the worker starts doing its job.

Thanks!

Are you trying to measure network latency? That sounds like an odd number to need. I also suspect it isn’t directly available, because measuring it would require the two machines to have exactly synchronized clocks, which is practically impossible, so your margin of error would be whatever the clock difference between the two machines is.

Your best bet might be requesting a boolean from the other end and dividing the time it takes to retrieve the value by 2; that should be a fair approximation of the delay between making the request and the remote end receiving it. I’d guess the margin of error would be around the same as that of trying to sync the clocks on two machines.
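
A minimal sketch of that round-trip idea, assuming the Distributed standard library and a single worker added locally (for real network latency you would instead add the worker on the remote machine, e.g. `addprocs(["user@remotehost"])`):

```julia
using Distributed

# Assumption: one locally added worker; replace with a remote machine spec
# to measure the latency you actually care about.
nprocs() == 1 && addprocs(1)
w = workers()[1]

# Warm-up call so compilation time is not mistaken for latency.
remotecall_fetch(() -> true, w)

# Half the round-trip time of fetching a trivial value approximates the
# one-way delay between issuing the call and the worker receiving it.
rtt = @elapsed remotecall_fetch(() -> true, w)
println("round trip: ", rtt, " s, one-way estimate: ", rtt / 2, " s")
```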

Yes, that’s correct, I’m interested in network latency. I have some problems that are light to compute, and when using functions such as remotecall_fetch or even @parallel, I was noticing that most of the total time wasn’t spent by the workers doing actual work.
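
One way to see that effect is to time the same cheap function locally and through remotecall_fetch; a rough sketch, assuming a locally added worker and a hypothetical `light` task standing in for the real work:

```julia
using Distributed

# Assumption: one locally added worker stands in for the remote machine.
nprocs() == 1 && addprocs(1)
w = workers()[1]

# Hypothetical cheap task; @everywhere defines it on the worker as well.
@everywhere light(x) = x + 1

# Warm up both paths so compilation time is not counted.
light(1)
remotecall_fetch(light, w, 1)

t_local  = @elapsed light(1)
t_remote = @elapsed remotecall_fetch(light, w, 1)
println("local compute:       ", t_local, " s")
println("remote call + fetch: ", t_remote, " s (the difference is per-call overhead)")
```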

I guess maybe something like threads could work. Thanks!!
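
For light tasks, a threaded loop avoids the per-call serialization and network round trip entirely, since threads share memory within one process. A small sketch, with `light_task` as a hypothetical placeholder for the real per-item work (start Julia with several threads, e.g. `JULIA_NUM_THREADS=4`):

```julia
using Base.Threads

# Hypothetical light per-item task; replace with the real computation.
light_task(x) = x^2 + 1

# Each iteration runs on one of the available threads; no data is
# serialized or sent over the network per task.
function run_threaded(xs)
    out = similar(xs)
    @threads for i in eachindex(xs)
        out[i] = light_task(xs[i])
    end
    return out
end

run_threaded(collect(1.0:1000.0))
```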