Just a further comment about the size of registry transfers. You are talking about hundreds of megabytes, which suggests to me that a full Git copy of the registry is being downloaded (with 1.7, the tarball is only about 3 MB). If you are on 1.6, you could try changing the registry type with registry rm General / registry add General, which should switch the transfer to a tarball of the data only, if I am not mistaken.
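In code, roughly the equivalent Pkg API calls (a sketch of what I mean, assuming the default General registry; the Pkg REPL commands above do the same thing):

```julia
using Pkg

# Remove the existing (unpacked Git) copy of the General registry...
Pkg.Registry.rm("General")

# ...and add it back. On 1.6+ with a package server in use, this should
# fetch the registry as a tarball of the data only rather than a full
# Git clone with history.
Pkg.Registry.add("General")
```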
Not really an edge case, since systems where antivirus software slowly checks every file update have similar characteristics. That is why Julia 1.7 introduces, and defaults to, the option of not unpacking the registry at all. I recommend that you try the Julia 1.7 beta to see if it works better for you. You should make sure to remove your unpacked registry though, e.g. pkg> registry rm General.
That’s the amount of memory allocated by Julia via GC during the timed operation, not the size of anything transferred. Git updates transfer less data than tarball updates because git only sends the changes since the last update (with history, so it’s a bit larger than just sending a patch, but still). The negotiation can take a while but it should not be transferring hundreds of megabytes of data.
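As an illustration (my own sketch, not output from the original post): if you time a registry update, the allocation figure in the @time output is Julia heap allocation, not network traffic.

```julia
using Pkg

# The "X MiB" that @time reports is memory allocated by Julia's GC while
# the update runs, not the amount of data downloaded over the network.
@time Pkg.Registry.update()
```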
Simple: I checked the network data usage on my mobile phone (the network is shared via a Wi-Fi hotspot) before and after the registry update, and it increased by roughly 100 MB. It is good to know that it shouldn’t take that much.
Thanks, @jdad. My registry copy was inherited from Julia v1.0; I wasn’t aware that this could cause an issue. I tried what you suggested, and I will check the network usage for future registry updates.
Yes, I have an unpacked Git repo, and I’m also surprised that the update used so much traffic; I agree it shouldn’t. I’m hoping that it was just a local quirk and that the solution suggested by @jdad, i.e. removing and re-adding the General registry, will solve it.
If the last update was a very long time in the past then the incremental update might have been large. Frequent incremental updates should not be that large.
My post was mainly about the time (about 8 seconds compared to frederik’s 0.04 seconds, albeit on a different Julia version). I think my last fetch was only a few days ago.
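If you want to check when your registry was last fetched, here is a rough sketch (my own, assuming an unpacked Git registry in the default depot; the packed registry from 1.7 has no .git directory):

```julia
using Dates

# The modification time of .git/FETCH_HEAD is a reasonable proxy for the
# last time the registry was fetched from the server.
reg = joinpath(first(DEPOT_PATH), "registries", "General")
fetch_head = joinpath(reg, ".git", "FETCH_HEAD")
if isfile(fetch_head)
    println("last fetch (UTC): ", unix2datetime(mtime(fetch_head)))
else
    println("no FETCH_HEAD found; the registry may be packed or never fetched")
end
```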
Personally I don’t see why I should care at all about the data transferred.
Off-topic: is there a good technical explanation somewhere online for why Windows filesystem operations are so slow? I don’t understand why this has been tolerated for so long.
I think the basic answer is that NTFS is about 25 years old. Mac got HFS+ in 1998 and APFS in 2017, and Linux got ext3, ext4, and now ZFS. Windows just never got a good file system (probably for backwards-compatibility reasons). OpenZFS does run on Windows though, so I have some hope they will switch to it over time, but that’s fairly optimistic on my part.
…
While I didn’t realize it at the time, the cause of this was (and is) Windows Defender. Windows Defender (and other anti-virus / scanning software) typically works on Windows by installing what’s called a filesystem filter driver. This is a kernel driver that essentially hooks itself into the kernel and receives callbacks on I/O and filesystem events. It turns out the file-close callback triggers scanning of the written data, and this scanning appears to occur synchronously, blocking CloseHandle() from returning. This adds milliseconds of overhead per file. The net effect is that the throughput of file-mutation I/O on Windows is drastically reduced by Windows Defender and other A/V scanners.
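To get a feel for this, here is a small timing sketch (my own illustration, not from the quoted post): write and close many small files and look at the average cost per file. With real-time scanning enabled, most of the time is typically spent in the synchronous scan triggered when each file handle is closed.

```julia
# Write and close many small files and report the average cost per file.
# On Windows with real-time A/V scanning enabled, the close step tends to
# dominate because the filter driver scans each file synchronously.
function avg_file_time(dir::AbstractString, n::Integer)
    t = @elapsed for i in 1:n
        open(joinpath(dir, "file_$i.tmp"), "w") do io
            write(io, "some data")
        end  # the implicit close() here is where the filter driver hooks in
    end
    return t / n
end

mktempdir() do dir
    avg = avg_file_time(dir, 500)
    println("average per-file time: ", round(avg * 1e6, digits = 1), " μs")
end
```

On a system without aggressive scanning this is typically tens of microseconds per file; with scanning it can easily reach milliseconds, which matches the overhead described above.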
As far as I can tell, as long as Windows Defender (and presumably other A/V scanners) are running, there’s no way to make the Windows I/O APIs consistently fast.