"Continuous" logging



Hi everyone,

I am facing some issues with writing into log files in Julia. I am currently running Julia on a computer cluster (w/ Slurm). I have a relatively large file that runs in parallel across several nodes for a while, and prints convergence diagnostics after each iteration.
One problem that I am facing is that the log file only seems to be written every time the log file attains a size that is a multiple of 128 kb. That is, after I start the job, the log file seems to remain empty for a while. Then, once it reaches a size of 128kb, it gets updated and prints all the diagnostics for the iterations finished thus far. Then, it just remains at 128 kb for a while, until it reaches 256 kb and it updates again, etc. In short, Julia does not write continuously into the log file (as it would do at the REPL for example)
This is somewhat annoying, since it is often important for me to track the diagnostics for the first few iterations. While I am not sufficiently qualified to exclude the possibility, I think this is not a Slurm/cluster problem (but rather a Julia thing), since MATLAB does log continuously when used in the same cluster.
Has this happened to anyone else? Or do you think the problem is mine/Slurm? Thanks in advance!


I think calling flush on your stream after writing each log message should help.
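For example, a minimal sketch (the file name and the diagnostic values are made up):

```julia
# Write one diagnostic line per iteration and flush after each write,
# so the output reaches the file immediately instead of waiting for
# the 128 KB buffer to fill up.
open("convergence.log", "w") do io
    for iter in 1:10
        println(io, "iter $iter: residual = $(1.0 / iter)")
        flush(io)  # force the buffered output to disk now
    end
end
```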


Wow, thanks a lot! This did help, and some more research based on your suggestion led me to conclude that simply adding “srun” before the Julia invocation in the batch script also solves the problem. Thanks again!


Whoa… sorry for the question but: how come this works?


You mean flush()? There is a buffer which collects data, and only when the buffer reaches a predefined limit (in this case 128 KB) is the output flushed through the I/O stream into the file. You can force this at any time with flush().


srun creates a resource allocation in which to run the parallel job, and it probably flushes the output automatically.
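For reference, the fix on the Slurm side looks roughly like this (the job name and script name are hypothetical):

```shell
#!/bin/bash
#SBATCH --job-name=convergence   # hypothetical job name
#SBATCH --nodes=4

# Launching Julia through srun (rather than invoking it directly)
# makes the diagnostics appear in the log file continuously.
srun julia myscript.jl
```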


Thanks a million peeps!