Update stdout while a function is running

I’d like to print to stdout continually while a function is running in parallel. It seems that this isn’t possible with asynchronous programming alone, since the scheduler works by interrupting and resuming tasks that run one at a time. I believe it should be possible with either multi-threading or multi-processing, but I haven’t had success. I think the main issues are that my tasks are not uniform and that I don’t want one to wait for the other. The function itself won’t necessarily be possible to break into segments.

Here is my best attempt with Distributed.jl:

# Add a worker
(w,) = addprocs(1)

# Assign work
@everywhere function f()
        sleep(1)
        s = BigInt(999)^10_000_000 % 17
        sleep(1)
        return s
end

r = remotecall(f, w)

while !isready(r)
        print("_")
        flush(stdout)
        sleep(0.1)
end

println()
println(fetch(r))

rmprocs(w)

Here, text is only printed while the worker is sleeping. Do both tasks need to be performed by workers for them to be run in parallel?

My goal is to draw a spinner on the command line that terminates when a command is finished running. ProgressMeter.jl has a related feature, but to my understanding it is not done in parallel. Instead, the spinner has to be updated and redrawn at points within the function. Ideally, I want to draw a spinner that runs during a function without modification. I’d be grateful for any direct answers as well as for ideas on different approaches.
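As a rough sketch (not from this thread, and assuming Julia is started with at least two threads, e.g. `julia -t 2`, so a compute-bound function can't starve the spinner), the work can run on a separate thread while the main task animates a spinner until istaskdone reports completion:

```julia
# Hypothetical sketch: run `f` on another thread and animate a spinner
# on the main task until the work finishes. With only one thread, a
# compute-bound `f` would starve the spinner loop.
function with_spinner(f)
    task = Threads.@spawn f()
    frames = ['\\', '|', '/', '-']
    i = 1
    while !istaskdone(task)
        print('\r', frames[i])   # redraw the current frame in place
        flush(stdout)
        sleep(0.1)               # pace the animation; also yields to the task
        i = mod1(i + 1, length(frames))
    end
    print("\r \r")               # erase the spinner
    return fetch(task)           # wait for and return the result
end

with_spinner(() -> (sleep(1); 42))
```

fetch both waits for the task and returns its result (or rethrows its exception), so the caller's code doesn't need any modification.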

I modified an example, replacing r = remotecall(f, w) with the following:

r = Future()
@async put!(r, remotecall_fetch(f, w))

This seems to work as expected. I had struggled with this issue for a week, but I guess all it needed was some rubber duck debugging :duck:

Slightly related: I just learned about GitHub - AshlinHarris/Spinners.jl: Command line spinners in Julia with decent Unicode support, and remembered your question here. I had tried to solve it but failed because isready always blocked, no matter what I did; even your solution didn’t work for me.

See also yield() (which is better for this than sleep()).
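A tiny sketch of the difference (the task and timing below are made up for illustration): yield() hands control to the scheduler immediately, so a waiting loop can poll cooperatively without the fixed minimum delay that sleep() imposes:

```julia
# Illustrative sketch: busy-wait on a task using yield() so the task
# (and the event loop) can make progress between polls.
function busy_wait_demo()
    t = @async (sleep(0.5); :done)   # stand-in for real work
    while !istaskdone(t)
        yield()   # hand control to the scheduler instead of sleeping
    end
    return fetch(t)
end

busy_wait_demo()
```

For an animation, sleep(0.1) between frames is still sensible; yield() matters when you want to poll as often as possible without blocking other tasks.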

Thanks for looking over it! I left out the using Distributed statement, and there could be additional issues. Here is a more complete example that shows the difference between the methods:

using Distributed

println("Setting up a new worker process...")
(w,) = addprocs(1)

# Assignment for worker
@everywhere function f()
        sleep(1)
        s = BigInt(999)^10_000_000 % 17
        sleep(1)
        return s
end

# Work for main process
function print_continually(r)
        while !isready(r)
                print("_")
                flush(stdout)
                sleep(0.1)
        end
end

# Notice the gap during calculation, ...
print("Method 1: ")
r1 = remotecall(f, w)
print_continually(r1)
#println(fetch(r1))
println()

sleep(1)

# ..., but this version has no gap
print("Method 2: ")
r2 = Future()
@async put!(r2, remotecall_fetch(f, w))
print_continually(r2)
#println(fetch(r2))
println()

# End
rmprocs(w)

The idea is that the main process stops printing while the worker is calculating with Method 1, but not with Method 2.

Great advice! Here, I’m using sleep in the print cycle to space out frames of an animation, essentially. The call to sleep by the worker is just there for debugging - it helped show me whether tasks are truly concurrent, or just asynchronous.

The problem is that isready blocks, even if you do it as described in the docs:

help?> isready
  ...
  isready(rr::Future)

  Determine whether a Future has a value stored to it.

  If the argument Future is owned by a different node, this call will block to wait for the answer. It is recommended
  to wait for rr in a separate task instead or to use a local Channel as a proxy:

  p = 1
  f = Future(p)
  errormonitor(@async put!(f, remotecall_fetch(long_computation, p)))
  isready(f)  # will not block

The comment

# will not block

did not hold in any of my attempts.
What worked for me was the loop

while isnothing(r.v)
    ...
end

instead of while !isready(r), but clearly this is not how it should work.
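For what it’s worth, the Channel-as-proxy pattern the docstring suggests can be sketched like this (the worker function g and its timings are placeholders); isready on a local Channel involves no remote node, so it should not block:

```julia
using Distributed

# Placeholder worker function; the sleep and the arithmetic stand in
# for real work.
w = addprocs(1)[1]
@everywhere g() = (sleep(1); BigInt(999)^1_000 % 17)

# Local Channel as a proxy for the remote result.
done = Channel{Any}(1)
@async put!(done, remotecall_fetch(g, w))

while !isready(done)   # isready on a local Channel never blocks
    print("_")
    flush(stdout)
    sleep(0.1)
end

result = take!(done)
println('\n', result)
rmprocs(w)
```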

EDIT: The spinner no longer terminates on Windows, regardless of Julia version. After kill(process), the process status remains permanently as ProcessSignaled(15) and never transitions to ProcessExited. Everything worked a few weeks ago…

It looks like running the task as an external program is the best approach, in terms of simplicity and performance:

function spinner()
        local p # Process
        try
                # Generate the spinner command
                c = "while true;" *
                "for i in \"\\\\|/-\";" *
                "print(\"\\b\$i\");" *
                "sleep(0.1);" *
                "end;" *
                "end"

                # Display the spinner as an external program
                p = run(pipeline(`julia -e $c`, stdout), wait=false)

                # Do some actual work
                s = 0
                for i in 10:17
                        s += BigInt(999)^10_000_000 % 17
                end
                return s

        finally
                # Signal the external program to end
                kill(p)
                print("\b")
        end
end

# println(spinner()) # The process no longer terminates on Windows!

I’m grateful to the Julia community for all the help!

I finally hacked together something that updates a spinner while calculations are done and closes the process (by sending a character to its stdin):

command = "t=Threads.@async read(stdin, Char);while !istaskdone(t);for q=['\\\\','|','/','-'];print(q);sleep(0.1);print('\b')end;end;exit()"
proc_input = Pipe()
proc = run(pipeline(`julia -e $command`, stdin = proc_input, stdout = stdout, stderr = stderr), wait = false)
sum(map(i->BigInt(999)^10_000_000 % i, 1:10)); # Do some calculations
write(proc_input,'c') #Signal the spinner process to stop
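One possible way to package this pattern (the wrapper name with_external_spinner is my own, and I’ve used a plain @async plus explicit semicolons in the subcommand) is to run the work in a try block and signal the spinner process in finally, so it stops even if the work throws:

```julia
# Hypothetical wrapper: animate a spinner in a subprocess while `work`
# runs, then stop it by writing a character to the subprocess's stdin.
function with_external_spinner(work::Function)
    cmd = "t = @async read(stdin, Char); " *
          "while !istaskdone(t); " *
          "for q in ['\\\\','|','/','-']; print(q); sleep(0.1); print('\b'); end; " *
          "end; exit()"
    input = Pipe()
    proc = run(pipeline(`julia -e $cmd`, stdin = input, stdout = stdout), wait = false)
    try
        return work()
    finally
        write(input, 'c')   # any character unblocks the read and ends the loop
        wait(proc)          # let the spinner process exit cleanly
    end
end
```

For example, with_external_spinner(() -> sum(i -> BigInt(999)^10_000 % i, 2:10)) would show the spinner while the sum is computed.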