Calling a function at a given time?


#1

Is there a way to do this in Julia that does not poll excessively?

I know when the next transition to/from (local) standard time to savings time will occur. At that time, I would like to adjust the content of const offset_from_ut[1] accordingly. Is there a portable way to handle this that does not require more than the occasional time check? If so, can that check be automated portably?
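One portable approach, assuming the next transition time can be computed in advance, is a one-shot Base Timer, which fires a callback after a delay without any polling. This is only a sketch; call_at and the idea of chaining the next schedule inside the callback are my own names, not an established API:

using Dates

# Hypothetical sketch: run `f` once at the DateTime `when`, without polling.
function call_at(f, when::DateTime)
    delay = (when - now()).value / 1000   # Millisecond period -> seconds
    delay <= 0 && return f()              # the moment has already passed
    Timer(_ -> f(), delay)                # one-shot timer; keep the returned Timer alive
end

Inside the callback you could update offset_from_ut[1] and then call call_at again with the following transition, so each firing schedules the next.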


#2

I am not a Julia guru (yet :wink:) but this sort of construct would work in other languages and I see no reason why it wouldn’t be translatable into Julia.

You know the current time and you know when you want the function to be executed. Compute the time offset between those two points (in milliseconds, for example). Then you could @spawn a function which starts by sleeping (sleep()) for the computed number of milliseconds and then executes the task. The return value can be acquired using the fetch() function.

Hope it works for you!


#3

Thank you for the information. I gave it a go:

import Base.Dates: Time

function then(secs)
    sleep(secs)
    return now()
end

# warm up is helpful before timing
now(), fetch(@spawn then(1)); 

secs = 10
called_at = now();
returned_at = fetch(@spawn then(secs));

sleeping_for = returned_at - called_at # in milliseconds
secs, millisecs = fldmod(sleeping_for.value, 1000)
asleep = string( Time(0, 0, secs, millisecs) )

slept = string(asleep[7:8], 's', asleep[10:end])
println("sleep($(secs)) really slept $(slept)")
# sleep(10) really slept 10s074

I read that sleep() leaves the process free to do work, but there seems to be uncertainty about higher-resolution (and more stable) sleep()s. I have not been able to find a definitive answer as to which flavors of sleeping release their process/thread so other work may occur.
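For what it's worth, my understanding is that Base's sleep() suspends only the current task and yields to the scheduler, while Libc.systemsleep() blocks the whole thread. A small illustration of the cooperative case (a sketch, not a benchmark):

# A background task makes progress while the main task is in sleep(),
# because sleep() yields to the task scheduler.
ticks = Ref(0)
bg = @async for _ in 1:5
    ticks[] += 1
    sleep(0.01)
end
sleep(0.2)    # cooperative sleep: bg runs during this interval
wait(bg)
@show ticks[] # 5 once bg has finished

Replacing the inner sleep(0.01) with Libc.systemsleep(0.01) would stall the scheduler for the duration of each call instead.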


#4

The offset you’re seeing is likely caused by the overhead of interprocess communication (plus any other overhead). If you need higher granularity, a possible workaround could be to return from sleeping early and poll continuously for the remainder.
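A minimal sketch of that workaround (the margin value here is a guess and would need tuning per system):

# Sleep for most of the interval, then busy-poll the clock for the rest.
function sleep_then_poll(secs; margin = 0.02)
    deadline = time() + secs
    secs > margin && sleep(secs - margin)  # coarse, task-friendly sleep
    while time() < deadline end            # short busy wait for precision
end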


#5

That comports with additional timing. I find that increasing the number of secs in sleep(secs) does not materially change the offset amount: sleep(25) ==> 25s044, sleep(100) ==> 1m40s041.


#6

@JeffreySarnoff
Try calling the following function: hybrid_sleep()
It should be spot on, with only a brief busy-wait at the very end.

# ============================================================================================
function hybrid_sleep(sleep_time::Float64, threshold::Float64 = .0175)
  # ============================================================================================
  # accurately sleep for sleep_time secs
  # ============================================================================================
  #  if sleep_time is <= threshold, busy-wait for all of sleep_time
  #  if sleep_time is >  threshold, hybrid sleep as follows:
  #     1) actual_sleep = max(.001, sleep_time - threshold)
  #     2) Libc.systemsleep(actual_sleep)
  #     3) busy-wait the remaining time when 2) completes
  # -------------------------------------------------------------------------

  tics_per_sec = 1_000_000_000.  #-- number of tics in one sec
  nano1 = time_ns()                            #-- beginning nano time
  nano2 = nano1 + (sleep_time * tics_per_sec)  #-- final nano time
  min_actual_sleep = .001   #-- do not let the actual sleep be less than this value
  max_sleep = 86_400_000.   #-- maximum allowed sleep_time parameter (1000 days in secs)
  min_sleep = .000001000    #-- minimum allowed sleep_time parameter (1 microsec)

  #-- verify that sleep_time is within limits
  if sleep_time < min_sleep
    @printf("parameter error:  sleep_time => %10.8f is less than %10.8f secs!!\n", sleep_time, min_sleep)
    println("hybrid_sleep aborted ==> specified sleep time is too low!")
    sleep(2.)     #-- pause so the message can be seen
    return -1.0   #-- parameter-error negative return
  end

  if sleep_time > max_sleep
    @printf("parameter error:  sleep_time => %12.1f is greater than %10.1f secs!!\n", sleep_time, max_sleep)
    println("hybrid_sleep aborted ==> specified sleep time is too high!")
    sleep(2.)     #-- pause so the message can be seen
    return -2.0   #-- parameter-error negative return
  end

  #------ actual sleep
  if sleep_time > threshold  #-- sleep only if above threshold
    #-- actual_sleep_time must be at least min_actual_sleep
    actual_sleep_time = max(min_actual_sleep, sleep_time - threshold)
    Libc.systemsleep(actual_sleep_time)
  end

  #------ final busy wait
  nano3 = time_ns()  #-- interim nano time for the while loop
  while nano3 < nano2
    nano3 = time_ns()
  end

  seconds_over = (nano3 - nano2) / tics_per_sec  #-- seconds by which sleep_time was exceeded
  return seconds_over
end #-- end of hybrid_sleep

You call it as in the following examples:

hybrid_sleep(25.)    #-- hybrid sleep for 25 seconds - note that 25. is a Float64, not an integer
hybrid_sleep(100.)   #-- hybrid sleep for 100 seconds

I am not sure whether you are on Windows or Linux.
The second parameter of the function defaults to .0175 seconds, which is appropriate for Windows.
You can experiment by trying different thresholds as follows:

hybrid_sleep(25., .0065)  #-- maybe better for your operating system - who knows?

Let me know if this is any more accurate.
This is a hybrid solution to sleep that uses the best of sleeping and busy waiting.
…Archie


#7

way … nice of you


#8

Are you on Linux or Windows?


#9

I am on both. The purpose is to have a portable way to do this within a more general temporal context. It is OK if there is a pre-use setup that establishes some constant appropriate to the host system … if they vary by more than just the OS.
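A pre-use setup along those lines might measure the host's systemsleep overshoot once at startup; estimate_threshold below is a hypothetical sketch of that calibration, not part of the posted code:

# Estimate a per-host threshold by measuring how far Libc.systemsleep(0.001)
# overshoots its requested duration; use the worst observed overshoot.
function estimate_threshold(samples::Int = 50)
    worst = 0.0
    for _ in 1:samples
        t0 = time_ns()
        Libc.systemsleep(0.001)
        overshoot = (time_ns() - t0) / 1e9 - 0.001
        worst = max(worst, overshoot)
    end
    return worst
end

The result could then be passed as the second argument to the hybrid sleep function on that machine.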


#10

I happen to be on Win now, so that is where I tried the code.

For timing this way (which is how most would):
a=now(); hybrid_sleep(<float secs>); b=now(); b-a
the alignment is surprisingly good.

For timing this way:

nano_per_sec = 1_000_000_000
secs_nanos(nanosecs) = fldmod(nanosecs, nano_per_sec)
a=time_ns(); hybrid_sleep(<float secs>); b=time_ns();
secs_nanos(b-a)

there is jitter; nonetheless it is much smaller than with sleep().
Can that be compensated, or is it from some other source?
Just knowing the bound on that jitter would work.


#11

@JeffreySarnoff
The jitter you are seeing in the second way of timing is related to the timing process itself.

You should use BenchmarkTools to properly time things in Julia. It correctly accounts for the overheads and various internal processes involved in calling functions.

You would use either @benchmark or @btime as follows:
@benchmark for times of 1 second or less
@btime for times greater than 1 second

using BenchmarkTools    #-- I assume you have already added the package BenchmarkTools
@benchmark hybrid_sleep(1.)
@benchmark hybrid_sleep(.1)
@benchmark hybrid_sleep(.01)
@benchmark hybrid_sleep(.001)
@benchmark hybrid_sleep(.0001)
@benchmark hybrid_sleep(.00001)
@benchmark hybrid_sleep(.000001)   #-- the minimum sleep time allowed
@btime hybrid_sleep(2.)
@btime hybrid_sleep(10.)
@btime hybrid_sleep(25.)
@btime hybrid_sleep(100.)

All of these examples should time with virtually no jitter.
Let me know whether this is the case for you.

I am working on making this a Julia package so the community can easily use the increased sleeping accuracy.

Please keep in mind that for sleep times below the threshold of .0175 seconds you will be busy waiting which burns up CPU cycles. With sleep times above .0175 seconds you mostly use true sleeping which does not busy wait.

BTW, I am on Windows 10. I have run this code on JuliaBox, where the threshold can be set at .0025 seconds, which means less busy waiting. I assume this is because Linux handles the sleep function with less jitter.


#12

@JeffreySarnoff

The code block below shows the results of timing:

  • Libc.systemsleep()
  • sleep_ns with @benchmark macro
  • sleep_ns() with a simulate program that I wrote

Note the extreme jitter in the first instance, which does not appear in the results for sleep_ns.

@linuslagerhjelm
Your thought about sleeping and then a final poll is exactly how sleep_ns functions!

This is only a thought, but many of the (parallel, spawning, multi-threading) algorithms in Julia are based on loops of either sleep() or Libc.systemsleep(), which have large jitter. Maybe some of these polling functions could be replaced with a single-line substitution of sleep_ns().

The results are pasted below.


julia> using BenchmarkTools

julia> using AccurateSleep
now() = 2017-09-10T18:16:19.99
sleep_threshold = 0.0171

julia> sleep_ns(.5)  #-- warmup
2.05e-7

julia> #-- benchmark Libc.systemsleep to show the jitter

julia> @benchmark Libc.systemsleep(.001)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     1.031 ms (0.00% GC)
  median time:      2.076 ms (0.00% GC)
  mean time:        7.396 ms (0.00% GC)
  maximum time:     17.104 ms (0.00% GC)
  --------------
  samples:          675
  evals/sample:     1

julia> @benchmark Libc.systemsleep(.01)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     10.056 ms (0.00% GC)
  median time:      15.589 ms (0.00% GC)
  mean time:        15.074 ms (0.00% GC)
  maximum time:     26.210 ms (0.00% GC)
  --------------
  samples:          331
  evals/sample:     1

julia> @benchmark Libc.systemsleep(.1)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     100.038 ms (0.00% GC)
  median time:      109.308 ms (0.00% GC)
  mean time:        107.437 ms (0.00% GC)
  maximum time:     115.609 ms (0.00% GC)
  --------------
  samples:          47
  evals/sample:     1

julia> @benchmark Libc.systemsleep(1.0)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     1.001 s (0.00% GC)
  median time:      1.004 s (0.00% GC)
  mean time:        1.006 s (0.00% GC)
  maximum time:     1.012 s (0.00% GC)
  --------------
  samples:          5
  evals/sample:     1

julia> @benchmark Libc.systemsleep(10.0)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     10.000 s (0.00% GC)
  median time:      10.000 s (0.00% GC)
  mean time:        10.000 s (0.00% GC)
  maximum time:     10.000 s (0.00% GC)
  --------------
  samples:          1
  evals/sample:     1

julia> #-- time sleep_ns using the BenchmarkTools macro

julia> @benchmark sleep_ns(.000001)
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     1.233 μs (0.00% GC)
  median time:      1.233 μs (0.00% GC)
  mean time:        1.240 μs (0.00% GC)
  maximum time:     3.123 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     10

julia> @benchmark sleep_ns(.0001)
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     100.266 μs (0.00% GC)
  median time:      100.267 μs (0.00% GC)
  mean time:        100.342 μs (0.00% GC)
  maximum time:     160.673 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark sleep_ns(.001)
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     1.000 ms (0.00% GC)
  median time:      1.000 ms (0.00% GC)
  mean time:        1.000 ms (0.00% GC)
  maximum time:     1.560 ms (0.00% GC)
  --------------
  samples:          4996
  evals/sample:     1

julia> @benchmark sleep_ns(.01)
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     10.000 ms (0.00% GC)
  median time:      10.001 ms (0.00% GC)
  mean time:        10.001 ms (0.00% GC)
  maximum time:     10.008 ms (0.00% GC)
  --------------
  samples:          500
  evals/sample:     1

julia> @benchmark sleep_ns(.1)
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     100.003 ms (0.00% GC)
  median time:      100.003 ms (0.00% GC)
  mean time:        100.003 ms (0.00% GC)
  maximum time:     100.010 ms (0.00% GC)
  --------------
  samples:          50
  evals/sample:     1

julia> @benchmark sleep_ns(1.0)
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     1.000 s (0.00% GC)
  median time:      1.000 s (0.00% GC)
  mean time:        1.000 s (0.00% GC)
  maximum time:     1.000 s (0.00% GC)
  --------------
  samples:          6
  evals/sample:     1

julia> @benchmark sleep_ns(10.0)
BenchmarkTools.Trial:
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     10.000 s (0.00% GC)
  median time:      10.000 s (0.00% GC)
  mean time:        10.000 s (0.00% GC)
  maximum time:     10.000 s (0.00% GC)
  --------------
  samples:          1
  evals/sample:     1

julia> #-- time sleep_ns using a simulate function

julia> AccurateSleep.simulate(.000001, 1_000_000)

================================================================
sleep_time (secs) => 0.000001
threshold (secs)             => 0.017100
number of samples            => 1000000
 ... generating sleep_ns samples - please wait...
 ...results for simulation...
-------------------------------------------
maximum_time   => 0.000162318
-------------------------------------
quantile .999  => 0.000002054
quantile .990  => 0.000001233
quantile .950  => 0.000001233
quantile .900  => 0.000001233
quantile .750  => 0.000001233
quantile .500  => 0.000001233
quantile .250  => 0.000001233
quantile .100  => 0.000001232
mean_time      => 0.000001240
minumum_time   => 0.000001232
------------------------------------------

julia> AccurateSleep.simulate(.0001, 10_000)

================================================================
sleep_time (secs) => 0.000100
threshold (secs)             => 0.017100
number of samples            =>  10000
 ... generating sleep_ns samples - please wait...
 ...results for simulation...
-------------------------------------------
maximum_time   => 0.000239983
-------------------------------------
quantile .999  => 0.000117117
quantile .990  => 0.000100678
quantile .950  => 0.000100267
quantile .900  => 0.000100267
quantile .750  => 0.000100267
quantile .500  => 0.000100267
quantile .250  => 0.000100267
quantile .100  => 0.000100267
mean_time      => 0.000100341
minumum_time   => 0.000100266
------------------------------------------

julia> AccurateSleep.simulate(.001, 1_000)

================================================================
sleep_time (secs) => 0.001000
threshold (secs)             => 0.017100
number of samples            =>   1000
 ... generating sleep_ns samples - please wait...
 ...results for simulation...
-------------------------------------------
maximum_time   => 0.001102525
-------------------------------------
quantile .999  => 0.001054905
quantile .990  => 0.001001436
quantile .950  => 0.001000614
quantile .900  => 0.001000204
quantile .750  => 0.001000204
quantile .500  => 0.001000203
quantile .250  => 0.001000203
quantile .100  => 0.001000203
mean_time      => 0.001000504
minumum_time   => 0.001000203
------------------------------------------

julia> AccurateSleep.simulate(.01, 100)

================================================================
sleep_time (secs) => 0.010000
threshold (secs)             => 0.017100
number of samples            =>    100
 ... generating sleep_ns samples - please wait...
 ...results for simulation...
-------------------------------------------
maximum_time   => 0.010013129
-------------------------------------
quantile .999  => 0.010013007
quantile .990  => 0.010011908
quantile .950  => 0.010001623
quantile .900  => 0.010001213
quantile .750  => 0.010000801
quantile .500  => 0.010000391
quantile .250  => 0.010000390
quantile .100  => 0.010000390
mean_time      => 0.010000929
minumum_time   => 0.010000390
------------------------------------------

julia> AccurateSleep.simulate(.1, 10)

================================================================
sleep_time (secs) => 0.100000
threshold (secs)             => 0.017100
number of samples            =>     10
 ... generating sleep_ns samples - please wait...
 ...results for simulation...
-------------------------------------------
maximum_time   => 0.100001439
-------------------------------------
quantile .999  => 0.100001439
quantile .990  => 0.100001439
quantile .950  => 0.100001439
quantile .900  => 0.100001438
quantile .750  => 0.100001335
quantile .500  => 0.100001028
quantile .250  => 0.100001027
quantile .100  => 0.100000986
mean_time      => 0.100001110
minumum_time   => 0.100000616
------------------------------------------

julia> AccurateSleep.simulate(1.0, 5)

================================================================
sleep_time (secs) => 1.000000
threshold (secs)             => 0.017100
number of samples            =>      5
 ... generating sleep_ns samples - please wait...
 ...results for simulation...
-------------------------------------------
maximum_time   => 1.000001643
-------------------------------------
quantile .999  => 1.000001640
quantile .990  => 1.000001610
quantile .950  => 1.000001479
quantile .900  => 1.000001315
quantile .750  => 1.000000822
quantile .500  => 1.000000822
quantile .250  => 1.000000822
quantile .100  => 1.000000822
mean_time      => 1.000000986
minumum_time   => 1.000000822
------------------------------------------

julia> AccurateSleep.simulate(10., 3)

================================================================
sleep_time (secs) => 10.000000
threshold (secs)             => 0.017100
number of samples            =>      3
 ... generating sleep_ns samples - please wait...
 ...results for simulation...
-------------------------------------------
maximum_time   => 10.000000822
-------------------------------------
quantile .999  => 10.000000822
quantile .990  => 10.000000822
quantile .950  => 10.000000822
quantile .900  => 10.000000822
quantile .750  => 10.000000822
quantile .500  => 10.000000821
quantile .250  => 10.000000616
quantile .100  => 10.000000493
mean_time      => 10.000000685
minumum_time   => 10.000000411
------------------------------------------

I should add that the sleep_ns() function is identical to the hybrid_sleep() function that I posted yesterday.


#13

good


#14

Cool stuff people!