Delay in processing

I have a routine that has about a 3 second delay in it for some reason. Essentially, the code goes:

  1. Do some initialization stuff: create three arrays of 1920 elements each (two Char, one structure) and initialize them to blanks and 0’s.
  2. while true
    display some data,
    read some input
    if input is quit, exit, otherwise process input
    end
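
In Julia terms, the skeleton looks roughly like this (a minimal sketch; `Cell`, `display_data`, `read_input`, and `process_input` are placeholders for my actual types and functions, with stubs so it runs on its own):

    # Placeholder skeleton, not the real program
    struct Cell                          # stand-in for the actual structure type
        attr::Char
        val::Int
    end

    display_data(a, b, c) = nothing      # stub: real version draws via ncurses
    read_input() = readline()            # stub: real version reads via ncurses
    process_input(s) = nothing           # stub: real version handles the input

    function main()
        # 1. Initialization: three 1920-element arrays
        chars1 = fill(' ', 1920)                  # Char array of blanks
        chars2 = fill(' ', 1920)                  # Char array of blanks
        cells  = [Cell(' ', 0) for _ in 1:1920]   # structure array of blanks/zeros

        # 2. Main loop; the ~3 second gap shows up right before the first iteration
        while true
            display_data(chars1, chars2, cells)
            input = read_input()
            input == "quit" && return
            process_input(input)
        end
    end

    main()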

There is a significant delay (about 3 seconds) between the completion of the last initialization step and the first step inside the while. I have run a lot of tests to try to isolate where the delay is occurring but haven’t found anything yet.

I use the library functions from libNCurses.jl (to access ncurses) and readdlm.jl, but these don’t seem to be the issue.

Anybody have a suggestion as to what might be going on? Could it be JIT compilation?

Thanks

Please read: make it easier to help you

Is this all done in global scope, or in one or more functions?

Can you do something like:

    using BenchmarkTools
    @benchmark YOUR_INITIALIZATION_STEP()

and give us the output, in case you can’t provide an MWE (see the PSA above).
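
Also, since you mention JIT: if you can wrap the post-initialization work in a function (say `first_loop_step`, a hypothetical name here), timing two consecutive calls with `@time` separates compilation from runtime, because the first call includes compilation (recent Julia versions even print the share of time spent compiling):

    # hypothetical: first_loop_step() is whatever runs first inside the while
    @time first_loop_step()   # first call: includes JIT compilation
    @time first_loop_step()   # second call: pure runtime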

There are two main modules, with multiple functions in each. Variables are, for the most part, local, but the arrays are global. The two modules total about 2800 lines.
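
Since the arrays are global, one thing I can try on my side is declaring them `const` so their types are fixed; untyped globals are a known slow spot in Julia. A sketch with placeholder names, not my actual variables:

    # const globals have a fixed type, so functions that use them
    # avoid dynamic lookups (placeholder names)
    const SCREEN_CHARS = fill(' ', 1920)
    const ATTR_CHARS   = fill(' ', 1920)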

Thanks, I will check out BenchmarkTools.

I have tried stripping out initialization function calls to make an MWE and cannot see any change.

Maybe also take a look at profiling; that might reveal where the delay happens.
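
For a one-off delay, the Profile standard library should do; something like this, where `first_loop_step` is again a stand-in for whatever runs first inside the while:

    using Profile

    @profile first_loop_step()   # sample the call that shows the delay
    Profile.print()              # print the sampled call tree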

Could you maybe substitute some dummy op for the last initialization step and for the first while iteration?

I put a logging function immediately before and after the while statement. The logging function uses a named pipe to route messages to another function that, in turn, displays them on a console log. The named pipe is opened at the beginning of the program.

I have embedded logging calls in each of the initialization functions, and they all display their messages immediately.
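
To narrow the gap down further, I am also adding timestamps to the log messages around the suspect region, roughly like this (`log_msg` stands in for my named-pipe logger; here it just prints):

    log_msg(s) = println(stderr, s)   # stand-in for the named-pipe logger

    function run_loop()
        log_msg("before while: $(time())")
        while true
            log_msg("top of loop: $(time())")   # gap from the line above ≈ the delay
            break                               # placeholder for the real loop body
        end
    end

    run_loop()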

I am checking out profiling at the moment to see if I can get a clue.