this is my first post, please be kind.
I want to test hypotheses about where allocations happen and where time is spent in my functions. I imagine doing it the following way:
```julia
a = 0
@time for i = 1:1000000000 a = x^x end
```
Having to introduce a loop so you can see what is going on is one reason why `@btime` is usually preferred. However,

```julia
@btime a = x^x
```

causes Julia much pain:

```
ERROR: UndefVarError: x not defined
```
My `versioninfo()` is:

```
Julia Version 1.0.3
Commit 099e826241 (2018-12-18 01:34 UTC)
OS: Linux (x86_64-redhat-linux)
CPU: Intel(R) Core(TM) i5-8XXU CPU @ X.XGHz
LLVM: libLLVM-6.0.0 (ORCJIT, skylake)
```
Any ideas how I can get a better idea of the performance behavior of parts of my program?
As far as I understand it: `@btime` is expanded as a macro and executed at parse/compile time, at which point `x` is not known. I do not understand why my first case works at all, since it uses a macro too.
Sounds like profiling would be better suited to your task. There is a section on it in the manual.
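For instance, a minimal session with the `Profile` standard library might look like this (`work` is just a placeholder workload; substitute your own function):

```julia
using Profile

# Stand-in workload; replace with the function you suspect is slow.
function work()
    s = 0.0
    for i in 1:10^6
        s += sqrt(i)
    end
    return s
end

work()            # run once first so compilation time isn't sampled
@profile work()
Profile.print()   # print the sampled call tree with per-line hit counts
```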
PS: it looks like you messed up the backtick-quoting, probably at the top.
For profiling, this package may be helpful. (Disclosure: I wrote it.)
Thank you for your pointers.
I was already aware of the extensive Profiling features in Julia.
The problem is they do too much and break my workflow. I like to make a hypothesis about where something is going wrong and then check it.
@tkluck, while your package indeed presents a nice GUI, is there a way to make it check allocations too?
I absolutely love that attitude – as I’m sure you’ve seen, many junior developers attack hard-to-solve bugs by just trying to change things randomly, adding verbose logging, or stepping through. Bisecting through structured, repeated hypothesis validation/refuting is one of the most productive things to learn.
However, for finding performance issues, there’s absolutely no way to do it without data. Statistical profiling and flamegraphs need to be part of your workflow. It’s absolutely worth getting used to!
> is there a way to make it check allocations too?
Not at the moment: Julia's `--track-allocation` option is a bit less ergonomic than its statistical profiling. I usually just eyeball its output with `cat *.<pid>.mem`… I'd love to add it as a feature to the package, though!
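For reference, the manual `--track-allocation` workflow looks roughly like this (`script.jl` is a placeholder for your own file):

```shell
# Run once with allocation tracking enabled for user code only
julia --track-allocation=user script.jl

# Julia writes a companion *.mem file next to each source file,
# annotating every source line with the number of bytes it allocated
cat script.jl.*.mem
```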
The reason for your issue is that `@btime` is designed to operate at global scope, so any variables it references need to be globals. That doesn't work in your case because `x` is a local, not a global, variable. Fortunately, this is easy to work around: you can use `$` to interpolate the value of `x` into the expression being benchmarked:
```julia
julia> using BenchmarkTools

julia> function f(x)
           @btime $x^$x
       end
f (generic function with 1 method)

julia> f(2)
  0.024 ns (0 allocations: 0 bytes)
4
```
Note also that this isn't a general property of macros, but a specific result of the fact that `@btime` and `@benchmark` are designed to treat all variables inside the expression you're benchmarking as globals.
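To make the difference concrete, here is a small sketch (assuming BenchmarkTools is installed) contrasting the two forms at global scope:

```julia
using BenchmarkTools

x = 2

# Un-interpolated: `x` is an untyped global, so every evaluation also
# pays for a dynamic lookup and dispatch on top of the computation.
@btime x^x

# Interpolated: `$x` splices in the current *value* of `x`, so the
# benchmark measures only the computation itself, as it would run
# inside a function with `x` as a local.
@btime $x^$x
```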