Unsafe functions performance

I was trying to understand the sizeof(s::String) function to help @xiaodai improve performance…

I was experimenting with the undocumented, hidden “len” field of String (it was present in Julia 0.6 and seems to be gone in Julia 0.7.0):

julia> sizof(a) = unsafe_load(Base.unsafe_convert(Ptr{UInt}, pointer(a)-8));

Benchmark results are impressive:

julia> @btime sizeof("abc")
  0.018 ns (0 allocations: 0 bytes)

julia> @btime sizof("abc")
  1.740 ns (0 allocations: 0 bytes)

Why are unsafe functions 100 (!) times slower in this test?


Strange: even after adding alignment information (using Base.pointerref(..., 1, 8)), which then results in identical LLVM and native code, the performance discrepancy remains.
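For reference, the aligned variant looks roughly like this (a sketch only: Base.pointerref is an internal, version-dependent API, and the -8 offset assumes the 64-bit String layout, where the length word sits just before the character data):

```julia
# Sketch (internal, layout-dependent): read the hidden length word stored
# 8 bytes before the string data, with explicit 8-byte alignment info.
sizof_aligned(s::String) =
    GC.@preserve s Base.pointerref(reinterpret(Ptr{UInt}, pointer(s) - 8), 1, 8)
```

The GC.@preserve guards the string against being collected while we hold a raw pointer into it.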

EDIT: on 0.6, both implementations are equally slow (i.e. same as the slow time from the OP).


Probably some interaction with the testing framework? IPO and all that.

I think there’s something off with your testing, because the code generated (at least on master) is identical.

For one, when sizeof(str) does exactly what you need here, and is generic, why do you want to peek at the internals?
Also, why are you calling Base.unsafe_convert, which is for converting something, when you really just need to reinterpret the pointer, i.e. reinterpret(Ptr{UInt}, pointer(a)-8)?


unsafe_convert(T, x)

Convert x to a C argument of type T where the input x must be the return value of cconvert(T, …).
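In code, that suggestion would look something like this (still relying on the internal 64-bit String layout, so treat it as a sketch rather than a supported API):

```julia
# Sketch: read the hidden length word via reinterpret instead of unsafe_convert.
# Assumes the internal String layout (length word stored just before the
# character data on 64-bit builds); not a public API.
sizof2(s::String) =
    GC.@preserve s unsafe_load(reinterpret(Ptr{UInt}, pointer(s) - 8))
```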


Yeah, 0.018 ns is a pretty unrealistic measurement, even for a simple pointer load. Trying to bisect now.

Yeah, no. I can reproduce it perfectly on master, so there’s probably something off with the testing infrastructure itself.

That’s why I like to look at raw numbers from time_ns() for benchmarking very small things! :slight_smile:

In this case time_ns does indeed show consistent results (~19 ns, which is to be expected since BenchmarkTools does multiple evals per sample). However, I’d advise against recommending it, because BenchmarkTools protects against so many other pitfalls that are common for newcomers. @btime is a vastly better tool.
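For the curious, the raw time_ns() approach boils down to something like this (an ad-hoc sketch: the sample and eval counts are arbitrary choices here, and it assumes f returns a number):

```julia
# Rough sketch of raw micro-benchmarking with time_ns():
# run f many times per sample and report the best observed ns per call.
function raw_bench(f, x; evals = 10^6, samples = 10)
    f(x)                       # warm up / force compilation
    best = typemax(UInt64)
    acc = 0                    # accumulate results so calls aren't optimized away
    for _ in 1:samples
        t0 = time_ns()
        for _ in 1:evals
            acc += Int(f(x))
        end
        t1 = time_ns()
        best = min(best, t1 - t0)
    end
    acc == -1 && print("")     # keep acc live
    return best / evals        # approximate nanoseconds per call
end
```

Usage would be e.g. raw_bench(sizeof, "abc"). Note that, unlike BenchmarkTools, this does nothing about interpolation, outliers, or constant folding of the argument.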

I’ve bisected the issue to 1669d532de7434108f1092f34361166737706ba5 from #24362, confirming @kristoffer.carlsson’s hunch :slightly_smiling_face:


I wasn’t intending to recommend it for novice users. In my case, though, I’ve had 30+ years of extensive benchmarking experience, and for that reason I like to get all of the raw data and munge it myself (which Julia makes much nicer and easier than any other language I’ve worked in before! :slight_smile: )

Good hunch!!!