What is Core.Intrinsics.slt_int? It appears in base/sort.jl, where it was added in 2013.
Also, how can I find out the answer to this sort of question in general? I tried @edit, ?, web searches, and even git blame (a much less ideal approach), but none of them turned up anything.
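For what it's worth, part of why the usual tools fail can be seen directly from the REPL. This is a sketch of things one can poke at; intrinsics are internals and the details may change between Julia versions:

```julia
# slt_int is not a generic function but an opaque intrinsic object,
# which is why @edit and ? have nothing to show for it:
println(typeof(Core.Intrinsics.slt_int))  # Core.IntrinsicFunction

# It can still be called directly; on Ints it is plain signed less-than:
println(Core.Intrinsics.slt_int(1, 2))   # true
println(Core.Intrinsics.slt_int(-1, 1))  # true
```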
Empirically:
using Test
values = Float64[
    0.0, -0.0, 1.0, -1.0, pi, 1/pi, Inf, -Inf, NaN, -NaN,
    rand(), 1/rand(), -rand(), -1/rand(),
    reinterpret(Float64, reinterpret(UInt64, NaN) | 78782),
    -reinterpret(Float64, reinterpret(UInt64, NaN) | 78782)]
@testset "slt_int == isless(reinterpret(Int, ⋅))" begin
    for a in values
        for b in values
            @test Core.Intrinsics.slt_int(a, b) ==
                  isless(reinterpret(Int64, a), reinterpret(Int64, b))
        end
    end
end
Test Summary: | Pass Total
slt_int == isless(reinterpret(Int, ⋅)) | 256 256
It is an intrinsic that implements signed less-than for integers. That particular line is a bit tricky, since it applies an integer intrinsic directly to floating-point values. Normally I'd express that by reinterpreting to an integer type of the same size and then doing a normal isless comparison; this trick instead leverages the fact that intrinsics generate the right-sized instructions based on the storage size of the arguments they receive. The test you posted verifies that slt_int has the same effect as that reinterpret-and-isless comparison for floating-point values.
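The equivalence described above can be sketched in two helper functions (my names, not Base's):

```julia
# Hypothetical helper mirroring the sort.jl trick: apply the
# signed-integer intrinsic directly to Float64 arguments; it
# compares their 64-bit patterns as signed integers.
fp_lt_via_intrinsic(a::Float64, b::Float64) =
    Core.Intrinsics.slt_int(a, b)

# The explicit spelling: reinterpret to a same-sized signed
# integer type and compare with the ordinary isless.
fp_lt_via_reinterpret(a::Float64, b::Float64) =
    isless(reinterpret(Int64, a), reinterpret(Int64, b))

# The two agree on any pair of Float64s, e.g.:
fp_lt_via_intrinsic(1.0, 2.0) == fp_lt_via_reinterpret(1.0, 2.0)  # true
```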
Intrinsics are an internal language implementation detail, not part of the official surface API, so they’re not documented. Code generation for slt_int is defined in this line:
That code is 9 years old; I think it's fine to not quite remember why it was written this exact way. If it wasn't specially documented, there probably was not a specific reason (though that's a very fuzzy indicator, so if there was a reason and it can be documented now, that'd be good too - and even more amazing if future additions have a nice comment/git log trail).
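For completeness: one way to see what slt_int generates, without reading the C++ codegen, is to inspect the LLVM IR from the REPL. A sketch; the exact IR text varies across Julia/LLVM versions:

```julia
using InteractiveUtils

# Wrap the intrinsic so there is a method to compile and inspect.
lt(a::Int64, b::Int64) = Core.Intrinsics.slt_int(a, b)

# On a 64-bit build this should show an `icmp slt i64` instruction,
# i.e. LLVM's signed less-than comparison on 64-bit integers.
@code_llvm lt(1, 2)
```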