The general rule when doing arithmetic on numbers of different types is that both operands are converted to a common type. For Float32 and Float64, the Float32 gets promoted to Float64 and the Float64 stays unchanged.
In other words, converting 1.0f-45 to Float64 you get 1.401298464324817e-45.
What happened? Actually 1.0f-45 is not 1.00000000000f-45 but rather the smallest positive subnormal Float32, 2^-149 ≈ 1.4012984643f-45. It merely prints as 1.0f-45 because that is the shortest decimal string that rounds back to it.
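For the record, the behavior is easy to check directly (my own quick illustration; nothing here beyond Base Julia):

```julia
# Mixing Float32 and Float64 promotes the Float32 operand to Float64.
x = 1.0f0 + 1.0
@assert x isa Float64

# The literal 1.0f-45 is rounded to the nearest representable Float32,
# which is the smallest positive subnormal, 2^-149:
@assert 1.0f-45 == nextfloat(0.0f0)
@assert issubnormal(1.0f-45)
@assert Float64(1.0f-45) == 1.401298464324817e-45
```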
Setting underflows to zero is one of those things that “works great, except when it doesn’t”. I’ve never seen the performance benefit amount to much and @Sukera makes a very sound point about the risks.
As a counter-example, I’ve seen order-of-magnitude slowdowns from subnormals in real applications involving wave simulations, where the solutions can be exponentially small in large fractions of the domain until the wave hits them. Other researchers have published similar results.
You should never disable subnormals in a library, for the reason @Sukera pointed out — the setting is global to the process, and libraries should be composable by design. However, it’s not so strange to do so in an application you control. It’s extremely rare in my experience for correctness of an application to fail when subnormals are flushed to zero, since if you are working with such small magnitudes then you are surely seeing lots of underflow effects already and your algorithm must be generally robust to underflow if it is working at all.
Of course, it’s not a black-and-white issue at all. But it’s also important to note that disabling subnormals is not a performance panacea: whether computing them is actually slower strongly depends on how they’re implemented in hardware, and on how many normal-subnormal interactions (which are usually what makes them slow) you end up with in your particular algorithm.
So, unfortunately, disabling them can only be decided on a case-by-case basis.
I had a case last week where a colleague found that some of her neural networks were several times slower than others when running inference in ONNX Runtime on CPU. It turned out that the slow ones had subnormals among the weights. There the solution was trivial, though: just flush those coefficients to zero and re-save the models.
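For anyone hitting the same thing, the fix can be sketched in a few lines of Julia (illustrative only; `flush_subnormals` is a made-up name, and the real fix of course has to be applied to the saved model weights):

```julia
# Replace every subnormal entry with an exact zero, leaving normal values alone.
flush_subnormals(w::AbstractArray{<:AbstractFloat}) =
    map(x -> issubnormal(x) ? zero(x) : x, w)

weights = Float32[1.0f0, 1.0f-40, -3.0f-42, 0.5f0]  # the middle two are subnormal
cleaned = flush_subnormals(weights)
@assert cleaned == Float32[1.0f0, 0.0f0, 0.0f0, 0.5f0]
```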
Thank you @DNF and @stevengj for the explanation. Looking at the Julia documentation you referenced (Treat-Subnormal-Numbers-as-Zeros), I think I am safe to use set_zero_subnormals(true) in my code. My code behaves similarly to the example in the documentation, so I may get a performance improvement.
Subnormals could be documented better. I have the get_zero_subnormals() function in mind: it is in the docs, but not mentioned in that section. So I’m wondering: is it actually very rare to use it?
I see in the docs: “but incur a performance penalty on some hardware.” Isn’t that most or all hardware? Do you have a counterexample? Or, the opposite: is it by now known to be fast on most hardware, x86 and ARM?
It’s actually global to the thread, not the process. That likely doesn’t matter if it is only set at the start of the program. This is also documented, but only elsewhere in the docs, for get_zero_subnormals():
This function only affects the current thread.
Your statement made me think, though: is this done much? Would you set it once at the start of your program, for the main thread (and, if using Julia with many threads, a non-default option, I think the other threads would inherit it automatically), and forget about it? It would then apply to your whole program, including (Julia) libraries you call, and e.g. calls into C or Python. It would not apply to languages whose interop is implemented in a different process, on the same or a different machine, such as Java via JavaCall.jl or MATLAB via MATLAB.jl.
Since libraries should compose, would you assume most would still work with subnormals off, i.e. with that non-default setting, at least everything in the Julia standard library?
I tried to find out how you would set the FPU environment for this from C or C++, but I wasn’t successful; I only found how it’s done with an x86 instruction. I suppose they and other languages have the capability too. Or is this an indication that it isn’t much done, or thought about, in those languages, which just assume whatever the hardware provides?
And some hardware, at least older ARM, didn’t support subnormals, while I believe all recent ARM hardware does, i.e. is fully IEEE conformant, but still has the option of disabling them.
The temptation not to support subnormals is there, since they complicate the hardware. At least addition and multiplication are rather straightforward without this (and e.g. NaN) complication. I haven’t thought through how much more complex it is to implement fully in hardware, over just detecting subnormals and trapping (or flushing to zero), which I believe is the norm. It can always be implemented by trapping to software (is firmware the right word here?), and I think that is done since subnormals are assumed to be very rare anyway.
Even if you have a counterexample where mul and/or add, or muladd, are supported in hardware and do not trap, I have low confidence that e.g. square root would be. So, for the stronger claim: do you think any hardware supports all operations on subnormals in hardware, and (as) fast? If not, I think we can change the docs.
[I recall that in older ARM chips (then integer-only), multiply took a variable number of cycles depending on the operands. By now most, if not all, CPUs have single-cycle float (and I believe integer) multiply. Except for subnormals: I doubt any CPU, or any GPU, can do that.]
Seems redundant with their isnormal; we have something similar (well, the opposite, issubnormal), and both of those were slightly changed in C++23 (just to make them general? with constexpr).
That’s some very unusual code (in a loop) in the docs:
set_zero_subnormals(iseven(trial)) # Odd trials use strict IEEE arithmetic
Is it just to illustrate the timing difference? I mean, you would never alternate the setting for performance or any other reason; as I said, you would just set it globally and forget it. In fact, is the assembly instruction that sets the FPU default known to be fast? I’m not sure it is, or that I would trust that, so it seems a dubious example for the performance section?
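To see whether the setting matters at all on given hardware, here is a minimal toy benchmark of my own (not the docs example; `decay!` is a made-up name, and the timings are obviously machine-dependent):

```julia
# Repeated halving drives Float32 values down through the subnormal range,
# so the loop spends many iterations doing subnormal arithmetic.
function decay!(a::Vector{Float32}, steps::Int)
    for _ in 1:steps
        a .*= 0.5f0
    end
    return a
end

a = fill(1.0f-30, 10_000)
set_zero_subnormals(false)               # strict IEEE arithmetic
t_ieee = @elapsed decay!(copy(a), 60)
set_zero_subnormals(true)                # flush subnormal results to zero
t_ftz = @elapsed decay!(copy(a), 60)
set_zero_subnormals(false)               # restore the default
println((t_ieee, t_ftz))
```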
In some applications, an alternative to zeroing subnormal numbers is to inject a tiny bit of noise. For example, instead of initializing a with zeros, initialize it with:
a = rand(Float32, 1000) * 1f-9
Is this trick much used? Is it more likely with Float32 or lower precision? Are subnormals less valuable for Float64, since it has more range and precision to begin with?
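On the Float32-vs-Float64 point: the subnormal range is type-relative, so a value that is subnormal in Float32 is a perfectly normal Float64 (quick check, my own illustration):

```julia
@assert issubnormal(1.0f-40)    # subnormal as Float32 (below floatmin(Float32))
@assert !issubnormal(1.0e-40)   # the same magnitude is normal as Float64
@assert floatmin(Float32) == 1.1754944f-38
@assert floatmin(Float64) == 2.2250738585072014e-308
```

So with Float64 there is simply far more headroom before you ever reach the subnormal range.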
@fastmath is still there in the language (but I know it recently changed; not sure if the docs are outdated on that), but at least this is outdated (my PR to silently disable that option was merged a long time ago):
Here, the option --math-mode=ieee disables the @fastmath macro, so that we can compare results.
I find an application of set_zero_subnormals to more than a small block of code to be spooky. Not nearly as terrifying as the reckless application of @fastmath, but still. I also don’t expect it to be a profound improvement for performance outside of pathological cases where subnormals form a significant fraction of your inputs/outputs.
Subnormals make up a tiny fraction of the representable numbers and lie at the very edge of representable space. For Float64, there are 616 orders of magnitude for normal numbers and 16 for subnormals (2.6% of the dynamic range, or 9% for Float32). They mostly emerge from the sum/difference of very small numbers (smaller than floatmin(T)/eps(T)) and products/quotients that almost-but-don’t-quite underflow to zero. If numbers are trending towards gradual underflow at some vaguely-consistent rate, they may pass through the subnormals but won’t spend long there. So, typically, you’ll only pay the higher cost a small fraction of the time.

But in a scenario of gradual underflow (or overflow) you should also consider whether you can move the calculation to the log domain to not underflow at all. If a lot of values are landing in the subnormals, you can also consider re-scaling your problem so that they don’t. If you’re working in Float32 and are concerned about subnormals, consider moving to Float64 if feasible.
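To make the log-domain suggestion concrete, here is a generic sketch (my own toy numbers, not from any particular application):

```julia
# The product of 200 factors of 1e-3 is 1e-600, far below Float64's range,
# so it underflows (passing through the subnormals) all the way to zero.
p = fill(1e-3, 200)
@assert prod(p) == 0.0

# In the log domain the same quantity is a perfectly ordinary sum, ~ -1381.55.
logprob = sum(log, p)
@assert isapprox(logprob, 200 * log(1e-3))
```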
Though, if you have model parameters that end up subnormal but that you don’t need to keep, by all means go ahead and zero them. At that point I’d probably just zero everything “very small” as well. I’ve never seen this happen in my work, but I’m sure it does for some people.
My complaints aside, the cost of calculations with subnormals can be staggering. Here I see a 15-100x performance difference on my hardware.