Posits - a new approach could sink floating point computation


Well worth watching this video from the London Julia Meetup.

Milan Kloewer uses posits in a real ocean modelling code. You can run the code by defining Mytype = Posit (or double precision, single precision, etc.). This is a good use of Julia's features: you can run the same code and compare what happens with different number types.
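The pattern is easy to sketch in plain Julia (the function name here is illustrative, not from Milan's code):

```julia
# Same algorithm, parameterised over the number type T: integrate x^2 on
# [0, 1] with the trapezoid rule, carrying out every operation in T.
function trapezoid_area(::Type{T}, n::Integer) where {T<:Real}
    h = one(T) / T(n)
    s = zero(T)
    for i in 1:n-1
        x = T(i) * h
        s += x * x
    end
    h * (s + one(T) / 2)   # endpoints: f(0) = 0 and f(1) = 1 contribute h/2
end

trapezoid_area(Float64, 1000)   # ≈ 1/3 in double precision
trapezoid_area(Float32, 1000)   # identical code, single precision
```

Swapping in a posit type only requires that it supports the same conversions and arithmetic operations.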


One of the things Milan asks is when there will be processors with hardware support for posits.

Note that at 40:04 in the video there is a question from a Scottish voice: my good self!
In fact I don't completely grasp the concept of posits and am gently corrected by the speaker.


sink floating point computation

I don’t think the presentation makes this claim. My understanding is that posit is a version of unum, which has been discussed before in relation to Julia. It addresses some shortcomings of the original unum proposal. They are a neat approach for some applications, but not magic.

I think it is important that innovations happen in this area, but fantastic claims about the advantages are probably more harmful than not. To a certain extent, I am reminded of innovations that intend to replace the bicycle chain (eg 1, 2, 3 are the ones I have seen in person), yet in 2019 most bikes still have a chain.


Thank you for making people aware of this. This is certainly exciting and it will be interesting to see where this goes. However, from the more balanced views I can find on this issue, e.g.
this preprint, it appears there are some warts.


Tamas, but of course. Note that I was echoing the title of the article, which is designed to attract attention.


If unums / posits at least trigger useful and broad discussions about the numerous numerical issues that are related to floating point, they will already have achieved an important goal.

I personally don't expect posits to magically fix codes, but they could help uncover brittle designs that happen to (sometimes) work with floating point in the contexts where they are used and tested, but rely on unformulated hypotheses which may break in new contexts (multi-thread / multi-process parallelism, SIMD, new architectures, aggressive compiler optimizations…).


@milankl may be interested in the discussion :slight_smile:

Haha yes of course I am, @giordano!

Maybe to make a few points on what was already raised: yes, posits are not a silver bullet. In most applications I came across, posits are a few bits better than floats. This doesn't sound like much; however, as shown in the presentation, these few bits can decide whether 16-bit hardware (thanks to ML, this is where things are heading at the moment) is feasible or not. Currently, I don't see posits replacing floats in general-purpose CPUs. However, for GPUs, FPGAs and similar specialized hardware, posits could be a game changer.

Hardware implementations are the current focus of most posit research, and although floats basically have a >40-year head start, the scientists I talked to claim that a posit arithmetic unit requires less chip area (simpler circuitry due to fewer exceptions: think of all the "silent/signalling NaN, ±Inf, ±0, subnormal numbers" complexity of floats), and one even said that they can run their posit processors at significantly higher clock rates than the typical 2–3 GHz.

@Tamas_Papp yes, posits were initially developed as unum type III; however, at the most recent posit conference in Singapore almost no one spoke about unums anymore. The focus is really on posits, and for a good reason: the "only" change they require is a new arithmetic unit on processors. In principle, you could have MPI communication with posits today; as far as I know, even through something like Infiniband.

One thing that requires more changes but also comes with big potential is the so-called quire, a generalized version of fused multiply-add: the posit standard also introduces an exact dot product (exact in the sense that it incurs only one rounding error, at the very end), which can be achieved with reasonably small registers: a quire for 8-bit posits is 32 bits long, for 16-bit posits it's 128 bits, and for 32-bit posits 512 bits. John Gustafson has a couple of (artificial) examples where 16-bit posits with quires can be as good as 64-bit floats for solving a linear system. You may say that the idea of quires could also be applied to floats. Yes, indeed, but because of all the exception cases such a quire for floats would be infeasibly large.
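The quire idea can be sketched in software. This is not the standard's fixed-point quire: `BigFloat` merely stands in for a wide accumulator, and `quire_dot` is a made-up name.

```julia
# Accumulate the dot product in a much wider precision and round only once
# at the end, mimicking the "one rounding error" property of a quire.
function quire_dot(x::Vector{Float32}, y::Vector{Float32})
    acc = big(0.0)                    # wide accumulator (256-bit BigFloat)
    for i in eachindex(x, y)
        acc += big(x[i]) * big(y[i])  # each product is exact in the wide type
    end
    Float32(acc)                      # the single rounding, at the very end
end

x = Float32[1, 1f-8, -1]
quire_dot(x, ones(Float32, 3))    # recovers ≈ 1f-8
sum(x)                            # plain Float32 accumulation loses it entirely
```

Because `1f-8` is far below `eps(1f0)`, a running Float32 sum absorbs it into the leading `1f0` before the cancellation, whereas the wide accumulator keeps it.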

To pick up the first point again: I didn't come across a single real application in which floats are better than posits; if you find one, please let me know. You can test your code with posits using the SoftPosit.jl emulator (I wrote a little tutorial on how to use the emulator under /docs).


Thanks for the informative writeup.

From a cursory search it seems that Julia is the de facto experimental platform for posits, is this correct?


I wouldn't say so. The underlying SoftPosit library (developed in John Gustafson's lab) is written in C, with C++ and Python wrappers. There is a numpy-posit version, posithub maintains a list of all posit implementations (software + hardware), and SoftPosit.jl is currently the only maintained Julia implementation.

However, I think Julia should be the experimental platform. Type stability and multiple dispatch are very good arguments for it. As @johnh mentioned above, Julia lets you write whole big projects in a type-stable way, such that you can call run_my_project(Float64), or run it with Float32, Float16, BFloat16, Posit32, Posit16, Posit8, …, although the underlying code is exactly the same. Julia offers you a lot of freedom here: just declare a new primitive type, define conversions and arithmetic operations plus the one- and zero-elements, and you're basically ready to run your algorithms in whatever number format you want.
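As a sketch of how little surface a new number type needs: the hypothetical `MyNum` below just wraps a Float64 for illustration; a real posit type would do bit-level encoding and decoding instead.

```julia
# Hypothetical number type with just enough methods for generic code to run.
struct MyNum <: Real
    x::Float64   # stand-in storage; a posit would hold its own bit pattern
end

Base.convert(::Type{MyNum}, v::Real) = MyNum(Float64(v))
Base.promote_rule(::Type{MyNum}, ::Type{<:Real}) = MyNum
Base.:+(a::MyNum, b::MyNum) = MyNum(a.x + b.x)
Base.:-(a::MyNum, b::MyNum) = MyNum(a.x - b.x)
Base.:*(a::MyNum, b::MyNum) = MyNum(a.x * b.x)
Base.:/(a::MyNum, b::MyNum) = MyNum(a.x / b.x)
Base.one(::Type{MyNum})  = MyNum(1.0)
Base.zero(::Type{MyNum}) = MyNum(0.0)

# Generic code now runs with it unchanged; 2x works via the promotion rule:
poly(x) = x * x + 2x + one(x)
poly(MyNum(3.0))   # MyNum(16.0)
```

The promotion rule and conversion are what let mixed expressions like `2x` dispatch to the `MyNum` arithmetic automatically.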


This is wild, I’ve been distracted by this all morning. I have to confess I didn’t really understand any of the descriptions here, so I found this paper which I think is pretty clear.

Maybe this is out of ignorance, but it always felt to me like floating point was written on the back of an envelope one day without being given much thought and we’ve had to deal with the consequences ever since (not that I ever could have fathomed an alternative).

There've been good caveats here against getting too excited; could someone give a clear and concise explanation or list of the disadvantages? I can't help but feel that any numbers that 1. are at least as accurate as floating point in the vast majority of realistic cases and 2. provide a massive simplification in hardware implementation are a clear win, so I'm having a very hard time thinking of why you'd ever prefer floating point.

Of course, it’s so cool that in Julia you can just run soft posits through any code and it “just works”. Awesome.


Hah. I wonder how many millions of envelope-backs William Kahan has in his filing cabinets. It's really remarkable that something designed in 1977 (42+ years ago), which successfully brought together all the stakeholders, has held up so well to this day. Here's a small snippet…

I’ve had fun learning about the esoteric edge-cases of IEEE arithmetic when Julia butts up against them; there are extraordinarily few places where IEEE got it “wrong”. https://github.com/JuliaLang/julia/search?q=kahan&type=Issues


Just to be a little clearer about what I meant: it is abundantly obvious that the actual behavior of floating point (by which I mean the operation tables) was incredibly well thought out, or at least got that way not long after its inception; what I found dubious was whether the format had much of that in mind when it was initially conceived. Again, I'm not making any actual claims about its history, just speculating.

Somewhat o/t, but the original article links to a presentation by Peter Lindstrom which discusses and compares PDE solutions obtained with posits, IEEE numbers, and his own project, zfp. The idea is to exploit the autocorrelation of typical 2D and 3D scalar fields by compressing 4x4x4 chunks (aligning to the largest exponent in each chunk, performing a DCT-like orthogonal block transform, and variable rate encoding of the resulting bit planes) for remarkable reductions in error (even compared to posits, and without the need for new hardware):

There isn’t much HPC-scale PDE development in Julia yet (besides the Oceananigans folks), but a zfp wrapper package would be worth the effort once the PDE community is a bit larger.

Kahan has a paper (slides here) which gives a (rather scathing) critique of the original book that introduced unums. It is mostly focused on debunking the claim that error analysis would not be needed with unums.

Posits seem to be a shift of focus from interval arithmetic to manipulating precision, so the above is maybe not that relevant.

There is no need to speculate, as you can easily look up the history. When Intel hired Kahan in 1976 for what eventually became IEEE 754-1985 (note: 9 years later), binary representations of real numbers already had a 20–30 year history to inform the process.

Note that new formats were added in 2008, again after careful consideration. These are rather lengthy processes and involve a lot of stakeholders. One can debate the merits of IEEE 754 or propose potentially better alternatives, but it is hard to claim that no thought went into the design.

A lot of this discussion about alternative floating-point representations seems to be about chasing precision in 32 bits or less. It is interesting to follow, but may not be directly applicable for users who use 64 bits (eg most people on a CPU).


I think posits are an interesting idea, but I’m somewhat skeptical: a lot of the examples proposed by Gustafson seem a little too convenient, or rely on arguments appealing to a sufficiently smart compiler to do various stability transformations. I think more real-world test cases like @milankl’s are useful to identify the exact benefits.

Two interesting articles on posits I've quite liked are:


Especially having to rebuild 50 or so years of error analysis, as suggested in https://hal.inria.fr/hal-01959581v4, is certainly a daunting task, if not outright terrifying.


Yes, this article was quite illuminating; an especially horrifying remark was:

When posits are better than floats of the same size, they provide one or two extra digits of accuracy. When they are worse than floats, the degradation of accuracy can be arbitrarily large. This simple observation should prevent us from rushing to replace all the floats with posits.

Another interesting tidbit from the article was the fact that we’ve gone through this whole thing already, more than 30 years ago (just as Gustafson’s unums re-invented well-trodden problems of interval arithmetic):

In 1987, Demmel analyzed two number systems, including one that has properties close to the properties of posits. The issues raised by Demmel still apply with posits: reliability with respect to over/underflow has been traded with reliability with respect to round-off.

The attention given to Gustafson’s floating-point proposals in the last few years seems disproportionate to me, driven by a lot of questionable marketing to the ill-informed.


I think that to a certain extent it is also pandering to the error analysis guilt complex: it is a skill that relatively few people acquire, yet many are vaguely aware that it would be needed in some cases, and are afraid of doing something stupid without it. The promise that you don’t have to do it provides relief, which is why it is appealing.

Fortunately, in Julia one can do a heuristic check of numerical errors for well-written code by running the analysis with higher precision (eg BigFloat) on random values and comparing the results. It is not as formalized, but very cheap to do (can even be automated) and can provide some reassurance at a much lower cost.
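For example (the one-pass variance formula below is deliberately ill-conditioned, and `naive_var` is just an illustrative name):

```julia
# Run identical generic code in Float32 and in BigFloat, then compare.
naive_var(xs) = (n = length(xs); (sum(xs .^ 2) - sum(xs)^2 / n) / (n - 1))

xs     = Float32.(1f4 .+ (1:100) ./ 100)  # a large offset triggers cancellation
approx = naive_var(xs)                    # Float32 throughout
exact  = Float32(naive_var(big.(xs)))     # same code at high precision
relerr = abs(approx - exact) / abs(exact) # large => the formula is brittle here
```

A large relative error flags the catastrophic cancellation without any formal analysis; a two-pass (shifted) variance formula makes it go away.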