Why no Float type alias?

Why doesn’t Float alias to the machine bit size like Int?

julia> Int
Int64

julia> Float
ERROR: UndefVarError: Float not defined

There’s no need for it.

Float64 is always a 64-bit type and Float32 is always 32-bit, and likewise Int64 is always 64-bit; these fixed-size types are the same on all platforms. But Int is platform-dependent: it is an alias for Int64 on 64-bit platforms and for Int32 on 32-bit platforms, so don’t rely on it holding more than 32 bits. If you need 64-bit or larger integers, always use Int64, Int128 (or their unsigned counterparts), or BigInt.
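A quick way to check these sizes at the REPL (a small illustration; the `Int === Int64` result assumes a 64-bit machine):

```julia
# On a 64-bit platform, Int is an alias for Int64;
# on a 32-bit platform it is Int32 instead.
println(Int === Int64)

# The explicitly sized types are the same size everywhere:
println(sizeof(Int64) * 8)    # 64 bits
println(sizeof(Int32) * 8)    # 32 bits

# Int32 tops out at 2^31 - 1, so if 32-bit platforms matter,
# don't assume Int can hold anything larger than this:
println(typemax(Int32))       # 2147483647
```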


I would think Float == Float32 on 32-bit systems and Float == Float64 on 64-bit systems, so the precision would depend on the system. Why is this behavior needed for Int but not Float?


32-bit systems usually still have Float64 hardware. The 32-bitness refers to the address space, and is independent of the parts of the chip used for doing floating-point math.


No, it’s never been that way: Float64, a.k.a. double in C and C++, existed on 32-bit systems, even 16-bit ones.*

For array indices you want to use Int, i.e. the native machine integer. If indices were always Int64, they would be twice as large as necessary on 32-bit platforms, which is neither needed nor wanted.
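This is visible in Base: length and index functions return Int regardless of the element type (a small illustration; the printed type is Int64 only on a 64-bit machine):

```julia
a = [1.5, 2.5, 3.5]

# Lengths and indices are machine-sized Ints,
# not a fixed 64-bit type:
println(typeof(length(a)))
println(typeof(firstindex(a)))

# eachindex also produces Int-valued indices:
for i in eachindex(a)
    @assert i isa Int
end
```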

If you want e.g. to support money, then Int is the wrong type, since its size depends on the platform.
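A common alternative for money is fixed-point arithmetic on an explicitly sized integer, e.g. storing cents in an Int64. A minimal sketch (the `Cents` type here is made up for illustration; packages like FixedPointDecimals.jl offer a full solution):

```julia
# Hypothetical minimal money type: amounts stored as Int64 cents,
# so the representation is identical on 32- and 64-bit platforms.
struct Cents
    amount::Int64   # explicitly sized, never platform-dependent Int
end

Base.:+(a::Cents, b::Cents) = Cents(a.amount + b.amount)

price = Cents(1999)    # $19.99
tax   = Cents(160)     # $1.60
total = price + tax
println(total.amount)  # 2159
```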

* Actually the history is more complex: double sometimes mapped to 80-bit extended precision, even on those systems, but 80-bit is outdated, slower on current platforms, and not supported on all platforms Julia supports. I will not go into pre-IEEE hexadecimal floating point (still used in FDA standard files) or the 36/72-bit binary floating point of ancient systems.

36-bit floats might make a comeback in some form, though I bet against it: Simplest of All


My understanding is that you can use any precision on any system by grouping bytes. The question is why there is an easy option to use a machine’s native precision for Int but not for Float. The difference between Int and Float is just how that string of 32 or 64 bits is interpreted, right?
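Right at the storage level, and Julia lets you see that directly with `reinterpret`, which reuses the same bits under a different type (a small illustration):

```julia
x = 1.0

# The same 64 bits, viewed as an integer: this is the
# IEEE 754 binary64 encoding of 1.0.
println(reinterpret(Int64, x))
println(string(reinterpret(UInt64, x), base = 16))  # 3ff0000000000000

# Round-tripping the bits recovers the original value exactly:
@assert reinterpret(Float64, reinterpret(Int64, x)) === 1.0

# bitstring confirms the bit pattern is unchanged:
println(bitstring(x) == bitstring(reinterpret(Int64, x)))  # true
```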


32-bit systems have 64-bit FP registers.


I swear I tried to search before posting …


In terms of storage, yes. But CPUs for at least the past couple of decades have all had specialized hardware made specifically for accelerating math on various floating-point datatypes, and it is specialized to the size of the data. This specialized hardware has existed for Float64 for a very long time.

This hardware is also why Julia can’t just decide how floating-point math works (if we want that math to be fast). We are ultimately at the mercy of existing hardware decisions when it comes to things like the behaviour of 0.1 + 0.2 != 0.3.
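The classic example, which follows from IEEE 754 binary64 rounding rather than anything Julia chose:

```julia
# 0.1 and 0.2 have no exact binary64 representation, so their
# rounded sum differs from the rounded value of 0.3:
println(0.1 + 0.2)            # 0.30000000000000004
println(0.1 + 0.2 == 0.3)     # false

# isapprox (the ≈ operator) is the usual tool for comparing floats:
println(0.1 + 0.2 ≈ 0.3)      # true

# The bit patterns differ in the last place:
println(bitstring(0.1 + 0.2) == bitstring(0.3))  # false
```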


It was a valid question, and to avoid necroposting on the other thread (which I didn’t recall), I’ll answer here. I do support Julia being explicit about size, and I might make a type P16, because I think Posit16 might be a happy medium of sizes (see SoftPosit.jl), though not as an alias of Posit16, but with my own extension of it (base 30), stay tuned…

Some background (right now I’m focused on smaller floats for neural networks, but general-purpose use might need Posit16, with or without my twist):