Abstract and System-Native Types of Integers and Floats: Int, Integer, Float, AbstractFloat

Two questions:

  1. Why is the abstract type for an integer called Integer, while the abstract type for a float is AbstractFloat? This seems like an inconsistent naming pattern: either renaming Integer to AbstractInt, or AbstractFloat to FloatingPoint, would be more consistent.

  2. Int and UInt are aliases for the system’s native integer types, such as Int64 and UInt64 on a 64-bit machine, but there’s no corresponding Float alias for the system’s native float type such as Float64. Why not?

Sorry if this has been addressed elsewhere.
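To make the second question concrete, here’s what the aliases look like in the REPL (a minimal sketch; the Int64/UInt64 results assume a 64-bit build):

```julia
# On a 64-bit build of Julia, Int and UInt alias the 64-bit types.
println(Int === Int64)    # true on a 64-bit system
println(UInt === UInt64)  # true on a 64-bit system

# There is no corresponding `Float` alias in Base.
println(isdefined(Base, :Float))  # false
```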

  2. is very simple: 32-bit and 64-bit computers both have both sizes of floating point, so it doesn’t really make sense to single out one of them as the native float type.
julia> D3TypeTrees.TypeTree(Real)
Real
├──AbstractFloat
│  ├──Float16
│  ├──Float64
│  ├──Float32
│  └──BigFloat
├──AbstractIrrational
│  └──Irrational
└──Integer
   ├──Bool
   ├──Unsigned
   │  ├──UInt16
   │  ├──UInt128
   │  ├──UInt8
   │  ├──UInt32
   │  └──UInt64
   └──Signed
      ├──Int32
      ├──Int128
      ├──Int8
      ├──BigInt
      ├──Int64
      └──Int16
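The same hierarchy can be checked directly with Base’s reflection functions, no package required (a minimal sketch; `chain` is a hypothetical helper, not part of Base):

```julia
# Walk the supertype chain from a concrete type up to Any.
chain(T) = T === Any ? [Any] : [T; chain(supertype(T))]

println(chain(Int64))    # Int64 → Signed → Integer → Real → Number → Any
println(chain(Float64))  # Float64 → AbstractFloat → Real → Number → Any
```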
  1. seems more like an observation than a question. Would you add “Abstract” to the front of everything that’s not a leaf of the tree? I think it’s likely a tradeoff between consistency and verbosity.

Wouldn’t the system’s “native” type generally be considered to be the largest size that incurs zero speed penalty, because the math units and data buses are sufficient for it without extra steps?

If you’re on a 64-bit system, do you default to using Float32 or Float64? Why?

Float64 “double-precision” floats are supported on virtually all “standard” computers from the last several decades (i.e., for much longer than 64-bit has been a mainstream thing). The bit width of a system is orthogonal to its floating-point capabilities. Some systems even support 80-bit “extended double precision” floating-point arithmetic, although that is rare nowadays.
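A quick way to see this orthogonality from the REPL (a sketch; the Int64 result assumes a 64-bit build):

```julia
# The word size picks the Int alias...
println(Sys.WORD_SIZE)   # 64 on a 64-bit build, 32 on a 32-bit build
println(typeof(1))       # Int64 or Int32, matching the word size

# ...but float literals are Float64 on every platform.
println(typeof(1.0))     # Float64
println(typeof(1.0f0))   # Float32, only when requested explicitly
```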

If the bit width of your integers is important, then you should specify it explicitly (there’s no sense using Int32 on any system if the result will be wrong, even where Int64 would require non-native operations). But when either is sufficient, Julia defaults to the system-native choice. There’s nothing wrong with specifying integer types manually if you prefer; it’s just that much of the time the user may not particularly care.
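For example, fixed-width integer arithmetic in Julia wraps on overflow, which is why choosing a sufficient width matters more than choosing a native one (a minimal sketch):

```julia
x = typemax(Int32)      # 2147483647, the largest Int32
println(x + Int32(1))   # wraps around to -2147483648
println(Int64(x) + 1)   # 2147483648: correct in the wider type
```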


You don’t really default to either. And while you may default to Int64 on 64-bit, it’s not because it’s faster (it isn’t); it’s because you want to be able to address all memory, and it’s less likely to overflow. I actually believe Int32 (even on 64-bit) would be a better default for just storing numbers (assuming it had overflow checks, which can be made fast).
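Checked arithmetic of the kind described above already exists in `Base.Checked`; it just isn’t the default (a minimal sketch):

```julia
using Base.Checked: checked_add

println(checked_add(Int32(1), Int32(2)))  # 3, as usual

# Overflow throws instead of silently wrapping.
try
    checked_add(typemax(Int32), Int32(1))
catch err
    println(err isa OverflowError)        # true
end
```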

This exact question seems to come up so often that maybe it should be in the FAQ:


See previous discussions: