Why is the abstract type for integers called Integer, while the abstract type for floats is AbstractFloat? This seems like an inconsistent naming pattern: either renaming Integer to AbstractInt, or AbstractFloat to FloatingPoint, would be more consistent.
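For reference, the hierarchy in question is easy to inspect from the REPL; a minimal sketch:

```julia
# Concrete integer types sit under the abstract type `Integer`,
# while concrete float types sit under `AbstractFloat`.
@assert supertype(Int64) === Signed
@assert Signed <: Integer <: Real
@assert supertype(Float64) === AbstractFloat
@assert AbstractFloat <: Real
```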
Int and UInt are aliases for the system's native integer types (e.g. Int64 and UInt64 on a 64-bit system), but there is no analogous Float alias for the system's native float type such as Float64. Why not?
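The asymmetry is easy to demonstrate. A sketch, assuming a stock Julia install (on a 32-bit build, Int and UInt would alias Int32 and UInt32 instead):

```julia
# Int/UInt follow the platform word size...
@assert Int === (Sys.WORD_SIZE == 64 ? Int64 : Int32)
@assert UInt === (Sys.WORD_SIZE == 64 ? UInt64 : UInt32)

# ...but there is no corresponding `Float` alias, and float
# literals are simply Float64 on every platform.
@assert !@isdefined(Float)
@assert typeof(1.0) === Float64
```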
Float64 “double-precision” floats have been supported on virtually all “standard” computers for several decades (far longer than 64-bit processors have been mainstream). The bit width of a system is orthogonal to its floating-point capabilities. Some systems even support 80-bit “extended double precision” floating-point arithmetic, although that is rare nowadays.
If the bit width of your integers matters, then you should specify it explicitly: there is no sense in using Int32 on any system if the result would be wrong, even where Int64 requires non-native operations. But when either width suffices, Julia defaults to the system-native choice. There is nothing wrong with specifying integer types manually if you prefer; it is just that much of the time the user does not particularly care.
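The “result will be wrong” concern is concrete: Julia's fixed-width integers wrap silently on overflow, so a width that is too narrow quietly corrupts the answer. A minimal illustration:

```julia
# Int32 arithmetic wraps on overflow rather than erroring:
x = typemax(Int32)                       # 2147483647
@assert x + Int32(1) === typemin(Int32)  # wraps to -2147483648

# The same computation in Int64 is fine:
@assert Int64(x) + 1 == 2147483648
```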
Julia doesn't really default to either fixed width, but while Int does resolve to Int64 on 64-bit systems, that's not because it's faster (it isn't); it's because you want to be able to index all addressable memory, and it's less likely to overflow. I actually believe Int32 (even on 64-bit systems) would be a better default for simply storing numbers, assuming it had overflow checks (which can be made fast).
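Overflow-checked arithmetic already exists in Base today, just not as the default; a sketch of what a checked-Int32 style would look like, using the unexported Base.Checked utilities:

```julia
# Checked addition returns the result when it fits...
@assert Base.Checked.checked_add(Int32(1), Int32(2)) === Int32(3)

# ...and raises OverflowError instead of silently wrapping:
try
    Base.Checked.checked_add(typemax(Int32), Int32(1))
    @assert false  # unreachable
catch err
    @assert err isa OverflowError
end
```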