Because there are no floating-point types that are machine-dependent to begin with, at least not in the same sense as integer types. None of the relevant architectures have a 32/64-bit variant that treats floating point differently depending on word size.
Even on 32-bit machines, floating-point registers are typically 64-bit, so there’s no real advantage to having a Float type that depends on the processor’s integer word size.
(Even back with 16-bit processors such as the 8086 paired with the 8087 floating-point co-processor, in 1980!, 64-bit floating-point values were standard, and the registers internal to the 8087 were actually 80-bit.)
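To make the word-size point concrete, here is roughly what a 32-bit build of Julia shows (a sketch; the exact REPL output may differ slightly):

```julia
julia> Sys.WORD_SIZE          # 32 on a 32-bit build
32

julia> Int                    # the integer alias tracks the word size
Int32

julia> typeof(1.0)            # the floating-point default does not
Float64
```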
Unless you have a particular reason to work with a specific concrete type, you may want to write your functions for the AbstractFloat type (or even, if you feel bold and know what you are doing, not annotate the argument types at all). This lets you write much more general and more Julian code that will work out of the box with a lot of types you didn’t consider.
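For instance (a hedged sketch; relative_error is a made-up name, not an API from any package):

```julia
# Written against AbstractFloat, this works for Float16/Float32/Float64/BigFloat alike:
relative_error(x::AbstractFloat, y::AbstractFloat) = abs(x - y) / max(abs(x), abs(y))

relative_error(1.0f0, 1.0000001f0)                  # Float32 input works
relative_error(big(1.0), big(1.0) + eps(big(1.0)))  # so does BigFloat
```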
In that case, doesn’t it make sense to make Float an alias for Float64 anyway? I shouldn’t have to think about the number of bits if I don’t want to, for the majority of calculations.
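For what it’s worth, that alias is something any user can already define locally:

```julia
# A one-line sketch of the proposed alias, definable in your own code today:
const Float = Float64

halve(x::Float) = x / 2
halve(3.0)   # 1.5
```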
So then, why not rename AbstractFloat to Float? In my opinion, it is indeed a little counter-intuitive and confusing that the other scalar types are denoted simply by Int, Real, Signed, and so on. Knowing this, it would seem to make sense (at least to me) that the floating-point abstract type should be denoted Float instead of AbstractFloat. Style-wise, it just feels really inconsistent.
Also — and I know this is annoyingly whiny — but having to type Abstract before the type is actually kind of irksome in my opinion, as it doubles the number of characters in the word. This becomes especially cumbersome when trying to write a lot of efficient type-specific code.
I have a hard time imagining where it’d make sense to use AbstractFloat unless you’re actually compensating for floating-point rounding with eps() or nextfloat() or similar. There are very few methods that work on all floating-point numbers but wouldn’t also work on, say, Rational or all Reals. In many cases you can even widen definitions all the way to Number.
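To illustrate with hypothetical signatures (ulp_close and midpoint are made up for this example):

```julia
# ulp_close genuinely relies on floating-point semantics (eps), so
# restricting to AbstractFloat is earned here:
ulp_close(x::T, y::T) where {T<:AbstractFloat} = abs(x - y) <= 4eps(T) * max(abs(x), abs(y))

# midpoint has nothing float-specific, so widening to Number makes it
# work for Rational, Complex, and friends for free:
midpoint(x::Number, y::Number) = (x + y) / 2
midpoint(1//3, 1//2)   # 5//12, an exact Rational
```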
Restricting methods to ::AbstractFloat arguments will likely artificially limit your code’s usefulness. And using ::AbstractFloat as the type of a field in a custom type or as a type parameter like Vector{AbstractFloat} is a major performance trap. So, yes, the name may be a little annoying, but it also isn’t all that useful.
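A minimal sketch of that trap (SlowPoint and FastPoint are invented names):

```julia
struct SlowPoint
    x::AbstractFloat           # abstract field: values are boxed, no specialization
end

struct FastPoint{T<:AbstractFloat}
    x::T                       # concrete once T is known: stored inline, fast
end

v_slow = AbstractFloat[1.0, 2.0f0]  # Vector{AbstractFloat}: heap-boxed elements
v_fast = [1.0, 2.0]                 # Vector{Float64}: packed, SIMD-friendly
```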
We did try various other names for AbstractFloat and they were pretty confusing. This name has the demonstrable advantage that it’s rarely complained about and hasn’t really caused much confusion. Calling it Float would, I’m fairly certain, cause a huge amount of grief.
Unless you have a package in which you define a new subtype of AbstractFloat parametrized by T<:AbstractFloat, in which case you’d type AbstractFloat a lot. Measurements.jl is an example. It would definitely have been easier to type Float instead of AbstractFloat every time, but I don’t complain.
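Something loosely like this (a sketch only, not Measurements.jl’s actual definition):

```julia
struct Measurement{T<:AbstractFloat} <: AbstractFloat
    val::T   # central value
    err::T   # uncertainty
end

# Errors add in quadrature for independent quantities:
Base.:+(a::Measurement{T}, b::Measurement{T}) where {T<:AbstractFloat} =
    Measurement(a.val + b.val, sqrt(a.err^2 + b.err^2))
```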
@StefanKarpinski, the confusion would come from C/C++? I think the major discomfort for users is having to type the 32 and 64 at the end of the type; it is somewhat “ugly” and non-Julian.
I don’t see a good reason to have old-fashioned names in a modern language, especially since only a very small number of users will care about Float32. My proposal is to have Float point to Float64 always, and let “advanced” users reach for Float32 when needed.
I guess that would be fine, but yes, C/C++ float means Float32 while Julia Float would then mean Float64 which is pretty confusing, although maybe we shouldn’t mind so much. I think the other major concern is that it supports the notion that the default floating-point type is tied to the platform word size like the default integer type is. I feel like we’d just trade questions about why there isn’t a platform-specific Float type for questions/bug reports about how Float is not Float32 on 32-bit platforms like it should be. Then again, there’s not a lot of people running on 32-bit systems, so maybe that doesn’t matter.
We learned a lot from the C/C++ community, but that doesn’t mean we have to adhere to their naming conventions or give them priority over the Julia users of today. IMHO, Float is clean and portable. No one uses Float32 except people working with embedded devices and GPU stuff, which is x% with x << 100% of the users.
The argument for keeping Float32 and Float64 to be consistent with C/C++ naming is the same as the argument for keeping { instead of begin and } instead of end. Are we Julia or C?
Having Float always represent 64 bits will not only make code cleaner, it will also show the intention (or lack of intention) of the programmer in writing software where lower precision is (or is not) relevant. Do we have examples of packages where Float32 is exploited? How is it being used?
We’re not “adhering” to C/C++ naming: Float32 and Float64 don’t come from C or C++ – the names in C, C++ and Java for these types are float and double, respectively. This is precisely why using Float for Float64 is problematic: it’s actively confusing to take the most commonly used name for a 32-bit float and use it to mean 64-bit float. Since C, C++ and Java are the three most popular programming languages, this isn’t exactly a “niche” usage.
Having Float always represent 64 bits will not only make code cleaner
How does that make code cleaner? It doesn’t change anything except a name. Honestly, I don’t really care that much, and I’m ready to give in on this just so that I can have a different argument about names in the future.
Please don’t (give in so easily). The problem with these kinds of arguments is that the people who don’t care for the new proposal and would find it annoying to update their code because of it are less vocal in the discussion, either because they don’t know about it or are tired of discussing these things for the nth time.
If anything, issues like this should just be collected, and considered by a small group with enough perspective (composed of individuals atoning for some horrible karmic sin etc) after 1.0.
No one uses Float32 except people working with embedded devices and GPU stuff, which is x% with x << 100% of the users.
Your perspective is completely different from mine. I’m doing image processing and machine learning and I see Float32 being used all the time. And that’s not only on GPUs. Being able to fit twice as many numbers into a SIMD register is a big deal on a CPU.
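A rough illustration of the footprint half of that argument (a sketch, not a benchmark; actual speedups are hardware-dependent):

```julia
a64 = rand(Float64, 1_000_000)
a32 = Float32.(a64)
sizeof(a64), sizeof(a32)   # (8000000, 4000000): twice the elements per register or cache line
```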
32-bit platforms is another matter. The only reason I ever use 32-bit Julia is that I’m stuck with some proprietary 32-bit libraries.