@GunnarFarneback can you give an example of your usage of Float32? How would having Float always point to Float64 impact your workflow? I do image processing all the time too, but I work with plain arrays instead of RGB, HSV, …
Again, the proposal changes neither the meaning nor the existence of Float64 and Float32. It just saves us from typing these suffix numbers all the time, which makes code look more complicated than it is.
Int does the job for Int32 and Int64 and no one complains, even though it is more subtle and a function of the host machine. Float would do the job for Float32, Float64, etc. but always point to Float64, which is the most portable of all.
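For reference, here is a quick sketch of how Int already resolves to the native width (the output below assumes a 64-bit host; on a 32-bit machine the alias is Int32 instead):

```julia
julia> Int                # alias for the platform's native integer type
Int64

julia> Int === Int64      # true on a 64-bit host, false on a 32-bit one
true

julia> typeof(1)          # integer literals default to Int
Int64
```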
For me the explicitness that Julia carries with it is very welcome. The notion that Float32 or Float64 is non-Julian seems odd to me, since being explicit is normally preferred in Julia. I have a slight distaste for Int and prefer using Int32 and Int64, reaching for Int only when interfacing with C code that is platform dependent. For me, code should always be written so that it is easier to read than it was to write, so I prefer being explicit even though I have to type more.
Again, there is no natural Float type, and you might want to choose Float32, Float16, Float64, or Float128 depending on your algorithmic needs. In a lot of use cases the extended precision provided by Float64 is not necessary, and you can use Float32 and still get correct results.
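As a rough illustration of that trade-off, here are just the relative machine epsilons and the Float32 literal syntax (not tied to any particular algorithm):

```julia
julia> eps(Float32)       # ~7 decimal digits of precision
1.1920929f-7

julia> eps(Float64)       # ~16 decimal digits of precision
2.220446049250313e-16

julia> x = 1.5f0          # a Float32 literal uses the f0 suffix
1.5f0

julia> typeof(x)
Float32
```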
As a side note, please don’t make sweeping general statements. Instead make statements about your personal experience.
This is an amazingly tedious thread. It’s been pointed out that this has been discussed at least half a dozen times previously. The reasons for the status quo have been given – yet again. None of those reasons have really been refuted, and the only counter-argument boils down to “I don’t want to write digits”. You don’t have to – just define const Float = Float64 in your own code.
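A minimal sketch of that alias (midpoint is just a made-up function name for illustration):

```julia
# One project-local alias...
const Float = Float64

# ...and the suffix disappears from your own signatures.
midpoint(a::Float, b::Float) = (a + b) / 2

midpoint(1.0, 3.0)   # 2.0
```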
This can cause some problems, though. If your method signatures require Int64 (admittedly too strict), your code will fail on 32-bit machines when someone passes a literal like 1, since that will be an Int32. Using Int avoids a lot of these problems.
And this isn’t just theoretical. I changed a bunch of Int64’s to Int to fix Windows 32-bit tests for Optim.jl not too long ago. So Int is very practical.
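To make the failure mode concrete, here is a hypothetical sketch (the function names are made up): a signature restricted to Int64 won't match the literal 1 on a 32-bit machine, while a signature using Int works everywhere.

```julia
# Too strict: only matches 64-bit integers.
half_strict(n::Int64) = n ÷ 2

# Portable: Int is Int64 or Int32 depending on the platform.
half(n::Int) = n ÷ 2

half(1)          # works everywhere; the literal 1 is always an Int
half_strict(1)   # MethodError on a 32-bit machine, where typeof(1) == Int32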
Float, on the other hand, could always be replaced by Float64, so there is no true analogue of Int in the floating-point domain. In fact, it would be weird if Float64 didn't have the bits in its name, because then it would be the only fixed-size numeric type that doesn't mention its bit length. All of the signed and unsigned integers do it, every other float does it; Float would be the odd one out.
float and double are a hangover from the days when floating point formats differed between computers (essentially the only requirement of the C standard is that the values of float are a subset of those of double). Fortunately, these things are now standardised (as IEEE 754 binary32 and binary64), so it makes sense to give them names that reflect this.
Also, double is simply a terrible name (“double” of what?). At least Fortran (where I assume it originates) was more descriptive, calling it double precision.
My apologies, I must admit I definitely had not intended to restart an old argument and beat what I honestly had no idea was a long-dead horse. Thanks for your patience and the effort you put into explaining.
Honestly, it just seemed odd to me that Int and Float didn't have parallel meanings, and since I've been using Julia for a while now without understanding the difference (or the reason for it), I just wanted to bring it up as either (a) a potential future change if many agreed, or (b) an addition to some FAQ section, like @stevengj suggested, to prevent confusion, especially for Julia newcomers.