OK, let’s sum up. Sorry for the long post, I’ll try to be complete.
Julia has a default precision, which is Float64. It shows up in the parsing of `x = 4.2`, or in `x = randn(2,2)`. This means that any code (script or function) that relies on these defaults will work in Float64. This behavior is standard, but by no means fundamental: there's no particular reason to impose Float64 as the default rather than Float32, other than that this is what's commonly done (e.g. in numpy, matlab).
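To make the default concrete, here's a quick check (plain Julia, nothing assumed beyond Base):

```julia
x = 4.2
typeof(x)        # Float64: literals parse to the default precision
A = randn(2, 2)
eltype(A)        # Float64: array constructors default to it too
eltype(zeros(2, 2))  # Float64 as well
```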
It is sometimes useful to work in a different precision. For instance, I might want to test the numerical stability of my algorithm, and test it in Float32 to see if it breaks or not. Or I might run into a particularly weird case that requires extended precision. Or I might want to experiment with doing computations in Float32. In other languages this is thoroughly annoying to do, in julia this is trivial (and efficient) as long as 1) one remembers to write generic code for all functions 2) one seeds the algorithms with the correct element type.
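As a sketch of what "generic code + correct seed" looks like in practice (`colnorms` is just a hypothetical example function, not anything from Base):

```julia
# Generic: the working precision is inherited from the input's element type
colnorms(A::AbstractMatrix) = vec(sqrt.(sum(abs2, A; dims=1)))

A64 = randn(3, 3)            # default Float64 seed
A32 = randn(Float32, 3, 3)   # Float32 seed: same code, different precision
eltype(colnorms(A64))  # Float64
eltype(colnorms(A32))  # Float32
```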
I’m not saying that the situation is bad; in fact I don’t know of any other language where it’s as easy. However, it is annoying (at least I find it annoying, and I don’t think I’m the only one) to write completely generic code when all you want 99% of the time is to operate on arrays of Float64. It is also annoying to hunt down constants and take care to set their types. You can ease that with macros, which works well for a single script or function, but becomes tedious again when you have many functions, each calling randn(2,2) or zeros(2,2) or whatever. Don’t get me wrong, I like generic code as much as anybody, and I know that calling randn() without an explicit type is bad (because non-generic) practice. I’m just saying that the untyped call is the path of least resistance, and so the path everybody who’s not a base/package developer will take unless they have very good reasons not to.
Given all that, it would be nice if there were a way to change the default precision (not essential, but nice). Now we come down to earth and ask: is it implementable? Well, it appears so:
```julia
function my_fun(n)
    sum(randn(n))  # just an example, but imagine something more complicated
end

display(my_fun(2))  # Float64 result

# Overwrite the untyped method so the default element type becomes Float32
import Base.randn
Base.randn(dims::Integer...) = randn(Float32, dims...)

display(my_fun(2))  # now a Float32 result
```
The generated code for `my_fun` is as fast as if there were no overloading. So one could imagine a package (let’s not even talk about Base) that overloads the randn(), zeros(), etc. functions (let’s forget about constants for the moment). Obviously such a package would only be useful as a diagnostic/experimentation tool, not something serious packages should depend on.
Con: it creates a global state. I agree that global state makes code harder to reason about and can lead to subtle bugs, but it is still extremely convenient. Existing uses in julia that I can find (by typing `set` + TAB in the REPL) that affect the behavior of code: set_zero_subnormals, setprecision, setrounding. I haven’t understood the status of https://github.com/JuliaLang/julia/pull/23205, but if it’s merged, add that to the list. The number of threads is also a global state.
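For comparison, `setprecision` already works exactly like this for BigFloat: a global (or dynamically scoped) state that changes the precision of subsequent computations:

```julia
# setprecision is an existing precision-related global state in Base
setprecision(BigFloat, 64) do
    precision(BigFloat(1))   # 64 inside the block
end
precision(BigFloat(1))       # back to the default of 256 outside
```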
@yuyichao, you feel very strongly that this would be a bad idea, even as a registered external package (which, as far as I can tell, doesn’t imply any sort of endorsement from the julia devs about the quality of the package, and whose bugs are the responsibility of the package developer, not the julia devs). You take a similar position in the RNG PR. Is this because (1) you don’t believe in the use case, (2) you don’t want another global state, or (3) there is something fundamentally wrong about redefining functions in Base that will make julia misbehave in some way? (1) and (2) are duly noted, but, with all due respect, I’m not sure why they should be grounds for rejecting a package. You seem to imply (3) in some of your comments (here and on the RNG PR), but I’m still not sure why (and others don’t seem sure either). Could you clarify?