Something I find myself doing often is writing an algorithm in Float64 (without bothering with types, as I would in Matlab, say), and then trying to change the default precision to Float32 or BigFloat (to test the sensitivity to numerical errors, or to see if I can gain in performance…). Julia makes it very easy to write type-agnostic functions, but typically:
- I will not bother writing fully generic functions, and will have a number of zeros(n), randn(n) or the like in my functions (see the sketch after this list)
- The top-level script still has to pick a type, and will call zeros(n), randn(n) etc.
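For concreteness, here is the kind of thing I mean (a hypothetical example; smooth is a made-up function). The first version is hard-wired to Float64 through zeros(n); the second propagates the element type of its input:

```julia
# Hard-wired version: zeros(n) always allocates a Vector{Float64},
# whatever the element type of x is.
function smooth(x)
    y = zeros(length(x))
    for i in 2:length(x)-1
        y[i] = (x[i-1] + x[i] + x[i+1]) / 3   # 3-point moving average
    end
    return y
end

# Generic version: the output element type follows the input.
function smooth_generic(x::AbstractVector{T}) where {T}
    y = zeros(T, length(x))                   # Vector{T}
    for i in 2:length(x)-1
        y[i] = (x[i-1] + x[i] + x[i+1]) / 3
    end
    return y
end

smooth(randn(Float32, 10))          # returns Vector{Float64} anyway
smooth_generic(randn(Float32, 10))  # returns Vector{Float32}
```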
The correct way to go about trying my code in Float32 / BigFloat would be to make sure each of my functions is type-agnostic, define a global DEFAULT_TYPE constant, and use that in my top-level scripts. This can be done, but it is a bit annoying (and requires thinking explicitly about types when coding).
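Something along these lines, where DEFAULT_TYPE is an illustrative name rather than anything existing:

```julia
# Pick the working precision in one place:
const DEFAULT_TYPE = Float64     # switch to Float32 or BigFloat here

n = 100
x = zeros(DEFAULT_TYPE, n)       # zeros accepts a type argument
y = rand(DEFAULT_TYPE, n)        # so does rand (including BigFloat)
# randn does not cover every float type, so a conversion fallback
# (which only samples at Float64 precision) may be needed:
z = DEFAULT_TYPE.(randn(n))
```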
Wouldn’t it be great if this could be done with a one-line change at the top level of the script (or with a command-line option)? The only thing that would have to change is that a number of functions (zeros/randn and many others) produce Float64 by default. If I understand https://github.com/JuliaLang/julia/pull/23205 correctly, this could be done by having a DEFAULT_FLOAT_TYPE() = Float64 function somewhere, replacing all explicit uses of Float64 in Base with it, and then overwriting that method. Would that be feasible?
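A rough sketch of that mechanism, outside of Base (default_float_type, my_zeros and my_randn are made-up stand-ins; the real change would touch Base itself):

```julia
# Stand-in for the hypothetical default-type function:
default_float_type() = Float64

# Stand-ins for functions that currently hard-code Float64:
my_zeros(n) = zeros(default_float_type(), n)
function my_randn(n)
    T = default_float_type()
    return T.(randn(n))          # placeholder conversion from Float64
end

a = my_zeros(3)                  # Vector{Float64}

# The one-line change at the top of a script: overwrite the method.
default_float_type() = BigFloat

b = my_zeros(3)                  # Vector{BigFloat}
```

Since default_float_type() takes no arguments and returns a constant, the compiler should be able to constant-fold it, so there would be no runtime cost; the flip side is that redefining it invalidates and recompiles every caller, which is exactly the concern below.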
A concern is that it would force recompilation of all code, including library code (I want my computations to be done in BigFloat, but not necessarily all the internals of the plotting library, say). This might be acceptable, though.