Change default precision

You make it sound like a historical accident, but it isn’t. For scientific computation, the trade-off is usually between economizing on CPU and memory use, and worrying about conditioning & catastrophic cancellation. Float64 turns out to be a sweet spot for a lot of numerical work. That’s why many (most?) languages for scientific computing use it as the default.

I think that everyone is sympathetic to the fact that sometimes you want to use other types with more/less precision (or range, for integers). There is no argument about this. It’s just that yet another piece of global state turns out to be a bad solution.

You may disagree about this, but I frequently find that making more-or-less “global” arguments (i.e., something which every single function in my code uses as-is 99% of the time) explicit leads to improved code. An additional bonus is that it meshes well with Julia’s compilation model. In an MCMC library I am working on, I made the RNG an argument everywhere, and found it really convenient.
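
For concreteness, a minimal sketch of that pattern (the function name mcmc_step is mine, not from the library):

# Thread the RNG through explicitly instead of relying on the global default.
function mcmc_step(rng::AbstractRNG, x)
    return x + randn(rng)       # the proposal uses the caller-supplied RNG
end

rng = MersenneTwister(42)
x = 0.0
for _ in 1:100
    x = mcmc_step(rng, x)
end

Every source of randomness is then explicit and reproducible, with no hidden global state.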

> Float64 turns out to be a sweet spot for a lot of numerical work. That’s why many (most?) languages for scientific computing use it as the default.

That strongly depends on what you’re doing, but in many applications, if your algorithm is ill-conditioned it will blow up in Float64 anyway, and if it is well-conditioned, you lose maybe 4 digits to error accumulation, which still leaves you with 4 usable digits.

I’m not an expert in this, but it seems to me that 32 bits was the “default” at one point: C’s float means 32 bits, for instance. I imagine that’s because doubles were not widely implemented in hardware. Then hardware got fast at doing doubles, and double precision became the default for scientific computing. It seems to me that the primary reason for this is that it is so inconvenient to change precision in many languages, and if you have to pick one, you pick the safest that’s reasonably fast on the architecture you target. I don’t believe it is the sweet spot for all applications: it just happens to be the sanest default to pick if you have to pick one. As a result, everybody is locked into doing everything in double precision.

Julia is a good opportunity to change that and allow users to experiment with what they need for their particular use case. It is already possible in Julia’s current form; it would just be simpler with a global set_precision.
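
To make the accumulation claim concrete, here is a quick informal check (my example, not from the thread): naive left-to-right summation of 10^6 random Float32 values.

function naivesum(x)            # deliberately naive: no pairwise summation
    s = zero(eltype(x))
    for xi in x
        s += xi
    end
    return s
end

x = rand(Float32, 10^6)
ref = sum(Float64.(x))          # higher-precision reference
abs(naivesum(x) - ref) / ref    # relative error, roughly 1e-5 to 1e-4

That is a loss of a few digits out of Float32’s roughly seven, in line with the estimate above.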

Again, I’m not arguing with the fact that passing arguments around is a much cleaner solution. I’m just saying that most users (not package developers) are not in the business of writing clean code, and will take the path of least resistance (which in this case means writing non-generic code, which makes it harder to change precision later on). It is not a moral failing of users. Having a way to set the default precision, like basically anything else that changes global state (default RNG, treatment of subnormals, BigFloat precision…), is of use in user code. It does not absolve anyone from writing nice and generic library code, which should not depend on the default precision anyway.
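
For reference, these kinds of global switches already exist in Base (Julia 0.6 spellings):

setprecision(BigFloat, 256)   # default precision for newly created BigFloats
set_zero_subnormals(true)     # flush subnormals to zero on this thread
srand(1234)                   # reseed the global default RNG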


Yes, and it’s not just what I prefer; it is what it really is, and it makes a fundamental difference as to what you can do and what you can’t.

I don’t have a particular opinion on that.

As I said, the macro that makes a local change to your code is perfectly fine. However, because it’s a type and not a precision, you must not change the return types of other functions.

I wouldn’t call it “deeply wrong with you”, but I believe the piece you are missing is that you don’t think types are fundamental. They are. Everything in Julia relies on them.

Again, why is this exactly? Isn’t this what defaultRNG() does in https://github.com/JuliaLang/julia/pull/23205? Or is that PR invalid? (I’m still unclear on what the conclusion of that discussion was.) As far as I understand, every function calling into a function that is redefined will get recompiled, which is fine by me. Of course it will confuse the expectations of Base and library code, namely that zeros(2) will produce a Float64 array, but since the untyped array-producing functions are more user- than library-facing, I’m not sure that will be a big problem - I’ll see when I get around to implementing it.

I might be very wrong about the behavior of redefining functions; I just haven’t seen any evidence to the contrary. If I am, please explain. Please understand how frustrating it is to be told something does not work and that I should not do it under penalty of mysterious horrors, when all appearances, simple code, and an open PR that seems functional show that it works fine.

An illustrative example of why it could be harmful to redefine functions:

julia> Base.similar(A::Matrix, ::Type{T}, dims::Dims{2}) where T = similar(A, Int)

julia> rand(2,2)*rand(2,2)
ERROR: InexactError()
Stacktrace:
 [1] convert(::Type{Int64}, ::Float64) at ./float.jl:679
 [2] matmul2x2!(::Array{Int64,2}, ::Char, ::Char, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/matmul.jl:679
 [3] generic_matmatmul!(::Array{Int64,2}, ::Char, ::Char, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/matmul.jl:478
 [4] *(::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/matmul.jl:146

This I get: it is not redefining functions in itself that is problematic, it is that the redefined function will be, well, redefined, and cause bad interactions with other functions that call it (what I call “confusing the expectations” above). My hope is that library code does not call array-building functions without specifying a type (because that is non-generic programming), and therefore this kind of thing will not happen. The point I’m still confused about is whether it is legitimate at all to redefine functions (with the understanding that, of course, it will change behavior).

Sorry for hopping in, but what is wrong with giving the functions overloads that take ::Type{T} as their first argument?

Above, you mentioned C and some historical reasons why float corresponds to single precision. Yet, when you write 1.0 in C/C++, it is a double; float x = 1.0 uses an implicit conversion from double to float. A similar behaviour can be obtained, if you are not willing to change the way you write your code when calling randn, as simply as:

function myfunc(::Type{T}, n, m) where T<:AbstractFloat
  A::Matrix{T} = randn(n,m) # or, A::AbstractMatrix{T} if you are going to use sprandn(n,m,0.1)
  return A
end

myfunc(n, m) = myfunc(Float64, n, m)

function otherfunc(::Type{T}, n, m, o) where T<:AbstractFloat
  result::T = n+m+o
  return result
end

otherfunc(n, m, o) = otherfunc(Float32, n, m, o)

otherfunc(Float32, 5, 6.0f0, BigFloat(10))

where you declare your local variable to be of some type, which in turn calls the corresponding convert(T, value) function implicitly. This is more or less what happens when you write float x = 1.0 in C. This has also been pointed out for types by @yuyichao above.


That is roughly what the PR does, and the PR is invalid. Though that PR doesn’t change the value types returned by rand, so it’s not as bad.

There’s no such distinction.

Packages and user code are not required to be generic. They can very reasonably rely on rand() returning a Float64.

For this task, I am using a patched version of Julia which redefines the floating-point literals. The conversion of all literals is done at the parser level, and one can switch the default floating-point precision on the fly.

The base Julia library was never designed to handle this type conversion, but the fix is relatively non-invasive and very few changes were needed to make the system work (e.g., base/irrationals.jl, base/printf.jl). In my code, I determine the default type with a typeof(1.0) call; please take a look at adapgaus.jl and legepols.jl. By rewriting the rand, eye, etc. routines, one can make generic code that is fully reinterpreted in either BigFloat or Float32 as needed. Actually, I have been using this patch for tightening quadrature tables that were built in Float64.
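
The pattern looks roughly like this (DefaultFloat is my name for it): since the patched parser retypes the literal, typeof(1.0) tracks whatever default is currently selected.

const DefaultFloat = typeof(1.0)    # BigFloat after setenv_bigfloat.jl

nodes = zeros(DefaultFloat, 16)     # quadrature nodes in the default precision
weights = zeros(DefaultFloat, 16)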

For the Julia 0.6.0 release, I put together my changes into the zg/promo branch, which you can download here:

git clone http://github.com/zgimbutas/julia.git
cd julia
git checkout zg/promo
make

After the patched version of Julia is installed (you might want to use make CFLAGS=-Wno-error on Xcode 9 + macOS High Sierra to work around a libcurl compilation bug), please clone the user-space tools from

git clone http://github.com/zgimbutas/default-promo.git

This package provides several scripts for switching between Float16, Float32, Float64 and BigFloat literals: setenv_float16.jl, setenv_float32.jl, setenv_float64.jl and setenv_bigfloat.jl.

For example:

include("setenv_bigfloat.jl")

println("\n===basic arithmetic==")

println("(1.0+1/3-2)*2=",(1.0+1/3-2)*2)
println("2/3=",2/3)
println("eps(1.0)=", eps(1.0))

println("\n===elementary functions==")

println("sin(1.0)=", sin(1.0))
println("sin(1)=", sin(1))
println("exp(1.0)=", exp(1.0))
println("exp(1)=", exp(1))

println("\n===constants and irrationals==")

println("4*atan(1.0)=", 4*atan(1.0))
println("pi=", pi)
println("2*pi=", 2*pi)
println("2.0*pi=", 2.0*pi)
println("pi-4*atan(1.0)=", pi-4*atan(1.0))

The output should look like this:

===literals==
typeof(1.0)=BigFloat
typeof(1.0e0)=BigFloat

===basic arithmetic==
(1.0+1/3-2)*2=-1.333333333333333333333333333333333333333333333333333333333333333333333333333322
2/3=6.666666666666666666666666666666666666666666666666666666666666666666666666666695e-01
eps(1.0)=1.727233711018888925077270372560079914223200072887256277004740694033718360632485e-77

===elementary functions==
sin(1.0)=8.414709848078965066525023216302989996225630607983710656727517099919104043912398e-01
sin(1)=8.414709848078965066525023216302989996225630607983710656727517099919104043912398e-01
exp(1.0)=2.718281828459045235360287471352662497757247093699959574966967627724076630353555
exp(1)=2.718281828459045235360287471352662497757247093699959574966967627724076630353555

===constants and irrationals==
4*atan(1.0)=3.141592653589793238462643383279502884197169399375105820974944592307816406286198
pi=π = 3.1415926535897932384626433832795028841971693993751058209749...
2*pi=6.283185307179586476925286766559005768394338798750211641949889184615632812572396
2.0*pi=6.283185307179586476925286766559005768394338798750211641949889184615632812572396
pi-4*atan(1.0)=0.000000000000000000000000000000000000000000000000000000000000000000000000000000

or, if include("setenv_float32.jl") is used:

===literals==
typeof(1.0)=Float32
typeof(1.0e0)=Float32

===basic arithmetic==
(1.0+1/3-2)*2=-1.3333333
2/3=0.6666667
eps(1.0)=1.1920929e-7

===elementary functions==
sin(1.0)=0.84147096
sin(1)=0.84147096
exp(1.0)=2.7182817
exp(1)=2.7182817

===constants and irrationals==
4*atan(1.0)=3.1415927
pi=π = 3.14159...
2*pi=6.2831855
2.0*pi=6.2831855
pi-4*atan(1.0)=0.0

Some work is needed to debug the base library; the type conversion will stress it in non-trivial ways (for example, base/irrationals.jl has built-in Float64 conversions that I had to overload). For linear algebra and FFTs, some generic functions are missing as well (svd, eig, etc.). But I was able to do some non-trivial experiments this way, and I hope it could be a good starting point.

More extensive tests are in test_float16.jl, test_float32.jl, test_float64.jl and test_bigfloat.jl, with corresponding results located in *.output files.

Hope this helps.

Experimenting with this (i.e. creating a fork) is fine, as long as it is made clear that (as I mentioned above) this is not at all supported, should not be used in remotely serious work, and no attempt will be made to maintain compatibility with it.

Agree. This code is presented here just to demonstrate an idea, for experimentation purposes only. Please use it at your own risk.

The problem that Antoine describes is a very real one. I run into it all the time, and I’m sure so do many others. Writing generic code is a pain in the butt, and a major reason for me to switch from C++ to Julia was that Julia made it a lot easier to write generic code. In Julia, I can at least use data without referring to its type, and it is only when allocating it that I have to be careful about the type.
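
A typical instance of what I mean (my own example): the computation itself is fully generic, and the element type only shows up at the allocation.

function normalize_rows(A::AbstractMatrix)
    T = float(eltype(A))               # the only place a type is mentioned
    B = similar(A, T)
    for i in 1:size(A, 1)
        s = sqrt(sum(abs2, A[i, :]))   # row norm, avoiding a norm() dependency
        B[i, :] = A[i, :] / s
    end
    return B
end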

I agree with seemingly everyone else on this thread, however, that making the default floating-point type interchangeable is a bad idea. I would have a hard time giving a conclusive reason as to why this is so, and - no offense - I believe most people on this thread would too, yet everyone seems to agree. So maybe think of it as some sort of wisdom-out-of-experience thing that floats around in the programmer community. If that’s not good enough for you (frankly, it shouldn’t be), then you could also argue that now is not the right moment to introduce such a fundamental feature - not when Julia is this close to finally reaching version 1.0!

I believe the right way to go about this would be to write a macro just like @ggggggggg proposed: a macro @defaulttype T code which takes a piece of code, looks for all occurrences of numeric literals, zeros, ones, and friends, and replaces them with the appropriately typed versions. This really imposes a minimal burden on the user: you simply wrap all your code in this macro (including the functions you wrote yourself). If you are anything like me, your code will most likely consist of a single file anyway, so that’s simply adding a line @defaulttype T begin at the beginning and an end at the end. On the other hand, it appropriately localises the effect and will not break any code which expects rand to return a particular type and is not yours.
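
A minimal sketch of what I mean (the macro name @defaulttype is hypothetical, and only floating-point literals are handled here, not rand, zeros, ones and friends):

retype(T, x) = isa(x, AbstractFloat) ? :($T($x)) : x       # retype float literals
retype(T, ex::Expr) = Expr(ex.head, [retype(T, a) for a in ex.args]...)

macro defaulttype(T, code)
    esc(retype(T, code))
end

# Every float literal inside the block becomes a Float32:
y = @defaulttype Float32 begin
    a = 1.0
    a + 0.5
end
typeof(y)    # Float32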

I think such a macro would definitely be handy for me, but probably not handy enough to go through the effort of implementing, testing and maintaining it. So if anyone else wants to have a go at it, please feel free and reassured that there would at least be one person out there who would be grateful for it.

(hi Simon!)

I’m not 100% sure it would be infeasible to just override everything, but you’re right, it’s not actually that big a deal to wrap everything in a macro (even with multiple files, it should not be hard to figure out a way to include the files and wrap them in a macro as well), and it would minimize side effects. I’ll try and do that!


I played around with it, and it is pretty feasible to write a fairly capable macro that not only changes floating-point literals, but also handles functions like rand and ones, while preserving cases like rand(Float64) that explicitly specify a precision.

I’ve posted a little (unregistered) package that implements this: https://github.com/stevengj/ChangePrecision.jl
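
Usage, as I understand it from the README: wrap the code in the macro, and float literals as well as untyped calls to rand and friends switch to the requested precision.

using ChangePrecision

@changeprecision Float32 begin
    x = rand(10)         # Array{Float32,1} instead of Array{Float64,1}
    s = sum(x) / 3.14    # the literal is a Float32, so s is too
end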

I would appreciate it if people banged on it a bit to see how well it works and whether I missed any important functions. I can register it if it seems useful.

(I don’t recursively handle include yet, but this would be pretty easy to add if it is needed.)


Another way to handle this would be to move all the methods in Base that use Float64 as a global default type into another module; the user could then choose to exchange that module for another one. Right?
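
Something like this, I imagine (all names here are hypothetical):

module Float64Defaults
    default_rand(dims...) = rand(Float64, dims...)
    default_zeros(dims...) = zeros(Float64, dims...)
end

module Float32Defaults
    default_rand(dims...) = rand(Float32, dims...)
    default_zeros(dims...) = zeros(Float32, dims...)
end

const Defaults = Float32Defaults    # the user picks the module once
Defaults.default_rand(2, 2)         # 2×2 Array{Float32,2}

It keeps the choice explicit and local to one binding, though every call site still has to go through the chosen module.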

Wonderful! I took a stab at it yesterday, and @ettersi added to it: https://gist.github.com/antoine-levitt/413f3251e50c18f9544ae26ad5ea651f. Your version seems much more robust though! The overloading of functions like sqrt() calling Base.sqrt() is a very nice trick. It does miss things like sin(1) though, and possibly irrationals (?). I’ll try and make a PR. But it would be very good to register it as a package, I think!

Yes, I hardly have an exhaustive list of functions, but it is a one-line patch to add more, and PRs are welcome.

(At some point you have to stop and ask which transcendental functions you are actually likely to pass integers to. I’m doubtful about sin, but sind would definitely qualify.)

To handle irrationals robustly (so that pi*big(3) is still a BigFloat), the right thing is probably to define a new irrational type with slightly different promotion rules. That is a bit tedious to do without JuliaLang/julia#24245 (make Irrational <: AbstractIrrational), unfortunately.
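
Very roughly, I imagine the promotion rules would look like this sketch (all names hypothetical, and untested against corner cases): default to the chosen precision against machine integers, but still give a BigFloat against big numbers.

struct TypedIrrational{T<:AbstractFloat,sym} <: Real end

Base.convert(::Type{F}, ::TypedIrrational{T,sym}) where {F<:AbstractFloat,T,sym} =
    convert(F, Irrational{sym}())
Base.promote_rule(::Type{TypedIrrational{T,sym}}, ::Type{<:Integer}) where {T,sym} = T
Base.promote_rule(::Type{TypedIrrational{T,sym}}, ::Type{BigInt}) where {T,sym} = BigFloat
Base.promote_rule(::Type{TypedIrrational{T,sym}}, ::Type{F}) where {T,sym,F<:AbstractFloat} = F

const pi32 = TypedIrrational{Float32,:π}()
pi32 * 2        # Float32
pi32 * big(3)   # BigFloat, as desired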


Cross-referencing a nice use for this: Performing arithmetic operations in quadruple precision - #14 by greg_plowman

Marking this topic as solved, see ANN: ChangePrecision.jl

Thank you very much @stevengj, this is perfect!