Change default precision

That is roughly what the PR does, and the PR is invalid. That PR doesn’t change the value types returned by rand, though, so it’s not as bad.

There’s no such distinction.

Packages and user code are not required to be generic. They can very reasonably rely on rand() returning a Float64.

For this task, I am using a patched version of Julia which redefines the
floating-point literals. The conversion of all literals is done at the
parser level, and one can switch the default floating-point precision on
the fly.

The base Julia library was never designed to handle this type
conversion, but the fix is relatively non-invasive and very few changes
were needed to make the system work (e.g., base/irrationals.jl,
base/printf.jl). In my codes, I determine the default type with a
typeof(1.0) call; please take a look at adapgaus.jl and
legepols.jl. By rewriting rand, eye, etc., one can write generic code
that is fully reinterpreted in either BigFloat or Float32 as needed.
Actually, I have been using this patch for tightening quadrature tables
that were built in Float64.
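
To illustrate the pattern (a minimal sketch of my own, not code taken from adapgaus.jl or legepols.jl; the function name and formulas are placeholders), a routine can query the parser-selected default type once via typeof(1.0) and allocate everything in that type:

# Under the patch, typeof(1.0) reflects the currently selected literal type
# (Float32, Float64, BigFloat, ...); in stock Julia it is always Float64.
function example_nodes(n)
    T = typeof(1.0)                 # the default floating-point type
    x = zeros(T, n)                 # work arrays allocated in that precision
    w = zeros(T, n)
    for i in 1:n
        x[i] = cos(T(pi) * (2i - 1) / (2n))   # placeholder node formula
        w[i] = one(T) / n
    end
    return x, w
end

With include("setenv_bigfloat.jl") active, example_nodes(10) would then return BigFloat arrays without any other change to the code.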

For the Julia 0.6.0 release, I put my changes together into the zg/promo
branch, which you can download here:

git clone http://github.com/zgimbutas/julia.git
cd julia
git checkout zg/promo
make

After the patched version of Julia is installed (you might want to use
make CFLAGS=-Wno-error on Xcode 9 + macOS High Sierra to work around a
libcurl compilation bug), please clone the user-space tools from

git clone http://github.com/zgimbutas/default-promo.git

This package provides several scripts for switching between Float16,
Float32, Float64 and BigFloat literals: setenv_float16.jl,
setenv_float32.jl, setenv_float64.jl and setenv_bigfloat.jl.

For example:

include("setenv_bigfloat.jl")

println("\n===basic arithmetic==")

println("(1.0+1/3-2)*2=",(1.0+1/3-2)*2)
println("2/3=",2/3)
println("eps(1.0)=", eps(1.0))

println("\n===elementary functions==")

println("sin(1.0)=", sin(1.0))
println("sin(1)=", sin(1))
println("exp(1.0)=", exp(1.0))
println("exp(1)=", exp(1))

println("\n===constants and irrationals==")

println("4*atan(1.0)=", 4*atan(1.0))
println("pi=", pi)
println("2*pi=", 2*pi)
println("2.0*pi=", 2.0*pi)
println("pi-4*atan(1.0)=", pi-4*atan(1.0))

The output should look like this:

===literals==
typeof(1.0)=BigFloat
typeof(1.0e0)=BigFloat

===basic arithmetic==
(1.0+1/3-2)*2=-1.333333333333333333333333333333333333333333333333333333333333333333333333333322
2/3=6.666666666666666666666666666666666666666666666666666666666666666666666666666695e-01
eps(1.0)=1.727233711018888925077270372560079914223200072887256277004740694033718360632485e-77

===elementary functions==
sin(1.0)=8.414709848078965066525023216302989996225630607983710656727517099919104043912398e-01
sin(1)=8.414709848078965066525023216302989996225630607983710656727517099919104043912398e-01
exp(1.0)=2.718281828459045235360287471352662497757247093699959574966967627724076630353555
exp(1)=2.718281828459045235360287471352662497757247093699959574966967627724076630353555

===constants and irrationals==
4*atan(1.0)=3.141592653589793238462643383279502884197169399375105820974944592307816406286198
pi=π = 3.1415926535897932384626433832795028841971693993751058209749...
2*pi=6.283185307179586476925286766559005768394338798750211641949889184615632812572396
2.0*pi=6.283185307179586476925286766559005768394338798750211641949889184615632812572396
pi-4*atan(1.0)=0.000000000000000000000000000000000000000000000000000000000000000000000000000000

or if include("setenv_float32.jl") is used

===literals==
typeof(1.0)=Float32
typeof(1.0e0)=Float32

===basic arithmetic==
(1.0+1/3-2)*2=-1.3333333
2/3=0.6666667
eps(1.0)=1.1920929e-7

===elementary functions==
sin(1.0)=0.84147096
sin(1)=0.84147096
exp(1.0)=2.7182817
exp(1)=2.7182817

===constants and irrationals==
4*atan(1.0)=3.1415927
pi=π = 3.14159...
2*pi=6.2831855
2.0*pi=6.2831855
pi-4*atan(1.0)=0.0

Some work is needed to debug the base library; the type conversion
will stress it in non-trivial ways (for example, base/irrationals.jl
has built-in Float64 conversions that I had to overload). For linear
algebra and FFTs, some generic functions are missing as well (svd,
eig, etc.). But I was able to do some non-trivial experiments this
way, and I hope it could be a good starting point.

More extensive tests are in test_float16.jl, test_float32.jl,
test_float64.jl, test_bigfloat.jl with corresponding results located
in *.output files.

Hope this helps.

Experimenting with this (i.e. creating a fork) is fine, as long as it is made clear that (as I mentioned above) this is not at all supported, should not be used in remotely serious work, and no attempt will be made to maintain compatibility with it.

Agree. This code is presented here just to demonstrate an idea, for experimentation purposes only. Please use it at your own risk.

The problem that Antoine describes is a very real one. I run into it all the time, and I’m sure so do many others. Writing generic code is a pain in the butt, and a major reason for me to switch from C++ to Julia was that Julia made it a lot easier to write generic code. In Julia, I can at least use data without referring to its type, and it is only when allocating it that I have to be careful about the type.

I agree with seemingly everyone else on this thread, however, that making the default floating point type interchangeable is a bad idea. I would have a hard time giving a conclusive reason as to why this is so, and - no offense - I believe most people on this thread would too, yet everyone seems to agree. So maybe think of it as some sort of wisdom-out-of-experience thing that floats around in the programmer community. If that’s not good enough for you (frankly, it shouldn’t be), then you could also argue that now is not the right moment to introduce such a fundamental feature - not when Julia is this close to finally reaching version 1.0!

I believe the right way to go about this would be to write a macro just like @ggggggggg proposed: a macro @defaulttype T code which takes a piece of code, looks for all occurrences of numeric literals, zeros, ones, and friends, and replaces them with the appropriately typed versions. This really imposes a minimal burden on the user: you simply wrap all your code in this macro (including the functions you wrote yourself). If you are anything like me, your code will most likely consist of a single file anyway, so that’s simply adding a line @defaulttype T begin at the beginning and an end at the end. On the other hand, it appropriately localises the effect and will not break any code which does expect rand to return a particular type and is not yours.
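
To make the idea concrete, here is a minimal sketch of such a macro (my own illustration, not the implementation discussed below; it only rewrites bare Float64 literals, and rand/zeros/ones would need extra cases):

# Rewrite bare Float64 literals in an expression tree into calls T(literal);
# everything else is passed through unchanged.
retype(T, x) = x isa Float64 ? :($T($x)) :
               x isa Expr    ? Expr(x.head, (retype(T, a) for a in x.args)...) :
               x

# Wrap a block of code so its floating-point literals default to the given type.
macro defaulttype(T, code)
    esc(retype(T, code))
end

# Example usage:
@defaulttype Float32 begin
    x = 2.0 * pi           # the literal 2.0 is rewritten to Float32(2.0)
    println(typeof(x))     # prints Float32
end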

I think such a macro would definitely be handy for me, but probably not handy enough to go through the effort of implementing, testing and maintaining it. So if anyone else wants to have a go at it, please feel free and reassured that there would at least be one person out there who would be grateful for it.

(hi Simon!)

I’m not 100% sure it would be infeasible to just override everything, but you’re right, it’s not actually that big a deal to wrap everything in a macro (even with multiple files, it should not be that hard to figure out a way to include the files and wrap them in a macro as well), and it would minimize side effects. I’ll try and do that!


I played around with it, and it is pretty feasible to write a fairly capable macro that not only changes floating-point literals, but also handles functions like rand and ones, while preserving cases like rand(Float64) that explicitly specify a precision.

I’ve posted a little (unregistered) package that implements this: https://github.com/stevengj/ChangePrecision.jl
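
For reference, usage looks something like this (based on the package README as I recall it; the macro name @changeprecision is an assumption on my part, so double-check against the repository):

using ChangePrecision   # https://github.com/stevengj/ChangePrecision.jl

# Inside the block, literals and default-typed calls like rand() use Float32,
# while calls that explicitly name a type, e.g. rand(Float64, 3), are untouched.
@changeprecision Float32 begin
    x = rand(10)             # Vector{Float32}
    y = x .+ 0.5             # the literal 0.5 is treated as Float32
    println(typeof(sum(y)))  # Float32
end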

I would appreciate it if people banged on it a bit to see how well it works and whether I missed any important functions. I can register it if it seems useful.

(I don’t recursively handle include yet, but this would be pretty easy to add if it is needed.)


Another way to handle this would be to move all the methods in Base that use Float64 as a global default type into a separate module; the user could then choose to swap that module for another one. Right?
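
A hypothetical sketch of what that could look like (my own illustration; the module and function names are made up): keep the defaulting conveniences in small interchangeable modules and pick which one to load:

module DefaultsFloat64
    export drand, dzeros
    drand(dims...)  = rand(Float64, dims...)    # Float64-defaulting stand-ins
    dzeros(dims...) = zeros(Float64, dims...)
end

module DefaultsFloat32
    export drand, dzeros
    drand(dims...)  = rand(Float32, dims...)    # same names, Float32 defaults
    dzeros(dims...) = zeros(Float32, dims...)
end

using .DefaultsFloat32      # choose the precision by choosing the module
a = drand(4)                # Vector{Float32}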

Wonderful! I took a stab at it yesterday, and @ettersi added to it: https://gist.github.com/antoine-levitt/413f3251e50c18f9544ae26ad5ea651f. Your version seems much more robust though! Overloading functions like sqrt() to call Base.sqrt() is a very nice trick. It does miss things like sin(1) though, and possibly irrationals (?). I’ll try and make a PR. But it would be very good to register it as a package, I think!

Yes, I hardly have an exhaustive list of functions, but it is a one-line patch to add more, and PRs are welcome.

(At some point you have to stop and ask which transcendental functions you are actually likely to pass integers to. I’m doubtful about sin, but sind would definitely qualify.)

To handle irrationals robustly (so that pi*big(3) is still a BigFloat), the right thing is probably to define a new irrational type with slightly different promotion rules. That is a bit tedious to do without JuliaLang/julia#24245 (make Irrational <: AbstractIrrational), unfortunately.
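
For context, this is the standard promotion behavior at stake (plain Julia, easy to check in a REPL):

typeof(pi * big(3))    # BigFloat - the irrational follows the BigFloat operand
typeof(pi * 3.0f0)     # Float32  - likewise for a Float32 operand
typeof(pi * 3)         # Float64  - against integers it falls back to Float64,
                       #            the rule a new irrational type would change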


Cross-referencing a nice use for this: Performing arithmetic operations in quadruple precision - #14 by greg_plowman

Marking this topic as solved; see ANN: ChangePrecision.jl

Thank you very much @stevengj, this is perfect!