This package allows you to effectively change the “default” precision in a large body of Julia code, simply by doing e.g. @changeprecision Float32 include("foo.jl") or, for example:
using ChangePrecision
@changeprecision BigFloat begin
x = pi/2
y = 2.1 * x * rand(3,3) \ linspace(0,1,3)
end
and all floating-point literals, as well as many functions like rand() and expressions like 3/2, will now default to the new type (here BigFloat) rather than Float64. Code that uses explicit types, like rand(Float64), will not be affected.
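For instance, the effect looks roughly like this (a sketch; the typeof calls are only there to make the resulting types visible):
using ChangePrecision
@changeprecision Float32 begin
    typeof(1.5)             # Float32: the literal is re-parsed at the new precision
    typeof(rand())          # Float32: rand() with no type argument now defaults to Float32
    typeof(rand(Float64))   # Float64: an explicit type is left alone
end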
This is mainly meant for informal user “scripts” rather than library code, since library code should typically be written in a type-generic manner (inferring the desired precision from function arguments rather than hardcoding it).
As it says in the README, it doesn’t “look inside” functions that you call and rewrite them (there is no way for a macro to do this). The @changeprecision macro only transforms expressions that explicitly appear in the code you apply it to (or code inserted by include).
(The function calls that it transforms are only a specific set of Base functions that the macro knows about…and it’s not actually changing those functions, just calling a different method of them.)
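To make that concrete, here is a sketch (f is an arbitrary example function defined outside the macro):
f() = 1.0 / 3                # ordinary Float64 arithmetic; the macro never sees this body
@changeprecision BigFloat begin
    typeof(1.0 / 3)          # BigFloat: this expression appears literally under the macro
    typeof(f())              # Float64: the macro cannot reach inside f
end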
Not to dismiss your work, but I am skeptical of the utility of packaging this (other than to demonstrate how powerful Julia is, in that it allows you to do this). Having a registered package may just divert some inexperienced users from the right solution (writing generic code, yes, even for scripts).
So you suggest writing e.g. T(1.32) everywhere in a quick-and-dirty script with lots of numeric constants? And adding tests to make sure you did not forget a T somewhere?
(In my scripts I just write 1.32 and only add the T when I need it. ChangePrecision seems handy.)
OTOH sometimes I really want Rational{BigInt}(132, 100) instead of 1.32, and that seems impossible with a macro.
It is hard to say without the code, but if you have lots of numeric constants, you might want to use some container structure to organize them. See, for example, the implementation of Base.Math.JuliaLibm.log1p.
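One possible pattern, sketched here with made-up names, is a single function of the element type that returns all of the constants at once, so the precision is chosen in exactly one place:
consts(T::Type{<:AbstractFloat}) = (
    a  = T(132) / 100,   # 1.32 computed in the target precision, not rounded at Float64
    hp = T(pi) / 2,      # likewise for pi/2
)
c = consts(BigFloat)
c.a * c.hp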
For some computational physics applications, high-precision floats are necessary. I found BigFloat calculations too slow, but DoubleFloats.jl works great for this.
At present, float constants are evaluated as Float64 values before any further processing.
For example, I always use x = df64"2.1" instead of x = Double64(2.1).
Between them there is an error of about 8.88E-17.
So there are many df64 strings in my code, which does not seem good.
Is it possible to make ChangePrecision.jl support the Double64 type? I would appreciate it very much!
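(For concreteness, the discrepancy being described is roughly this, assuming DoubleFloats is loaded:)
using DoubleFloats
a = Double64(2.1)   # 2.1 is rounded to Float64 first, then widened
b = df64"2.1"       # the decimal string is parsed directly at Double64 precision
a - b               # about 8.9e-17, i.e. the Float64 rounding error of 2.1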
That shouldn’t be more difficult than this (note: the tryparse implementation doesn’t handle errors correctly but is good enough for the purpose here):
julia> using ChangePrecision, DoubleFloats
julia> Base.tryparse(::Type{Double64}, s::AbstractString) = Double64(s)
julia> @changeprecision Double64 t = 2.1
2.1
julia> t == df64"2.1"
true
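(If one wanted tryparse to follow its documented contract of returning nothing on malformed input, a slightly more careful sketch would be:)
# assumes `using DoubleFloats` as above
function Base.tryparse(::Type{Double64}, s::AbstractString)
    try
        Double64(s)
    catch
        nothing
    end
end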
Actually it can do this, by converting the literal back to a string and parsing it again at the new precision.
Floating-point literals like 2.1 are handled by converting back to string and re-parsing. This works because even though 2.1 is initially parsed as Float64, the grisu algorithm that is used for printing floats outputs the shortest possible representation, so it will print 2.1 rather than 2.10000000000000008881784…. It’s not perfect in theory, because if you actually do enter 2.10000000000000008881784 as a literal constant, then it will get converted to 2.1, but this doesn’t seem to be a problem in practice — floating-point literals with 15+ digits are essentially never exact values.
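In other words, the transformation amounts to something like this (changeliteral is a hypothetical helper name, not ChangePrecision's internal code):
changeliteral(T, x::AbstractFloat) = parse(T, string(x))

string(2.1)                    # "2.1": the shortest string that round-trips to the same Float64
changeliteral(BigFloat, 2.1)   # the literal, re-parsed at BigFloat precision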
DoubleFloats.jl now does implement tryparse. Note that a Double64 prints only the higher-order part of its value by default; you can see all of it in a few ways.
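For example, one way that should work is converting to BigFloat:
using DoubleFloats
x = df64"2.1"
x             # shows only 2.1, the leading Float64 component
BigFloat(x)   # conversion to BigFloat shows the full stored double-double value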