Default Real (Float) kind?

This is somehow a follow-up to this question: Disable promotion from Float64 to BigFloat

The most used Fortran compilers offer the possibility to select which real kind should be used as default, i.e. the type (or better, the parsing rule) that should be attributed to floating point literals (for example from Intel compiler https://www.intel.com/content/www/us/en/docs/fortran-compiler/developer-guide-reference/2025-1/real-size.html).

Is there a reason why something like this has not been added to Julia? Or did it simply not feel needed?


What would be the use case?

Can you give some examples of how Fortran programmers are using this in the wild?

This sounds like a very bad idea, because then the parser behavior would be less predictable. The cleanest way to do what you want would probably rely on Preferences.jl, but even that doesn’t seem nice. The idea might seem nice for interactive use, but it also seems like it’d make package development hell, suddenly package authors would have to start caring about this configuration option if they use floating-point literals.

Another issue with this approach is that any supported configuration would presumably have to be built into the parser, and thus into the sysimage, which goes contrary to the efforts of making the sysimage smaller.


The main usage would be to have a uniform representation of the floating-point numbers used, and therefore the same precision throughout. Right now in Julia it is possible to use Float32 literals by writing 1.3f0, but this is hardcoded, meaning it would be difficult to change the program if I decide to change representation, and difficult to track (I could easily miss one somewhere). This method works if you have few literals in your code, but it becomes hard when you have many.

A worse case is if you’re working with BigFloat: the initial parsing of literals to Float64 would make you lose precision.

I’m not sure I understand why this should impact package development. The idea would be to define a type Float as an alias for the currently selected float type, similar to how Int is an alias for the native integer type supported by the machine.
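To make the idea concrete, here is a hypothetical sketch (Float is not a real Julia alias, unlike Int) of what such a project-wide setting could look like:

```julia
# Hypothetical sketch of the proposal: pick the working precision once
# at the top of a project, instead of hard-coding suffixed literals.
const Float = Float32        # switch to Float64 or BigFloat in one place

x = Float(1.3)               # instead of hard-coding the literal 1.3f0
```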

Unfortunately my knowledge is really limited here, if you can better explain this point or maybe provide references for it I’d gladly appreciate that

Taking a step back just in case, are you aware you can specify the desired output type when parsing, e.g.:

julia> parse(BigFloat, "0.99999999999999999999999999999999999")
0.9999999999999999999999999999999999899999999999999999999999999999999999999999987

For the case of BigFloat, specifically, it’s also possible to call the constructor with a string argument:

julia> BigFloat("0.99999999999999999999999999999999999")
0.9999999999999999999999999999999999899999999999999999999999999999999999999999987

The question is, who sets that. If I’m a package author I want to be able to set that configuration so it would be specific to my package, without affecting any of my dependencies or dependents. TBH perhaps that’s even possible using Preferences.jl, not sure.

Julia code may be compiled, and this is usually desirable to do, with the goal of achieving better run time performance. To prevent bad latency due to having to compile the same code each time that Julia is started, precompilation is used, which basically just means the compiled native code and other things are stored to disk. This applies both to packages (“pkgimages”) and to the basic Julia installation (“sysimage”). Both are customizable using PackageCompiler.jl, so you might be able to learn more from the documentation there.

The issue with the sysimage is that it’s huge. One consequence is longer startup times for the Julia executable. Relevant issue:

Regarding this issue, it’s not really clear whether excising BigFloat from the sysimage is possible now without breaking compatibility. Nonetheless, I don’t think the developers would like to introduce even more dependence on MPFR (which implements BigFloat arithmetic) into the sysimage.

Sorry for being thick, but that just repeats the description of the feature, and does not give us a use case.

But in any case, if you are concerned about

you can introduce a function or a (read) macro that wraps and parses your literals, eg

macro n_str(ex)
    parse(Float64, ex)          # no safeguards, etc, just an example
end

n"1.0" # this week this is a Float64, next week could be BigInt, etc
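Extending the sketch above, the target type could live in a single constant so that switching precision is a one-line change (MYFLOAT and m_str are illustrative names, not an established convention):

```julia
# Variant of the string-macro idea: route every wrapped literal through
# one easily changed type constant.
const MYFLOAT = Float32

macro m_str(s)
    parse(MYFLOAT, s)   # no safeguards, just an example
end

m"1.0"   # Float32 today; change MYFLOAT once to switch every m"..." literal
```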

That said, introducing such a large number of literals that are “precise” exactly in their decimal representation is probably a bit of a code smell.

It would be interesting to have some context for your problem. Maybe there is a better approach.

Suppose you have an algorithm for which you can estimate the precision of the result (an ODE solver where you can estimate the error from the integration step, or a variational eigensolver, or a time-bounded problem where restricting to Float32 could boost your performance), and you only need 5 digits of precision, so you can use Float32 everywhere. You then decide to improve the precision and need Float64: you would have to remove all the f0 suffixes from your code, risking missing one and lowering the precision. Furthermore, you would not be able to act on your dependencies.
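A small sketch of the pitfall just described: one forgotten f0 suffix silently caps the precision even though the result type looks right (scale is an illustrative name):

```julia
# A stray Float32 literal: the result is a Float64, but the constant
# was already rounded to Float32 before the multiplication.
scale(x) = 0.1f0 * x

scale(1.0) == 0.1   # false: 0.1f0 was rounded to Float32 first
```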

This would work well in your main but would not have effects on the deps.

Only the user should set that, not packages; packages should use an agnostic type like Float, the way they already do with Int. The idea would be that a user could start a Julia session as

$ julia --float-size=4
# triggers precompilation, where all float literals are parsed as Float32

Then if I launch it again:

$ julia --float-size=4
# no precompilation needed

But instead:

$ julia --float-size=8
# precompiles everything with Float64

As far as I know (and I may be wrong here, so please correct me if that is the case) this should not make the image files bigger but rather create different ones. This would follow the same principle as other command-line options like --inline.


The common way to handle this is to have a function take either a type or values of the type that you wish to compute with, then to cast any literals to that type.

If you want exact numbers in your code, then write them as such. Don’t write 0.3, write 3//10 and let it promote in your calculation. Or write T(3//10) or oftype(x, 3//10). Sometimes you might need more complicated constants like sqrt(T(2)).
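For example, a rational constant written once adopts the precision of the argument (scale3 is an illustrative name):

```julia
# The rational literal 3//10 is exact; oftype converts it to the
# precision of x, so the whole computation stays in one type.
scale3(x) = oftype(x, 3//10) * x

scale3(1.0f0)   # Float32 arithmetic throughout
scale3(1.0)     # Float64 arithmetic throughout
```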


Nobody has mentioned ChangePrecision.jl yet. I think this does what you request, but locally within a block of code marked @changeprecision T begin ... end. It changes the parsing of 0.1, as well as the return type of 2pi and 3/4 etc:

julia> big(0.1)
0.1000000000000000055511151231257827021181583404541015625

julia> @changeprecision BigFloat begin 0.1 end
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002

That is indeed interesting. It does not solve the dependencies problem, but one could actually argue that package developers should avoid forcing a specific float type.

Haskell has the nice feature that numeric literals are type matched by the compiler according to use, i.e.,

f x = 3 * x

f 4    -- same as 3 * 4
f 1.2  -- same as 3.0 * 1.2 

It is mainly convenient as Haskell always requires the same number type in its operations and does not promote them automatically.
I agree with @mikmoore that the proper way is to specify literal values such that they are precise. Then, they should stay precise when the are promoted to another type during computations. Note that for precisely this reason, some irrational constants are represented via a placeholder type that gets promoted as precise as needed:

julia> π |> typeof
Irrational{:π}

julia> 1.0 * π
3.141592653589793

julia> big"1.0" * π
3.141592653589793238462643383279502884197169399375105820974944592307816406286198

The same example works in Julia.

f(x) = 3 * x
f(4) # same as 3 * 4
f(1.2) # same as 3.0 * 1.2

This works because 3 is represented exactly by an Int and the promotion is done at compile time.

Similarly, if you want to multiply by 0.1, you can do:

g(x) = (1//10) * x
g(0.3) # same as 0.1 * 0.3
g(0.3f0) # same as 0.1f0 * 0.3f0
g(big"0.3") # same as big"0.1" * big"0.3"

since the rational promotion is done at compile time.


Thanks for pointing out that package. I have a macro that does some of the same things, but probably not as well!

My use case is that I want to preserve accuracy while still being able to write nice-looking code. For example, a MWE is

function foo(x)
    x + 2π
end

I might be using BigFloat or Double64 for the input x, but the output of this function will only ever be as accurate as a Float64 because 2π immediately converts to that precision. Even worse, if x has lower precision, the output will be a Float64 while only being as precise as x.
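The loss is easy to demonstrate:

```julia
# The result is a BigFloat, but 2π was already rounded to Float64
# before the promotion, so only ~16 digits are correct.
foo(x) = x + 2π

foo(big"0.0") == 2 * big(π)   # false: only Float64-accurate
```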

My macro essentially rewrites this function to

function foo(x::T) where T
    let π = T(π)
        x + 2π
    end
end

I now see that ChangePrecision operates — probably more robustly — by changing the operators, rather than the constants. For example, 2π is rewritten to (ChangePrecision.*)(T, 2, π) which uses multiple dispatch to convert the arguments to T. I’m also always careful to write fractions as Rationals, which will preserve precision; ChangePrecision just converts a statement like 1/3 to (essentially) T(1)/T(3). Definitely thinking about converting to ChangePrecision.
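The dispatch trick described above can be mimicked in a few lines (tmul is a hypothetical name, not ChangePrecision.jl’s actual internals):

```julia
# Convert both operands to the target type T before multiplying, so
# irrational constants like π are rounded only once, directly to T.
tmul(::Type{T}, a, b) where {T<:AbstractFloat} = T(a) * T(b)

tmul(Float32, 2, π)    # Float32 result
tmul(BigFloat, 2, π)   # full BigFloat precision
```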

As the author of ChangePrecision, I should caution you that it was intended for quick hacks and experiments, not for long-term usage. In the longer term, I tend to recommend writing your code more carefully to think about types and promotion.

Dealing with literals is the easy part of writing type-generic code, in my opinion. Mainly, you use things like rationals instead of decimal constants, and be a little careful with irrationals, e.g. write 2*(π*x) rather than 2π * x, and there are a few cases where you need explicit casts.

The tricky part of type-generic code is dealing with type computations when the results are of different types than the inputs. For example, I would say that your example above

function foo(x::T) where T
    let π = T(π)
        x + 2π
    end
end

is simply wrong — it will give an error for foo(3), even though 3 + 2π is perfectly meaningful. You would instead want something like

foo(x::Number) = x + 2 * oftype(float(x), π)

or alternatively if you don’t mind a couple of extra multiplications:

foo(x::Number) = (x/2 + π)*2
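A quick check of the first pattern above: thanks to float(x), it now accepts integers as well as arbitrary-precision inputs:

```julia
# oftype(float(x), π) converts π to the floating-point type that x
# would promote to, so integer arguments work too.
foo(x::Number) = x + 2 * oftype(float(x), π)

foo(3)          # works, returns a Float64
foo(big"1.0")   # stays a BigFloat at full precision
```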

Also tricky are cases where you need to allocate types for containers, and need to compute the correct type.

In practice, you usually don’t get type-generic code completely correct the first time, especially if you aren’t used to it, but over time the code becomes more generic as you try it with more types (e.g. complex numbers, dual numbers, …).


Sure. That was a simplification of what I actually do, which also involves dealing with cases like dual or symbolic inputs; I just meant to give an idea of what’s happening.

Can I ask what’s wrong with using it longer term? To give a little more context to my use case, I’m copying very long expressions from several dozen sources in the literature, but they all just use simple arithmetic, and one important goal is to make the code actually look like the originals, because people want to compare them visually. From my tests so far, @code_native says that ChangePrecision and my most explicit hand-coded expressions compile down identical code.


@changeprecision, or any macro, can’t recursively change the functions called by your function (which is fine, because that would break a lot of code), except for include via its mapexpr metaprogramming feature. And it only changes some functions in Base, Random, Statistics, and LinearAlgebra, dating back almost 7 years. It might be fine for long expressions of elementary arithmetic, but you don’t have to go much farther before you’ll have to handle types yourself.

julia> three() = 3.0; # not known by ChangePrecision

julia> typeof.(@changeprecision Float32 begin
           BigFloat(1.0), 2.0 * three(), 4.0+pi, 5+pi, sin(6)
       end)
(BigFloat, Float64, Float32, Float32, Float32)

An option that only strictly affects literals is swapliterals! from SafeREPL.jl. For comparison:

julia> swapliterals!(Float32, Int32, Int32, Int32)

julia> typeof.(begin
           BigFloat(1.0), 2.0 * three(), 4.0+pi, 5+pi, sin(6)
       end)
(BigFloat, Float64, Float32, Float64, Float64)

So it does fine with floating point literals (but can’t retroactively change previously evaluated literals like the 3.0 in three), but +(::Int32, ::Irrational{:π}) outputs Float64 anyway, so that’s why ChangePrecision.jl changes it.


It also doesn’t change the fact that 0.1 is parsed directly into a Float64. In such simple cases it works (because string(0.1) reflects the number you wrote and wanted), but there are cases where this breaks. A good example is the exactly defined value of the Faraday Constant:

julia> 𝑭 = 96485.3321233100184
96485.33212331001

julia> 𝑭 = @changeprecision BigFloat 96485.3321233100184
96485.33212331000999999999999999999999999999999999999999999999999999999999999967

julia> big"96485.3321233100184"
96485.33212331001839999999999999999999999999999999999999999999999999999999999961

Right, all those facts are clear. But the only functions ever used in my enormous expressions are +, -, *, /, ^, log, and sqrt. The only literals are integers, e, pi, and eulergamma. And the only physical constants are the gravitational constant and the speed of light — which, because I work in a sensible field, are both integers (const G=c=1).

So all of these are handled appropriately by ChangePrecision. And in fact, I’ve just found that ChangePrecision does better than my approach of using Rationals when the inputs are Double64.

I recognize that mine is a pretty limited use case, but it still seems to me that I’d do well to use ChangePrecision in production.