Default Real (Float) kind?

Oh, I see where the misunderstanding is now.

Generally, this is not how idiomatic Julia code should work. Hardcoding types in generic algorithms is discouraged, and the language goes to great lengths to help you avoid that. Some examples are:

  1. most arithmetic operators being generic themselves: +, *, & friends figure out their type just fine without manual intervention
  2. type calculations, eg promote_type & friends
  3. generic one and zero that adapt to types, also see false and true
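
For instance, a minimal sketch of this style (geomsum is just an illustrative name):

# no concrete float type appears anywhere in this function
function geomsum(x, n)
    acc = zero(x)     # zero adapts to the type of x
    term = one(x)     # so does one
    for _ in 0:n
        acc += term
        term *= x
    end
    return acc
end

geomsum(0.5f0, 3)     # stays Float32
geomsum(big"0.5", 3)  # stays BigFloat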

A well-written numerical Julia library may not actually include any constants with a concrete type. There are exceptions (precalculated tables for special functions, etc), but whenever possible, generic code is preferred.

I would add that generic programming, while sometimes requiring careful use of the tools Tamas mentioned, allows not only changing the float type but also propagating units (Unitful.jl), errors (Measurements.jl), and automatic differentiation, so the benefits are much larger than simply a change in precision.

1 Like

I do agree that one should try to write general code, but to me it does not look that simple, since it is way too easy to miss something.

julia> big"1." + 2Ļ€
7.28318530717958623199592693708837032318115234375
julia> big"1." + big"2"*Ļ€
7.283185307179586476925286766559005768394338798750211641949889184615632812572396

In the first case 2Ļ€ is evaluated as a Float64 and only afterwards converted to BigFloat. Similar things happen with exponents:

julia> big"1." + 2^(3//2)
3.828427124746190290949243717477656900882720947265625

julia> big"1." + big"2"^(3//2)
3.828427124746190097603377448419396157139343750753896146353359475981464956924204

The same happens in other cases, with functions like √2 or log(2).

Again, I agree that one should write agnostic code, but there are a lot of small insidious things that can very easily make you change precision without noticing.

3 Likes

My reply does not contain any code examples, so I am not sure why you think I consider it simple.

But for the kind of code you are showing that includes constants, generally one would compute a float type from the inputs and then promote the integers to it before taking logs or square roots etc, as in the examples above.
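
A minimal sketch of that pattern (kernel is just an illustrative name):

function kernel(x)
    T = float(typeof(x))    # compute the float type from the input
    return x + log(T(2)) + sqrt(T(2))
end

kernel(1.0f0)     # everything stays Float32
kernel(big"1.0")  # everything stays BigFloat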

I changed that part since it appeared a bit too provocative. What I meant is that I know how to write generic code; it is just much easier to mess it up than to get it right.

ChangePrecision is the one that goes furthest in that direction. I would still prefer something implemented in the compiler, similar to what ifx and gfortran provide, but since that does not appear easy to do and the community does not seem very interested in having it, I did not want to push it further; for this reason I marked ChangePrecision as the solution.

1 Like

I think that this is indeed an interesting feature request.

The issue here is that 1/2, 2*pi, sqrt(2), log(2), … all evaluate as Float64, and this is hard-coded inside Julia (and in Python, R, and Octave too).
Float64 works for many applications, but for some applications it is not ideal (e.g. neural networks).
Many Fortran compilers allow you to use Float32/64/128 via a compiler switch without cluttering your code.

In Julia, we have to promote manually as mentioned before:

T = Float32 # or BFloat16,...
1/T(2), T(2)*pi, sqrt(T(2)), log(T(2))

But realistically, I don't think such a hypothetical flag for Julia would be used much (if it existed), as it would probably trigger recompilation of all packages.

1 Like

I think there's a lot of talking past each other in this thread, so I'll try to summarize.

Everyone agrees that generic code is the way to go. I think VinceNeede and I are just also saying

  1. It needs to be easy to get right.
  2. It should look nice.

The problem is that easy / nice-looking code (of which this thread has many examples by now) assumes Float64, so we just want an easy / nice-looking way to change that. Because unfortunately, even generic code really does need numeric literals sometimes, and often very many of them.

A flag for all of Julia would be much coarser than desired (by some of us, at least). And we don't expect Julia to magically do this for us. We recognize that we still have to understand our code and use our brains occasionally.

Nonetheless, based on input types to a function, we'll have some idea of the float type T that we do want to assume, and we just want a way to say in a single function or block of code

Here, I don't want those standard operations to assume Float64; I want them to assume T.

I have a macro that does this (and is much less naive than the simplified version I wrote above for demonstration purposes). ChangePrecision has a macro that does this in a different way. And I think VinceNeede is suggesting that there could be some more low-level support.
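
If I read its README correctly, the ChangePrecision version looks something like this:

using ChangePrecision
@changeprecision Float32 begin
    x = 1/3        # integer division now defaults to Float32
    y = sqrt(2)    # so do functions like sqrt on integer arguments
    x + y
end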

Julia isn't usually a language that settles for laborious ways of doing things just to get them right.

2 Likes

Code assuming Float64 (for example, the oft-used float function ultimately uses hard-coded Float64 calls) is really a separate thing from parsing literals and requires an entirely separate approach. For example, the expression 2Ļ€ isn't a literal: the 2 is a literal, and it's multiplied (*) with a variable Ļ€ whose value has type Irrational{:Ļ€}. What you need is probably a mix of the two, and I'd say accepting the caveats of swapliterals! and @changeprecision is as easy as it gets if you don't want to manually parse or convert input types and consciously exploit promotion.
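
To make that concrete:

typeof(Ļ€)           # Irrational{:Ļ€}
Meta.parse("2Ļ€")    # :(2Ļ€), i.e. Expr(:call, :*, 2, :Ļ€): a call, not a literal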

The only "easy" way to implicitly introduce a floating-point output type is straightforwardly changing a global parameter that everything else opts to support. It wasn't used before because a non-const global adds significant overhead to operations that can't afford it, and only v1.12+ has the option of changing a const global to invalidate all the compiled code that ever uses it in method bodies (note: not values in method headers), which you wouldn't want to do if you need to change that flag frequently or if you like precompile caches saving your time. Fast methods make assumptions, and not necessarily the ones you want; @changeprecision swaps out supported functions for calls in the macro's code input, but the new function still forwards to methods of the original function, taking the macro's type input as an argument, which is only fast if the type input is constant.

That doesn't make "easy-looking" things actually easy to implement or use. Someone who wants Array instantiation to make 0-based arrays instead for all their linear algebra could easily say the same thing.

1 Like

Could you give a practical example from a real application? If it's lots of rational constants, you can just use Rational literals. If it's lots of irrationals like 2Ļ€, you could use twoĻ€ from IrrationalConstants.jl or something more ambitious like IrrationalExpressions.jl. And you can also define your own Irrational constants.
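
For example (twoĻ€ is exported by IrrationalConstants.jl):

using IrrationalConstants

twoĻ€ * big"1.0"    # the Irrational resolves at BigFloat precision
Float32(twoĻ€)      # 6.2831855f0, resolved at Float32 precision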

(The other cases I've encountered that use lots of floating-point constants are usually things that are specific to a particular precision, like coefficients of polynomial approximations to a special function.)

1 Like

There's also my own TypeDomainNaturalNumbers.jl.

2 Likes

Sure. Just a note: if you don't think I should use ChangePrecision, I beg you, for your own good, not to look at the ungodly abominations that are my macros!

Here's a relatively simple expression for the energy flux in gravitational waves from a pair of black holes in a non-eccentric orbit. I'll soon have to incorporate expressions that include eccentricity, and are substantially more complicated, which is why I'm trying to put thought into my approach now. The structure of those expressions is important because people do actually look at them to compare them to the literature, so it's helpful to have them look basically like they do in the references.

The experimental impact of those (v/c)^12 terms is probably minimal, but the theoretical (not to mention code-validation) implications can be significant, so there are legitimate use cases where we want to evaluate these expressions with higher precision.

There is some of that, which is fine.

I do a lot of this, too. For example, I define ApĆ©ry's constant ζ3. But even so, the term 27392ζ3 / 105 will convert to Float64 unless I do something more clever. And that's the crux of the problem: inside this particular function, ζ3 needs to know what float type it's supposed to behave like, depending on the type of the input argument to the function.
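
The same collapse is easy to demonstrate with a built-in Irrational:

typeof(27392 * Base.MathConstants.catalan / 105)         # Float64, regardless of context
big"1.0" * (27392 * Base.MathConstants.catalan / 105)    # a BigFloat with only Float64 accuracy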

Those packages are two reasons I love working in Julia. :smile: I'll definitely keep TypeDomainNaturalNumbers.jl in mind for other projects, but I don't think either is quite what I need for this particular application.

1 Like

It is still not clear what "it" is.

As far as I can reconstruct, the desired feature is an option to change the parsing of floating point literals, such as -2.37. As usual, the devil is in the details.

Changing it globally (based on a compiler option, at startup) would break a lot of code that assumes that those literals are Float64. My guess would be that you would not get past loading Base to get a REPL without an error.

Changing in the context of loading a package or a single file could, theoretically, be achieved, but that seems to be insufficient for your purposes, and is also kind of icky because a global state would affect parsing.

I wonder if we are facing an XY problem: you want to do something, know a feature in Fortran that does it, and you want to do the same in Julia. But, thank God, Julia is not Fortran. You cannot just transplant a feature like that.

It would be great if you posted a self-contained MWE of what you are trying to do, then you could get better advice.

4 Likes

Some suggestions, maybe. First, that does not actually convert to Float64; the result will be determined by the type of the constant.

But, more generally, I would rather not rely on a global definition of the constants if this behavior is critical for, for instance, correctness. Instead, I would have something like:

# parse the constant with the big"" string macro, at full precision
# (BigFloat(1.202...) would round the literal through Float64 first)
const ζ3_big = big"1.202056903159594285399738161511449990764986292"

@kwdef struct Constants{T}
    ζ3::T = ζ3_big
end

Constants{T}() where {T} = Constants{T}(convert(T, ζ3_big))

function my_main_function(x::T; constants=Constants{T}()) where {T}
    (; ζ3) = constants
    return 27392ζ3 / 105
end

x = 1.0
my_main_function(x) # returns Float64

x = 1.0f0
my_main_function(x) # returns Float32

x = BigFloat(1.0)
my_main_function(x) # returns BigFloat
4 Likes

My Haskell is getting quite rusty, and BigFloat is not native to Haskell (so the following requires installing the numbers package), but another (very silly) example might more clearly illustrate the differences from Julia:

ghci> import Data.Number.BigFloat
ghci> let f x = ((2*pi - x) + x) / 2
ghci> pi - f (0.1::BigFloat Prec50)
0.00000000000000000000000000000000000000000000000000e0

Since 2*pi is used in a BigFloat computation, it returns a BigFloat.

The equivalent Julia gives

julia> f(x) = ((2*pi - x) + x) / 2
f (generic function with 1 method)

julia> pi - f(BigFloat("0.1"))
1.224646799147353177226065932275001058209749445923078164062861980294536250318213e-16

This is an aspect of Haskell that I liked quite a bit, although there are some other less appealing things about numeric types in Haskell.

1 Like

In the context of that quote, we were talking about ζ3 being defined as an Irrational, so it does actually convert to Float64.

That's a nice construction, but I don't really see any advantage over using Base.@irrational to define const constants, then having a macro explicitly convert. I have to do this anyway for Ļ€, so I just treat the other quantities the same way.

I never used Base.@irrational, so I cannot really comment on that, but it seems to be an internal binding. An advantage, as usual for code that does not depend on globals, is being sure that the output depends only on the input values, which helps a lot in testing.

Ah, I hadn't seen the warning in its docs. Thanks for pointing that out. Looks like IrrationalConstants.@irrational might be a good substitute.

1 Like

Please consider this topic closed; we are all going a little too far and starting to be argumentative.

For now, the answers that go in the direction of a possible solution are:

Please, if you have other packages that you think could be useful, quote this reply and add them; otherwise, please consider this topic closed.

Yes, I can see the issue.

The concept of Irrational is to resolve the precision based on the other argument in a math operation. Unfortunately, that is generally not possible when there is no other argument, or when the other argument has no concept of precision (eg 2*Ļ€, log(Ļ€)). A relevant discussion was

What could be done instead is to store those operations and unroll them on demand.

Eg we could create an object

struct DIC{T,S} # Deferred Irrational Computation
    op::T
    args::S
end

and arrange methods so that

log(2*Ļ€)

would be

DIC(log, (DIC(*, (2, Ļ€)),))

and if you applied an operation with an argument that has a concept of precision, eg

34.7 * log(2*Ļ€)

it would take the precision from there and unroll recursively.
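
A minimal sketch of that unrolling, building on the DIC struct above (unroll is a made-up helper name):

# convert leaves to the target type; recurse through deferred nodes
unroll(x, ::Type{T}) where {T} = convert(T, x)
unroll(d::DIC, ::Type{T}) where {T} = d.op(map(a -> unroll(a, T), d.args)...)

# multiplying by a float supplies the precision
Base.:*(x::AbstractFloat, d::DIC) = x * unroll(d, typeof(x))

34.7 * DIC(log, (DIC(*, (2, Ļ€)),))        # ≈ 63.7743, computed in Float64
big"34.7" * DIC(log, (DIC(*, (2, Ļ€)),))   # the same, at BigFloat precision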

If the compiler can churn through all the constant folding, it could even be costless.

EDIT I just realized that this is what

does :wink:

And it looks to me like TypeDomainNaturalNumbers.jl can do that and more.