Generally, this is not how idiomatic Julia code should work. Hardcoding types in generic algorithms is discouraged, and the language goes to great lengths to help you avoid that. Some examples are:
- most arithmetic operators are generic themselves: +, * and friends figure out their result type just fine without manual intervention
- type calculations, eg promote_type and friends
- generic one and zero that adapt to types; also see false and true
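A small sketch of how these tools combine in generic code (the function name is made up for illustration):

```julia
# A generic polynomial evaluation: no concrete float type appears anywhere.
function horner(x, coeffs)
    acc = zero(x)             # zero of the same type as x
    for c in reverse(coeffs)
        acc = acc * x + c     # promotion rules pick the result type
    end
    return acc
end

horner(1.0f0, (1, 2, 3))      # Float32 arithmetic throughout
horner(big"1.0", (1, 2, 3))   # BigFloat arithmetic throughout
```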
A well-written numerical Julia library may not actually include any constants with a concrete type. There are exceptions (precalculated tables for special functions, etc), but whenever possible, generic code is preferred.
I would add to the above that generic programming, while in some cases requiring careful use of the tools mentioned by Tamas, allows not only changing the float type, but also propagating units (Unitful.jl), errors (Measurements.jl), and automatic differentiation, so the benefits are much larger than simply a change in precision.
But also other cases that involve functions, like √2 or log(2).
Again, I agree that one should write agnostic code, but there are a lot of small insidious things that can very easily change the precision without you noticing.
My reply does not contain any code examples so I am not sure why you think I consider it simple.
But for the kind of code you are showing that includes constants, generally one would compute a float type from inputs and then promote the integers to that before taking logs or square roots etc, like in the examples above.
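A minimal sketch of that pattern (the function name is made up for illustration):

```julia
# Compute the float type implied by the input, then promote integer
# constants to it before taking logs or square roots.
function scaled_log(x)
    T = float(typeof(x))     # e.g. Float32 for Float32 input, BigFloat for BigInt
    return sqrt(T(2)) * log(T(2)) * x
end

scaled_log(1.0f0)   # Float32 throughout
scaled_log(big"1")  # BigFloat throughout
```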
I changed that part since it appeared a bit too provocative. What I meant is that I know how to write generic code, but that it is much easier to mess it up than to get it right.
ChangePrecision is the one that goes furthest in that direction. I would still prefer something implemented in the compiler, similar to how it is done in ifx and gfortran, but since that does not appear easy to do, and the community does not seem very interested in having it, I did not want to push it more than that; for this reason I marked ChangePrecision as the solution.
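For reference, basic ChangePrecision usage looks roughly like this (assuming the package is installed; see its README for the full list of rewritten operations):

```julia
using ChangePrecision

# Inside the block, float literals, integer division, and functions like
# sqrt and log on integer arguments are computed in Float32 instead of Float64.
@changeprecision Float32 begin
    x = 1/2 + sqrt(2) + log(2)
end
typeof(x)  # Float32
```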
I think that this is indeed an interesting feature request.
The issue here is that 1/2, 2*pi, sqrt(2), log(2), … all evaluate as Float64, and this is hard-coded inside Julia (and in Python, R, Octave too).
Float64 works for many applications, but for some applications it is not ideal (e.g. neural networks).
Many Fortran compilers allow you to use Float32/64/128 via a compiler switch without cluttering your code.
In Julia, we have to promote manually as mentioned before:

```julia
T = Float32 # or BFloat16, ...
1/T(2), T(2)*pi, sqrt(T(2)), log(T(2))
```
But realistically, I don't think that such a hypothetical flag for Julia would be used much (if it existed), as it would probably trigger recompilation of all packages.
I think there's a lot of talking past each other in this thread, so I'll try to summarize.
Everyone agrees that generic code is the way to go. I think VinceNeede and I are just also saying:

1. It needs to be easy to get right.
2. It should look nice.
The problem is that easy / nice-looking code (of which this thread has many examples by now) assumes Float64, so we just want an easy / nice-looking way to change that. Because unfortunately, even generic code really does need numeric literals sometimes, and often very many of them.
A flag for all of Julia would be much coarser than desired (by some of us, at least). And we don't expect Julia to magically do this for us. We recognize that we still have to understand our code and use our brains occasionally.
Nonetheless, based on the input types to a function, we'll have some idea of the float type T that we do want to assume, and we just want a way to say, in a single function or block of code:

Here, I don't want those standard operations to assume Float64; I want them to assume T.
I have a macro that does this (and is much less naive than the simplified version I wrote above for demonstration purposes). ChangePrecision has a macro that does this in a different way. And I think VinceNeede is suggesting that there could be some more low-level support.
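A literal-swapping macro of that flavor can be sketched in a few lines (this is not any of the implementations mentioned here, just a deliberately naive illustration):

```julia
# Rewrite every Float64 literal `x` in the expression into `T(x)`.
# Naive on purpose: it ignores integer literals, irrationals like π,
# and functions such as sqrt(2).
function swap_literals(T, ex)
    ex isa Float64 && return :($T($ex))
    ex isa Expr && return Expr(ex.head, map(e -> swap_literals(T, e), ex.args)...)
    return ex
end

macro withprecision(T, ex)
    esc(swap_literals(T, ex))
end

f(x) = @withprecision Float32 0.5 * x + 2.5
f(1.0f0)  # stays Float32 instead of promoting to Float64
```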
Julia isn't usually a language that settles for laborious ways of doing things just to get them right.
Code assuming Float64 (for example, the oft-used float function ultimately uses hard-coded Float64 calls) is really a separate thing from the parsing of literals, and requires an entirely separate approach. For example, the expression 2π isn't a literal: the 2 is a literal and it's multiplied (*) with a variable π whose value is of type Irrational{:π}. What you need is probably a mix of the two, and I'd say accepting the caveats of swapliterals! and @changeprecision is as easy as it gets if you don't want to manually parse or convert input types and consciously exploit promotion.
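This is easy to verify by inspecting what the parser produces:

```julia
# 2π parses as a multiplication call, not as a single literal.
ex = Meta.parse("2π")
ex.head  # :call (juxtaposition is just multiplication)
ex.args  # Any[:*, 2, :π]
```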
The only "easy" way to implicitly introduce a floating-point output type is straightforwardly changing a global parameter that everything else opts in to support. It wasn't used before because a non-const global adds significant overhead to operations that can't afford it, and only v1.12+ has the option of changing a const global to invalidate all the compiled code that ever uses it in method bodies (note: not values in method headers), which you wouldn't want to do if you need to change that flag frequently, or if you like precompile caches saving your time. Fast methods make assumptions, and not necessarily the ones you want; @changeprecision swaps out supported functions for the calls in the macro's code input, but the new function still forwards to methods of the original function, taking the macro's type input as an argument, which is only fast if that type input is constant.
That doesn't make "easy-looking" things actually easy to implement or use. Someone who wants Array instantiation to make 0-based arrays instead for all their linear algebra could easily say the same thing.
Could you give a practical example from a real application? If it's lots of rational constants, you can just use Rational literals. If it's lots of irrationals like 2π, you could use twoπ from IrrationalConstants.jl, or something more ambitious like IrrationalExpressions.jl. And you can also define your own Irrational constants.
(The other cases I've encountered that use lots of floating-point constants are usually things that are specific to a particular precision, like coefficients of polynomial approximations to a special function.)
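For example, with IrrationalConstants.jl installed, twoπ behaves like π itself: as an Irrational, it adopts the precision of whatever it is combined with.

```julia
using IrrationalConstants

twoπ * 1.0f0     # Float32: the Irrational adapts to the other operand
twoπ * big"1.0"  # BigFloat, with 2π computed at full precision
```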
Sure. Just a note: if you don't think I should use ChangePrecision, I beg you, for your own good, not to look at the ungodly abominations that are my macros!
Here's a relatively simple expression for the energy flux in gravitational waves from a pair of black holes in a non-eccentric orbit. I'll soon have to incorporate expressions that include eccentricity and are substantially more complicated, which is why I'm trying to put thought into my approach now. The structure of those expressions is important because people do actually look at them to compare them to the literature, so it's helpful to have them look basically like they do in the references.
The experimental impact of those (v/c)^12 terms is probably minimal, but the theoretical (not to mention code-validation) implications can be significant, so there are legitimate use cases where we want to evaluate these expressions with higher precision.
Those packages are two reasons I love working in Julia. I'll definitely keep TypeDomainNaturalNumbers.jl in mind for other projects, but I don't think either is quite what I need for this particular application.
As far as I can reconstruct, the desired feature is an option to change the parsing of floating point literals, such as -2.37. As usual, the devil is in the details.
Changing it globally (based on a compiler option, at startup) would break a lot of code that assumes that those literals are Float64. My guess would be that you would not get past loading Base to get a REPL without an error.
Changing it in the context of loading a package or a single file could, theoretically, be achieved, but that seems to be insufficient for your purposes, and is also kind of icky because a global state would affect parsing.
I wonder if we are facing an XY problem: you want to do something, know a feature in Fortran that does it, and you want to do the same in Julia. But, thank God, Julia is not Fortran. You cannot just transplant a feature like that.
It would be great if you posted a self-contained MWE of what you are trying to do, then you could get better advice.
Some suggestions, maybe. First, that does not actually convert to Float64; the result will be determined by the type of the constant.
But, more generally, I would rather not rely on a global definition of the constants if this behavior is critical, for instance, for correctness. Rather, I would have something like:
```julia
@kwdef struct Constants{T}
    # Use a string literal: BigFloat(1.202...) would first parse the
    # number as a Float64 and lose the extra digits.
    ζ3::T = BigFloat("1.202056903159594285399738161511449990764986292")
end
# @kwdef already defines Constants{T}(), converting the default to T.

function my_main_function(x::T; constants=Constants{T}()) where {T}
    (; ζ3) = constants
    return 27392ζ3 / 105
end

x = 1.0
my_main_function(x) # returns Float64

x = 1.0f0
my_main_function(x) # returns Float32

x = BigFloat(1.0)
my_main_function(x) # returns BigFloat
```
My Haskell is getting quite rusty and BigFloat is not native to Haskell (so the following requires installing the numbers package), but another (very silly) example might more clearly illustrate the differences with Julia:
```haskell
ghci> import Data.Number.BigFloat
ghci> let f x = ((2*pi - x) + x) / 2
ghci> pi - f (0.1 :: BigFloat Prec50)
0.00000000000000000000000000000000000000000000000000e0
```
Since 2*pi is used in a BigFloat computation, it returns a BigFloat.
The equivalent Julia gives

```julia
julia> f(x) = ((2*pi - x) + x) / 2
f (generic function with 1 method)

julia> pi - f(BigFloat("0.1"))
1.224646799147353177226065932275001058209749445923078164062861980294536250318213e-16
```
This is an aspect of Haskell that I liked quite a bit, although there are some other less appealing things about numeric types in Haskell.
In the context of that quote, we were talking about ζ3 being defined as an Irrational, so it does actually convert to Float64.
That's a nice construction, but I don't really see any advantage over using Base.@irrational to define const constants, and then having a macro explicitly convert. I have to do this anyway for π, so I just treat the other quantities like it.
I never used Base.@irrational, so I cannot really comment on that, but it seems to be an internal binding. An advantage, as usual for code that does not depend on globals, is being sure that the output depends only on the input values, which helps a lot in testing.
The concept of Irrational is to resolve the precision based on the other argument in a math operation. Unfortunately, that is generally not possible when there is no other argument, or when the other argument has no concept of precision (eg 2*π, log(π)). A relevant discussion was
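Concretely:

```julia
# When the other operand carries a precision, π adapts to it:
typeof(π * 1.0f0)     # Float32
typeof(π * big"1.0")  # BigFloat

# But with a precision-less Int, or no other argument, the fallback is Float64:
typeof(2 * π)         # Float64
typeof(log(π))        # Float64
```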
What could be done instead is to store those operations and unroll them on demand.
Eg we could create an object

```julia
struct DIC{T,S} # Deferred Irrational Computation
    op::T
    args::S
end
```
and arrange methods so that

```julia
log(2*π)
```

would be

```julia
DIC(log, (DIC(*, (2, π)),))
```
and if you did an operation with an argument that has a concept of precision, eg

```julia
34.7 * log(2*π)
```

it would take the precision from there and unroll recursively.
If the compiler can churn through all the constant folding, it could even be costless.
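Putting those pieces together, a minimal proof of concept might look like this (all names invented for illustration; a complete version would need many more methods and careful handling of ambiguities):

```julia
# Deferred Irrational Computation: store the operation tree instead of
# evaluating it eagerly in Float64.
struct DIC{T,S}
    op::T
    args::S
end

# Unroll a deferred tree at a given float type, recursively.
unroll(::Type{T}, d::DIC) where {T} = d.op(map(a -> unroll(T, a), d.args)...)
unroll(::Type{T}, x::Irrational) where {T} = T(x)
unroll(::Type{T}, x) where {T} = x

# The deferred version of log(2*π).
deferred_log2π = DIC(log, (DIC(*, (2, π)),))

# Multiplying by a float finally picks the precision and unrolls.
Base.:*(x::AbstractFloat, d::DIC) = x * unroll(typeof(x), d)

34.7f0 * deferred_log2π     # Float32 result
big"34.7" * deferred_log2π  # BigFloat, with π expanded at full precision
```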