A principled approach to this, which has the desired capabilities but is also clearly bounded and not just an ad hoc, incomplete CAS, is Hans Boehm’s real number API: https://dl.acm.org/doi/10.1145/3385412.3386037. It was developed to spare Android users from floating-point nonsense when using the calculator app. There’s (of course) a Rust implementation whose readme has a concise description of the concept:
The Real type in this crate represents a number which is a composite of a rational (a ratio between two potentially large integers) and some real value which we may not be able to compute exactly and so must be approximated for display, but which sometimes has a precise description we can track to take advantage of in further calculations.
Implementing this in Julia would be a fun side project for anyone with time to spare! Even cooler would be if it could somehow be made to handle all literal expressions in a module without requiring special syntax or operators.
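As a very rough illustration of that composite representation (the names and structure below are made up for this sketch, not the Rust crate's or Boehm's actual design):

```julia
# Hypothetical sketch: a number that is an exact rational times a lazily
# approximated real factor, loosely following the description quoted above.
struct LazyReal
    ratio::Rational{BigInt}   # exact rational component
    approx::Function          # prec (bits) -> BigFloat approximation of the real factor
end

# Approximate the whole number to `prec` bits.
approximate(x::LazyReal, prec::Integer) =
    setprecision(BigFloat, prec) do
        BigFloat(x.ratio) * x.approx(prec)
    end

# Example: (2//3) * π, approximated on demand at 256 bits.
two_thirds_pi = LazyReal(big(2)//3, prec -> setprecision(() -> BigFloat(π), BigFloat, prec))
approximate(two_thirds_pi, 256)
```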
While this is a very cool approach and I am itching to read through the paper, I would actually prefer the “inexact” version. That is to say, once the precision is determined from the context, the computation should proceed as if constants were to be numbers of that precision.
E.g., I would want
exp(-1im * π) * BigFloat(2.7)
to be equal to
exp(-1im * BigFloat(π)) * BigFloat(2.7)
(which is complex with a small imaginary part, because of numerical error) and not exactly -BigFloat(2.7), as an exact symbolic evaluation would give.
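For reference, a rough sketch of the difference under current Julia semantics (printed values elided; only the promotion order matters):

```julia
a = exp(-1im * π) * BigFloat(2.7)            # π is rounded to Float64 before exp
b = exp(-1im * BigFloat(π)) * BigFloat(2.7)  # π is rounded at BigFloat precision
# Both have a tiny nonzero imaginary part from rounding π before exp,
# on the order of eps(Float64) for `a` and eps(BigFloat) for `b`.
```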
In the real number API, unless the system defines exp(-1im * π) as an exact symbolic constant, the RRA evaluation would at least give you -BigFloat(2.7) + im * BigFloat(0.0), because it can't prove that the imaginary part is exactly zero; it can only evaluate the imaginary part and find that it is zero to the requested precision. Still not identical to what float arithmetic gives, though.
It would be natural to provide different modes, from just a little magic to a lot of magic, that vary in how many function-of-rational-times-irrational expressions are considered exact symbolic constants vs. represented using RRA. But the premise is that when converting the final expression to a finite-precision number, the result is exact to the requested precision, so you would indeed get different results from a system that only does deferred promotion of irrationals.
That, but the simple deferred-calculation approach is also orders of magnitude simpler. It is something we could have in the short run: dusting off IrrationalExpressions.jl or creating something similar is a relatively simple undertaking, whereas an RRA approach would take much longer, I guess. (But if anyone wants to do it, that is great too.)
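To illustrate what I mean by deferred calculation, here is a minimal sketch of the idea (the type and method names are made up; this is not the actual IrrationalExpressions.jl API):

```julia
# Hypothetical sketch: defer arithmetic on Irrational constants until a concrete
# float type is known, then evaluate at that precision. (Overloading Base methods
# on Base types like this is type piracy; a real package would use its own types.)
struct IrrationalExpr{F} <: Real
    thunk::F   # given a float type T, computes the value at T's precision
end

(x::IrrationalExpr)(::Type{T}) where {T<:AbstractFloat} = x.thunk(T)

# Keep pure-irrational arithmetic lazy instead of promoting to Float64.
Base.:-(a::Irrational) = IrrationalExpr(T -> -T(a))
Base.:*(a::Irrational, b::Irrational) = IrrationalExpr(T -> T(a) * T(b))

# Meeting a concrete float forces evaluation at that float's precision.
Base.:*(a::IrrationalExpr, b::T) where {T<:AbstractFloat} = a(T) * b
Base.:*(b::T, a::IrrationalExpr) where {T<:AbstractFloat} = b * a(T)

(-π) * BigFloat(2.7)   # π is evaluated at BigFloat precision, not Float64
```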
It seems to be more CAS-like, and in the type domain, with all the related advantages and disadvantages. My understanding is that the motivation is different, but I have not read the whole source thoroughly.
That would work really well, especially if it could be done by the compiler as you mentioned.
Just to clarify: it is not the execution of the functions themselves that should be deferred (that would add a lot of overhead), but rather their compilation. Even that is not trivial, though. Imagine something like:
logpi() = log(pi)
How should it behave when called with
2.0*logpi()
And when called as
2.0f0*logpi()
Should the compiler understand that the function is called with different types and compile two specializations of it? Would that make the sysimages too big? Should only one type be compiled, with errors if the function is called with a different type? Should recursion be completely disabled?
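For reference, under the current semantics the caller's precision never reaches the constant at all, which is what makes the questions above nontrivial:

```julia
logpi() = log(pi)        # log(π) is computed at Float64 precision inside the function

typeof(2.0   * logpi())  # Float64
typeof(2.0f0 * logpi())  # also Float64 (Float32 * Float64 promotes), never Float32
```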
Another possibility: is there a way to change how irrationals are treated when they appear on their own? For example, something (a variable? an environment variable?) that I could set that would work somehow like this:
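(Something along these lines, purely hypothetical; no such switch exists today and the names are made up:)

```julia
# Made-up global switch controlling which float type a bare Irrational promotes to.
const DEFAULT_IRRATIONAL_TYPE = Ref{Type{<:AbstractFloat}}(Float64)

as_float(x::Irrational) = DEFAULT_IRRATIONAL_TYPE[](x)

as_float(π)                              # Float64, the default
DEFAULT_IRRATIONAL_TYPE[] = BigFloat
as_float(π)                              # now a BigFloat
```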
This is why these ideas quickly turn into ad hoc, incomplete attempts at a CAS unless you're careful. The real number API is an attempt to draw the line in a systematic and reasonable way (although it does require a judgment call about where to cut off the exact symbolics and fall back to RRA, with Boehm's approach ultimately informed by the operations available on a standard calculator (app)). Whether it's actually a good approach to folding literal expressions, I'm not sure, and @Tamas_Papp provides good reasons against, but I'd love to see it tried out (or maybe I'll try it out myself at some point, but not right now).
Sure, everything's in the type domain (as the package name says, too); Irrational is the same, as is IrrationalConstants.jl. However, I'm not sure it makes sense to compare my package to a CAS. It doesn't rely on any of the CAS algorithms, like Gröbner bases or whatnot, and the functionality is really much less than what a CAS would provide. In fact, there's no nontrivial algorithm in the package implementation, except perhaps for the continued-fraction arithmetic (for rationals). The goal is basically to push the features as far as possible, generalizing Irrational and IrrationalConstants.jl, while letting the Julia type system do all the work.
I looked into that paper multiple times, each time being left more disappointed. The approach hardcodes a list of allowed analytic functions, and requires arbitrary-precision floating-point arithmetic (thus, in practice, a dependency on an external library written in C) to be able to try to compare real numbers. I don't see anything principled in that. It's nice for a calculator app, but I'm not seeing where else Boehm's real number API could be useful.
To be clear, TypeDomainNaturalNumbers.jl relies on BigFloat, too. However, it's just used for converting to a finite AbstractFloat representation, and for checking positivity/negativity, as far as I remember. So everything is exact. Basically, my approach is less ambitious (as far as the two can be compared), but more principled.
If constant folding works, the execution would only be "conceptually" deferred: whenever the expression appears as an argument in a function call where the precision can be determined, the result would in fact be computed at compile time.
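One way to check whether this already happens for a given expression (whether log folds depends on the Julia version and on the function's inferred effects):

```julia
using InteractiveUtils  # for @code_typed outside the REPL

g() = 2.0 * log(π)  # literal constants only
@code_typed g()     # if the compiler folded it, the body is just `return <constant>`
```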
In Julia it is generally not a good idea to defer compilation, and it isn’t easy either. The whole language is built around AOT compilation after all.
Perhaps you did not read the code sketch I wrote above? The proposed specs are pretty much like IrrationalExpressions.jl.
Precisely. That's why I don't want to do it at all.
I think that for most applications of scientific computing (which is the comparative advantage of Julia), RRA has the wrong cost/benefit profile. Sure, it is cute for a calculator on Android, but even there I would just go with Common Lisp-style promotion on demand to BigInt, and plain vanilla 64-bit floats otherwise. From a practical perspective, this was a perfect example of nerd-sniping.