CAS Best Practices

{@admin, if it makes good sense, please create a domain for all things symbolic maths+logic (Symbolic Solutions? [your choice]) and move this topic there -- thank you}

@rfateman @certik @carette What is your perspective on the relative benefit and concomitant cost of providing a low-level interworkable representation of floats, reals, complex floats, complex reals, duals …, quaternions …? How much of the future welfare of the CAS-to-be is going to rest on decisions of this nature? Any more advice on this current design question?


I think it would be ideal if there were a conference on “CAS past mistakes and best practices” where CAS developers & CAS researchers submit papers w/ detailed examples of mistakes to avoid & “best practices” for a conference volume. Then an ambitious team could create a new CAS using this book as a blueprint.

Julia developers aimed to create a language w/ the best features of the others:
speed of C/dynamism of Ruby/true macros like Lisp/mathematical notation like Matlab...

Similarly, developers can make a new CAS with:
integrals like RUBI/ODEs like Maple/...

I hope Symbolics.jl becomes that CAS.

@carette wrote a nice note on Quora:

The same reasoning applies to power as measured by functionality, especially the sheer amount of it. Again, decades of development are hard to catch up on. But not impossible! The Maple and Mathematica developers had to invent a lot of stuff, some of which they got (spectacularly) wrong. So someone deeply steeped in all the lore of computer algebra and symbolic computation could avoid a lot of mistakes and get ‘there’ faster. It kind of depends on the depth of knowledge of the senior people on the SymPy team. If they got in touch with me, I could tell them quite a few stories of the classical mistakes made in all old CASes. Amusingly, I’ve seen a lot of them repeated in Sage!


I think you can try to structure some of this arithmetic-level handling in terms of modern algebraic notions (rings, fields, algebraic extensions) × {exact, approximate}. But I think it is doomed, because you are faced with all of numerical computing and more. And people do not agree on so much of it: different ways of handling floats, errors, traps, (variable) precision, roundoff, … And then there are variants in interval arithmetic, forms like polar/rectangular for complex numbers, and you want even more.
Look at Axiom, and maybe Sage.

A limited repertoire of exact integer, rational, double-float and arbitrary-precision real is typical.
Shoe-horning in complex numbers as pairs of the above makes for substantial complexity.
For every binary operation like +, *, /, ^, and every pair of operand types, define a result type.
Now do it for sin, cos, …
Suggestion: find a way to punt to someone else’s world.
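For context on the “result type for every operator and every pair of operand types” burden: Julia’s own answer is to factor it through promotion, so each type pair needs one rule rather than one per operation. A quick illustration using only the standard library:

```julia
# One promote_rule per pair of types is shared by all of +, *, /, ^, sin, cos, ...
println(promote_type(Int, Float64))            # Float64
println(promote_type(Rational{Int}, Float64))  # Float64
println(promote_type(Int, Rational{BigInt}))   # Rational{BigInt}

# A new number type supplies conversions and promote_rules once; generic
# fallbacks such as +(x::Number, y::Number) = +(promote(x, y)...) reuse them.
```

This doesn’t dissolve the design problem (someone still has to decide what each promotion means), but it turns a quadratic table of op × type-pair cases into a linear one.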


I’d be more tempted to comment on such a question on Zulip than on Discourse.

It’s a very complicated question. One huge problem is that floats are not “algebra” in any reasonable way. Never mind the non-associativity, the fact that none of the operations are congruences (they don’t respect the underlying equality) makes them largely “not math”. Not that numerical analysis isn’t extremely valuable (it is), but it is very hard to mix it with traditional math. It is, in my mind, its own thing, in a very strong sense.
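To make the “floats are not algebra” point concrete, here is a two-line check in Julia (any IEEE-754 language behaves the same way):

```julia
# Floating-point + is not associative: regrouping changes the answer.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
println(a == b)             # false: the two groupings differ in the last bit

# And the results do not respect the equalities one expects of real numbers:
println(0.1 + 0.2 == 0.3)   # false
```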

It is usually saner to split things into two levels: the mathematical specification level (where traditional math applies), and the approximation level (where numerical analysis applies). It’s a one-way street. There is also “symbolic-numeric” stuff that makes sense, but it only does once you’ve really understood the huge gulf between math and NA.

I know that is a huge pill to swallow.

After that, it does make sense to discuss exactly what you mean by “interworkable representation of …”.



Indeed Symbolics.jl avoids this by having extendable typed symbolics, so @variables x::Complex is something you can do today (we just need to add nice rules to handle it correctly). And then, of course, every quaternion, octonion, etc. package just works inside the system; you just have to add rules. Someone has already started to do a bunch of non-commutative algebras in quantum optics with Symbolics+MTK:

https://david-pl.github.io/Qumulants.jl/dev/tutorial/

This is one of the major differences of Symbolics: it starts from the position that real numbers, rationals, etc. are just important special cases, and that the real problem is symbolic arithmetic over typed, multiply-dispatched values. So from month 1 there are examples of non-commutative algebras on non-standard numbers being used in the ecosystem, which is exactly what we want to see.
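A toy sketch of what “typed symbolics” means here, using hypothetical types (this is an illustration of the idea, not the actual Symbolics.jl internals):

```julia
# Hypothetical: a symbolic variable tagged with the numeric type it ranges over.
struct Sym{T}
    name::Symbol
end

numtype(::Sym{T}) where {T} = T

# A rule valid only over the reals can dispatch on the tag:
known_real(::Sym{T}) where {T<:Real} = true
known_real(::Sym) = false

x = Sym{Complex{Float64}}(:x)
q = Sym{Rational{BigInt}}(:q)

println(numtype(x))     # Complex{Float64}
println(known_real(x))  # false
println(known_real(q))  # true
```

The point is that rule selection rides on the host language’s dispatch, so adding a new algebra means adding a type and rules, not modifying the symbolic core.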

That’s mixing up implementation and interface. A generic interface does not have to choose a single representation; it just needs to be clean, concise, and have a way to change representations. I think many of the CASes of old made a very fundamental mistake in trying to be a full language instead of embedding within a host with generic dispatching mechanisms and adopting all of its standard semantics. REDUCE is probably the closest there, but it has the classic Lisp problem of not having a complete standard library, and so it builds out its own linear algebra routines, its own extended arithmetic, etc.

Of course, the clearest example of how to do this is probably DifferentialEquations.jl, where one representation gives a few hundred different solvers with the ability to automatically swap arithmetic at will, and those representations are handled by the type system. So you can use Arb in DifferentialEquations.jl, you can use MPFR BigFloats, etc., and because of compiler JIT specialization it’s both high-performance and not feature-limited.
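The mechanism can be seen in miniature without DifferentialEquations.jl at all: write one type-generic solver loop and the arithmetic type flows through from the inputs. A sketch (`euler` is a made-up helper here, not the package’s API):

```julia
# One Euler loop, any number type: the compiler specializes the same source
# for Float64, BigFloat, etc., driven by the types of u0 and tspan.
function euler(f, u0, tspan, n)
    t0, t1 = tspan
    dt = (t1 - t0) / n
    u, t = u0, t0
    for _ in 1:n
        u += dt * f(u, t)
        t += dt
    end
    return u
end

# u' = u, u(0) = 1  =>  u(1) = e, in two arithmetics from identical code:
println(euler((u, t) -> u, 1.0, (0.0, 1.0), 10_000))
println(euler((u, t) -> u, big"1.0", (big"0.0", big"1.0"), 10_000))
```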

The idea of that whole ecosystem is to not even attempt to top-down the feature set, and instead just define strong dispatchable interfaces with common solver sets and allow the whole community to then couple with it. The architecture is described in:

https://www.sciencedirect.com/science/article/abs/pii/S0965997818310251

That should then make clear the design decisions and directions of JuliaSymbolics: clean interfaces; specializing performance with pure Julia parts where we know unique features can improve it (two of the biggest seem to be the implicit parallelism and now E-graphs in MetaTheory); adding wrappers both to help complete the feature set and to make it easier to benchmark pure Julia replacements; using the work of the community to not reinvent the wheel (number types from other packages, linear algebra from other packages, etc.); and writing clear documentation about how it all comes together.

I’d invite people to start looking at details. You’ll see that in this implementation style:

How much of the future welfare of the CAS-to-be is going to rest on decisions of this nature?

There is zero cost because there is no preference on a number type. Quaternions.jl can be replaced with some new representation and generics in symbolics can carry over with little to no work. It’s built purposefully generic to allow these kinds of transitions. Action, interface, and representation do not have to be coupled.


I am impressed with your optimism, but I have substantial reservations about the approach.
Taking a truly simple case – say addition, the binary operation that we think we all understand.

Given a very capable type system [I am familiar with the Common Lisp CLOS object system; I assume Julia is no more capable in the abstract],
you could have a generic addition add(r, s) which looks at the properties of r and the properties of s and decides how to add them. You could even have an addition add(r, s, target) which looks at the properties of r, s, and target, in the case of target := add(r, s). You could even have add(r, s, target, rounding_modes, error_handler)… Even so, would you, the system implementor, the user-package programmer, the end-user, or someone else in the chain of ownership be OK with, say, specifying the result of adding a complex_rectangle of two polar-form complex numbers, whose components are 50-digit arbitrary-precision decimal numbers, to a real interval of two exact rational numbers?
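To make the combinatorics of this concrete, here is a sketch in Julia with hypothetical types (`PolarComplex` and `RealInterval` are illustrations, not any package’s API):

```julia
# Hypothetical representations; the names are illustrative only.
struct PolarComplex{T}
    r::T
    θ::T
end
struct RealInterval{T}
    lo::T
    hi::T
end

# Even the "easy" same-type case forces a representation decision: polar +
# polar has no nice closed form, so convert to rectangular first.
add(a::PolarComplex, b::PolarComplex) = a.r * cis(a.θ) + b.r * cis(b.θ)

println(add(PolarComplex(1.0, 0.0), PolarComplex(1.0, pi/2)))  # ≈ 1 + 1im

# Polar-complex + real-interval needs its own method, and the dispatch
# machinery only *locates* the question of what the result is (a rectangle?
# a disc? at what precision?); it does not answer it:
# add(a::PolarComplex, b::RealInterval) = ...   # a design decision per pair
```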

Saying “clean interfaces” does not create them.

As one design point, which is not necessarily a recommendation, Macsyma/Maxima was implemented with a preference for symbolic, hence “exact,” computation. Therefore the first shot at dealing with the introduction of floating-point numbers was to convert them to exact rationals. It might not be obvious, but a moment’s thought tells us that a binary float xxx*2^yyy is an exact rational. So this is a lossless transformation [ignoring IEEE infinities, signed zeros, and not-a-numbers, which did not exist at the time].
So this resolved all floating-point questions by converting them to rational arithmetic questions.

Well, hah!
That was not very popular. There are still parts of Maxima that convert floats to rationals, and that may convert a float f to a rational a/b that is conveniently more compact, such that |(a/b - f)/f| is not too big (your choice).
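Both halves of this design point can be reproduced with Julia’s standard library, which makes it easy to see: exact conversion via `Rational`, and the “conveniently compact” variant via `rationalize`.

```julia
# A finite binary float is exactly m * 2^e, so float -> rational is lossless:
r = Rational(0.1)
println(r)                  # 3602879701896397//36028797018963968
println(Float64(r) == 0.1)  # true: the round trip is exact

# The compact a/b with bounded relative error, tolerance of your choice:
println(rationalize(0.1))                    # 1//10
println(rationalize(Float64(pi), tol = 1e-4))
```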

But if you want to do “everything” in one system à la Mathematica, you need a smooth transition between numbers and other numbers and symbols. Mathematica fails at this, making a hash of machine floats and arbitrary-precision floats, and that is kind of foundational. Throw in intervals, infinities, … and (mostly ignored) singularities, and it gets sketchier.

Now it is not my contention that you can’t do symbolics + numerics, or that object-oriented programming, just-in-time compilation, strong static typing, Axiom categories, whatever you want … can’t be used usefully. It is, however, my belief that the choice of one (or more) of these techniques DOES NOT make “the solution” automatically fall out.

Indeed, most people leave a course in scientific computation woefully ignorant of numerical analysis, especially in the context of arbitrary-precision computation, error analysis, and, for that matter, symbolic computation.
Consider writing a Newton iteration to converge to a root of a polynomial P. You should know how to write this “mathematically” in one line, but how could you economically find (say) an accurate 2000-bit approximation? How do you evaluate P, its derivative, the Newton step? In what precisions? How would you know your answer was good enough? Would you be helped by a type system? What role is played by a “clean interface”?
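One concrete (and certainly not unique) answer to this exercise, sketched in Julia with BigFloat for P(x) = x² − 2; the precision-doubling schedule and the stopping rule are exactly the numerical-analysis choices the paragraph is asking about:

```julia
# Newton roughly doubles the number of correct digits per step, so it is
# economical to double the working precision each step rather than run every
# step at full precision.
function newton_sqrt2(bits)
    x = big(sqrt(2.0))        # ~53 correct bits from hardware Float64
    prec = 64
    while prec < bits + 64    # a few guard bits; the stopping rule is
        prec = min(2prec, bits + 64)          # itself a design decision
        x = setprecision(BigFloat, prec) do
            x - (x^2 - 2) / (2x)              # P and P' at this precision
        end
    end
    return x
end

setprecision(BigFloat, 2200) do
    r = newton_sqrt2(2000)
    println(abs(r^2 - 2) < big"2.0"^-1990)    # true: ~2000 good bits
end
```

Note how many decisions hide in ten lines: the starting precision, the guard bits, where P and P′ are evaluated, and how the result is certified.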

Sorry for rattling on at such length, but, you know, even 50 years ago people thought about similar issues and tried to do the right thing. It is possible they didn’t get it right, but it wasn’t because they were (for the most part) stupid, or wanted, uh, “dirty interfaces,” or were programming in languages that made programming impossibly hard, buggy, slow, …

I have expressed my hope that building applications where the domain is limited and perhaps well-understood, can make possible the building and deployment of useful tools, perhaps using symbolic methods along the way. If these are built by devotees of Julia, that’s OK with me.

For background, I think I should mention that, in addition to being one of the grad students involved in implementing Macsyma at MIT (1966-…) I also was a member of the ANSI Common Lisp standardization committee, the IEEE floating-point standard committee, and the (recent) IEEE interval arithmetic committee.
RJF


Indeed, they were smart people doing the experiment for the first time, without a retrospective. Being 10th, and reading the literature, helps. I’ll just point to DifferentialEquations.jl vs ODEPACK: what are your thoughts on what we did to that domain? Some of the changes here are similar in nature, relying on type-generic programming with multiple dispatch, with proofs of optimality and copious benchmarks all along the way against the existing packages. This, combined with the fact that we mostly know what the goal looks like at the start (while other projects were being developed at the same time as many of the now-core algorithms), is a pretty distinct advantage to exploit.


I am not particularly familiar with Hindmarsh’s ODEPACK, which presumably could be (and maybe has been?) loaded into Maxima as foreign-function (FORTRAN, in this case) code. I don’t know that there would be much advantage in doing that – maybe to set up a call? But it is basically a bunch of FORTRAN programs, as far as I know.

I don’t understand what you mean by optimality. Are you saying that you have the best solutions that can be produced as the results of running a stiff ODE solver? (Are there non-optimal solutions, say less accurate ones?) Or are you saying that you are provably choosing the particular method that will reach the correct solution faster than any other conceivable method and is hence optimal?

I would be more concerned that the results were only approximately correct because they were contaminated by instability / truncation error/ etc. Perhaps there is a separate operation of confirming the result?

But I don’t know about ODEPACK and even less about who might be using it for something. So you may be on to something very useful, but I’m not able to judge.
RJF


Just looked for a few minutes at the first example in
https://diffeq.sciml.ai/stable/tutorials/ode_example/#ode_example

The Maxima command that produces the exact symbolic answer is
ode2 ( 'diff(u,t) = alpha*u , u, t);
Of course it is hardly expected that real problems will be so easily solved in closed form, but that’s the interface. I did not look for an interface to ODEPACK for comparison to Maxima or diffeq…jl.

I was surprised to see, in this first example, that the constant
1/2
is deduced to be of type Float64. To me (and to Maxima or Common Lisp) it looks like an exact rational.

These comments may seem to be coming out of left field, and not really relevant to what you are trying to do, so feel free to ignore.
RJF


1//2 is a rational in Julia. We keep to the host’s syntax so the results are predictable. Special-casing number semantics in a library would be fairly odd IMO (and is one of the difficulties with many symbolic languages without a multiple-dispatching host).

It’s all based on recompilation driven by types, so that generic algorithms hit optimized dispatches. Described in more detail in:

and also a bit in:

The latter of which in a type-generic form is precisely how specialization to using libraries like AbstractAlgebra is done silently.


Thanks for the clarification. If I understand you now, it is not optimal in any mathematical or numerical sense. You are saying that by observing some type information, your compilation process will generate machine code that is more specific to the particular case at hand. “Proofs of optimality” sounds like something else.
It is my recollection that Sage and maybe SymPy have the same kind of problem with native-language numbers. Writing a Python integer does not get you a Sage integer. (Or maybe this has been fixed?) Anyway, using Julia numbers like 1/2 when they do not mean Exact_Integer(1)/Exact_Integer(2) would seem to require that a symbolic read-eval-print loop in Julia have its own parser. Is that the case?

That’s a Python problem. Python has many problems along these lines, like the fact that NumPy even exists at all instead of just using the standard array type. The problem is that Python’s native types are pretty useless for most mathematical compute, so NumPy/TensorFlow/PyTorch/etc. all have their own scalar and array types, and that causes all sorts of weird incompatibilities. Adding to this issue is the fact that the interpreter doesn’t specialize on type information, so the conversions can be costly.

Julia’s array and number types are actually usable because of the type specialization on the parametric typing, so we exploit that and just reuse the wheel. Julia’s 1 is exact, and 2 is exact. 1/2 does a type promotion to a Float64, but the 1//2 version is the exact Rational{Int} version. That’s all type-generic via the type parameters, so big(1)//big(2) is the Rational{BigInt} version. Right now some of our stuff defaults to ::Real dispatches, but we have ways to allow @variables x::Rational{BigInt}. Rules are then specialized based on the number hierarchy. And then there’s a ton of packages which add alternative arithmetics that compose with this, like @JeffreySarnoff’s ArbFloats.jl. By doing numbers like this, we separate the implementation of numbers from the symbolic arithmetic and just rely on type inference.
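The number behavior described above is easy to check at a REPL:

```julia
println(typeof(1/2))              # Float64: / promotes to floating point
println(typeof(1//2))             # Rational{Int64}: exact
println(typeof(big(1)//big(2)))   # Rational{BigInt}: same code, bigger ints
println(1//2 + 1//3)              # 5//6, exact arithmetic throughout
```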

The difficulty is just coaxing the dispatch mechanism to treat Sym{T} <: T so that tracing non-symbolic generic code gives behavior specific to the number type. This would be required so that @variables x::Complex then takes the general f(x) dispatch for complex numbers in Julia, which requires subtyping based on the type parameter. This is where overdubbing techniques come into play, i.e.:

and there’s a prototype:

That way, if there’s non-symbolic code which has different behavior for complex than non-complex, it’ll naturally trace through. Then, because the rules system already allows ::T dispatch for rule selection, all that’s left is to write type-specific rules (most are ::Real right now), and finally to allow specific dispatches in things like solve_for(x::Term, Alg()) where, for example, if all of the coefficients are Float64 it just sends the problem to Flint (which is just the DiffEq dispatching architecture with type-based default handling).


I’m not quite clear what you mean by “a complex_rectangle of two polar-form complex numbers” and what it means to add that to a (single?) real interval. Maybe you could clarify that and then I will have a go at doing this.


If A and B are “intervals,” either of the real line or portions of the complex plane (which might be in the shape of a circle, a rectangle, or some other figure), then the result of adding them should be some “interval” object C such that if
a is inside A
and
b is inside B
then a+b is inside C.

What is under discussion from the outset is how to describe C so that C is correct and convenient to work with. Like do you convert polar form to rectangular in order to make addition easier? Do you use binary floats or decimal floats or rational numbers? Do you store the corners or the center with ± “error bounds”? Or maybe you change it to a circle or some other figure?
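For the real-line case, one standard (and deliberately crude) answer is endpoint arithmetic with outward rounding. A sketch with a hypothetical `Interval` type; real libraries such as IntervalArithmetic.jl round the operations themselves rather than padding by one ulp:

```julia
# Hypothetical containment-preserving interval: [lo, hi] with lo <= hi.
struct Interval{T<:AbstractFloat}
    lo::T
    hi::T
end

# Outward rounding by one ulp on each side: wider than necessary, but sound,
# i.e. a ∈ A and b ∈ B implies a + b ∈ A + B despite float rounding.
Base.:+(A::Interval, B::Interval) =
    Interval(prevfloat(A.lo + B.lo), nextfloat(A.hi + B.hi))

A = Interval(0.1, 0.2)
B = Interval(0.3, 0.4)
C = A + B
println((C.lo, C.hi))   # encloses [0.4, 0.6]
```

Every question in the paragraph above survives this sketch: which endpoint representation, how tight the rounding, and what to do for the complex-plane shapes, are all still open design decisions.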

My point here is that claiming that referring to types, inheritance, … is a solution just doesn’t cut it.

If it were the case that difficulties of this sort could be so easily “fixed,” then I expect that previous CASes would have solved them.

Also, I am curious to know what the syntax for symbolic stuff in Julia really looks like. I just learned that 1/2 is a Float64 but 1//2 is an exact rational. If x is a symbol, can I write 2*x? x/2? Or is it x//2?
Historically, when a symbolic “addition” to a language initially designed for numerics has been implemented, there have been compromises. The languages I’ve seen include Algol 60 (Formula Algol, other attempts), Pascal (ABC Pascal), Fortran (Formac, SAC-I, ALDES, ALTRAN), PL/I (also Formac, I think), and Python (SymPy).

One of the advantages of Lisp is that (until you worry about compilers and optimization) the language syntax is agnostic as to which operators should have special syntactic status. That is, (+ a b) and (* a b) have the same form as (some_function_I_will_write_tomorrow a b). Also, the core arithmetic is exact rational, helpful for building mathematics. You can use more constrained types and objects, and people do if they want fast floating-point code, but you don’t have it thrust in your face.
RJF
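As a partial answer to the syntax question: Julia’s surface syntax has infix operators, but its parsed representation is operator-agnostic in much the same way as Lisp’s prefix form, since `a + b` and `f(a, b)` both become `:call` expressions:

```julia
e1 = Meta.parse("a + b")
e2 = Meta.parse("f(a, b)")

println(e1.head, "  ", e1.args)   # call  Any[:+, :a, :b]
println(e2.head, "  ", e2.args)   # call  Any[:f, :a, :b]

# So symbolic code can treat + like any user-defined function, analogous to
# (+ a b) vs (some_function a b) in Lisp.
```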


This is very true, but creating a culture of clean interfaces and inducting users into that culture over many years does. We would consider an add(r, s, target) method to be bad: the + function takes arguments, adds them together and returns the result. Having a method with a third argument that is a target destination violates the concept of the + generic function. And this isn’t just a name-based convention: the + function is an actual generic function object and you can look at all of its methods:

julia> methods(+)
# 190 methods for generic function "+":
[1] +(x::T, y::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} in Base at int.jl:87
[2] +(c::Union{UInt16, UInt32, UInt64, UInt8}, x::BigInt) in Base.GMP at gmp.jl:528
[3] +(c::Union{Int16, Int32, Int64, Int8}, x::BigInt) in Base.GMP at gmp.jl:534
[4] +(c::Union{UInt16, UInt32, UInt64, UInt8}, x::BigFloat) in Base.MPFR at mpfr.jl:376
[5] +(c::Union{Int16, Int32, Int64, Int8}, x::BigFloat) in Base.MPFR at mpfr.jl:384
[6] +(c::Union{Float16, Float32, Float64}, x::BigFloat) in Base.MPFR at mpfr.jl:392
[7] +(x::Union{Dates.CompoundPeriod, Dates.Period}) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:372
[8] +(A::LinearAlgebra.Tridiagonal, B::LinearAlgebra.Tridiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/tridiag.jl:734
[9] +(A::LinearAlgebra.Tridiagonal, B::LinearAlgebra.SymTridiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:157
[10] +(A::LinearAlgebra.Tridiagonal, B::LinearAlgebra.Diagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:173
[11] +(A::LinearAlgebra.Tridiagonal, B::LinearAlgebra.Bidiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:193
[12] +(A::LinearAlgebra.Tridiagonal{var"#s814", V} where {var"#s814"<:Number, V<:AbstractVector{var"#s814"}}, B::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:226
[13] +(index1::CartesianIndex{N}, index2::CartesianIndex{N}) where N in Base.IteratorsMD at multidimensional.jl:114
[14] +(A::LinearAlgebra.UpperTriangular, B::LinearAlgebra.UpperTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:648
[15] +(A::LinearAlgebra.UpperTriangular, B::LinearAlgebra.UnitUpperTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:650
[16] +(A::LinearAlgebra.UpperTriangular, B::LinearAlgebra.Bidiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:86
[17] +(x::Base.TwicePrecision, y::Number) in Base at twiceprecision.jl:267
[18] +(x::Base.TwicePrecision{T}, y::Base.TwicePrecision{T}) where T in Base at twiceprecision.jl:273
[19] +(x::Base.TwicePrecision, y::Base.TwicePrecision) in Base at twiceprecision.jl:278
[20] +(F::LinearAlgebra.Hessenberg, J::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/hessenberg.jl:559
[21] +(A::LinearAlgebra.LowerTriangular, B::LinearAlgebra.LowerTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:649
[22] +(A::LinearAlgebra.LowerTriangular, B::LinearAlgebra.UnitLowerTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:651
[23] +(A::LinearAlgebra.LowerTriangular, B::LinearAlgebra.Bidiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:86
[24] +(A::BitArray, B::BitArray) in Base at bitarray.jl:1127
[25] +(r1::StepRangeLen{T, R, S} where S, r2::StepRangeLen{T, R, S} where S) where {R<:Base.TwicePrecision, T} in Base at twiceprecision.jl:590
[26] +(r1::StepRangeLen{T, S, S1} where S1, r2::StepRangeLen{T, S, S1} where S1) where {T, S} in Base at range.jl:1280
[27] +(y::Dates.TimeType, x::StridedArray{var"#s814", N} where {var"#s814"<:Union{Dates.CompoundPeriod, Dates.Period}, N}) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/deprecated.jl:18
[28] +(x::T, y::Integer) where T<:AbstractChar in Base at char.jl:235
[29] +(J::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:150
[30] +(J::LinearAlgebra.UniformScaling, x::Number) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:145
[31] +(J1::LinearAlgebra.UniformScaling, J2::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:151
[32] +(J::LinearAlgebra.UniformScaling, B::BitMatrix) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:153
[33] +(J::LinearAlgebra.UniformScaling, F::LinearAlgebra.Hessenberg) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/hessenberg.jl:560
[34] +(A::LinearAlgebra.UniformScaling, B::LinearAlgebra.Tridiagonal{var"#s814", V} where {var"#s814"<:Number, V<:AbstractVector{var"#s814"}}) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:245
[35] +(A::LinearAlgebra.UniformScaling, B::LinearAlgebra.SymTridiagonal{var"#s814", V} where {var"#s814"<:Number, V<:AbstractVector{var"#s814"}}) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:250
[36] +(A::LinearAlgebra.UniformScaling, B::LinearAlgebra.Bidiagonal{var"#s814", V} where {var"#s814"<:Number, V<:AbstractVector{var"#s814"}}) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:255
[37] +(A::LinearAlgebra.UniformScaling, B::LinearAlgebra.Diagonal{var"#s814", V} where {var"#s814"<:Number, V<:AbstractVector{var"#s814"}}) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:260
[38] +(J::LinearAlgebra.UniformScaling, A::AbstractMatrix{T} where T) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:154
[39] +(A::LinearAlgebra.Hermitian, B::LinearAlgebra.Hermitian) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/symmetric.jl:469
[40] +(H::LinearAlgebra.Hermitian, D::LinearAlgebra.Diagonal{var"#s814", V} where {var"#s814"<:Real, V<:AbstractVector{var"#s814"}}) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/diagonal.jl:169
[41] +(A::LinearAlgebra.Hermitian, J::LinearAlgebra.UniformScaling{var"#s814"} where var"#s814"<:Complex) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:196
[42] +(A::LinearAlgebra.Hermitian{var"#s812", var"#s811"} where {var"#s812", var"#s811"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}, B::SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:15
[43] +(A::LinearAlgebra.Hermitian, B::SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:18
[44] +(A::LinearAlgebra.Hermitian{var"#s802", var"#s801"} where {var"#s802", var"#s801"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}, B::LinearAlgebra.Symmetric{var"#s800", var"#s799"} where {var"#s800"<:Real, var"#s799"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:26
[45] +(A::LinearAlgebra.Hermitian, B::LinearAlgebra.Symmetric{var"#s814", S} where {var"#s814"<:Real, S<:(AbstractMatrix{var"#s814"} where var"#s814"<:var"#s814")}) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/symmetric.jl:483
[46] +(A::LinearAlgebra.Hermitian{var"#s810", var"#s809"} where {var"#s810", var"#s809"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}, B::LinearAlgebra.Symmetric{var"#s808", var"#s807"} where {var"#s808", var"#s807"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:24
[47] +(r1::OrdinalRange, r2::OrdinalRange) in Base at range.jl:1257
[48] +(A::LinearAlgebra.UnitUpperTriangular, B::LinearAlgebra.UpperTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:652
[49] +(A::LinearAlgebra.UnitUpperTriangular, B::LinearAlgebra.UnitUpperTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:654
[50] +(UL::LinearAlgebra.UnitUpperTriangular, J::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:182
[51] +(A::LinearAlgebra.UnitUpperTriangular, B::LinearAlgebra.Bidiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:86
[52] +(A::Array, Bs::Array...) in Base at arraymath.jl:43
[53] +(X::StridedArray{var"#s814", N} where {var"#s814"<:Union{Dates.CompoundPeriod, Dates.Period}, N}, Y::StridedArray{var"#s813", N} where {var"#s813"<:Union{Dates.CompoundPeriod, Dates.Period}, N}) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/deprecated.jl:62
[54] +(A::Array, B::SparseArrays.AbstractSparseMatrixCSC) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/sparsematrix.jl:1747
[55] +(x::StridedArray{var"#s814", N} where {var"#s814"<:Union{Dates.CompoundPeriod, Dates.Period}, N}) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/deprecated.jl:55
[56] +(x::StridedArray{var"#s814", N} where {var"#s814"<:Union{Dates.CompoundPeriod, Dates.Period}, N}, y::Dates.TimeType) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/deprecated.jl:10
[57] +(x::Rational, y::Integer) in Base at rational.jl:310
[58] +(r::AbstractRange{var"#s814"} where var"#s814"<:Dates.TimeType, x::Dates.Period) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/ranges.jl:63
[59] +(A::LinearAlgebra.UpperHessenberg, B::LinearAlgebra.UpperHessenberg) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/hessenberg.jl:101
[60] +(H::LinearAlgebra.UpperHessenberg, J::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/hessenberg.jl:106
[61] +(A::LinearAlgebra.UnitLowerTriangular, B::LinearAlgebra.LowerTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:653
[62] +(A::LinearAlgebra.UnitLowerTriangular, B::LinearAlgebra.UnitLowerTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:655
[63] +(UL::LinearAlgebra.UnitLowerTriangular, J::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:182
[64] +(A::LinearAlgebra.UnitLowerTriangular, B::LinearAlgebra.Bidiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:86
[65] +(x::Ptr, y::Integer) in Base at pointer.jl:159
[66] +(x::Dates.Instant) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:4
[67] +(x::P, y::P) where P<:Dates.Period in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:79
[68] +(x::Dates.Period, y::Dates.Period) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:361
[69] +(y::Dates.Period, x::Dates.CompoundPeriod) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:363
[70] +(y::Dates.Period, x::Dates.TimeType) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:85
[71] +(x::Dates.Period, r::AbstractRange{var"#s814"} where var"#s814"<:Dates.TimeType) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/ranges.jl:62
[72] +(y::Union{Dates.CompoundPeriod, Dates.Period}, x::AbstractArray{var"#s814", N} where {var"#s814"<:Dates.TimeType, N}) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/deprecated.jl:14
[73] +(A::LinearAlgebra.Symmetric, B::LinearAlgebra.Symmetric) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/symmetric.jl:469
[74] +(S::LinearAlgebra.Symmetric, D::LinearAlgebra.Diagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/diagonal.jl:163
[75] +(A::LinearAlgebra.Symmetric{var"#s812", var"#s811"} where {var"#s812", var"#s811"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}, B::SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:15
[76] +(A::LinearAlgebra.Symmetric, B::SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:18
[77] +(A::LinearAlgebra.Symmetric{var"#s806", var"#s805"} where {var"#s806"<:Real, var"#s805"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}, B::LinearAlgebra.Hermitian{var"#s804", var"#s803"} where {var"#s804", var"#s803"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:25
[78] +(A::LinearAlgebra.Symmetric{var"#s813", S} where {var"#s813"<:Real, S<:(AbstractMatrix{var"#s814"} where var"#s814"<:var"#s813")}, B::LinearAlgebra.Hermitian) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/symmetric.jl:484
[79] +(A::LinearAlgebra.Symmetric{var"#s814", var"#s813"} where {var"#s814", var"#s813"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}, B::LinearAlgebra.Hermitian{var"#s812", var"#s811"} where {var"#s812", var"#s811"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:23
[80] +(y::AbstractFloat, x::Bool) in Base at bool.jl:102
[81] +(A::SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti}, B::LinearAlgebra.Hermitian{var"#s814", var"#s813"} where {var"#s814", var"#s813"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:14
[82] +(A::SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti}, B::LinearAlgebra.Hermitian) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:17
[83] +(A::SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti}, B::LinearAlgebra.Symmetric{var"#s814", var"#s813"} where {var"#s814", var"#s813"<:(SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti})}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:14
[84] +(A::SparseArrays.AbstractSparseMatrix{Tv, Ti} where {Tv, Ti}, B::LinearAlgebra.Symmetric) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/linalg.jl:17
[85] +(A::SparseArrays.AbstractSparseMatrixCSC, B::SparseArrays.AbstractSparseMatrixCSC) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/sparsematrix.jl:1743
[86] +(x::SparseArrays.AbstractSparseVector{Tv, Ti} where {Tv, Ti}, y::SparseArrays.AbstractSparseVector{Tv, Ti} where {Tv, Ti}) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/sparsevector.jl:1345
[87] +(A::SparseArrays.AbstractSparseMatrixCSC, B::Array) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/sparsematrix.jl:1746
[88] +(A::SparseArrays.AbstractSparseMatrixCSC, J::LinearAlgebra.UniformScaling) in SparseArrays at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/SparseArrays/src/sparsematrix.jl:3799
[89] +(A::LinearAlgebra.AbstractTriangular, B::LinearAlgebra.AbstractTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/triangular.jl:656
[90] +(x::Dates.AbstractTime, y::Missing) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:88
[91] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.Bidiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/bidiag.jl:354
[92] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.UpperTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:94
[93] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.UnitUpperTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:94
[94] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.LowerTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:94
[95] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.UnitLowerTriangular) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:94
[96] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.Diagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:115
[97] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.Tridiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:183
[98] +(A::LinearAlgebra.Bidiagonal, B::LinearAlgebra.SymTridiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:203
[99] +(A::LinearAlgebra.Bidiagonal{var"#s814", V} where {var"#s814"<:Number, V<:AbstractVector{var"#s814"}}, B::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:236
[100] +(x::AbstractArray{var"#s814", N} where {var"#s814"<:Dates.TimeType, N}, y::Union{Dates.CompoundPeriod, Dates.Period}) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/deprecated.jl:6
[101] +(A::LinearAlgebra.SymTridiagonal, B::LinearAlgebra.SymTridiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/tridiag.jl:205
[102] +(A::LinearAlgebra.SymTridiagonal, B::LinearAlgebra.Diagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:145
[103] +(A::LinearAlgebra.SymTridiagonal, B::LinearAlgebra.Tridiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:159
[104] +(A::LinearAlgebra.SymTridiagonal, B::LinearAlgebra.Bidiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:213
[105] +(A::LinearAlgebra.SymTridiagonal{var"#s814", V} where {var"#s814"<:Number, V<:AbstractVector{var"#s814"}}, B::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:231
[106] +(x::AbstractIrrational, y::AbstractIrrational) in Base at irrationals.jl:156
[107] +(Da::LinearAlgebra.Diagonal, Db::LinearAlgebra.Diagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/diagonal.jl:156
[108] +(D::LinearAlgebra.Diagonal, S::LinearAlgebra.Symmetric) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/diagonal.jl:160
[109] +(D::LinearAlgebra.Diagonal{var"#s814", V} where {var"#s814"<:Real, V<:AbstractVector{var"#s814"}}, H::LinearAlgebra.Hermitian) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/diagonal.jl:166
[110] +(A::LinearAlgebra.Diagonal, B::LinearAlgebra.Bidiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:125
[111] +(A::LinearAlgebra.Diagonal, B::LinearAlgebra.SymTridiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:135
[112] +(A::LinearAlgebra.Diagonal, B::LinearAlgebra.Tridiagonal) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:163
[113] +(A::LinearAlgebra.Diagonal{var"#s814", V} where {var"#s814"<:Number, V<:AbstractVector{var"#s814"}}, B::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/special.jl:241
[114] +(z::Complex, w::Complex) in Base at complex.jl:275
[115] +(r1::LinRange{T}, r2::LinRange{T}) where T in Base at range.jl:1264
[116] +(r1::Union{LinRange, OrdinalRange, StepRangeLen}, r2::Union{LinRange, OrdinalRange, StepRangeLen}) in Base at range.jl:1273
[117] +(A::AbstractArray, B::AbstractArray) in Base at arraymath.jl:37
[118] +(x::Dates.CompoundPeriod, y::Dates.Period) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:362
[119] +(x::Dates.CompoundPeriod, y::Dates.CompoundPeriod) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:364
[120] +(x::Dates.CompoundPeriod, y::Dates.TimeType) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:392
[121] +(B::BitMatrix, J::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:152
[122] +(A::AbstractMatrix{T} where T, J::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:215
[123] +(x::AbstractArray{var"#s78", N} where {var"#s78"<:Number, N}) in Base at abstractarraymath.jl:97
[124] +(x::Rational{BigInt}, y::Rational{BigInt}) in Base.GMP.MPQ at gmp.jl:886
[125] +(x::Rational) in Base at rational.jl:267
[126] +(x::Rational, y::Rational) in Base at rational.jl:281
[127] +(dt::Dates.DateTime, y::Dates.Year) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:23
[128] +(dt::Dates.DateTime, z::Dates.Month) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:47
[129] +(x::Dates.DateTime, y::Dates.Quarter) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:75
[130] +(x::Dates.DateTime, y::Dates.Period) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:81
[131] +(t::Dates.Time, dt::Dates.Date) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:20
[132] +(x::Dates.Time, y::Dates.TimePeriod) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:83
[133] +(level::Base.CoreLogging.LogLevel, inc::Integer) in Base.CoreLogging at logging.jl:131
[134] +(x::Float64, y::Float64) in Base at float.jl:350
[135] +(a::Pkg.Resolve.FieldValue, b::Pkg.Resolve.FieldValue) in Pkg.Resolve at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Pkg/src/Resolve/fieldvalues.jl:43
[136] +(z::Complex{Bool}, x::Bool) in Base at complex.jl:287
[137] +(z::Complex, x::Bool) in Base at complex.jl:294
[138] +(z::Complex{Bool}, x::Real) in Base at complex.jl:301
[139] +(z::Complex) in Base at complex.jl:273
[140] +(z::Complex, x::Real) in Base at complex.jl:313
[141] +(a::Pkg.Resolve.VersionWeight, b::Pkg.Resolve.VersionWeight) in Pkg.Resolve at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Pkg/src/Resolve/versionweights.jl:22
[142] +(dt::Dates.Date, t::Dates.Time) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:19
[143] +(dt::Dates.Date, y::Dates.Year) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:27
[144] +(dt::Dates.Date, z::Dates.Month) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:54
[145] +(x::Dates.Date, y::Dates.Quarter) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:73
[146] +(x::Dates.Date, y::Dates.Week) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:77
[147] +(x::Dates.Date, y::Dates.Day) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:79
[148] +(x::Dates.TimeType) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:8
[149] +(a::Dates.TimeType, b::Dates.Period, c::Dates.Period) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:383
[150] +(a::Dates.TimeType, b::Dates.Period, c::Dates.Period, d::Dates.Period...) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:384
[151] +(x::Dates.TimeType, y::Dates.CompoundPeriod) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/periods.jl:386
[152] +(x::BigFloat, y::BigFloat) in Base.MPFR at mpfr.jl:364
[153] +(x::BigFloat, c::Union{UInt16, UInt32, UInt64, UInt8}) in Base.MPFR at mpfr.jl:371
[154] +(x::BigFloat, c::Union{Int16, Int32, Int64, Int8}) in Base.MPFR at mpfr.jl:379
[155] +(x::BigFloat, c::Union{Float16, Float32, Float64}) in Base.MPFR at mpfr.jl:387
[156] +(x::BigFloat, c::BigInt) in Base.MPFR at mpfr.jl:395
[157] +(a::BigFloat, b::BigFloat, c::BigFloat) in Base.MPFR at mpfr.jl:536
[158] +(a::BigFloat, b::BigFloat, c::BigFloat, d::BigFloat) in Base.MPFR at mpfr.jl:542
[159] +(a::BigFloat, b::BigFloat, c::BigFloat, d::BigFloat, e::BigFloat) in Base.MPFR at mpfr.jl:549
[160] +(x::BigInt, y::BigInt) in Base.GMP at gmp.jl:479
[161] +(a::BigInt, b::BigInt, c::BigInt) in Base.GMP at gmp.jl:519
[162] +(a::BigInt, b::BigInt, c::BigInt, d::BigInt) in Base.GMP at gmp.jl:520
[163] +(a::BigInt, b::BigInt, c::BigInt, d::BigInt, e::BigInt) in Base.GMP at gmp.jl:521
[164] +(x::BigInt, c::Union{UInt16, UInt32, UInt64, UInt8}) in Base.GMP at gmp.jl:527
[165] +(x::BigInt, c::Union{Int16, Int32, Int64, Int8}) in Base.GMP at gmp.jl:533
[166] +(c::BigInt, x::BigFloat) in Base.MPFR at mpfr.jl:400
[167] +(x::Bool) in Base at bool.jl:89
[168] +(x::Integer, y::Ptr) in Base at pointer.jl:161
[169] +(y::Integer, x::Rational) in Base at rational.jl:317
[170] +(x::Integer, y::AbstractChar) in Base at char.jl:245
[171] +(x::Number, y::Base.TwicePrecision) in Base at twiceprecision.jl:271
[172] +(::Number, ::Missing) in Base at missing.jl:124
[173] +(x::Number, J::LinearAlgebra.UniformScaling) in LinearAlgebra at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/LinearAlgebra/src/uniformscaling.jl:146
[174] +(x::Bool, y::Bool) in Base at bool.jl:92
[175] +(a::Integer, b::Integer) in Base at int.jl:921
[176] +(x::Bool, y::T) where T<:AbstractFloat in Base at bool.jl:99
[177] +(x::Bool, z::Complex{Bool}) in Base at complex.jl:286
[178] +(x::Real, z::Complex{Bool}) in Base at complex.jl:300
[179] +(x::Bool, z::Complex) in Base at complex.jl:293
[180] +(x::Real, z::Complex) in Base at complex.jl:312
[181] +(x::Float16, y::Float16) in Base at float.jl:348
[182] +(::Missing) in Base at missing.jl:101
[183] +(::Missing, ::Missing) in Base at missing.jl:122
[184] +(::Missing, ::Number) in Base at missing.jl:123
[185] +(x::Missing, y::Dates.AbstractTime) in Dates at /Users/stefan/dev/julia/usr/share/julia/stdlib/v1.7/Dates/src/arithmetic.jl:89
[186] +(x::Float32, y::Float32) in Base at float.jl:349
[187] +(x::Number) in Base at operators.jl:572
[188] +(x::T, y::T) where T<:Number in Base at promotion.jl:395
[189] +(x::Number, y::Number) in Base at promotion.jl:320
[190] +(a, b, c, xs...) in Base at operators.jl:617

You’ll note that all 190 methods follow this behavior. This is not automatic, of course, which I believe is the point you’re making. But after Julia programmers have learned the language well, they internalize the idea that a generic function should implement a single concept, and that while you can have as many methods as you like implementing specializations of that concept, they should all be compatible and consistent; otherwise writing generic code becomes impossible. This cultural discipline is the basis for much of the composability that we see in Julia. Generic functions provide the technical capability, but you’re quite right that if people just went around implementing methods in an incoherent way, that wouldn’t help much. Fortunately, we’ve been indoctrinating people to create coherent interfaces for a while now, to great effect.
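
To illustrate the discipline described above (a hypothetical example, not from the listing): extending `+` to a user-defined type is a one-line method definition, and the cultural expectation is that the new method still means "addition", so that generic code like `sum` keeps working unchanged:

```julia
# Hypothetical type participating in the generic `+`. The method
# specializes `+` but preserves its meaning, so generic code composes.
struct Meters
    v::Float64
end

Base.:+(a::Meters, b::Meters) = Meters(a.v + b.v)

# Generic code written against `+` now works for Meters too:
total = sum([Meters(1.0), Meters(2.0), Meters(3.0)])  # Meters(6.0)
```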

15 Likes

Actually, there are reasons to have add(a, b, target).
If a and b are large objects, you might want add(a, b, a), like C’s a += b.

If you are adding two single-floats a and b, you might quite reasonably hope to get more accurate bits in the answer from add(a, b, double_float_target).

I’ve taught about this in undergraduate programming-language and compiler courses, where attributes in a tree representing intermediate code can propagate upward (synthesized attributes) as well as downward (inherited attributes).
See “Differences between Synthesized and Inherited Attributes” on GeeksforGeeks.

This is not an argument against “generic” programming; in Lisp there is a (defgeneric …) interface to the object/message system. It is just that the infix syntax is inadequate to concisely express certain kinds of operations, and, using your kind of justification in reverse, people schooled in it may miss thinking of other kinds of solutions. That is, the syntax a+b boxes you in, in a way that is less apparent if the syntax is (+ a b &optional_arguments (round :even)), or (increment a b), or (the double-float (* a b)), or … . Just kind of thinking out loud here; do what you wish with these thoughts.
RJF

Julia handles this with the .= “fusing” assignment operator, which can fuse any number of operations and function calls, not just a single +, to avoid allocating large temporary objects (e.g. arrays) and to fuse multiple passes over arrays.

(We explicitly considered a specialized in-place += operation at some point, but decided we needed a more general mechanism.)
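
For example (a minimal sketch), every dotted call on the right-hand side of .= is fused into a single in-place loop over the preallocated destination:

```julia
# a .= ... fuses all dotted operations into one loop that writes into a:
a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]

a .= a .+ b                # in-place update, like C's a += b, no temporary array
a .= 2 .* a .+ sin.(b)     # still one fused loop: one pass, no intermediate arrays
```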

I think it would be a lot harder to reason about the precision of operations if the same a+b expression gave a different result depending on the context where it was used. Clearer to promote one of the operands explicitly if you want a more accurate result.
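
A small illustration of the explicit-promotion point, with hypothetical values:

```julia
a = 1.0f0      # Float32
b = 1.0f-10    # Float32; far below Float32's ~1e-7 relative precision near 1.0

x = a + b               # Float32 arithmetic: b is rounded away entirely
y = Float64(a) + b      # promote an operand explicitly to keep the extra bits
```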

13 Likes

It doesn’t look clearer to me.

The nature of coercion is potentially mysterious. It may seem obvious to you that double-float “dominates” single-float, but a user might want the opposite. That is, if you only need a few digits for an error estimate, you might want to take two doubles, a and b, and compute difference(a, b, single_float_target). Coercing either or both of a and b to single first would be potentially bad: they might then be equal, even though a-b was not zero considered as a double.
Computing the answer as a double and then converting wastes (some) time. Perhaps a and b are matrices, increasing the cost. Can all coercion issues be programmed around? Probably. Is it efficient and clear? Eh…

Potentially confusing (and this definitely can happen in a CAS): you have non-obvious coercions like 1/4 + 0.5, which a user might suppose to be (exactly) 3/4 or 0.75. In Maxima one can arrange to get either, depending on whether you are working within the rational-function representation or outside it. And what about 1/3 + 0.5?
RJF

1 Like

This specific case is actually covered in the manual: Conversion and Promotion · The Julia Language. The gist is that +(::Rational, ::AbstractFloat) will always result in an AbstractFloat. If we wanted to abuse the type system a bit, we could define

julia> Rational(f::F) where {F<:Function} = (args...,) -> mapreduce(Rational, f, args)
Rational

which would allow us to do things like

julia> +(1/4, 0.5)
0.75

julia> Rational(+)(1/4, 0.5)
3//4

…which doesn’t quite work for 1/3, since 1/3 can’t be exactly represented as a Float64. Close enough, though: we just need to pass it as the Rational 1//3 instead.

julia> Rational(+)(1/3, 0.5)
15011998757901653//18014398509481984

julia> Rational(+)(1//3, 0.5)
5//6

julia> +(1/3, 0.5)
0.8333333333333333

Without any additional effort, this also gives us

julia> Rational(/)(0xbeef, 0xfeed)
0xbeef//0xfeed

julia> Rational(-)(3ℯ, 2π)
1053651010136835//562949953421312
7 Likes

This is already eminently doable: pass the desired output type as an argument. The specialized method might be function difference(a::Float64, b::Float64, T::Type{Float32}). And for matrices/arrays, it’s not necessary to write a new method; just call it with broadcasting: difference.(A, B, Float32).

@StefanKarpinski and @stevengj described how the present infix + works, and how coercion and promotion handle things. There’s no specific constraint against your proposed prefix-ternary add/difference (except that it shouldn’t be called Base.+), and in fact multiple dispatch and broadcasting should make it relatively easy to implement. People like the infix 1 + 2 + 3 or the equivalent +(1, 2, 3), but there’s no prohibition against packages that do it another way. Such a package might even be easy to compose with others, so automatic differentiation could work immediately with your fast Float64 -> Float32 add.
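
A sketch of what such a target-typed method could look like (the name difference and the compute-wide-then-demote strategy are illustrative, not an existing API). It also demonstrates the coercion hazard discussed earlier: coercing to Float32 first can destroy a difference that a Float64 subtraction preserves:

```julia
# Hypothetical target-typed subtraction: compute at the operands' full
# precision, then convert once to the requested (narrower) target type.
difference(a, b, ::Type{T}) where {T} = T(a - b)

a = 1.0 + 2.0^-30   # Float64; 2^-30 is below Float32's ulp near 1.0
b = 1.0

Float32(a) - Float32(b)      # coerce first: difference is lost, result 0.0f0
difference(a, b, Float32)    # subtract in Float64, then demote: 2^-30 ≈ 9.3e-10
```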

2 Likes