CAS Best Practices

@rfateman: I agree that there are some tricky questions, but the first thing to do is to make it easy to implement and experiment with.

Here is a first example of using a polar type with intervals. I believe this is an illustrative example of Julia’s flexibility.

First we define a generic, parametrised Polar type, and conversions to and from Julia’s standard Complex type:

```julia
# generic polar representation: r is the modulus, θ the angle
struct Polar{T}
    r::T
    θ::T
end

# polar → rectangular
Base.Complex(p::Polar) = p.r * cis(p.θ)

# rectangular → polar
function Polar(z::Complex)
    x, y = reim(z)

    r = sqrt(x^2 + y^2)
    θ = atan(y, x)

    return Polar(r, θ)
end
```
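As a quick sanity check, the conversions round-trip (a small usage sketch; the values in the comments are the standard Float64 results):

```julia
p = Polar(Complex(1.0, 1.0))   # Polar{Float64}(1.4142135623730951, 0.7853981633974483)
Complex(p) ≈ 1.0 + 1.0im       # true, up to floating-point rounding
```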

We define + by converting to rectangular form, then converting back:


```julia
function Base.:+(p1::Polar, p2::Polar)
    z1 = Complex(p1)
    z2 = Complex(p2)

    return Polar(z1 + z2)
end
```

The above definitions and functions are generic – they will work with any type. (We may want to restrict them to e.g. accept only subtypes of Real, but for clarity I have not done so.)
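For instance, the restricted version would look like this (a sketch only; the examples below continue to use the unrestricted definition):

```julia
# restrict the element type to subtypes of Real
struct Polar{T<:Real}
    r::T
    θ::T
end
```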

Now we load the IntervalArithmetic.jl package, which represents intervals using endpoints (rather than midpoint + radius, for which we could use Arb via the Arblib.jl wrapper), define some objects, and add them:

```julia
using IntervalArithmetic

p1 = Polar(1..2, 3..4)
p2 = Polar(3..4, 5..6)

p1 + p2
```

This produces an object of type `Polar{Interval{Float64}}`; Julia has automatically generated compiled, specialised code for the sum of two `Polar`s containing `Interval`s. (Currently the two-argument `atan` calls into MPFR for correct rounding, so it is rather slow.)

We can easily use intervals of BigFloats (i.e. MPFR floats with 256 bits of precision by default) instead:

```julia
p1 = Polar(big(1..2), big(3..4))
p2 = Polar(big(3..4), big(5..6))

p1 + p2
```

which produces an object of type `Polar{Interval{BigFloat}}` instead (and all necessary methods have again been specialised for the new type).
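The working precision can also be changed; a hedged sketch, assuming the same definitions as above:

```julia
# run the same sum with 512-bit BigFloats; setprecision changes the MPFR
# default precision within the block
setprecision(BigFloat, 512) do
    Polar(big(1..2), big(3..4)) + Polar(big(3..4), big(5..6))
end
```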

If at some point we came up with a better algorithm for the sum of two Polars containing Intervals, we could specialise just that method as

```julia
function Base.:+(p1::Polar{<:Interval}, p2::Polar{<:Interval})
    ...
end
```

In most of this code types are not mentioned, but nonetheless they just flow through the code, and the correct methods / versions of each function are called automatically. This is one of the places where Julia shines.

27 Likes

To say that Julia shines is praise without much context. Certainly the code you list could be trivially transcribed into ANSI Common Lisp (dating back to 1984, with earlier precursors) and though I am mostly unschooled in Python, I assume there are comparable features. Python, I gather, has the disadvantage of being slow (except Cython?), but Common Lisp implementations are generally distributed with compilers built in.

There is a bigger problem with your example, nice as it is. Almost a fallacy. There are several ways to look at it.

(1) You can declare success at a task by modifying the task to something you can succeed at doing. That is, deliberately or accidentally (or humorously) misconstrue the task as something easier.
Here’s an example from Monty Python: “How to Play the Flute”.

(2) Assuming that finding simple solutions to simple parts of a complicated problem will lead to simple solutions to the difficult parts. (In this case, disregarding the evidence that – historically at least – working on solving those complicated problems has involved many clever people writing hundreds of thousands of lines of code, using tools that were, all in all, not necessarily inferior in capability to what you have in Julia.) To be specific, if it could be done easily, why are these programs (Mathematica, Maple, Maxima, Axiom, …) so large? Is it plausible to believe that they are large because they were not written in Julia? I suspect that for most large projects – typically so for Lisp system building, and certainly for most of the CAS in Lisp, C, C++, … – the first order of business is to build language support that includes what is needed to make the programmers’ tasks simpler. This could be writing macro definitions, defining structures, setting up generic definitions, writing parsers, etc. So a claim that Julia is somehow superior for writing a CAS lacks the context of “compared to …”. Maple was written in Margay, an extension of C. Axiom was written in SPAD, implemented in Lisp. Maxima has a mixture of methods, but data-directed dispatch was there from the beginning.

My earlier example of interval arithmetic was just chosen to illustrate what happens when you mix too many numeric types and have to choose “the winner”. You haven’t necessarily resolved the winner, even in this example. Is the result an interval with complex polar or complex rectangular elements? Is the result stored in decimal format, or are the results converted to rational numbers?
If the operands are arbitrary precision and of varying length, do you choose the longest, the shortest, or perhaps (as done in Maxima) appeal to some global fp precision setting? [Why shortest? Perhaps because there is no point in carrying around extra bits if one of the inputs is already uncertain…]

This example is, however, drawn from a world view where the atomic objects are numbers, and it doesn’t deal with “symbols” – i.e. indeterminates m, n, x, y, or constants like pi, e. Take one of the building blocks used here. Does it work to convert (say)
`r = 3//4`, `θ = n*Pi//16` to rectangular form?

how about converting to polar form the complex number `a + b*i`, where

```
a = (x+1)*(x-1) - x^2 + 1
b = sin(n*Pi)
```

Yes, you were assuming explicit real values, but a CAS does not live on them alone.

Indeed, the value above for “a” is equivalent to the integer 0, but if you don’t have a program capable of recognizing that, your programming tools may not be so robust. That is, “if a==0 then r else s” doesn’t necessarily work.

Again, I’ve rattled on quite a bit; I hope you are taking this as food for thought. Certainly you are free to spend your time on learning about CAS and building a Julia CAS.
RJF

5 Likes

All of your examples here violate the core principle that I want to capture, which is direct embedding into the host language. I think these all make the mistake of trying to be “a language” instead of a set of symbolic tools that work seamlessly with an imperative language. SymPy has seen massive adoption compared to these tools in the last decade because of this fact. However, SymPy is about 10,000x slower than it needs to be because of its Python base, and Python itself does not have good numerical batteries, so SymPy does not necessarily compose well with its own ecosystem.

In contrast, with Julia we can directly embed within the language with full performance, pervasive multithreading, extensibility from the same language the user writes, and composability with a whole package ecosystem. Just take a look at the first example of ModelingToolkit.jl’s paper:

Taking code written for DifferentialEquations.jl, analyzing it, differentiating the user’s fifth expression twice, and then substituting in other variables to produce a formulation of the pendulum that is more stable – and seamlessly giving back the code. This is done not by going to a different language, but with a single command available to the ODE solver.
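For readers who have not seen it, the workflow looks roughly like this (a sketch following the ModelingToolkit.jl index-reduction tutorial; treat the exact calls and solver choice as illustrative rather than canonical):

```julia
using ModelingToolkit, OrdinaryDiffEq, LinearAlgebra

# index-3 DAE for the cartesian pendulum: four differential equations plus
# one algebraic constraint (the zero row of the mass matrix)
function pendulum!(du, u, p, t)
    x, dx, y, dy, T = u
    g, L = p
    du[1] = dx
    du[2] = T * x
    du[3] = dy
    du[4] = T * y - g
    du[5] = x^2 + y^2 - L^2   # constraint: the mass stays on the circle
end

f = ODEFunction(pendulum!, mass_matrix = Diagonal([1.0, 1.0, 1.0, 1.0, 0.0]))
prob = ODEProblem(f, [1.0, 0.0, 0.0, 0.0, 0.0], (0.0, 10.0), [9.8, 1.0])

sys = modelingtoolkitize(prob)                       # lift the numeric code to symbols
sys2 = structural_simplify(dae_index_lowering(sys))  # differentiate the constraint
prob2 = ODEProblem(sys2, [], (0.0, 10.0))            # compile back to fast numeric code
sol = solve(prob2, Rodas4())
```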

I admire what previous CAS’s have done for CAS’s sake, but they don’t even attempt to solve this problem, and I think this is the fundamental problem to solve in 21st-century computer algebra. The 20th-century CAS’s like Maple, Mathematica, Maxima, Axiom, etc. solved how to make computer algebra work robustly and efficiently for real-world problems. But now, how can you integrate symbolic algebra directly into programming, mixing neural-network-based rewriting heuristics and automated code analysis directly into standard computing? Extensions to automatic differentiation should be as simple to write as “transform this code by calculating its Groebner basis and change the array size to the subspace”. I detail this vision and where we are at in a blog post:

There is a lot to admire in previous CAS work, but they don’t solve this problem, and I think it’s THE problem to solve.

13 Likes

I think there may be a bit of a disconnect in this discussion between traditional “CAS systems” like Mathematica and Macsyma, which, being entire self-contained systems, have to answer all the questions that anyone might have about how to do any symbolic or numeric computation that can be expressed. Since they are entire programming systems, they have to be huge and complete. At the same time, they have to be convenient and intuitive enough that mathematicians will want to use them. Both of those are massive burdens and really hard to satisfy.

Symbolics.jl isn’t trying to do any of this — it’s not that kind of CAS. It’s not a self-contained system or even a language. It’s a toolbox for doing symbolic reasoning about Julia ASTs in the context of Julia’s type system. It doesn’t need to create any of the language level stuff — that already exists. It doesn’t need to invent a type system — that also already exists. It doesn’t need to provide definitive answers to all the possible ways values can be represented or computations can be done — the language already lets people do that. In fact, trying to do this would be too limited because one of the strengths of the language is letting people build their own new ways to represent things and express how to do computations with those representations. If Symbolics tried to provide all the answers, it would be too limited to be useful for its intended purpose. Instead, it aims to give people who need to do symbolic reasoning about Julia code the tools to do it in a flexible way. When there are questions about which way to do things, it shouldn’t pick an answer, it should provide a way to let the user choose.

So, for example, Symbolics.jl doesn’t need to decide for the user which representation of polar intervals is better. They will already have to have done that in their code that works with polar intervals. There’s more than one way to do those computations, and the language’s job is not to pick the best one for you, it’s to let you easily express the approach you want and to switch easily between different approaches. I realize that’s a philosophical stance that’s pretty different from traditional CAS systems, which generally try to automagically figure out a good way to do your computation, but Julia really avoids magic of any kind — everything should instead be explicit but easy to express.

Similarly, I don’t really understand the issue with `a` being equivalent to 0. If you’ve provided a set of symbolic rules that allows Symbolics to simplify that expression to zero, then it will get simplified; if you haven’t, then it won’t. There’s no one true answer – it depends on the set of rules you’re operating under.
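For instance, the `a` from the earlier post does simplify to zero under the default ruleset once products are expanded (a sketch; `expand = true` is the keyword that asks `simplify` to expand first):

```julia
using Symbolics

@variables x
a = (x + 1)*(x - 1) - x^2 + 1
simplify(a; expand = true)   # expanding the product lets the terms cancel to 0
```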

I think this difference in philosophy and scope is what makes the project interesting and worth pursuing. If this were just an attempt to build another end-user CAS system, I agree, why bother? We already have several good ones. But that’s not the endeavor here at all. The scope is limited to symbolic reasoning about Julia ASTs within the context of Julia’s type system. Specifically because that turns out to be really incredibly useful when doing numerical computations. Philosophically, rather than trying to be complete and automatic, this is limited and completely explicit (but with an easy way to get a reasonable usable set of defaults), because it’s not even meant to be used by end-users, it’s meant to be used by library authors. For comparison, the project I’m aware of that seems most similar in spirit is GiNaC, a C++ library for embedding efficient symbolic computation in C++ programs, which is explicitly not a CAS (GiNaC = “GiNaC is Not a CAS”).

This doesn’t mean there aren’t big challenges here. I just got off a (Julia Computing) compiler team meeting where we spent a bunch of time discussing how to meet the needs of Symbolics, which are not simple. The folks on the Symbolics side (Chris & Shashi) came in with an ask about changing Julia’s inheritance system in a pretty fundamental way, which all the compiler side folks pushed back on, but we counter-proposed something similar to Cassette, which can take existing code and recursively rewrite it, replacing method calls with other method calls. In the end, we’ll figure out something that works for Symbolics without being too specific to it, possibly a mechanism that can be used by both AD systems and for symbolic computations.

23 Likes

I guess we fundamentally disagree on what the core problem is. I think that incorporating (much) mathematical knowledge in an algorithmic context, in a practical, accessible framework is the core problem.
The thought that all of mathematics would fall at your feet if you only had the right programming language has been pursued unsuccessfully for many years, going back to IBM’s Scratchpad, and even continuing to this day where Wolfram seems to think that his Language should be used for everything. I think his language is hard to read/write/understand, and is of little intrinsic interest if it didn’t have interesting symbolic / graphical / etc commands as part of the system.

As far as being a natural language for direct embedding, Julia seems to fail your core principle even on the simple example of `3/4`. I am unfamiliar with other details, but I suppose you could argue about what “natural” means.

If Python and SymPy were to be made 10,000x faster, and given better numerical libraries, would Julia symbolics be unnecessary? What about 10x?

I looked at the paper to which you refer. There are far too many references to systems I am not familiar with, and my very limited acquaintance with DAE (differential algebraic equations) is many decades old.
I can say that the idea of generating numeric code from a CAS has been visited repeatedly (Google for Gentran Maxima), with the option of generating FORTRAN or C. It is also possible (without Gentran) to develop numerical code to be compiled in Lisp, but for people who have a huge investment in C or Fortran “solvers”, this Lisp mode is not appealing. Also (although not inevitably) the compiled Lisp numeric code is relatively naive with respect to multiprocessing or GPUs, and so may be slower than serial C by a factor that depends on the quality of the compilers involved.

So it would seem that the big deal is that your version of Gentran-in-Julia can generate numeric-code-in-Julia rather than C or FORTRAN (or, I suppose, in Python). It is that latter prospect which gives you leave to say 10,000x faster – Python fans are generating long-running purely numerical code in an interpretive system. (Could they use Cython and fix all of that?)
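For concreteness, the Gentran-like step on the Julia side is Symbolics.jl’s `build_function`, which turns a symbolic expression into runnable Julia code; a minimal sketch, with details illustrative rather than definitive:

```julia
using Symbolics

@variables x y
ex = x^2 + sin(y)

f_expr = build_function(ex, x, y)   # a Julia expression for (x, y) -> x^2 + sin(y)
f = eval(f_expr)                    # compile it like any other Julia function
f(1.0, 2.0)                         # ≈ 1 + sin(2)
```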

It seems that the right comparison for symbolics in Julia is to other packages in the business of modeling and differential equation solving, maybe electrical/mechanical computer-aided design. Not really what I study, though apparently of some significant interest. To compare it to a monster like Mathematica doesn’t make so much sense (although Mathematica fans may stray into some Modelica-in-Mathematica territory, I would not expect that to be effectively implemented totally in the “Wolfram Language”). If symbolics in Julia can just represent expressions on their way to being compiled for numerical evaluation, and has only rudimentary facilities in core techniques (like simplification), it might be OK for your use. I would, however, suggest a look at Gentran-like activities.

Good luck.
RJF

2 Likes

This idea strikes me as obviously silly — if that’s what it seems like we’re saying, then we definitely haven’t communicated clearly enough. A good language doesn’t give you mathematics for free, it gets out of your way and lets you focus on the actual problem while providing you the support you need to do that. Languages often fail at this and programmers end up spending more time wrestling with the language than solving real problems.

9 Likes

I think the point here is that, going back over the last 50 or 60 years, people have worked on CAS, and in the process have explored many paradigms of programming.
They have implemented various ideas including “one language for system builders and users” or different languages at different levels. They have explored expressiveness based on methods, functions, rules, algol-ish control structures, point-and-click menus, etc.

I suppose I would point to Axiom as the most highly developed in the mathematical/algebraic/category-theoretic stream of thought, and probably Mathematica in the pattern-replacement, rule-based paradigm.
I don’t know what would be the best exemplar of user-interface-menu-clicking.

Julia does not seem to offer any particular language idea that has not been used previously. So why do a CAS again? Frankly, claims that it is 10,000x faster than some system S are pretty likely evidence that system S is broken, ill-suited to the benchmark, or being misused.

It’s an interesting claim that if you use Julia then you can focus on the actual problem, something you could not do (conveniently?) in any previous CAS. I have not yet seen any evidence of this, i.e. comparisons of Maxima to Julia, Mathematica to Julia, or Axiom to Julia.
RJF

2 Likes

Well, I guess that’s where we disagree. But I’m obviously biased, and it’s understandable that to someone looking in from the outside it appears like “just another language”. I’m loath to link to my https://www.youtube.com/watch?v=kc9HwsxE1OY video yet again, but it’s still the best explanation around for why something new and different is going on here. The evidence is an ecosystem where program composition and code reuse actually work and happen on a regular basis. That doesn’t have anything specifically to do with building a CAS, but it’s why so much powerful software can be built with relatively little time and effort: ubiquitous, fast multiple dispatch allows totally independent software components to be composed into programs that do more than the sum of their parts. Yes, Common Lisp has opt-in multiple dispatch. And no, that’s not good enough. It’s not transformative unless it’s ubiquitous, and in Common Lisp it’s relatively slow and opt-in, both of which stop it from being ubiquitous. CLOS also lacks parametric types, which means you can’t use its dispatch where you especially need it for numerical work.

In any case, I want to thank you for your thoughts on this. I think the only way we’re going to see how this plays out is to try it and see what happens.

24 Likes

I don’t know what you mean by “opt-in” multiple dispatch in Common Lisp. You can define a method to dispatch on any number of arguments, not necessarily all. That’s a bad thing? And it’s relatively slow? Compared to? You have benchmarks? Which Lisp implementation? I think I’ll let you work on playing this out without me. Cheers
RJF

I am trying to follow this conversation, but I don’t think I grasped your point yet. Are you saying that what is being attempted is pointless, because if it could be done it would have been done already? So, since it couldn’t be done there’s no point in trying?

6 Likes

I believe what he’s talking about is that defgeneric is not considered to be the default; defun is. Furthermore, defgeneric sees comparatively little use in the ecosystem because it imposes a runtime penalty relative to defun (in every implementation that I’m aware of, at least). For example, addition and multiplication are not generic functions in most (all?) Common Lisp implementations for this very reason.

The fact that people who care about runtime performance will think twice about writing generic functions means that there are significantly fewer opportunities to add methods to other people’s code at exactly the call site where they are needed.

In Julia, a type-stable call to a generic function has no runtime performance overhead, so the language developers didn’t even provide a way to define a non-generic function. This leads to the whole ecosystem taking advantage of generic functions, and we believe this has had some very positive effects.
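A tiny sketch of what that looks like in practice (the `Meter` type is made up for illustration):

```julia
# every Julia function is generic: downstream code can add a method for its
# own type, and type-stable calls compile away the dispatch entirely
struct Meter
    value::Float64
end
Base.:+(a::Meter, b::Meter) = Meter(a.value + b.value)

Meter(1.0) + Meter(2.0)   # Meter(3.0), dispatched to the method above
```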

22 Likes

No, it’s pointless to say “we will build a CAS in Julia that will be better” based on claims of language superiority. (Julia: 10,000x speedup, multi-generic support,…). SageMath does it too. (Python: everyone knows Python so we can get 10,000 high school students to write a CAS over the summer.)
If you want to build a minimal system that allows for the representation of a mathematical expression or a program, and relatively simple operations (“automatic differentiation” is one of them), then sure. It’s not a big investment in time, if you know what you are doing, and you might feel comfortable even knowing that what you are doing has been done numerous times before. (And is free, open source, etc.)

That does not generalize: because some simple operations are easy does not mean that difficult operations are easy. I personally doubt that there is a case that some class of programmers will become umpteen times more productive (say, in building a CAS) using Julia than when they were using some other modern language in which they are skilled and experienced.
Julia may be a fine language. I have now read a few programs, but not written any in Julia. It’s not a miracle.

2 Likes

AHA, thanks for the clarification. I did not think that opt-in was related to the use of defgeneric, which is optional. I think of Common Lisp as a multi-paradigmatic language. You have many choices. Structures, Arrays, Lists, hashtables, … you also have functions, methods, macros, compiler macros, etc.

I think the point that is being made is that at compile-time the Julia compiler may resolve a generic function call down to a specific subroutine call, a call in which type-testing is omitted, because the types can be deduced at compile time. For example, a call to add(a,b), where add is “generic” can be compiled to (say) a floating-point add instruction, if the compiler can “prove” that a and b are double-float numbers. That’s nice, and I suppose there is a choice to be made in a language design as to whether the type of a, b, etc is dynamic (unknown until runtime) or instantiated and immutable at compile time.
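One can watch this happen in the REPL; a quick sketch:

```julia
using InteractiveUtils   # for @code_llvm (loaded automatically in the REPL)

add(a, b) = a + b
@code_llvm add(1.0, 2.0)   # the emitted body is essentially a single
                           # `fadd double` instruction: no type tests, no dispatch
```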

For a CAS, typically the type of almost every datum in some general representation is a structure of ‘type’ ‘algebraic expression’ – so generic operator dispatch might not be particularly important. A typical “dispatch” operation in Maxima (written in Lisp) might be: oh, I see the list `(sin (+ pi b))`, so I will look at the simplification program associated with `sin` to work on this. It would be possible to define each algebraic structure as a different type depending on its “head”, and thus the generic simplification program could have a type dispatch that goes off to sin, cos, tan, plus, times, etc… But the type then would rarely be known until run time, so what’s the point?
Oh, there are other types of algebraic representations… even in Lisp it might be a list or a vector, or perhaps an atom (symbol or built-in number type). Once you know an expression is a particular kind of number, or are otherwise “down in the weeds”, I think that calling ordinary functions is not such a burden – and they can be in-lined by the compiler sometimes. Like floating add.
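(For what it’s worth, the head-as-type encoding described above can be sketched in Julia as below; all names are hypothetical, and the run-time caveat applies to it just the same:)

```julia
# encode the expression "head" as a type parameter so that multiple dispatch
# plays the role of the simplifier lookup table
struct Term{Head}
    args::Vector{Any}
end

simplify_term(t::Term{:sin})  = t   # a sin-specific simplifier would go here
simplify_term(t::Term{:plus}) = t   # likewise for +
simplify_term(t::Term)        = t   # fallback: return unchanged

simplify_term(Term{:sin}(Any[:(pi + b)]))   # the head is only known at run time
```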

RJF

I think you’re imputing a more lofty claim than the claim that the Julia CAS developers are making. I’m not involved in the Julia CAS work, and I’ve only been casually following these threads, but my impression is that the primary goal is, to paraphrase, “we want the equivalent of SymPy, but faster”. The Julia CAS developers probably do have aspirations beyond that, but I don’t think anyone is claiming “we will build the best CAS that has ever been built”.

7 Likes

I quote:

> SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.

Given the goal as you interpret it, it seems that time might be better spent on writing a program STJ, a SymPy-to-Julia translator, which would then include all the handiwork of the SymPy team but might run umpteen times faster. I don’t know what the coding habits are for SymPy, and whether you might as well try to compile all of Python. It need not run as fast as pure Julia, but who knows.

In my experience, the important tools should be applied in order: first measure performance by benchmarking, then run a profiling program to find the bottlenecks. For example, if it turns out that your most important SymPy program spends 95% of its time in a library routine from GNU MPFR, then improving the speed of Python’s message passing, function calls, etc. will not matter.
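In Julia terms that workflow looks like this (a sketch; `expensive_computation` is a hypothetical stand-in for the code under investigation):

```julia
using BenchmarkTools, Profile

# hypothetical stand-in for the real workload
expensive_computation() = sum(sqrt.(abs.(randn(10^6))))

@btime expensive_computation()     # benchmark first...
@profile expensive_computation()   # ...then profile to locate the bottleneck
Profile.print()
```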

When someone says a program is 10,000x faster, my first thought is “doing what?”

What if the SymPy code were only, say, 2x slower than running an equivalent algorithm in Julia, in the same problem-solving context, on the same computer system…? Would you regret having spent the time reprogramming it? Of course the same question could be posed to the SymPy people – how about just wrapping up some existing CAS? As done by the SageMath people, who appear to have wrapped some version of Maxima as an alternative to SymPy. I would not be surprised if Maxima is faster than SymPy on some tasks, though of course you could find a really slow Lisp and make some slow uncompiled copy of Maxima (who knows) 10,000x slower.

RJF

This is a FAQ — you don’t get any performance boost by “transpiling” from another language, because the whole performance benefit of Julia derives from the front-end, not the back-end.

SymEngine is reportedly hundreds of times faster than SymPy for many tasks, and Symbolics.jl is reportedly matching SymEngine already on many benchmarks (but seems easier to extend and integrate with other Julia projects than SymEngine’s C++ codebase).

To quote one of the authors of SymPy and SymEngine, Python is indeed too slow, so it is hard for SymPy to compete in terms of speed. The idea that a new implementation in a compiled language can do much better hardly seems outlandish.

22 Likes

I spent 1,000+ hours over the past 10+ years using CAS (commercial & open source) to solve symbolic problems. I exchanged dozens of emails/phone calls w/ Mathematica’s tech support engineers.

My main concerns w/ current CAS are that they often:
A. give more complicated solutions than necessary (e.g. using special functions when not needed)
B. are unable to solve problems w/ known closed-form solutions

For my symbolic problems I don’t care that much about speed.
(For numeric problems I care a lot…)

I’m excited about the potential of Symbolics.jl because:

  1. A very capable group of people are designing a new CAS, in a great ecosystem.
    They have second-mover advantage: they can build on the latest literature/tech and try to avoid as many past mistakes as possible.
  2. I believe Julia is a great language for a CAS (type system, multiple dispatch, …)

The main reason (above) that I’m hopeful about Symbolics.jl is 1 (second-mover advantage), not 2 (the Julia language).
Even if Symbolics.jl weren’t written in Julia, I’d be excited about a new CAS by a strong group in a great ecosystem that aims to build on previous work.
@rfateman I really hope the Symbolics.jl team can get constructive feedback on CAS design from you and people like you.

10 Likes

7 posts were split to a new topic: CAS benchmarks (Symbolics.jl and Maxima)

FWIW this Discourse forum supports Markdown, single backticks for inline monospace and triple backticks for

```
blocks
```

and dollar signs for LaTeX. I realize some people prefer not to use it; just figured I’d mention it.

7 Likes

It might be worthwhile to remember that this is not about milliseconds versus microseconds: some years ago I was deriving analytical expressions resulting from products of symbolic matrices in Matlab (MuPAD). The process was started in the morning, and finally, by the morning of the next day, it choked and stopped with an error. I guess that would be the sort of problem for which one would like to see a memory-conservative and fast symbolic code.

6 Likes