SciLean language: Interactive and mathematically sound code transformations

For quite some time I have been playing around with the idea of what a next-generation scientific computing language might look like, and I would like to share my prototype, mainly because Julia users are exactly the kind of people I’m targeting and you might find it interesting.

I’m in no way here to compete with Julia; this is a purely experimental thing. But I would like to get some feedback, opinions, and ideas on what I should focus on.

The library is written in Lean 4, a language that allows you to attach precise mathematical meaning to your code. You are able to treat any code purely symbolically and transform it interactively. The high-level vision is to provide a computer algebra system that works directly on top of your code.

This is best demonstrated by an example: a simulation of a harmonic oscillator.

We first define the system’s energy (here V is a vector space equipped with a norm ∥·∥):

-- total energy of the oscillator: kinetic + potential
def H (m k : ℝ) (x p : V) := (1/(2*m)) * ∥p∥² + k/2 * ∥x∥²

Then we define what program we want to create:

def solver (m k : ℝ) (steps : Nat)
  : Impl (ode_solve (HamiltonianSystem (H m k))) := ...

This states that we want an ODE solver for the Hamiltonian system given by H. One of the very interesting things is that code can be purely symbolic, i.e. it can’t be executed, but it still carries meaning. Both ode_solve and HamiltonianSystem are non-computable functions, and you turn them into an executable program via a code transformation:

def solver (m k : ℝ) (steps : Nat)
  : Impl (ode_solve (HamiltonianSystem (H m k))) :=
by
  -- unfold the definitions of the Hamiltonian system and the energy
  simp [HamiltonianSystem, H]
  -- differentiate twice to obtain Hamilton's equations
  autograd
  autograd

  -- approximate ode_solve by a fixed-time-step Runge-Kutta 4 scheme
  rw [ode_solve_fixed_dt runge_kutta4_step]
  -- expose the number of steps as a runtime approximation parameter
  lift_limit steps "Number of ODE solver steps."; admit; simp

  finish_impl

Stepping through these lines in the editor shows the intermediate symbolic program after each transformation.

Roughly speaking, this runs automatic differentiation (the two autograd calls) to obtain Hamilton’s equations and approximates ode_solve with RK4.
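Concretely, for the H defined above, the differentiation steps recover Hamilton’s equations (written out here for reference; this is standard Hamiltonian mechanics, not SciLean output):

ẋ =  ∂H/∂p =  p/m
ṗ = −∂H/∂x = −k·x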

A bit more complicated system: an n-body simulation.

-- Coulomb-type potential; ∥x∥^{-1,ε} is an ε-regularized reciprocal norm
def Coloumb (ε strength mass : ℝ) (x : ℝ^3) : ℝ := - strength * mass * ∥x∥^{-1,ε}
argument x [Fact (ε≠0)]
  isSmooth, diff, hasAdjDiff, adjDiff

def H (n : ℕ) (ε : ℝ) (G : ℝ) (m : Idx n → ℝ) (x p : (ℝ^3)^n) : ℝ :=
  -- kinetic energy of all bodies
  (∑ i, (1/(2*m i)) * ∥p[i]∥²)
  +
  -- pairwise interaction through the regularized Coulomb kernel
  ∑ i j, Coloumb ε G (m i * m j) (x[i] - x[j])

argument p [Fact (n≠0)] [Fact (ε≠0)]
  isSmooth, diff, hasAdjDiff, adjDiff

argument x [Fact (n≠0)] [Fact (ε≠0)]
  isSmooth, diff, hasAdjDiff,
  adjDiff by
    -- the standard AD pass
    simp[H]; unfold hold; simp
    -- fuse the resulting sums together
    simp [sum_into_lambda]
    simp [← sum_of_add]

Currently, you need to annotate a function’s arguments manually for the AD system to generate gradients.
The annotations isSmooth and hasAdjDiff automatically generate proofs that the function is indeed differentiable and has a gradient (adjDiff = adjoint of the differential). I separate the differential from the gradient because there are some mathematical issues when working with infinite-dimensional vector spaces. Also, I’m using a regularized Coulomb potential, so I need to assume that the regularization parameter is non-zero, [Fact (ε≠0)]; without it I would get a compiler error saying that Lean could not generate the proof of smoothness.
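For intuition, a typical regularization of the reciprocal norm (my illustration; not necessarily the exact form SciLean uses) is

∥x∥^{-1,ε} := 1/√(∥x∥² + ε²)

which is smooth everywhere as long as ε ≠ 0 and reduces to 1/∥x∥ as ε → 0; this is why the assumption [Fact (ε≠0)] is needed for the smoothness proof.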
The annotations diff and adjDiff generate the code for the differential and the gradient. If you do not like the automatically generated gradient, you can jump in and apply some transformations manually; see adjDiff by .... The command simp[H]; unfold hold; simp is the standard AD command, but the resulting code contains multiple sums that can be fused together. That is exactly what the commands simp [sum_into_lambda]; simp [← sum_of_add] do. For example, sum_into_lambda is a user-defined theorem:

theorem sum_into_lambda {X Y ι} [Enumtype ι] [Vec Y]
  (f : ι → X → Y)
  : (∑ i, λ x => f i x) = (λ x => ∑ i, f i x)
  := sorry

Here sorry stands for the omitted proof of this theorem. You can skip proofs if you do not have the time to write them, or if you are confident enough that the theorem/rewrite rule is correct.
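For completeness, sum_of_add, which is applied right-to-left above to fuse two sums into one, would read along these lines (my reconstruction from its name and usage; SciLean’s actual statement may differ):

theorem sum_of_add {X ι} [Enumtype ι] [Vec X]
  (f g : ι → X)
  : (∑ i, f i + g i) = (∑ i, f i) + (∑ i, g i)
  := sorry

Rewriting with ← sum_of_add turns the right-hand pattern (∑ i, f i) + (∑ i, g i) back into the single sum on the left.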

A fun experiment is to also include a Lennard-Jones potential:
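In the same style as Coloumb above, such a potential might be stated roughly as follows (a sketch only: the regularized power ∥x∥^{-6,ε}, the parameter names, and the argument annotations simply mirror the Coulomb definition, so the actual SciLean form may differ):

-- Lennard-Jones potential 4·strength·((radius/r)¹² − (radius/r)⁶)
-- with the distance r = ∥x∥ regularized by ε
def LennardJones (ε strength radius : ℝ) (x : ℝ^3) : ℝ :=
  4 * strength * ((radius^6 * ∥x∥^{-6,ε})^2 - radius^6 * ∥x∥^{-6,ε})
argument x [Fact (ε≠0)]
  isSmooth, diff, hasAdjDiff, adjDiff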

The main point of using Lean is that you can easily state new transformation rules, bringing code transformation right to the user’s fingertips. Furthermore, you can play around with these code transformations interactively; it is really easy to add and remove rules, and this tremendously simplifies designing and debugging code transformations.

One of the great strengths of using Lean is that even though I currently skip 99% of all proofs, I still have tremendous confidence in my code. This is because Lean allows me to formally state what the code is supposed to be doing, and that helps a lot in organizing and designing the code.

My current focus is to get symbolic/automatic differentiation working. Why another automatic differentiation system? I’m not there yet, but I believe I will be able to do symbolic/automatic differentiation with higher-order functions: in particular, reverse-mode AD with higher-order functions and symbolic variational calculus, such as deriving the Euler-Lagrange equations by differentiating the action integral. I’m not aware of any system capable of doing that.
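For reference, this is the kind of derivation I mean: given an action S[q] = ∫ L(q(t), q̇(t)) dt, setting the variation of S with respect to the path q to zero yields the Euler-Lagrange equations

d/dt (∂L/∂q̇) − ∂L/∂q = 0

so the system has to differentiate through both the integral and the higher-order argument q.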

The long-term vision is to build a system where you state what kind of equation you want to solve and then get into a dialog with the computer on how to actually solve it. As the system understands the mathematics, I can imagine it being able to analyze your problem and offer equivalent reformulations, appropriate discretizations, etc.

This project is still at a very early stage, but I would love to hear your thoughts. Maybe someone will be interested and courageous enough to try it out.


And the project GitHub page: https://github.com/lecopivo/SciLean


Are you the author of SciLean?

Yes :)

Nah I’m all about that healthy competition. Makes both sides motivated for improvement.

I really like that. Julia is one among many languages that can’t achieve formal verification, which was introduced to me by the authors of our Supposition.jl package, which I use a lot as it helps me catch and adapt to corner cases. Such corner cases are some combination of mathematical and programmatic. In Lean 4, are the corner cases all mathematical?

What I mean by programmatic is, e.g., that I’ve defined my function for floating-point inputs but have to keep NaNs in mind, which Supposition.jl kindly reminds me of often.

How’s that been going? Autodiff is really interesting in numerical languages, particularly because it’s not floating-point accurate. What’s the experience been like in SciLean? The GitHub page states:

First class symbolic computation. Any function can be purely symbolic; functions like gradient, integral or limit are inherently non-computable. However, they carry the meaning of what the program should be doing, and we provide tools to manipulate them or approximate them with actually computable functions.

Can you explain this? I don’t fully understand. Gradients, integrals, and limits don’t get computed?

Keep up the great work!


Lean is designed to be useful for general software verification, so it can definitely deal with both mathematical and programmatic corner cases.

However, in SciLean I mostly deal with mathematical ones, as I’m only working either with arbitrary-rank arrays or generally over arbitrary vector spaces. They all have a very clearly defined interface, so all of the programmatic issues disappear.

Regarding floating point, the issue is that if you start reasoning about floating-point arithmetic you get stuck on very elementary problems. I’m interested in building a system that helps you create complicated machine learning or multi-physics software, so all formal reasoning is done over the mathematical reals. In the end, I do not have a program that is formally verified, but you can be pretty sure that certain types of bugs are not there.

So I cannot comment much on problems with floating points: as I mentioned, I’m doing all reasoning over the reals and then just instantiating the program for floats.
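As a toy illustration of that workflow (my own minimal example, not SciLean code): write a definition generically, carry out proofs with the reals plugged in, and execute the very same definition with Float plugged in:

-- generic over any type with + and *
def energy {R : Type} [Add R] [Mul R] (k x : R) : R :=
  k * x * x

-- reasoning would use R := ℝ; the executable program uses R := Float
#eval energy (2.0 : Float) 3.0  -- 18.000000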

Otherwise I think the progress has been pretty good. I’m generating reasonable derivatives for machine learning code. I have also ventured into niche topics like differentiating probabilistic programs or programs with parametric discontinuities. What I find particularly appealing is that adding support for a new feature is only a matter of adding a new theorem, instead of hacking on the guts of a compiler.

Functions like gradient, integral, or limit are purely symbolic, and you have to eliminate them or approximate them before running a program. In Lean there is no difference between a normal and a symbolic function from the point of view of the type system; the only difference is that the compiler refuses to compile symbolic functions.
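A minimal sketch of what that looks like in plain Lean 4 (my illustration, not SciLean’s actual definitions; ℝ comes from Mathlib):

import Mathlib.Data.Real.Basic

-- purely symbolic: type-checks and can be reasoned about,
-- but the compiler generates no executable code for it
noncomputable def gradient (f : ℝ → ℝ) : ℝ → ℝ := sorry

-- this also type-checks and carries meaning...
noncomputable def g : ℝ → ℝ := gradient (fun x => x^2)

-- ...but any attempt to run it, e.g. #eval g 1, is rejected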

Maybe reading the paragraph “Traditionally, there has been a clear divide between languages focusing on …” in the README would help, and I would also recommend going through the harmonic oscillator example above. This approach of mixing symbolic and numeric code is quite unusual, and it might take some time to sink in.


I’m curious whether this might be addressed by using hardware that supports more regular number systems (e.g. posits). Do you think that’s a possibility, or are there other ways this issue might be resolved?

You might want to have a look at this project for a mix of formal verification, floating-point arithmetic, and numerical methods: http://fost.saclay.inria.fr/. They managed to formally verify a numerical solution of the wave equation down to the floating-point level.

My understanding is that there is no clear and easy way to generalize this approach to more complicated numerical methods.

My main interest is to figure out how to use Lean to help you write programs faster, rather than to write software that is completely bug-free. I do not know how to automatically detect rounding errors in code, so I’m not focusing on this issue.
