Would it make sense for Julia to adopt refinement types?

Refinement types enrich a language’s type system with logical predicates that circumscribe the set of values a type describes, giving software developers a tunable knob for telling the type system which invariants and correctness properties should be checked in their code.

Rust and some other programming languages are experimenting with them.

Would it make sense for Julia to have them?

2 Likes

Sounds like what we call “interfaces” (e.g. AbstractArray). Although there is no official formalism, there are several attempts worth mentioning, like Interfaces.jl or RequiredInterfaces.jl

6 Likes

I may be wrong, but it seems that refinement types restrict the range of possible values of a type, while Interfaces.jl or RequiredInterfaces.jl help to reject types that do not implement certain methods.

For example, refinement types can restrict the range of values that an absolute-value function may return.

#[flux::sig(fn (n:i32) -> i32{v:0<=v && n<=v})]
pub fn abs(n: i32) -> i32 {
    if 0 <= n {
        n
    } else {
        0 - n
    }
}

https://flux-rs.github.io/flux/blog/01-introducing-flux.html

2 Likes

That is correct, but since refinement types are much more complicated to both implement and use than simpler interfaces (which we’d still need either way), I think @gdalle’s point is that the current problems in the Julia ecosystem would not be addressed by just adding refinement types.

Yes, that is correct. Note that the example given doesn’t translate to Julia, because of the overflow on -typemin(Int32). In Julia, fixed-width integer arithmetic is always defined as wraparound, while IIRC this panics in Rust in debug mode.

EDIT:

In release mode, rustc doesn’t even want to compile this: Rust Playground
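To make the overflow point concrete in Julia terms, here is a hypothetical `myabs` (not from the thread) mirroring the Flux example above; the refinement `0 <= v` from the Rust signature is silently violated at `typemin(Int32)`:

```julia
# Hypothetical sketch: a naive absolute value on Int32.
# Julia's fixed-width integers use wraparound arithmetic, so the
# intended "refinement" 0 <= result fails at typemin(Int32).
myabs(n::Int32) = 0 <= n ? n : -n

myabs(Int32(-5))       # 5
myabs(typemin(Int32))  # -2147483648: -typemin(Int32) wraps back to typemin
```

Note that writing `0 - n` as in the Rust code would behave differently again in Julia: the literal `0` is an `Int`, so the subtraction promotes to 64 bits instead of wrapping.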

3 Likes

Understood. Refinement types wouldn’t solve existing problems in the Julia ecosystem, but they could open Julia up to new applications, like Liquid Haskell did in the Haskell ecosystem.

1 Like

Personally I’m quite a fan of the idea of refinement types in Julia, and find the paper you linked rather compelling.

We often write code that requires positive, even, or non-empty values, and a type system that lets this level of detail be specified seems like a route to a higher standard of robustness than is possible using “plain” old interfaces.

The way I like to look at refinement types is that they’re compile-time contracts. My understanding is that the main resulting complexity is that you suddenly need something like Z3 in the compiler, and describing that as a tough sell seems like an understatement.

For a quick intro and a motivating example, I’d recommend this short article introducing Haskell’s refined library: Announcing the refinement types library – Functional programming debugs you
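For a flavour of the wrapper-type approach in Julia terms, here is a minimal runtime sketch. The names `Refined`, `refine`, and `unrefine` are hypothetical, loosely modeled on the Haskell library linked above; a real refinement system would discharge the predicate at compile time rather than checking it at runtime:

```julia
# Minimal runtime sketch of a refined wrapper type (hypothetical API).
struct Refined{T,P}
    value::T
    function Refined{T,P}(x::T) where {T,P}
        # P is a predicate function carried as a type parameter;
        # here it is checked at runtime, not proven at compile time.
        P(x) || throw(ArgumentError("predicate $P failed for value $x"))
        new{T,P}(x)
    end
end

refine(x::T, p) where {T} = Refined{T,p}(x)
unrefine(r::Refined) = r.value

ispos(x) = x > 0
r = refine(3, ispos)
unrefine(r)          # 3
# refine(-1, ispos)  # throws ArgumentError
```

Carrying the predicate in the type parameter at least means two differently-refined values have different types, so dispatch can distinguish them, but nothing here is statically verified.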

6 Likes

I’m understanding 10% of this at most, but I think I’m seeing at least 2 very different approaches going on. Rust’s seems to leave the original type alone, but the compiler errors when an asserted condition of its values can’t be proven. Haskell’s on the other hand wraps the type into a separate Refined type, and you must unrefine it to extract the value for normal computation. Apparently unwrapping is done at compile-time (no runtime cost), and the wrapping and verification can be done at compile-time with refineTH; there is also refineTH_ to refine then unrefine immediately, which I’m interpreting as a compile-time assertion.

Either way, it sounds like static analysis and AOT compilation, so I’m not sure how it’ll fit well into Julia. The Rust code examples I (think I) understand rely on computation of values knowable at compile-time; does that mean we need runtime verification otherwise, especially in interactive sessions? I’m definitely too unfamiliar with the topic and languages to know how formal verification is used and how refinement types play into that.

2 Likes

I would love to see refinement types myself. Having tinkered with Ada/SPARK in the past, I am convinced this is a great and straightforward way to avoid many bugs.

Once the annotations are in place, a lot can be done to improve program quality, from injecting dynamic tests, to a full formal verification of the entire program.

It is not clear if there is a single best approach, probably a mix of different approaches is best.

As a first step, we’d need a way to attach additional specifications to variables or return values of functions, such as

  • range
  • set membership
  • arbitrary predicates

I don’t think there is currently a way to add attributes to variables, but the user-facing syntax could look like this:

@annotation 0 < x < 10
@annotation myPredicate(x) == true

The annotations could also go into docstrings:

"""
N counts the 
@annotation n > 0
"""
n::Integer

Based on these annotations, libraries could generate tests, add dynamic checks, or try to statically prove the assertions.
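As a sketch of the dynamic-check option, here is a hypothetical `@annotation` macro matching the syntax proposed above; a real tool could instead record the predicate for a test generator or a prover:

```julia
# Hypothetical @annotation macro that lowers the predicate to a runtime check.
macro annotation(pred)
    quote
        $(esc(pred)) || error("annotation violated: ", $(string(pred)))
    end
end

function reciprocal(x)
    @annotation 0 < x < 10
    1 / x
end

reciprocal(2)    # 0.5
# reciprocal(20) # errors: annotation violated: 0 < x < 10
```

Because the predicate is an ordinary expression handed to a macro, the same surface syntax could feed very different backends, which fits the idea below that verification tools need not live in the core.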

Such checks would not have to be part of the core, different verification tools could use different approaches. As an example, one tool could verify ranges, while another tool could verify physical units.

Some libraries to study for inspiration:

https://docs.adacore.com/spark2014-docs/html/ug/en/source/type_contracts.html#predicates

These annotations could also be used, very naturally, to implement design by contract:

https://www.cs.cmu.edu/~csd-phd-blog/2023/rust-verification-with-verus/

They would also be useful for physical units, e.g. through Unitful.jl.

5 Likes

The authors of Flux made a presentation last year.

1 Like

Stumbled on an example where it seems like refinement types would be useful to improve Julia’s type inference:

f(t::Tuple, n::Bool) = t[10 + n]

Julia would ideally be able to prove that the below abstract call is type stable, but it currently fails:

julia> Core.Compiler.return_type(f, Tuple{Tuple{Bool,String,Vararg{Int}},Bool})
Union{Bool, Int64, String}

The call will always return Int (if it returns), because the index 10 + n is always either 10 or 11, thus the index is always greater than two.

I guess Julia isn’t able to prove this because, after n gets converted to Int, Julia has forgotten about the fact that n can only take two values: zero or one. Some kind of refinement types (perhaps purely for internal use in the compiler) could presumably help Julia remember that the result of convert(::Type{Int}, ::Bool) is always nonnegative, and thus that 10 + n is always greater than two.
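For comparison (a sketch, not from the thread): manually splitting on the two possible Bool values recovers the tight result that such an internal refinement would let inference compute automatically:

```julia
f(t::Tuple, n::Bool) = t[10 + n]           # index widens to Int, range is lost
g(t::Tuple, n::Bool) = n ? t[11] : t[10]   # explicit split keeps constant indices

T = Tuple{Bool,String,Vararg{Int}}
Core.Compiler.return_type(f, Tuple{T,Bool})  # Union{Bool, Int64, String}
Core.Compiler.return_type(g, Tuple{T,Bool})  # Int64
```

With the literal indices 10 and 11, constant propagation can look up the Vararg element type directly, which is exactly the information lost once `10 + n` widens to `Int`.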

EDIT: issue on GitHub:

3 Likes

As a sidenote, I understand formal verification is not possible in Julia, and Supposition.jl seeks to assist in spite of that.

Instead of applying more restrictions to the types you implement, you can alternatively apply appropriate tests with Supposition.jl to check that your algorithms output numbers the way you expect. Property-based fuzz testing.

The core issue here is that our lattice only has Const and PartialConst, but not “Set of Values” or “Range of Values” representation.

So we can represent Union{Int, Float64}, but not Union{Const(false), Const(true)}.

1 Like

IIRC @Sukera had a brief look at Refinement Types in Julia, and may have run into exactly that limitation with the lattice.

1 Like

Nah, I was experimenting with an entirely separate lattice. Regular inference was already taking care of Const and PartialConst. The problem was that the abstract interpreter, at the time, did NOT like having to lug around that separate lattice while also doing its own inference, and the docs about how to extend that were (are?) sparse at best.

I still think it would be useful to stabilize that interface further, which would potentially open up a whole bunch of likely useful analyses & optimizations. Maybe things are better now; there were some interesting talks in that direction (not refinement types in particular) at this year’s JuliaCon (specifically, this and this talk have given me some ideas for projects).

All that being said - I don’t think adopting refinement types at the core level & by default would be a boon. It’s just way too costly for the side-effect-happy, non-trivial, non-functional code that is idiomatic in Julia. IMO it’d be more useful as a distinct check as part of a testsuite.

1 Like

Isn’t Val meant to do more or less this? Bringing values into the type system?

In what sort of sense? It seems to me the main thing you’d be missing out on for casual script-y code is potential optimisations and no-throw-type guarantees.

TL;DR of the below: In pretty much every sense, computational & dev time.


It’s as you say above - to handle the general case, you need an SMT solver like z3 at potentially every dispatch. Not to mention that these sorts of things really dislike modelling mutability, because that introduces a temporal component. Solvers don’t like temporal components (or rather, YOU shouldn’t like temporal components here), and now you suddenly need to add invariants manually to help the solver along. Not a good fit for script-y code.

Btw, even the functional languages struggle with these temporal invariants. Just take a look at how much additional code you need to prove leftpad! And that’s just for an innocent little function like leftpad, just imagine how much more you’ll need for something that you actually care about, nevermind having the solver actually check that the model works out.

And then there’s the horror of actually composing (temporal) specifications. This isn’t (to the best of my knowledge) automatable.

Yes, sort of - but even then you can easily get Any back out and you still don’t have mutability. Also, lots of things that you’d think you can lift like that you actually can’t; like String (it’s technically mutable but we all pretend otherwise), structs containing Symbol, and the like. Lifting things to type parameters very quickly runs into inference just giving up and dealing with it “later”. So that doesn’t help with actually proving properties about your code either.
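To make the Val point concrete (a hypothetical example): the value becomes available for dispatch, but only when it is already known at compile time, and the property check here still fires at runtime rather than being proven:

```julia
# Hypothetical example of lifting a value into the type domain with Val.
halve_even(::Val{N}) where {N} =
    (iseven(N) || throw(ArgumentError("N must be even")); N ÷ 2)

halve_even(Val(4))   # 2
# halve_even(Val(3)) # throws ArgumentError at runtime, not compile time
```

So Val gives you value-indexed dispatch, not a proof that the value satisfies a predicate - which is the gap refinement types are meant to fill.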

3 Likes