Towards Rust-level feedback on type stability

I am thinking about the best way to get continual feedback with more detailed type hints in VSCode, similar to how it is done with rust-analyzer.

Background

For those who haven’t used Rust, here’s what the feedback loop looks like when you are coding:

I really want to have the same thing in Julia. I want the IDE to immediately highlight type instability while I am writing the function, rather than me needing to debug it later. Type instability should be automatically apparent with an ugly warning by my IDE.

Now, since Julia is very dynamic, this is obviously a bit trickier. Right now we can get something similar to Rust by connecting an external REPL and running Cthulhu’s descend on a function (thanks to @Zentrik and @pfitzseb’s work on this with @tim.holy’s Cthulhu). This looks like the following:

which I can set up with:

  1. Launch REPL in my Julia project
  2. VSCode → Command Palette → Connect External REPL, and paste into REPL
  3. Load my package as well as Cthulhu
  4. Instantiate an example input
  5. Call @descend rand(tree) and descend once to enter the function shown in the screenshot.
  6. Types are now displayed on this function in VSCode.

This is awesome but I am wondering if we can do even better. I want type information to automatically show up while I am typing out the function, so I can immediately and effortlessly be alerted to type instability, rather than needing to manually descend to find it (which is still not convenient enough to run on all functions; I only end up descending on functions that show up in profiler results).

Proposal

Here’s one idea that I want opinions on. Could we have a special comment that tells Julia LSP to run Cthulhu (or simply @code_warntype) for the function with a given example input?

For example:

function foo(x)
    out = []  # Vector{Any}: element type cannot be inferred
    for xi in x
        push!(out, xi)
    end
    push!(out, "bar")
    out
end
#! descend: ([1, 2, 3],)

This “descend” would tell the Julia language server to continually run Cthulhu.descend(foo, ([1, 2, 3],)) whenever the code is changed. (Or just @code_warntype, but I feel like multiple levels could be useful)
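As a rough sketch of what the language-server side might look like: scanning a file for these directives could start as a simple line-based pass. Everything here, including the `find_directives` name and the exact regexes, is a hypothetical illustration of the proposal, not existing LSP code:

```julia
# Hypothetical sketch: pair each `#! descend: <args>` directive with the
# name of the most recently defined function above it.
function find_directives(source::AbstractString)
    directives = Pair{String,String}[]
    lastfunc = ""
    for line in split(source, '\n')
        # Match `function name(...)` or one-line `name(...) = ...` definitions.
        m = match(r"^\s*function\s+(\w+)|^\s*(\w+)\s*\(.*\)\s*=", line)
        if m !== nothing
            lastfunc = something(m.captures[1], m.captures[2], "")
        end
        # Match the proposed directive comment.
        d = match(r"^\s*#!\s*descend:\s*(.+)$", line)
        if d !== nothing && !isempty(lastfunc)
            push!(directives, lastfunc => strip(d.captures[1]))
        end
    end
    return directives
end

src = """
function foo(x)
    out = []
end
#! descend: ([1, 2, 3],)
"""
find_directives(src)  # returns ["foo" => "([1, 2, 3],)"]
```

The server would then only need to `Meta.parse` the argument tuple and hand it, together with the function, to `Cthulhu.descend` (or `code_warntype`).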

The descend could also be more complex to allow different input types. For example,

foo(s, x) = do_something(s) + x
#=! descend:
using MyPkg

(MyPkg.MyStruct(), "foobar")
=#

I think comments are the right place to put this kind of information, because that is where you already control linters and formatters. It also slots nicely into the developer experience: I don’t have to set up a separate script that runs these checks (which would discourage me from doing so) – I can do it right in the same file with almost zero effort.

Other ideas:

  • Maybe there can also be some way to specify the module context, with the top-level Main being the default.
  • In the future I think you could also specify multiple descend calls. In VSCode I believe there is the ability to cycle between code warnings for a given function (?), which would let you quickly switch between seeing the warnings for different example inputs. The key point is that you want these to show up immediately while writing the code, rather than sometime after.
  • One issue with this scheme is that beginners, the most likely victims of type instability, might not know about it. Rust is nice in this way because it requires zero configuration other than installing rust-analyzer. But I’m not sure what other option we have, unless the beginner were to specify explicit types in the signature. Maybe the auto-complete template for a function could generate this or something.

Interested in other ideas on this. Basically I am trying to figure out how to make Julia’s dev experience offer as much feedback as Rust’s, even with its extra dynamism.


Edits:

  1. Changed syntax to use #! descend: rather than just # descend:
  2. Added block version with #=! descend:
33 Likes

Cross posting from Immediate, Rust-level type hints · Issue #3597 · julia-vscode/julia-vscode (github.com)
This is along the lines of what I wanted to do initially, but one reason I switched was that there was a desire not to load/execute user code in the LSP.

This seems like a good idea if we can deal with that.
Alternatively, the extension could just take the descend call you wanted to run, e.g. Cthulhu.descend(foo, ([1, 2, 3],)), and run it in the REPL in the background. I think this should work and achieve a similar result. (EDIT: I’m less confident in this approach now, as it assumes that foo has already been defined in the REPL, so we would need some way to automatically define foo – and presumably it would need to be defined in a separate REPL from the normal one, otherwise the user would be able to observe it.)

The other reason I switched to using the REPL was so we could use variables and redefined functions from the REPL, but with the alternative approach maybe we could continue to support that. Or at worst we don’t, but you can still run @descend like we do currently.

Another concern is whether Cthulhu would be fast enough; iirc the main reasons it’s slow are pretty-printing types and compilation. Presumably if we’re rerunning Cthulhu a lot we could ensure it’s caching things appropriately so that it stays fast.

Also, I’m not sure a comment in a file is the best place for this in the long term, but we can of course change it at some point. Whilst Cthulhu only supports descending in a single file for now, if it did support multiple files it could get tricky to know which descend to run when there are several, or which file the descend comment lives in.

To get @descend to automatically descend on all (statically) called functions, someone would need to tackle Show Inlay Hints in VSCode for all open files · Issue #536 · JuliaDebug/Cthulhu.jl (github.com). I don’t think it should be particularly difficult.

I believe there are plans to introduce a main/entrypoint function for static compilation, so the automatic option could be to descend on that and then let users override it.

I imagine Cthulhu doesn’t work if your functions aren’t finished, e.g. you haven’t written an end yet, but maybe JuliaSyntax can fix those errors before passing the code to Cthulhu.

Having something like that would be really cool. The definition of the calls could follow something of the @testitem structure. Something that could be placed clearly alongside the function:

foo(x) = x .+ 1

@test_inferred "foo" begin
    foo([1])
    foo(Any[1])
end

such that the calls of foo defined in the block could be used for testing inference and at the same time feeding the data to the IDE.

1 Like

Maybe a macro like @lmiq suggested would be better. However I don’t think it should be parsed by anything other than the LSP, which makes it a bit tricky. (what macro symbol would you use? And would you have to depend on some package to get it?)

This is why JuliaFormatter has these comment-based commands like

#! format: off
...
#! format: on

so you don’t need to have your package depend on JuliaFormatter to disable/enable specific behavior for blocks of code, you can just leave it in a comment and have the formatter itself parse the contents.
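For example, hand-aligned code can be protected from reformatting with exactly these markers; the formatter passes the region through untouched while the code itself still runs normally:

```julia
#! format: off
# JuliaFormatter leaves this hand-aligned matrix literal alone,
# instead of collapsing it onto one line:
M = [1 0 0
     0 1 0
     0 0 1]
#! format: on
```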

In general I really think that it would be better to have these type inference annotations placed right next to the function definition itself. If I have to keep track of some other file that has all the inference statements, I will probably end up just not doing it. It needs to be *right there* for me (and likely other users) to consistently want to use it.

For testing, I can always just use Test.@inferred within my test file. But this proposal would be exclusively for type analysis in the IDE.

So I guess I might prefer the comment-based version?

1 Like

@code_warntype seems very fast in comparison; maybe that by itself could be a first-order thing to try out for this? I’m not sure what the differences are here, or why Cthulhu takes much longer.
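For reference, the `@code_warntype` route needs nothing beyond the `InteractiveUtils` stdlib, so a first prototype could just capture its output and scan it for non-concrete types. A minimal sketch, reusing the `foo` from the original proposal (the string-scanning heuristic at the end is just an illustration, not a real diagnostic):

```julia
using InteractiveUtils

function foo(x)
    out = []            # Vector{Any}: the source of the instability
    for xi in x
        push!(out, xi)
    end
    push!(out, "bar")
    out
end

# The functional form takes an IO, so the report can be captured in a
# buffer instead of printed to the terminal.
buf = IOBuffer()
code_warntype(buf, foo, (Vector{Int},))
report = String(take!(buf))
occursin("Any", report)  # true: the Vector{Any} shows up in the typed IR
```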

We have thought about something like this before, but it’s never really gone anywhere due to no-one interested really having time to push this forward.

Generally, I really don’t want the normal LS process loading any user or package code. That said, we can create an additional communication channel from the LS process to the user REPL (potentially a bad idea, because people tend to react badly if their editor takes over their REPL for some computation) or even an extra “untrusted” process that’s specifically tasked with loading and analyzing user code.

Anyways, let’s talk about implementation with that out of the way. I can think of roughly three iterations here, with increasing “correctness” and complexity.

1. REPL + Revise + LS

Requirements: Revise and Cthulhu are being used, RPC channel between LS and REPL process. The user must ensure that the file they’re currently editing is tracked by Revise.

You’ll get hints when you save a file, because then Revise will re-eval the function you’re currently editing. The LS can send a request to Cthulhu in the REPL process to descend into that method (don’t even need that additional entrypoint info, provided there’s a sane type signature).

2. REPL + LS

Requirements: Cthulhu is loaded in the REPL, RPC channel between LS and REPL process.

Same as above, but the LS extracts the method definition and evaluates it in the REPL context on demand. This is a lot trickier than the above, because we also need to ensure all relevant context is available and up-to-date.

3. LS + sidecar process

Requirements: Extra sidecar process for the LS that is allowed to load user code (+ RPC channel).

Actually very similar in complexity to 2., but also

  • doesn’t co-opt the user’s REPL
  • can potentially be very memory/CPU hungry

All in all, 1. seems pretty feasible and might be a good-enough experience imo.

5 Likes

Just to bring up this point:

I can understand the security implications, but what about having it be opt-in? For comparison, rust-analyzer actually does load and execute arbitrary user code: User Manual. I think it’s just hard to get full type information without doing this.

the @testitem framework exactly provides the functionality to put the tests just side by side the function, and they appear in the VSCode test suite. It is so much more convenient than the default test organization. I was thinking of something similar.

I was thinking while coding some non-critical stuff here, and I realized that I do not want this type of annotation in VSCode for every function. Rather I would want it to be an opt-in for some specific functions for which type-stability is important.

1 Like

I’m less worried about security, and more about stability. Though tbf, this might be less of an issue by running the LS-internal functions in a fixed world, so that user code can’t mess with ours. There’s still a chance to crash or deadlock the process though.

2 Likes

Obviously this is your call if you’re the implementer but I must say from a user perspective (3) sounds much better.

I would prefer to offload all the complexity of IDE state management onto you in your own process, rather than having to think how my REPL state is going to affect the state of my IDE and vice versa.

5 Likes

Makes sense to me. And I guess the security wouldn’t even be an issue since there are already many known ways to do code injection in VSCode (hence the “Do you trust the authors…” pop up). i.e., no harm from having it be the default.

And like @jar1 I really like option 3. I think you could use OS utilities to limit the total CPU and memory usage of the process, which could be user-configurable.

This is much nicer as a default. The fewer barriers in the way of me finding type instabilities, the better.

Aside on the reason this is so important – it reminds me of this story from ‘Atomic Habits’ when the author is discussing how tiny changes to a product can introduce massive behavioral change:

The analogy here being that personal hygiene is sort of like type stability. The ‘fresh smelling soap’ would be analogous to automated type stability validation :smiley: In which case the entire Julia community might have much lower rates of type-unstable code!

Purely from that tiny change, and lowering the effort by some small %.

6 Likes

I would also much prefer option 3., but certainly don’t have the time to implement it at this time.

Option 1. is much more realistic at the moment, unless someone volunteers to do the design and implementation work necessary for 3 (both of which I’d be more than happy to help with).

Edit: That said, I’m not sure option 3 is the right thing to spend a massive amount of time on. Ideally we’d have the same hooks into the compiler pipeline as rust, which would simplify much of the LS implementation (I’d love to re-use lowering, scope-analysis etc); trying to solve the problem from that direction seems more promising to me.

5 Likes

Thanks for explaining. Option 1 sounds fantastic too if you are up for it!

Great point. Although I guess this sounds like a longer term project (?). Certainly would result in better features in the end though.

Isn’t the Julia LS transitioning to JuliaSyntax? What benefits would that bring? I know that the Rust LS improved when it adopted ideas that JuliaSyntax also implements.

Anyway, Rust forces developers to annotate functions, which makes your LS’s job easier. So I wonder if it wouldn’t be better to compare with an LS from a programming language that doesn’t force developers to annotate functions.

1 Like

@pfitzseb if you have a chance do you think you could describe how individuals might contribute to this?

That said, I’m not sure option 3 is the right thing to spend a massive amount of time on. Ideally we’d have the same hooks into the compiler pipeline as rust, which would simplify much of the LS implementation (I’d love to re-use lowering, scope-analysis etc); trying to solve the problem from that direction seems more promising to me.

I think if you could try to split this up into a list of tasks that individuals could contribute to, we might actually be able to recruit a lot of volunteers, given how awesome the final result would be.

Perhaps you could start by coding up a framework or outline for how this would work, which would depend on undefined functions with specific tasks, and people could contribute those individual functions to it?

Even now I feel a very strong urge to try to get this working so that I can use it in my own workflow… But I don’t realistically have time to do the whole thing – maybe just 10 hours of work (which includes learning the existing codebase). So how could I effectively contribute those 10 hours to help get this working?

A similar constraint probably applies to other volunteers as well. In other words, say that you had 5 people contribute 10 hours of work each, how would you split up this task?

4 Likes

I put up the fe/jet branch of LanguageServer.jl some time ago which includes diagnostics from JETLS. For example:

16 Likes

This is great @fredrikekre! Are you planning to submit a PR for this?


Slightly unrelated… based on a suggestion of @caleb-allen in the other thread – I think it would be great to have some stable keyword (/ “effect”?) in Julia that would throw an error if there is any runtime dispatch happening anywhere within the encapsulated code.

For example,

@stable function f(x)
    if x < 0
        return abs(x) * 2.0
    end
    return x
end

in which case

f(-1)

would throw an error, rather than silently do runtime dispatch in the background.

Could also have it work on a block of code like

@stable begin
    # Any runtime dispatch here (or any downstream function)
    # would trigger an error
end

While it’s true there is Test.@inferred, this only looks at type instabilities in the return value.
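To illustrate that limitation: a function can be type-unstable internally and still pass `Test.@inferred`, as long as the return type narrows back down to something concrete. A small sketch, using nothing beyond the `Test` stdlib (the `sum_to_float` function is a made-up example):

```julia
using Test

# Internally unstable: `acc` starts as Int and widens to Float64 inside
# the loop, so its type is Union{Int, Float64} there. The explicit
# conversion at the end makes the *return* type concretely inferrable.
function sum_to_float(x)
    acc = 0
    for xi in x
        acc += xi
    end
    return Float64(acc)
end

@inferred sum_to_float([1.5, 2.5])  # passes despite the internal instability
```

So `@inferred` catches unstable return types in tests, but the IDE-level feedback proposed here would also surface the instabilities that never escape the function body.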

4 Likes

Hacking on an implementation of @stable in this thread: Improving speed of runtime dispatch detector

1 Like