I have been very excited to see the recent progress on the Julia VSCode plugin and I want to thank the people behind it for their great work.
I am particularly interested in the linter, which I think could really boost my productivity.
However, I have been trying to use it on several of my projects and I still encounter many spurious errors. Although the situation seems to improve with each release, I would like to understand better which of those errors are bugs I should report and which stem from fundamental limitations of the linter.
More precisely, I would like to know the following:
- Is there any place where the known bugs and limitations of the linter are listed? If not, I would find such a list very useful, especially for new users.
- Can you define a reasonable subset of the Julia language such that the linter should raise no spurious errors on code restricted to that subset? I am asking both about the current linter and about future iterations. For example, I would not expect the linter to deal well with `@eval`. It is also unclear to me whether code like the following is linter-friendly:

  ```julia
  const Module = (ENV["VAR"] == "A") ? ModuleA : ModuleB
  ```
- More generally, I would be curious to know how integrated the linter and compiler toolchain are, as it would certainly shed some light on the fundamental limitations of the linter.
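To make the conditional-module example above concrete, here is a self-contained sketch (the module bodies and the `greet` function are hypothetical, standing in for `ModuleA`/`ModuleB`). The binding is only decided at run time, so a purely static linter may be unable to verify references through it:

```julia
# Hypothetical stand-ins for ModuleA / ModuleB from the question.
module ModuleA
    greet() = "A"
end

module ModuleB
    greet() = "B"
end

# The module bound to `Backend` depends on runtime state (an environment
# variable), so a static linter may flag accesses like `Backend.greet`
# as unresolved even though they work at run time.
const Backend = get(ENV, "VAR", "A") == "A" ? ModuleA : ModuleB

println(Backend.greet())
```

This runs fine, but whether the linter can follow the `const` binding through the ternary is exactly the kind of question I would like answered.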
Generally, I would look at the issues, e.g.
That said, linters are imperfect and you should complement them with other solutions for code QA (especially unit testing, and reviews from others if feasible).
The GitHub issues do indeed partially answer the first part of my question (a list of known bugs and limitations). However, I would still be interested to know more about which language constructs to avoid in order to make the most of the linter.
Clearly, a linter cannot replace unit tests and code reviews. In my case, I don’t even see the linter primarily as a code-QA tool but rather as one that gives me immediate feedback on typos and small mistakes as I type. That seems particularly important given Julia’s latency issues. (I know this is partly solved by Revise, but Revise is imperfect, and my personal experience is that refactoring code is significantly more painful in Julia than in many other languages.)
Take a look at https://github.com/julia-vscode/StaticLint.jl/blob/master/src/linting/checks.jl to see how the checks are currently implemented and get a better sense of what to expect. As far as I know, the linter will always have trouble with generated code and macros. Personally, I’d favour readability over lintability.
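To illustrate why macros are hard for a static linter, here is a hedged sketch (the `@make_getter` macro and the `Point` struct are invented for illustration, not part of any real API). The function name `get_x` only exists after macro expansion, so a linter that does not expand macros would see it as undefined:

```julia
struct Point
    x::Int
end

# A macro that defines a getter whose name is computed at expansion time.
macro make_getter(field)
    fname = Symbol("get_", field)
    # esc() so the new function is defined in the caller's scope.
    return :( $(esc(fname))(p::Point) = getfield(p, $(QuoteNode(field))) )
end

@make_getter x          # defines get_x, but only after expansion

println(get_x(Point(42)))  # prints 42
```

A purely syntactic linter sees `get_x` at the call site without ever seeing its definition, which is one reason macro-heavy code tends to produce spurious "missing reference" errors.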
I think the biggest problem we have right now is that we don’t handle any even slightly more involved environment setups correctly (test folder, anyone?). We have plans to fix that, but they are not trivial to implement at all, so I think that will keep us busy for a while. So that in my mind is the “big” task we have right now that a lot of other things depend on.
The other “big” question is whether we should/can use type inference in the linter… That could potentially really help find bugs, but it is also not clear how to implement that at all.
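As a rough sketch of the kind of signal inference could provide (not a description of how StaticLint works today; `bad_len` is a made-up example): if inference proves a method can only ever throw, its inferred return type is `Union{}`, and a linter could flag the call site.

```julia
# Int + String has no method, so this function always throws a MethodError.
bad_len(s::AbstractString) = length(s) + "1"

# Base.return_types asks inference what the method can return; Union{}
# means "provably never returns normally" — a strong hint of a bug.
inferred = Base.return_types(bad_len, (String,))
println(inferred)
```

Wiring this into an editor-speed linter is a very different problem from running it once in the REPL, which is presumably part of why it is "not clear how to implement that at all".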
My experience is the opposite: I find that refactoring is super-easy in Julia. Especially because of zero-cost abstraction, which allows a clean separation between interfaces and implementations. I often refactor “live”, using Revise. YMMV.