Linting stage in CI pipeline - which to use and how?

Hello, I have a CI/CD pipeline in GitLab, and I want to run a linting stage to fail the build on detected bugs. What is the proper way to do this?

Found Lint.jl but it’s abandoned (last update 2019)

Found StaticLint.jl but the usage documentation is lacking - it doesn’t say how to run linting for a project and analyze the results.

Found JET.jl vs Julia in VS Code, but not sure if it’s the right tool or how to integrate it into a build.

Found these posts:

But none of them mentions a command that reports whether the build should pass or whether serious errors were detected.

I would suggest using JuliaFormatter.jl and JET.jl.

In the following comment there’s a snippet for JuliaFormatter on GitLab CI (note the workaround for JULIA_DEPOT_PATH): Add option to ignore files matching patterns · Issue #574 · domluna/JuliaFormatter.jl · GitHub

And here is an example of running JET.jl from the command line for a project (running in the project dir.):

julia --project --eval 'using Pkg; Pkg.activate(); Pkg.add("JET"); Pkg.activate("."); using JET; @show report_package(Pkg.project().name)'

The @show is there because otherwise the output will be silenced (unlike when running in the REPL).
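For reference, the one-liner above could be wrapped in a GitLab CI job roughly like this (the job name, stage, and image tag are illustrative, not from the templates mentioned below):

```yaml
jet:
  stage: lint
  image: julia:1.10
  script:
    # Install JET in the default environment, then analyze the current project.
    - julia --project --eval 'using Pkg; Pkg.activate(); Pkg.add("JET"); Pkg.activate("."); using JET; @show report_package(Pkg.project().name)'
```

As written this only prints the report; failing the build on detected errors requires checking the result and calling exit with a non-zero code, as discussed below in the thread.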

I am about to add JuliaFormatter and JET job templates and jobs to IHP Systems / Julia / Julia GitLab CI templates · GitLab


Hi, thanks for the answer!

I already integrated JuliaFormatter, just used it in a separate step.

Tried using JET, but the output from report_package is very long and complex.
It found “possible errors” which are in an external package.

  • How do I mark that package to be ignored?
  • How do I generate a summary report of how many warnings/errors I had?
  • How do I stop the build when there are critical errors?
  1. Regarding dependencies: issues that JET.jl detects in dependencies are a common concern when using JET.jl. In that case, it might be better to run report_file on test/runtests.jl: cf. the warning on analyze_from_definitions (which is the default for report_package) in Configurations · JET.jl
  2. Not sure.
  3. Check the result returned by report_package/report_file and call exit(status_code) with a non-zero status_code.

Sounds like setting target_defined_modules = true might also help you.
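A minimal sketch of what that could look like, assuming the package name ("MyPackage") is a placeholder for your own:

```julia
using JET

# target_defined_modules = true restricts reports to code defined in the
# analyzed package itself, filtering out findings from dependencies.
result = report_package("MyPackage"; target_defined_modules = true)
@show result
```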


How? Is there a guide to what is returned there? I couldn’t figure it out from the documentation.

Yes, just found it myself too!
So limiting it to my package indeed removes the external error.

However, now I have the opposite problem :sweat_smile:
All the sample errors are already caught during the package precompilation stage.
What errors would not be caught by compilation but would be caught by JET.jl?

You could probably adapt this gist to your needs.


Regarding checking the result returned by JET: it looks like the number of reports contained (somewhere) within the JET top-level result can be used to judge whether errors were found by JET: JET.jl/print.jl at master · aviatesk/JET.jl · GitHub

There should be plenty of errors that JET can find which will not be found by precompilation - check the docs.
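One illustrative example (not from the docs, just a sketch): a call whose argument types admit no matching method compiles and precompiles without complaint, because the error only surfaces when the method lookup happens. JET's abstract interpretation catches it statically:

```julia
using JET

# Precompilation accepts this definition, but calling it throws
# MethodError: no method matching +(::Char, ::Char), because sum
# over a String reduces the characters with +.
sum_chars(s::String) = sum(s)

# JET flags the missing method without ever running the code.
@report_call sum_chars("abc")
```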



I ran it, and getting something like this:

LanguageServer.Diagnostic(LanguageServer.Range(LanguageServer.Position(2, 0), LanguageServer.Position(2, 26)), 3, missing, "Julia", "The included file can not be found.", missing, missing)
  1. Is this the format of the log messages?
  2. Does it just mean the file was loaded incorrectly?

That means that an include statement on the third line of your source file couldn’t be resolved, afaict.

You are correct, this line in the file has an include("file.jl") statement.
I thought it was a matter of search path, but changing directory, or even putting the absolute path in include(), doesn’t work; it still gives the same error.
How would you go about debugging LanguageServer errors like this?

Hi @stemann, thanks for that reference!

I used the code from IHPSystems GitLab CI templates

Specifically this:



  errors_found = !isempty(result.res.toplevel_error_reports) || !isempty(result.res.inference_error_reports)
  exit(!errors_found ? 0 : 1)

It reports errors in an external package which are not relevant. Inspecting the result, I see:


  result == "No errors detected"
  result.res.toplevel_error_reports == JET.ToplevelErrorReport[]
  result.res.inference_error_reports == JET.InferenceErrorReport[...errors...]

So it prints “No errors detected”, but then fails because inference_error_reports isn’t empty - it contains errors from that other package, which I wanted to ignore (they are not from my package).

What is the proper way to fix it?

On second thought, I went to investigate the bug and fixed it instead. Now the JET.jl report is clean! :partying_face:


It seems the get_reports method should be called instead (it takes the configured target modules into account), so this should fix it:



  errors_found = !isempty(JET.get_reports(result))
  exit(!errors_found ? 0 : 1)
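Putting the pieces together, a complete CI check script might look like the following sketch (the package name "MyPackage" is a placeholder):

```julia
using JET

# Analyze only code defined in the package itself, so dependency
# findings don't fail the build.
result = report_package("MyPackage"; target_defined_modules = true)

# get_reports respects the configured target modules, unlike reading
# result.res.inference_error_reports directly.
reports = JET.get_reports(result)
println(length(reports), " JET report(s) found")

# Non-zero exit status fails the CI job.
exit(isempty(reports) ? 0 : 1)
```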