Hello, I have a CI/CD pipeline in
GitLab, and I want to run a linting stage that fails the build on detected bugs. What is the proper way to do this?
Lint.jl, but it’s abandoned (last update in 2019).
StaticLint.jl, but the usage documentation is lacking: it doesn’t say how to run linting for a project and analyze the results.
JET.jl and Julia in VS Code, but I’m not sure which is the right tool or how to integrate it into a build.
Found these posts:
linter-julia is actually not in Juno’s ecosystem at all – its implementation is pretty old and imho I don’t recommend using it for now.
We’re planning to implement our own linter based on CSTParser.jl, but it’s not our top priority and so I can’t provide an estimate.
So in conclusion, there is no (static) linter in Juno for now, but with Juno’s interactive code execution you can easily catch syntactic mistakes. If you really want a linter, the
vscode-julia extension will do that fine.
From Jeremy Howard’s reply to @viralbshah’s question about 2022 goals for Julia: one thing I would like to figure out is how to make it easier to write and deploy command-line scripts written in Julia.
I think the key things to figure out are:
How should such a script be packaged? The obvious answer I can think of is as a regular Julia package, with a bin subdirectory containing the necessary scripts. This pattern is already used by some packages, but there are certain conventions that s…
But neither of them mentions a way to run a command whose result indicates whether the build should pass or serious errors were detected.
I would suggest using
JuliaFormatter.jl and JET.jl.
In the following comment there’s a snippet for JuliaFormatter on GitLab CI (note the workaround for
Add option to ignore files matching patterns · Issue #574 · domluna/JuliaFormatter.jl · GitHub).
And here is an example of running JET.jl from the command line for a project (run in the project directory):
julia --project --eval 'using Pkg; Pkg.activate(); Pkg.add("JET"); Pkg.activate("."); using JET; @show report_package(Pkg.project().name)'
@show is there because otherwise the output will be silenced (unlike when running in the REPL).
I am about to add JuliaFormatter and JET job templates and jobs to
IHP Systems / Julia / Julia GitLab CI templates · GitLab
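In the meantime, a minimal pair of GitLab CI jobs along these lines might look like the following sketch (the job names, image tag, and the `overwrite = false` check-only mode are my assumptions, not the actual templates):

```yaml
# Hypothetical .gitlab-ci.yml fragment; adjust the image and paths to your project.
format-check:
  image: julia:1.8
  script:
    # overwrite = false leaves files untouched; format() returns false if anything would change
    - julia -e 'using Pkg; Pkg.add("JuliaFormatter"); using JuliaFormatter; format("."; overwrite = false) || exit(1)'

jet:
  image: julia:1.8
  script:
    - julia --project --eval 'using Pkg; Pkg.activate(); Pkg.add("JET"); Pkg.activate("."); using JET; @show report_package(Pkg.project().name)'
```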
Hi, thanks for the answer!
I already integrated JuliaFormatter, just used it in a
Tried using JET, but the output from
report_package is very long and complex.
It found “possible errors” which are in an external package.
How do I mark that package to be ignored?
How do I generate a summary report of how many warnings/errors did I have?
How do I stop the build when there are errors?
Sounds like setting
target_defined_modules = true might also help you.
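For example (a sketch, assuming you run it from the project directory with JET installed):

```julia
using JET, Pkg

# target_defined_modules = true restricts JET's reports to modules defined by
# the analyzed package, filtering out errors that originate in dependencies.
result = report_package(Pkg.project().name; target_defined_modules = true)
```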
Check the result returned by
report_file and call
exit(status_code) with a non-zero status code when errors were found.
How? Is there a guide to what is returned there? I couldn’t figure it out from the documentation.
Yes, just found it myself too!
So limiting it to my package indeed removes the external error.
However, now I have the opposite problem:
All the sample errors are already caught during package precompilation stage.
What error would not be caught by compilation but will be caught by JET.jl?
You could probably adapt
this gist to your needs.
Regarding checking the result returned by JET: it looks like the number of reports returned (somewhere) within the JET top-level result can be used to judge whether errors were found by JET:
JET.jl/print.jl at master · aviatesk/JET.jl · GitHub
There should be plenty of errors that JET can find, which will not be found by pre-compilation - check the docs.
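To illustrate the difference, here is a hypothetical example: the file below precompiles without complaint, because precompilation does not type-infer methods that are never called, yet JET’s analysis would flag the bad call.

```julia
greet(name::String) = "Hello, " * name

# This passes parsing and lowering, and therefore precompilation,
# but greet(42) has no matching method. The error only surfaces
# when the call is type-inferred or executed -- exactly the kind
# of problem JET.jl reports statically.
welcome() = greet(42)
```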
I ran it, and I’m getting something like this:
LanguageServer.Diagnostic(LanguageServer.Range(LanguageServer.Position(2, 0), LanguageServer.Position(2, 26)), 3, missing, "Julia", "The included file can not be found.", missing, missing)
Is this the format of the log messages?
Does it just mean the file was loaded incorrectly?
That means that an
include statement on the third line of your source file couldn’t be resolved, afaict.
You are correct, this line in the file has an include statement.
I thought it was a matter of the search path, but changing the directory, or even putting an absolute path in include(), doesn’t work; it still gives the same error.
How would you go about debugging
LanguageServer errors like this?
@stemann, thanks for that reference!
I used the code from
IHPSystems GitLab CI templates
errors_found = !isempty(result.res.toplevel_error_reports) || !isempty(result.res.inference_error_reports)
exit(!errors_found ? 0 : 1)
It reports errors in an external package which are not relevant, so I set target_defined_modules = true. Now I get:
result == "No errors detected"
result.res.toplevel_error_reports == JET.ToplevelErrorReport[]
result.res.inference_error_reports == JET.InferenceErrorReport[...errors...]
So it reports “No errors detected”, but then fails because
inference_error_reports isn’t empty – it contains errors from that other package which I wanted to ignore (and not my package).
What is the proper way to fix it?
On second thought, I went to investigate the bug and fixed it instead. Now the JET.jl report is clean!
It seems the
get_reports method should be called (it should take target modules into account), so this should fix it:
errors_found = !isempty(JET.get_reports(result))
exit(!errors_found ? 0 : 1)
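For completeness, the whole check could be wrapped in one hypothetical CI script (a sketch; get_reports and the report_package keyword follow JET’s API as discussed above and may change between versions):

```julia
# jet_check.jl -- run from the project directory; assumes JET is installed.
using JET, Pkg

result = report_package(Pkg.project().name; target_defined_modules = true)
reports = JET.get_reports(result)  # respects the target-modules filtering
println(length(reports), " JET report(s) found")
exit(isempty(reports) ? 0 : 1)
```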