[ANN] JETTest.jl: advanced testing toolset for Julia

Hi all, today I’d like to announce JETTest.jl, an advanced testing toolset for the Julia programming language.
It automatically detects otherwise-overlooked problems, and helps you keep your code robust and fast.

Currently the JETTest.jl toolset only offers dispatch analysis, which is elaborated in the rest of this post, but I’m also planning to enrich it with other analysis implementations; I’ll touch on that in the conclusion.

Dispatch Analysis

Why Dispatch Analysis?

When Julia compiles your code but type inference is not successful, the compiler is often unable to determine which method should be called at each generic function call site, and the call then has to be resolved at runtime. This is called “runtime dispatch”, and it is a well-known source of performance problems: the compiler can’t apply various optimizations, including inlining, when it doesn’t know the matching method, and the method lookup itself can also become a bottleneck if it happens many times.

To avoid this problem, we usually use code_typed (or one of its friends), inspect the output, and check whether there is any place where a type is not well inferred (i.e. which parts are “type-unstable”) and optimization was unsuccessful.
The problem is that these tools only present the “final” output of inference and optimization for a single call: we can’t inspect the entire call graph, so we may not find where a problem actually originated or how the “type instability” has propagated.
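
As a concrete sketch of that manual workflow (the function `clip` and the non-constant global `threshold` below are hypothetical, just for illustration):

julia> threshold = 0.5;   # a non-constant global: inference can't know its type at the use site

julia> clip(x) = x > threshold ? x : threshold;

julia> @code_warntype clip(1.0)   # look for `Any` (printed in red) -- that's where inference gave up
... # output omitted

julia> code_typed(clip, (Float64,))   # or inspect the optimized IR programmatically
... # output omitted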

There is a nice package called Cthulhu.jl, which allows us to inspect the output of code_typed by descending into a call tree, recursively and interactively.
The workflow with Cthulhu is much more powerful, but it is still manual and tedious.
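
For reference, the Cthulhu workflow on the same hypothetical `clip` call would look roughly like this (assuming Cthulhu.jl is installed):

julia> using Cthulhu

julia> @descend clip(1.0)   # interactively descend into callees, inspecting the inferred/optimized code of each frame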

So, why not automate it?
That is what JETTest.jl’s dispatch analysis does: it investigates optimized IRs and automatically detects possible performance pitfalls in your code, i.e. the places where optimization failed and/or runtime dispatch will happen.

Quick Start

@report_dispatch analyzes the entire call graph of a given generic function call, and then reports detected optimization failures and runtime dispatch points:

julia> using JETTest

julia> n = rand(Int);

julia> make_vals(n) = n ≥ 0 ? (zero(n):n) : (n:zero(n));

julia> function sumup(f)
           vals = make_vals(n) # the non-constant global `n` is used here, which makes everything type-unstable
           s = zero(eltype(vals))
           for v in vals
               s += f(v)
           end
           return s
       end;

julia> @report_dispatch sumup(sin) # runtime dispatches will be reported
═════ 7 possible errors found ═════
┌ @ none:2 Main.__atexample__named__quickstart.make_vals(%1)
│ runtime dispatch detected: Main.__atexample__named__quickstart.make_vals(%1::Any)
└──────────
┌ @ none:3 Main.__atexample__named__quickstart.eltype(%2)
│ runtime dispatch detected: Main.__atexample__named__quickstart.eltype(%2::Any)
└──────────
┌ @ none:3 Main.__atexample__named__quickstart.zero(%3)
│ runtime dispatch detected: Main.__atexample__named__quickstart.zero(%3::Any)
└──────────
┌ @ none:4 Base.iterate(%2)
│ runtime dispatch detected: Base.iterate(%2::Any)
└──────────
┌ @ none:5 f(%11)
│ runtime dispatch detected: f::typeof(sin)(%11::Any)
└──────────
┌ @ none:5 Main.__atexample__named__quickstart.+(%10, %13)
│ runtime dispatch detected: Main.__atexample__named__quickstart.+(%10::Any, %13::Any)
└──────────
┌ @ none:5 Base.iterate(%2, %12)
│ runtime dispatch detected: Base.iterate(%2::Any, %12::Any)
└──────────
Any

julia> function sumup(f, n) # passing `n` as an argument instead of using the global makes everything type-stable
           vals = make_vals(n)
           s = zero(eltype(vals))
           for v in vals
               s += f(v) # this may be a small union type, but Julia can optimize away small unions (so no runtime dispatch here)
           end
           return s
       end;

julia> @report_dispatch sumup(sin, rand(Int)) # now runtime-dispatch free!
No errors !
Union{Float64, Int64}

With the frame_filter configuration, we can focus on type instabilities within specific modules of interest:

julia> # problem: when ∑1/n exceeds `x` ?
       function compute(x)
           r = 1
           s = 0.0
           n = 1
           @time while r < x
               s += 1/n
               if s ≥ r
                   # the `println` call is full of runtime dispatches for good reasons,
                   # and we're not interested in type instabilities within it
                   # since we know it's only called a few times
                   println("round $r/$x has been finished")
                   r += 1
               end
               n += 1
           end
           return n, s
       end
compute (generic function with 1 method)

julia> @report_dispatch compute(30) # a bunch of runtime dispatch reports come from the `println` call
═════ 21 possible errors found ═════
... # many runtime dispatch reports from the `println` call
Tuple{Int64, Float64}

# let's focus on what we wrote, and filter out errors that are not interesting
julia> this_module_filter(sv) = sv.mod === @__MODULE__;

julia> @report_dispatch frame_filter=this_module_filter compute(30)
No errors !
Tuple{Int64, Float64}

@test_nodispatch can be used to assert that a given function call is free from type instabilities, and it’s fully integrated with the Test standard library’s unit-testing infrastructure:

julia> @test_nodispatch sumup(sin)
Dispatch Test Failed at none:1
  Expression: #= none:1 =# JETTest.@test_nodispatch sumup(sin)
  ═════ 7 possible errors found ═════
  ... # abstract call stack will be printed as shown in the first example
  
ERROR: There was an error during testing

julia> @test_nodispatch frame_filter=this_module_filter compute(30)
Test Passed
  Expression: #= none:2 =# JETTest.@test_nodispatch frame_filter = this_module_filter compute(30)

julia> using Test

julia> @testset "check type-stabilities" begin
           @test_nodispatch sumup(cos) # should fail
       
           n = rand(Int)
           @test_nodispatch sumup(cos, n) # should pass
       
           @test_nodispatch frame_filter=this_module_filter compute(30) # should pass
       
           @test_nodispatch broken=true compute(30) # should pass with the "broken" annotation
       end
check type-stabilities: Dispatch Test Failed at none:3
  Expression: #= none:3 =# JETTest.@test_nodispatch sumup(cos)
  ═════ 7 possible errors found ═════
  ... # abstract call stack will be printed as shown in the first example
  
Test Summary:          | Pass  Fail  Broken  Total
check type-stabilities |    2     1       1      4
ERROR: Some tests did not pass: 2 passed, 1 failed, 0 errored, 1 broken.

More Details

Looks useful? Then head over to the documentation of dispatch analysis to see all the analysis entry points and supported configurations.

Other Analysis Ideas?

As I said at the beginning, I hope JETTest.jl’s toolset will be enriched with other analysis implementations. Internally, JETTest.jl uses JET.jl’s pluggable analysis framework (as its name implies), which lets us easily implement advanced code analyzers that investigate post-inference and/or post-optimization IRs.

The scope of JETTest.jl is to provide analyses that check specific properties of a program, while JET.jl aims to be a more general static code analyzer (and to provide a framework for abstract-interpretation-based analysis).
Dispatch analysis is one example of such a “specific” analysis, and I wonder what other useful code analyses could be built. Please let me know if you have any ideas :slight_smile:

Looks very interesting, I’ll definitely give this a try! I’m curious: how does this relate to Traceur.jl?

The idea is basically the same as Traceur.jl’s. The main difference is that JETTest.jl directly interacts with the Julia compiler’s internals while Traceur doesn’t. Another difference is that Traceur investigates runtime information while JETTest analyzes your code entirely statically.

I’d say you can expect JETTest.jl to be faster, and to produce more “correct” analysis in the sense that it uses the same information as Julia’s native compiler.

How do I fix this runtime dispatch error?

julia> function foo(s)
         isempty(s) && return ""
         return normpath(s)
       end
foo (generic function with 1 method)

julia> @report_dispatch foo("")
═════ 2 possible errors found ═════
┌ @ REPL[17]:3 Main.normpath(s)
│┌ @ path.jl:346 Base.Filesystem.split(path, Base.Filesystem.path_separator_re)
││┌ @ strings/util.jl:403 Base._split(str, splitter, limit, keepempty, _7)
│││┌ @ strings/util.jl:421 Base.last(r)
││││┌ @ abstractarray.jl:437 Base.lastindex(a)
│││││ runtime dispatch detected: Base.lastindex(a::Nothing)
││││└────────────────────────
│││┌ @ strings/util.jl:421 Base.first(r)
││││┌ @ abstractarray.jl:386 Base.iterate(itr)
│││││ runtime dispatch detected: Base.iterate(itr::Nothing)
││││└────────────────────────
String

julia> @code_warntype foo("")
Variables
  #self#::Core.Const(foo)
  s::String

Body::String
1 ─ %1 = Main.isempty(s)::Bool
└──      goto #3 if not %1
2 ─      return ""
3 ─ %4 = Main.normpath(s)::String
└──      return %4

Traceur.jl doesn’t report anything either:

julia> using Traceur

julia> function foo(s)
         isempty(s) && return ""
         return normpath(s)
       end
foo (generic function with 1 method)

julia> @trace foo("")
""

The difference is that Traceur sees runtime information while JETTest analyzes your code statically.

As for this specific example, there certainly are dynamic dispatches within Base (which will actually be fixed in the v1.7 release).
If you’re not interested in type instabilities within Base, you can use the frame_filter=(sv->sv.mod===@__MODULE__) configuration.
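
For instance, adapting that suggestion to the `foo` example above (this snippet is not from the original reply; `my_module_filter` is just an illustrative name):

julia> my_module_filter(sv) = sv.mod === @__MODULE__;

julia> @report_dispatch frame_filter=my_module_filter foo("")
... # the Base-internal frames are skipped, so the two reports above should disappear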

Sounds great, thanks!