Hi there!
Years ago, I announced a proof-of-concept package for testing performance across package versions: PerfChecker.jl
We're reaching the point where the package is ready to be used (v"0.2.0" is waiting for merge), and we would be very grateful to have some testers/users give us feedback before JuliaCon. There is a lot of room for improvement, and we haven't yet integrated all the feedback from last JuliaCon's poster!
EDIT: I am working on the docs, so please refer to the scripts below in the meantime.
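If you want to try it right away, installation should follow the usual workflow; a minimal sketch (the repository URL is my assumption of where the package lives, adjust as needed):

```julia
using Pkg

# Registered release:
Pkg.add("PerfChecker")

# Or, until v"0.2.0" is merged, the development version
# (assuming the repository is hosted under the JuliaConstraints organization):
# Pkg.add(url = "https://github.com/JuliaConstraints/PerfChecker.jl")
```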
As the features have evolved a bit over time, allow me to list some of them here:
- Built-in profiling over versions
- Generic Interface for extensions
- BenchmarkTools extension (benchmarks)
- Chairmarks extension (benchmarks; a standalone Chairmarks sketch follows this list)
- Makie extension (plotting)
- Syntactic sugar through macros
- Checks run in isolated environments using Malt.jl’s processes
- Tags system to load/save results
- UUIDs for machines and check setups to avoid redundant computations (an effort towards CI integration, and a first step towards saving metadata)
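For context on what the Chairmarks extension wraps, here is what a standalone Chairmarks benchmark looks like. The `evals`/`samples`/`seconds` knobs correspond to the `:evals`, `:samples`, and `:seconds` entries of the config dicts used in the scripts below; the `sum(rand(1000))` workload is just a placeholder:

```julia
using Chairmarks

# A plain Chairmarks benchmark, outside PerfChecker: 10 evaluations per
# sample, up to 1000 samples, within a 1-second time budget.
b = @be sum(rand(1000)) evals=10 samples=1000 seconds=1
display(b)
```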
We provide basic scripts to check performance for only 2 packages so far: GLM.jl and PatternFolds.jl. This is where you can help: providing scripts for other packages would be very welcome. Hey, with a bit of luck, you might find some performance issues! Or you can just brag about how much better your latest version is than before.
Let me show you a few example scripts for PatternFolds.jl below. For each script, we also generate plots through CairoMakie.
```julia
using PerfChecker, CairoMakie

# Configuration: target package, output path, tags used to save/load results,
# and the PatternFolds versions to check.
d = Dict(
    :targets => ["PatternFolds"],
    :path => @__DIR__,
    :tags => [:patterns, :intervals],
    :pkgs => ("PatternFolds", :custom,
        [v"0.2.0", v"0.2.1", v"0.2.2", v"0.2.3", v"0.2.4"], true),
)

x = @check :alloc d begin
    # Pre-check script
    using PatternFolds
end begin
    # Check script
    # Intervals
    itv = Interval{Open, Closed}(0.0, 1.0)
    i = IntervalsFold(itv, 2.0, 1000)
    unfold(i)
    collect(i)
    reverse(collect(i))

    # Vectors
    vf = make_vector_fold([0, 1], 2, 1000)
    unfold(vf)
    collect(vf)
    reverse(collect(vf))
    rand(vf, 1000)
end

@info x

# Plot the results: evolution of allocations across versions, plus one pie
# chart per entry returned by checkres_to_pie.
mkpath(joinpath(@__DIR__, "visuals"))

c = checkres_to_scatterlines(x, Val(:alloc))
save(joinpath(@__DIR__, "visuals", "allocs_evolution.png"), c)

for (name, c2) in checkres_to_pie(x, Val(:alloc))
    save(joinpath(@__DIR__, "visuals", "allocs_pie_$name.png"), c2)
end
```
We can clearly see the drop in allocations between version 0.2.1 and 0.2.2.
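As a quick sanity check, independent of PerfChecker, one can measure a single call with Base's `@allocated`; a minimal sketch, to be run once under each installed version of PatternFolds:

```julia
using PatternFolds

# Measure the allocations of one call directly; compare the number printed
# under PatternFolds v"0.2.1" and v"0.2.2".
vf = make_vector_fold([0, 1], 2, 1000)
unfold(vf)  # warm-up call so compilation is not counted
@info "bytes allocated by unfold" @allocated unfold(vf)
```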
Now let's look at benchmarks, with the following script:
```julia
using PerfChecker, Chairmarks, CairoMakie

# Configuration: :evals, :samples, and :seconds are the usual benchmarking
# parameters for the Chairmarks run.
d = Dict(
    :path => @__DIR__,
    :evals => 10,
    :samples => 1000,
    :seconds => 100,
    :tags => [:patterns, :intervals],
    :pkgs => ("PatternFolds", :custom,
        [v"0.2.0", v"0.2.1", v"0.2.2", v"0.2.3", v"0.2.4"], true),
    :devops => "PatternFolds",
)

x = @check :chairmark d begin
    # Pre-check script
    using PatternFolds
end begin
    # Check script
    # Intervals
    itv = Interval{Open, Closed}(0.0, 1.0)
    i = IntervalsFold(itv, 2.0, 1000)
    unfold(i)
    collect(i)
    reverse(collect(i))

    # Vectors
    vf = make_vector_fold([0, 1], 2, 1000)
    unfold(vf)
    collect(vf)
    reverse(collect(vf))
    rand(vf, 1000)

    return nothing
end

@info x

# Plot the results: evolution across versions, plus boxplots of times,
# GC times, bytes, and allocation counts.
mkpath(joinpath(@__DIR__, "visuals"))

c = checkres_to_scatterlines(x, Val(:chairmark))
save(joinpath(@__DIR__, "visuals", "chair_evolution.png"), c)

for kwarg in [:times, :gctimes, :bytes, :allocs]
    c2 = checkres_to_boxplots(x, Val(:chairmark); kwarg)
    save(joinpath(@__DIR__, "visuals", "chair_boxplots_$kwarg.png"), c2)
end
```
We can see a small increase in speed (likely related to the drop in allocations); the difference is small enough that the plots might need a log scale to make it more visible.
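Since the plotting extension returns Makie figures, the usual Makie axis options apply for that log scale; a generic sketch with made-up numbers (not PerfChecker's actual output):

```julia
using CairoMakie

# Hypothetical timings per version, plotted on a log-scaled y axis so that
# small relative differences remain visible.
versions = ["0.2.0", "0.2.1", "0.2.2", "0.2.3", "0.2.4"]
times = [120.0, 118.0, 95.0, 94.0, 93.0]  # made-up values (ns)

fig = Figure()
ax = Axis(fig[1, 1]; yscale = log10, xlabel = "version",
    ylabel = "time (ns)", xticks = (1:5, versions))
scatterlines!(ax, 1:5, times)
save("times_log.png", fig)
```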
We would be very happy to test our tool on other packages (we do have some machines with free resources here and there), or to have you test it directly. Scripts (and even their output) can be added to PerfChecker.jl/perf through PRs.