How about TreeView.jl for visualizing expressions, debugging macros, etc.? It prints nice trees of code expressions.
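A quick sketch of the idea. Base Julia's `dump` already shows the raw expression tree textually; the `@tree` usage follows TreeView.jl's README (it renders the same tree graphically, e.g. in a notebook):

```julia
# Base Julia: inspect the nested Expr structure of a quoted expression.
dump(:(2x + sin(y)))

# With TreeView.jl, the same tree is drawn as a diagram:
# using TreeView
# @tree 2x + sin(y)
```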
And maybe CSE.jl for automatically optimizing naive macro output? This does common subexpression elimination, so that you don’t evaluate things multiple times.
All relevant for doing fast math operations. Tullio.jl is for fast tensor-contraction-style operations using Einstein notation: it can turn what would normally be two or three levels of nested loops into a single line of code, with fast threading, vectorized instructions, etc.
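A minimal sketch of the Einstein-notation style (assuming Tullio.jl is installed): a repeated index on the right-hand side that doesn't appear on the left is summed over.

```julia
using Tullio

A = rand(100, 50)
B = rand(50, 200)

# Matrix multiplication: the repeated index k is summed over.
@tullio C[i, j] := A[i, k] * B[k, j]

# Row sums: k appears only on the right, so it is summed away.
@tullio r[i] := A[i, k]
```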
LoopVectorization.jl handles generating advanced SIMD instructions in loops, speeding up execution; it is used by Tullio under the hood, afaik.
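A small sketch of the typical usage (assuming LoopVectorization.jl is installed): annotate a hot loop with `@turbo` and it gets compiled down to SIMD instructions.

```julia
using LoopVectorization

function mydot(a, b)
    s = zero(eltype(a))
    @turbo for i in eachindex(a, b)  # @turbo vectorizes the loop, incl. the reduction
        s += a[i] * b[i]
    end
    return s
end

mydot(rand(10_000), rand(10_000))
```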
LazyArrays goes in a different direction: it can reduce the memory footprint of large array calculations by avoiding storing intermediate values. You can write code as if you had big arrays in memory, but they will be calculated on the fly rather than stored.
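A minimal sketch of the lazy style (assuming LazyArrays.jl is installed): `BroadcastArray` represents a broadcast without materializing it, and elements are computed only when indexed.

```julia
using LazyArrays

A = rand(1000, 1000)

# Represents exp.(A) .+ A' without allocating either intermediate:
L = BroadcastArray(+, BroadcastArray(exp, A), A')

L[2, 3]  # this single element is computed on the fly
```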
BenchmarkTools is huge! Can’t believe I forgot that one.
It’s the canonical way to figure out how fast functions are: it runs code multiple times, computes useful statistics, and avoids misleading compilation overhead.
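The two workhorse macros, in a minimal sketch:

```julia
using BenchmarkTools

x = rand(1000)

# Interpolate globals with $ so you measure the function, not dynamic dispatch:
@btime sum($x)

# Full report: min/median/mean times, allocations, GC time, histogram:
@benchmark sort($x)
```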
Accessors.jl is a way to create a new immutable struct whose fields are all the same as a given template struct, except for those values you want changed (for which you supply new values). It lets you easily use a “mutable style” in your code.
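A minimal sketch with Accessors' `@set` macro: the original struct is untouched, and you get back a copy with one field replaced.

```julia
using Accessors

struct Point
    x::Float64
    y::Float64
end

p = Point(1.0, 2.0)
p2 = @set p.x = 10.0  # p2 == Point(10.0, 2.0); p is unchanged
```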
SnoopCompile.jl seems to be getting more important with every version, so it will belong on this list soon if it isn’t on it already.
I’m not sure how widespread Aqua.jl is in use, but I do want to mention a code quality related package, and that seems a good choice.
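The typical usage is a one-liner in your test suite (`MyPackage` is a placeholder for your own package):

```julia
using Aqua
using MyPackage  # placeholder: your own package

# Runs a battery of quality checks: method ambiguities, unbound type
# parameters, undefined exports, stale dependencies, Project.toml hygiene...
Aqua.test_all(MyPackage)
```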
And Requires.jl is an important one to know. Even if the current practice is to try to avoid it where possible, a lot of existing code does make use of @require lines.
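The pattern you'll see in existing code looks roughly like this (a sketch following the Requires.jl README; the UUID shown is JSON.jl's registered UUID, and it must match the target package's `Project.toml` exactly):

```julia
module MyPkg  # hypothetical package using the optional-dependency pattern

using Requires

function __init__()
    # The glue code is loaded only if the user also loads JSON:
    @require JSON="682c06a0-de6a-54ab-a142-c8b1cf79cde6" include("json_glue.jl")
end

end
```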
This provides a big list of standard data structures and related algorithms, like what you might find in a college course on data structures and algorithms. So if you need a set of objects, a queue, a priority queue, a circular buffer, etc., don’t reimplement them (with bugs); just go to the standard place.
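A minimal sketch, assuming the package described here is DataStructures.jl:

```julia
using DataStructures

pq = PriorityQueue{String, Int}()
enqueue!(pq, "low", 10)
enqueue!(pq, "urgent", 1)
dequeue!(pq)           # lowest priority value comes out first: "urgent"

buf = CircularBuffer{Int}(3)
append!(buf, 1:5)      # oldest elements are overwritten once capacity is hit
collect(buf)           # the last three pushed values remain
```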
This implements a kind of pipeline of efficient operations on data, with support for multi-threading and other very nice composability properties.
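A minimal sketch, assuming this refers to Transducers.jl: you compose the processing steps once, then run them sequentially or multi-threaded with the same pipeline.

```julia
using Transducers

# Compose steps into one reusable pipeline:
xf = Map(x -> x^2) |> Filter(iseven)

collect(xf, 1:10)      # materialize the results eagerly
foldxt(+, xf, 1:10)    # the same reduction, run multi-threaded
```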
Memoization.jl, to avoid recomputing expensive functions. Particularly good if you are computing new values of a function as updates to earlier-computed ones.
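The classic example, as a minimal sketch with Memoization.jl's `@memoize` macro:

```julia
using Memoization

# Each fib(n) is computed once and cached, turning the naive
# exponential-time recursion into a linear-time lookup table.
@memoize function fib(n)
    n ≤ 1 ? n : fib(n - 1) + fib(n - 2)
end

fib(40)  # fast: subproblems are served from the cache
```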
And probably one of the advanced threading libraries, but what’s the right one?
Could you please provide a summary for each recommended package saying why “they make Julia easier to use”. A sales pitch to motivate potential users. Thanks.
Allows you to reduce the first-execution latency of commonly used packages/workflows by storing Julia’s compiled state in a so-called “sysimage”.
It also lets you compile a Julia application into a standalone executable that runs without Julia installed, and build relocatable bundles of Julia code in the form of C libraries.
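The two main entry points, in a minimal sketch (assuming PackageCompiler.jl; the package list and paths are illustrative):

```julia
using PackageCompiler

# Bake heavy packages into a custom sysimage to skip their load/compile cost:
create_sysimage(["Plots"]; sysimage_path="sys_plots.so")
# then start Julia with:  julia --sysimage sys_plots.so

# Or build a standalone app that runs without a Julia installation:
# create_app("path/to/MyApp", "MyAppCompiled")
```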
Of very worthy note is StaticArrays.jl. This package allows for small, statically sized arrays. Because their size is known at compile time, all sorts of optimisations and efficiencies kick in. Linear algebra operations, for example, are customised at compile time for the specific size of your matrix or array. For me, however, the biggest win is that static arrays are isbits, meaning they are allocated on the stack, not the heap, so they are ideal for cases where you’re working with small arrays of a known size in hot loops. And since they’re isbits they can also be stored inline as elements of much larger arrays (arrays of arrays). Finally, special mention of the FieldArray type for adding static-arrays magic to your own data types.
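A minimal sketch of both the basic types and the FieldArray-style magic mentioned above (the `Vec3` type is a made-up example; `FieldVector` is one of StaticArrays' FieldArray types):

```julia
using StaticArrays, LinearAlgebra

v = SVector(1.0, 2.0, 3.0)  # size is part of the type: SVector{3, Float64}
M = @SMatrix rand(3, 3)     # stack-allocated 3×3 matrix

M * v      # multiplication specialized (unrolled) for the 3×3 case
norm(v)    # LinearAlgebra functions work too

# FieldVector gives your own struct all the static-array machinery:
struct Vec3 <: FieldVector{3, Float64}
    x::Float64
    y::Float64
    z::Float64
end

Vec3(1, 2, 3) + Vec3(4, 5, 6)  # arithmetic for free, still stack-allocated
```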
PkgDependency.jl and AutoSysimages.jl are recent favorites for me:
The former is really helping me track down heavy indirect dependencies that were added by third-party packages without a second thought. The latter is trying to address our biggest technological obstacle to wider adoption of the language.