Fun One Liners

Would it be fun to have a thread where people share really fancy one-liners? I think so - I think it’d be a great way to show off some of Julia to newcomers, but also to share some cool code. I’ll kick the thread off with 2-3:

    SquareEuclideanDistance(X)
Returns the squared Gram, a.k.a. the Euclidean distance, matrix of `X`.
Note: Tamas Papp correctly showed this should be two lines for performance.
SquareEuclideanDistance(X) = ( sum(X .^ 2, dims = 2) .+ sum(X .^ 2, dims = 2)') .- (2 * X * X')


    SquareEuclideanDistance(X, Y)
Returns the squared Euclidean distance matrix of `X` and `Y` such that the columns are the samples in `Y`.
SquareEuclideanDistance(X, Y) = ( sum(X .^ 2, dims = 2) .+ sum(Y .^ 2, dims = 2)') .- (2 * X * Y')
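A quick sanity check of the two-argument version against a naive pairwise loop (the sample data here is made up for illustration; the definition is repeated so the snippet is self-contained):

```julia
SquareEuclideanDistance(X, Y) = ( sum(X .^ 2, dims = 2) .+ sum(Y .^ 2, dims = 2)') .- (2 * X * Y')

X = [1.0 2.0; 3.0 4.0]           # 2 samples (rows) in R^2
Y = [0.0 0.0; 1.0 1.0; 2.0 2.0]  # 3 samples (rows) in R^2

D = SquareEuclideanDistance(X, Y)
# naive double loop over all sample pairs, for comparison
naive = [sum((X[i, :] .- Y[j, :]) .^ 2) for i in 1:size(X, 1), j in 1:size(Y, 1)]
D ≈ naive  # true
```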


    SincInterpolation(Y, S, Up)
Sinc interpolation.
Y  - vector of a line shape
S  - sampled domain of Y
Up - upsampled X vector
SincInterpolation(Y, S, Up) = sinc.( (Up .- S') ./ (S[2] - S[1]) ) * Y
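A small usage sketch (the sine-wave signal is just an illustrative choice): at points of `Up` that coincide with the original grid `S`, the interpolation reproduces the samples exactly, since `sinc` vanishes at nonzero integers.

```julia
SincInterpolation(Y, S, Up) = sinc.( (Up .- S') ./ (S[2] - S[1]) ) * Y

S  = 0:0.5:10   # coarse sample grid
Y  = sin.(S)    # a band-limited "line shape"
Up = 0:0.1:10   # 5x upsampled grid

Yup = SincInterpolation(Y, S, Up)
Yup[1:5:end] ≈ Y  # true: the original samples are recovered
```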

Refactored from:
Note: please include sources for where they came from if they aren’t your own effort!


slightly OT, but reminds me of this:


Cool idea. Here’s a fun one from my REPL session a couple of days ago:

using CSV, Glob

# Read and concatenate all tables in CSV files matching the given `pattern`
cat_csv(pattern) = vcat(CSV.read.(glob(pattern))...)

Broadcast is great for so many things which aren’t numbers.


This is a nice example of the dangers of preferring one-liners: you seem to be calculating sum(X .^ 2, dims = 2) twice. You can rewrite this exact same algorithm as

X2 = sum(X .^ 2, dims = 2)
(X2 .+ X2') .- (2 * X * X')

but then of course it is not a one-liner anymore :wink:

That said, I would just do something like

abs2.(X .- X')

which should have better numerical properties.
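For the 1-D case (a vector of scalar samples) the two formulations agree, which is easy to check (the definition is repeated so the snippet is self-contained):

```julia
SquareEuclideanDistance(X) = ( sum(X .^ 2, dims = 2) .+ sum(X .^ 2, dims = 2)') .- (2 * X * X')

x = [1.0, 4.0, 6.0]
# x_i^2 + x_j^2 - 2*x_i*x_j == (x_i - x_j)^2 entrywise
SquareEuclideanDistance(x) ≈ abs2.(x .- x')  # true
```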

I don’t think one should ever purposefully write one-liners in Julia. If a function turns out to fit on one line, fine, if it doesn’t, then it doesn’t. Trying to make it “fancy” just leads to obfuscated and occasionally suboptimal code.

For example, in

function calculate_foo_bar(X, Y)
    foo = some_complicated_expression
    bar = some_other_complicated_expression
    foo, bar
end

is much more readable and easier to refactor than

function calculate_foo_bar(X, Y)
    some_complicated_expression, some_other_complicated_expression
end

Since there is no cost to just naming partial results in variables and using them that way, it is generally preferable when it makes code easier to read.


Aw man, there goes my fun with a lecture… It’s a bit hyperbolic to say it’s ‘dangerous’, but yeah, I didn’t clean up that first function. You’re right that it is more efficient to express that first function as you have, but the one-line solution you offer only works for vectors and fails on matrices.

SquareEuclideanDistance(X) = ( sum(X .^ 2, dims = 2) .+ sum(X .^ 2, dims = 2)') .- (2 * X * X')
tp(X) = abs2.(X .- X')

A = [[1 2 3 4 5]; [6 7 8 9 10]]
tp(A) # DimensionMismatch: the arrays cannot be broadcast together

If you look at the second function I posted, you’ll see that the first function is a non-general special case of it.

I timed the second function and it was as fast as I could get that operation to be, thanks to broadcasting; it outperformed many variants written in Python and works easily on GPUArrays. My mistake was not taking the time to work through the case where X == Y. Readability is important, but efficiency/performance really matters when calculating distance matrices. My apologies.

So my example isn’t as nifty as yours, but my single favorite feature of Julia is the array comprehension:

my_vector = [2^i for i in 1:10]

Add to this the ability to toss in a conditional statement and a broadcasting operation and I’m a very happy coder :wink::

my_vector = [2^i ./ 2 for i in 1:10 if iseven(i)] 

There are no rules on complexity, just fun one liners!
For fun here’s an alternative to yours that keeps things as integer types:

my_vector = [2^(i-1) for i in 2:2:10]
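The two comprehensions produce the same values but different element types, which is easy to verify (writing the scalar division with plain `/`, since the dot makes no difference here):

```julia
v_float = [2^i / 2 for i in 1:10 if iseven(i)]
v_int   = [2^(i - 1) for i in 2:2:10]

v_float == v_int  # true: same values...
eltype(v_float)   # ...but Float64 here
eltype(v_int)     # ...and Int64 here (on a 64-bit machine)
```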

For fun here’s an alternative to yours that keeps things as integer types

Hmmm, I see…so my example wasn’t type stable because 1:10 are integers, but the ./ 2 operation causes them to be converted to floats, right? I’ve seen that people sometimes go to great lengths to achieve type stability but I’m not sure what the issue is. Is it just a performance concern? (sorry to change the topic of the thread)

Would this solve the type instability issue with my above example?

[2.0^i ./ 2.0 for i in 1.0:10.0 if iseven(i)]

Type stability is a concept that is only applicable to functions, and your code isn’t one — it’s calculating a constant. BTW, the compiler can infer it perfectly fine:

julia> my_vector(n) = [2^i ./ 2 for i in 1:n if iseven(i)]
my_vector (generic function with 1 method)

julia> @code_warntype my_vector(10)
  #self#::Core.Compiler.Const(my_vector, false)

1 ─      (#17 = %new(Main.:(var"##17#18")))
│   %2 = #17::Core.Compiler.Const(var"##17#18"(), false)
│   %3 = (1:n)::Core.Compiler.PartialStruct(UnitRange{Int64}, Any[Core.Compiler.Const(1, false), Int64])
│   %4 = Base.Filter(Main.iseven, %3)::Core.Compiler.PartialStruct(Base.Iterators.Filter{typeof(iseven),UnitRange{Int64}}, Any[Core.Compiler.Const(iseven, false), Core.Compiler.PartialStruct(UnitRange{Int64}, Any[Core.Compiler.Const(1, false), Int64])])
│   %5 = Base.Generator(%2, %4)::Core.Compiler.PartialStruct(Base.Generator{Base.Iterators.Filter{typeof(iseven),UnitRange{Int64}},var"##17#18"}, Any[Core.Compiler.Const(var"##17#18"(), false), Core.Compiler.PartialStruct(Base.Iterators.Filter{typeof(iseven),UnitRange{Int64}}, Any[Core.Compiler.Const(iseven, false), Core.Compiler.PartialStruct(UnitRange{Int64}, Any[Core.Compiler.Const(1, false), Int64])])])
│   %6 = Base.collect(%5)::Array{Float64,1}
└──      return %6

Whether one prefers integers or floats in the result depends on the context; implicit type conversions can work just fine.


If you want to keep it as ints using your solution you could use ÷ (\div)

[2^i ÷ 2 for i in 1:10 if iseven(i)]

although @ckneale’s solution is slightly faster

Actually, it’s the / 2 that yields floats; the broadcasting dot does nothing here, since it’s pure scalar division.

There is no type stability issue: integer divided by integer gives a float, and that is perfectly predictable. The new code is float divided by float, which, predictably, gives a float.

Type instability means that the compiler is unable to predict the types in your code, because they can change based on the value of your inputs (as opposed to their types). So both of your expressions were type stable.
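To illustrate the distinction, here is a sketch of a function that actually is type-unstable (its return type depends on the value of its argument, not just the type), next to a stable one; the function names are made up for illustration:

```julia
unstable(x) = x > 0 ? 1 : 1.0     # returns Int or Float64 depending on the VALUE of x
stable(x)   = x > 0 ? 1.0 : -1.0  # always returns Float64

Base.return_types(unstable, (Int,))  # [Union{Float64, Int64}] -- inference gives a Union
Base.return_types(stable, (Int,))    # [Float64]               -- a single concrete type
```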

Edit: BTW, iseven only works for Integers, not floats.


@stevengj posted a nice one liner in How to use minimum and maximum for a Vector{SVector{3}}. The idea is to find the smallest component of the first, second, third … element in a vector of vectors.

A = [rand(3) for i = 1:10]
reduce((x,y) -> min.(x,y), A)

It is fast and works for other types too:

A = [Tuple(rand(3)) for i = 1:10]
reduce((x,y) -> min.(x,y), A)
using StaticArrays
A = [SVector{3}(rand(3)) for i = 1:10]
reduce((x,y) -> min.(x,y), A)
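A quick check on made-up data that the elementwise reduce really takes the componentwise minimum:

```julia
A = [[3.0, 1.0], [2.0, 5.0], [4.0, 0.5]]

# min.(x, y) compares component by component, so reducing over the
# collection yields the per-component minimum across all elements
reduce((x, y) -> min.(x, y), A)  # [2.0, 0.5]
```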
while true; end


Here’s one for you Chris,

f(x) = f(x); f(Inf)

here we go (actually not Julia, and Linux-only):

run(pipeline(pipeline(pipeline(pipeline(`cat /dev/urandom`,`hexdump -v -e '/1 "%u\n"'`),`awk '{ split("0,1,2,4,8",a,",");for (i = 0; i < 1; i+= 0.0001) printf("%08X\n", 100*sin(1382*exp((a[$1 % 8]/12)*log(2))*i)) }'`),`xxd -r -p`),`aplay -c 2 -f S32_LE -r 30000`))

here’s one of my favorites… Just came into use in my new package :smiley:

# Approximate the derivative of an arbitrary (edit: complex analytic) function fn at a given point x.
ComplexStepDerivative(fn, x, eps = 1e-11) = imag( fn( x + (eps * 1.0im)) ) / eps

Example usage:

using Plots
Plots.plot( ComplexStepDerivative.(cos, 0:(pi/100):2*pi), label = "Complex Step Derivative" );
Plots.plot!( -sin.( 0:(pi/100):2*pi ), label = "Analytic Derivative", legend = :bottomright )
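A quick numerical check (test points chosen arbitrarily): for analytic functions the complex-step derivative is accurate to near machine precision, since there is no subtractive cancellation as in finite differences:

```julia
ComplexStepDerivative(fn, x, eps = 1e-11) = imag( fn( x + (eps * 1.0im)) ) / eps

ComplexStepDerivative(cos, 1.0) ≈ -sin(1.0)  # true: d/dx cos(x) = -sin(x)
ComplexStepDerivative(exp, 0.5) ≈ exp(0.5)   # true: d/dx exp(x) = exp(x)
```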

That is a neat trick, though ForwardDiff.derivative would be better — it’s almost the same trick numerically but using dual numbers rather than complex numbers for the automatic differentiation. This is more principled and has many tricks to make it more robust:

ForwardDiff.derivative.(cos, 0:(pi/100):2*pi)

To extract the type in a Union

extt(::Type{Union{Missing, T}}) where T = T
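This dispatches on the Union type itself, with T bound to the non-Missing part, e.g.:

```julia
extt(::Type{Union{Missing, T}}) where {T} = T

extt(Union{Missing, Int})     # Int64
extt(Union{Missing, String})  # String
```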

there is no ForwardDiff in Octave, and that trick works in Octave hahaha

There was a discussion about the complex step method a while ago here:

It is a neat trick, but it only works for complex analytic functions, which essentially rules out all nontrivial programs. Moreover, it can just fail silently (without erroring), which is a debugging nightmare.

So I don’t think it is something one would use in practice in any language. Incidentally, if a language for scientific computing doesn’t allow a disciplined AD implementation in 2019, prospects for that language are quite grim.