Would it be fun to have a thread where people share really fancy one-liners? I think so: it'd be a great way to show off some of Julia to newcomers, and also a chance to share some cool code. I'll kick the thread off with a few:

"""
SquareEuclideanDistance(X)
Returns the squared Euclidean distance matrix of `X`, computed via the Gram matrix `X * X'`.
Note: Tamas Papp correctly showed this should be two lines for performance.
"""
SquareEuclideanDistance(X) = ( sum(X .^ 2, dims = 2) .+ sum(X .^ 2, dims = 2)') .- (2 * X * X')

"""
SquareEuclideanDistance(X, Y)
Returns the squared Euclidean distance matrix of `X` and `Y`, such that the columns correspond to the samples in `Y`.
"""
SquareEuclideanDistance(X, Y) = ( sum(X .^ 2, dims = 2) .+ sum(Y .^ 2, dims = 2)') .- (2 * X * Y')
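A quick usage sketch of this second function (the data here is made up for illustration), checking the broadcasted formula against a naive double loop:

```julia
# Pairwise squared Euclidean distances between rows of X and rows of Y.
SquareEuclideanDistance(X, Y) = (sum(X .^ 2, dims = 2) .+ sum(Y .^ 2, dims = 2)') .- (2 * X * Y')

X = [1.0 2.0; 3.0 4.0; 5.0 6.0]   # 3 samples (rows) in 2 dimensions
Y = [0.0 0.0; 1.0 1.0]            # 2 samples

D = SquareEuclideanDistance(X, Y)  # 3x2 matrix; D[i, j] == ||X[i, :] - Y[j, :]||^2

# Naive reference implementation for comparison.
naive = [sum((X[i, :] .- Y[j, :]) .^ 2) for i in 1:size(X, 1), j in 1:size(Y, 1)]
@assert D ≈ naive
```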

"""
Sinc interpolation.
Y  - vector of line-shape values
S  - sampled domain of `Y`
Up - upsampled domain vector to interpolate onto
"""
SincInterpolation(Y, S, Up) = sinc.( (Up .- S') ./ (S[2] - S[1]) ) * Y
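A small sketch of how this might be used (the sampled signal is made up for illustration). Evaluating at the original sample points should recover `Y` exactly, since `sinc(0) == 1` and `sinc(n) == 0` for every other integer `n`:

```julia
# Sinc interpolation of samples Y (taken on uniform grid S) onto grid Up.
SincInterpolation(Y, S, Up) = sinc.((Up .- S') ./ (S[2] - S[1])) * Y

S  = collect(0.0:0.5:4.0)      # uniform sample grid
Y  = sin.(2π .* 0.2 .* S)      # a band-limited line shape sampled on S
Up = collect(0.0:0.1:4.0)      # finer grid to upsample onto

@assert SincInterpolation(Y, S, S) ≈ Y   # identity at the original sample points
Yup = SincInterpolation(Y, S, Up)        # upsampled curve, one value per point of Up
@assert length(Yup) == length(Up)
```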

This is a nice example of the dangers of preferring one-liners: you seem to be calculating `sum(X .^ 2, dims = 2)` twice. You can rewrite this exact same algorithm so that the term is computed only once.
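A minimal sketch of such a two-line rewrite (the name `SquareEuclideanDistance2` is just for illustration), which names the row-sum term once and reuses it:

```julia
# Same algorithm, but the Gram term sum(X .^ 2, dims = 2) is computed once.
function SquareEuclideanDistance2(X)
    G = sum(X .^ 2, dims = 2)        # computed once, reused twice below
    (G .+ G') .- (2 * X * X')
end

# Agrees with the original one-liner:
SquareEuclideanDistance(X) = (sum(X .^ 2, dims = 2) .+ sum(X .^ 2, dims = 2)') .- (2 * X * X')
X = [1.0 2.0; 3.0 4.0; 5.0 6.0]
@assert SquareEuclideanDistance2(X) ≈ SquareEuclideanDistance(X)
```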

I don’t think one should ever purposefully write one-liners in Julia. If a function turns out to fit on one line, fine, if it doesn’t, then it doesn’t. Trying to make it “fancy” just leads to obfuscated and occasionally suboptimal code.

For example,

function calculate_foo_bar(X, Y)
    foo = some_complicated_expression
    bar = some_other_complicated_expression
    foo, bar
end

is much more readable and easier to refactor than

function calculate_foo_bar(X, Y)
    some_complicated_expression, some_other_complicated_expression
end

Since there is no cost to naming partial results in variables and using them that way, doing so is generally preferable when it makes code easier to read.

Aw man, there goes my fun with a lecture… It's a bit hyperbolic to say it's 'dangerous', but yeah, I didn't clean up that first function. You're right that it is more efficient to express the first function as you have, but the one-line solution you offer only works for vectors and fails on matrices.

If you look at the second function I posted, you'll see that the first function is just a non-general special case of it.

I timed the second function and it was as optimal as I could get that operation to be, thanks to broadcasting; it outperformed many variants written in Python and works easily on GPUArrays. My mistake was not taking the time to work through the case where X == Y. Readability is important, but efficiency/performance really matters when calculating distance matrices. My apologies.

For fun, here’s an alternative to yours that keeps things as integer types:

Hmmm, I see… so my example wasn’t type stable because the elements of 1:10 are integers, but the ./ 2 operation converts them to floats, right? I’ve seen that people sometimes go to great lengths to achieve type stability, but I’m not sure what the issue is. Is it just a performance concern? (Sorry to change the topic of the thread.)

Would this solve the type instability issue with my above example?

Type stability is a concept that is only applicable to functions, and your code isn’t one — it’s calculating a constant. BTW, the compiler can infer it perfectly fine:

Actually, it’s the / 2 that yields floats, the broadcasting dot does nothing here, since it’s pure scalar division.

There is no type stability issue, integer divided by integer gives a float, and that is perfectly predictable. The new code is float divided by float, which, predictably, gives a float.
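A small illustration of that point, together with the integer-preserving division operators Julia offers:

```julia
# `/` on two Ints always produces a Float64; this is predictable, hence type stable.
@assert 10 / 2 === 5.0    # float result, even though inputs are Ints
@assert 10 ÷ 2 === 5      # integer (truncating) division keeps the Int type
@assert div(7, 2) === 3   # same as ÷, function form
@assert 10 % 3 === 1      # remainder also stays an Int
```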

Type instability means that the compiler is unable to predict the types in your code, because they can change based on the value of your inputs (as opposed to their types). So both of your expressions were type stable.
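A sketch of what genuine type instability looks like (the function names are just illustrative). The unstable version's return type depends on the input's *value*, so inference can only produce a `Union`:

```julia
# Return type depends on the value of x: Int on one branch, Float64 on the other.
unstable(x) = x > 0 ? 1 : 1.0

# Return type is Float64 regardless of the value of x.
stable(x) = x > 0 ? 1.0 : -1.0

# Inference sees a Union for the unstable version, a concrete type for the stable one.
@assert Base.return_types(unstable, (Int,)) == [Union{Float64, Int}]
@assert Base.return_types(stable, (Int,)) == [Float64]
```

`@code_warntype unstable(1)` shows the same thing interactively, highlighting the `Union` in the output.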

Edit: BTW, iseven only works for Integers, not floats.

here’s one of my favorites… Just came into use in my new package

# Approximate the derivative of a (edit:) complex analytic function `fn` at a given point `x`.
ComplexStepDerivative(fn, x, eps = 1e-11) = imag(fn(x + eps * 1.0im)) / eps
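A quick sanity check of the trick (the test functions and points here are chosen just for illustration); for smooth analytic functions the complex-step estimate matches the analytic derivative to machine precision:

```julia
# Complex-step derivative: imag(f(x + i*eps)) / eps ≈ f'(x) with O(eps^2) error
# and no subtractive cancellation, unlike finite differences.
ComplexStepDerivative(fn, x, eps = 1e-11) = imag(fn(x + eps * 1.0im)) / eps

@assert ComplexStepDerivative(sin, 1.0) ≈ cos(1.0)       # d/dx sin = cos
@assert ComplexStepDerivative(x -> x^3, 2.0) ≈ 12.0      # d/dx x^3 = 3x^2
```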

That is a neat trick, though ForwardDiff.derivative would be better — it’s almost the same trick numerically but using dual numbers rather than complex numbers for the automatic differentiation. This is more principled and has many tricks to make it more robust:

There was a discussion about the complex step method a while ago here:

It is a neat trick, but it only works for complex analytic functions, which essentially rules out all nontrivial programs. Moreover, it can just fail silently (without erroring), which is a debugging nightmare.

So I don’t think it is something one would use in practice in any language. Incidentally, if a language for scientific computing doesn’t allow a disciplined AD implementation in 2019, prospects for that language are quite grim.