Seven Lines of Julia (examples sought)

It’s also super slow and wasteful

julia> @time @btime (F^(10^3))([1,1,2,3,5])
  660.172 μs (2001 allocations: 218.89 KiB)
 10.475864 seconds (29.82 M allocations: 3.087 GiB, 4.50% gc time, 0.24% compilation time)
3-element Vector{Float64}:

and doomed to fail miserably

julia> (F^(10^5))([1,1,2,3,5])

Bonus: if one is really concerned about the type piracy, then

(f::Function)↑(n::Integer) = x0 -> foldl((x, _) -> f(x), 1:n; init = x0)

is a suggestive alternative (↑ is \uparrow).
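A quick usage sketch of this ↑ operator (the helper `double` is just for illustration):

```julia
# Sketch: ↑ iterates a function n times via foldl, without nested composition.
↑(f::Function, n::Integer) = x0 -> foldl((x, _) -> f(x), 1:n; init = x0)

double(x) = 2x
(double ↑ 10)(1)   # applies `double` ten times: 2^10 = 1024
(double ↑ 0)(7)    # zero iterations just returns the input: 7
```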

Edit: in the first code snippet, @time @btime is a typo. The intended code was just @time, to also take the compilation time into account, rather than @btime.


Try my version w/ ‘foldl’

Yes your version with foldl is perfectly fine!


I like pink noise. It is Gaussian noise with power spectral density 1/fᵅ. One can sample it with the inverse Fourier transform:

# Make pink noise
using FFTW, Statistics
function pinknoise(m, n)
    return real(ifft(Complex{Float64}[i+j==2 ? 0.0 : (randn()+randn()*im)/sqrt(sin((i-1)*pi/m)^2 + sin((j-1)*pi/n)^2) for i in 1:m, j in 1:n]))
end
A = pinknoise(500, 500)


Its most spectacular property is that it is invariant under conformal mappings, i.e. functions that locally preserve angles, but not necessarily lengths, for example the map x + y*im -> ((x + y*im)/33)^2.

# Make conformal map
getindexfc(A, i, j) = A[mod1(ceil(Int, i), size(A,1)), mod1(ceil(Int, j), size(A,2))]
finv(x) = reim(sqrt(x[1] + x[2]*im)*33) # use inverse for look-up
cfmap(A) = map(CartesianIndices(size(A))) do I
    getindexfc(A, finv(I)...)
end

This is how it acts on a grid matrix G:

[G  cfmap(G)]

And here with pink noise

[A  cfmap(A)]

PS: I took a bit of extra effort to make it tiling; try [A A A; A A A; A A A]


What do you mean, “superslow”? You’re timing the runtime of @btime; that’s absurd.


Here’s the kind of thing I use a script language for most of the time, and the amount of effort it would take to get anything even similar in Matlab…

using Unitful, Latexify, Plots, UnitfulLatexify, UnitfulRecipes # Three packages + some glue...
default(fontfamily="Computer Modern")
W = rand(30)u"J"
v = sqrt.(2*W/3u"kg") .|> u"m/s"
plot(W, v; st=:scatter, legend=false, xguide="W", yguide="v", unitformat=latexify)
# I yield my remaining two lines as compensation for the extra package


(btw, I’m cheating a bit, had to ]dev Plots waiting for CompatHelper: bump compat for "Latexify" to "0.15" · JuliaPlots/Plots.jl@dbe9a2c · GitHub to be registered)


It’s not absurd. @btime measures only the execution time, not the compilation time. I’m using @time to demonstrate that the compilation time is huge. And you pay the price of recompilation every time you change n. Just try it. Using ComposedFunction for iterated functions is not the correct way to go.

You don’t have to believe me, try the following:

julia> using Statistics

julia> F(x) = [mean(x), prod(x)^(1/length(x)), median(x)]
F (generic function with 1 method)

julia> Fn(x, n) = foldl((y,_) -> F(y), 1:n; init=x)
Fn (generic function with 1 method)

julia> Base.:^(f::Function, n) = n==1 ? f : f ∘ f^(n-1)

julia> @time Fn([1.,1,2,3,5], 10^3)
  0.000168 seconds (2.00 k allocations: 219.078 KiB)
3-element Vector{Float64}:

julia> @time (F^(10^3))([1.,1,2,3,5])
  5.416531 seconds (3.22 M allocations: 222.397 MiB, 1.94% gc time, 98.99% compilation time)
3-element Vector{Float64}:

julia> @time (F^(10^4))([1.,1,2,3,5])
# will not complete in a long loooooong time
# and you cannot even interrupt with Ctrl-C
# use `pkill -9 julia` to kill it

I believe @btime always executes the function a single time and throws that timing away, and then executes it a bunch more times to get an estimate. So timing @btime will capture that first execution (and if the function was not yet compiled, the compilation time along with it) plus the time of an arbitrary number of runs that you do not know. So I have no idea why you are timing @btime.
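One way to see that @btime evaluates its expression many times is a counter (a sketch using BenchmarkTools; the exact count varies between runs and machines):

```julia
using BenchmarkTools

# A Ref counter lets us observe how many evaluations @btime actually performs.
counter = Ref(0)
@btime $counter[] += 1
counter[]  # far greater than 1: @btime ran the expression many times
```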


Oh now I see, my mistake: I wanted to use @time instead of @btime, not in conjunction with it. I’ll leave the error as it is.

But still, see my previous answer (which is now correct, and was my original intention). The version with ^ is enormously slower due to the insane compilation time. With F^(10^4) I didn’t even have the patience to let it finish. foldl (or a manual implementation), on the other hand, is instantaneous to compile and faster to execute (around 4x on my PC). Hence, I see no good reason for this wasteful fancy composition.


This is straying from the brief since it’s all under the hood… but it does show off how simple it is to install packages and do fun things with it.

(@v1.6) pkg> add VideoInTerminal
julia> using VideoInTerminal
julia> showcam()

(please report any camera issues to VideoIO)


I said that it’s unsafe and type piracy, so obviously, one shouldn’t do it. I just showed it because it’s cool.


What exactly is unsafe about it?

The ^ notation is actually pretty cool, so I’m not much against the type piracy, but rather the implementation.

To reiterate (no pun intended), my main concern would be the efficiency, especially as regards compilation.

Notice how with your implementation ^ constructs a deeply nested object with a deeply nested type. Try dump(cos^10) to see what I’m referring to. After that, have a look at dump(cos^(10^3))… Basically, this is a wasteful representation which allocates 10^3 objects with pointers between them and to the same underlying cos function.

A better approach would then be to create a specific type to represent iterated functions. See for instance this gist.
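For illustration, a minimal version of such a type might look like this (the name `Iterated` is hypothetical, not necessarily what the gist uses):

```julia
# Hypothetical sketch: a flat wrapper type for n-fold function iteration.
# Unlike f ∘ f ∘ …, its type does not grow with n, so compilation stays cheap.
struct Iterated{F} <: Function
    f::F
    n::Int
end
(g::Iterated)(x) = foldl((y, _) -> g.f(y), 1:g.n; init = x)

inc(x) = x + 1
Iterated(inc, 1000)(0)  # 1000, with the same concrete type for any n
```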


It doesn’t check for negative n and will go into an infinite loop.
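A guarded variant would avoid that (a sketch; named `iterpow` here to sidestep the type piracy rather than extending Base.:^):

```julia
# Sketch: same recursive idea, but reject non-positive exponents up front
# instead of recursing forever on n ≤ 0.
iterpow(f::Function, n::Integer) =
    n < 1 ? throw(DomainError(n, "exponent must be at least 1")) :
    n == 1 ? f : f ∘ iterpow(f, n - 1)

iterpow(x -> 2x, 3)(1)  # 8
```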

Since this little example annoys you so much, I’ll just delete it.

Please don’t do it. It has its good and bad sides, but it is a fine small snippet, and there are people who like it.


Please please please don’t do it! It doesn’t annoy me, I’m not trying to fight or anything like that. I just wanted to point out the efficiency aspect of it. It can be instructional to understand compile-time/run-time, creation of nested objects with complicated types, etc…

I hope you didn’t take any offence from my posts, and if so I apologize. I wish you a nice weekend :slight_smile:


PS: Anyone interested in camouflage and digital camouflage? I find this interesting from a statistical perspective (you want to match the spectrum of the background) and the mathematical, almost fractal properties of good camouflage.


I used to work with camouflage in cephalopods. It’s a relatively complex field with a lot of interesting opinions… I haven’t published anything recently in the field, and I’m sure a lot of new stuff came along since, but you’d need to catch up on things to make sure you’re not redoing stuff that’s already been published. If you like I can put you in touch with some of the experts in the field.

It’s a cool problem that involves:

  1. detection versus recognition (being “invisible” versus being unrecognizable)
  2. what can the camouflager output in terms of (as you put it) spectrum (how varied is your display) (spatially, chromatically, temporally, etc)
  3. the composition of the background you’ll be viewed on (again, spatially, chromatically, temporally)
  4. what can the visual systems you’re hiding from resolve (yet again, spatially, chromatically, temporally)?

You can see how this becomes very complicated very fast when you consider that everything changes with time of day, day of year, behavior (being motionless, mimicking leaves moving in the wind), and more. The whole distinction between being detected and being recognized makes everything a lot more complicated too, because you can be perfectly detectable (e.g. bright flamboyant colors) but utterly unrecognizable.


I see that here the biologist kicks in and thinks about a catalogue of everything that is out there in nature… I am quite aware of that, but I want to think a bit like a physicist here and impose stronger invariance and symmetry on the problem: e.g. statistical spatial stationarity, scale invariance, or conformal invariance as above. More on Zulip:


This isn’t anything out of this world, but it showcases Julia’s ability to represent expressions and to easily perform arbitrary precision arithmetic with GMP/MPFR.

Continued fractions:

function contfrac(x::Real, n::Integer)
    n < 1 && return Int[]
    fpart, ipart = modf(x)
    fpart == 0 ? [Int(ipart)] : [Int(ipart); contfrac(1/fpart, n-1)]
end

foldr((x, y) -> :($x + 1 / $y), contfrac(big(π), 25)) |> println
# 3 + 1 / (7 + 1 / (15 + 1 / (1 + 1 / (292 + 1 / (1 + 1 / (1 + 1 / (1 + 1 / (2 + 1 / (1 + 1 / (3 + 1 / (1 + 1 / (14 + 1 / (3 + 1 / (3 + 1 / (23 + 1 / (1 + 1 / (1 + 1 / (7 + 1 / (4 + 1 / (35 + 1 / (1 + 1 / (1 + 1 / (1 + 1 / 2)))))))))))))))))))))))

foldr((x, y) -> x + 1 // big(y), contfrac(big(π), 75))
# 13926567982265799805873939967043853503//4432964269365850039583559427823398144

If you put the resulting string

foldr((x,y) -> "$x+\\frac1{$y}", contfrac(big(π), 15)) |> println
# 3+\frac1{7+\frac1{15+...}}

in a LaTeX cell you get


Of course in general an iterative implementation such as

function contfrac(x::Real, n::Integer)
    cf = Int[]
    for _ in 1:n
        fpart, ipart = modf(x)
        push!(cf, Int(ipart))
        fpart == 0 && break  # exact termination, mirroring the recursive version
        x = 1 / fpart
    end
    return cf
end
would be preferred, because it reduces allocations and avoids stack overflows. It was, however, a bit too long for the seven-lines restriction. Notice that in the case of continued fractions, actually overflowing the stack while performing meaningful computations requires working with quite a large precision: the default 256-bit precision of BigFloat allows one to correctly compute only the first 76 terms of the continued fraction of π.
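Assuming a `contfrac` along these lines, the precision limit can be pushed with `setprecision`; a self-contained sketch (the comparison below is illustrative):

```julia
# Sketch: higher BigFloat precision yields more correct continued-fraction terms.
function contfrac(x::Real, n::Integer)
    cf = Int[]
    for _ in 1:n
        fpart, ipart = modf(x)
        push!(cf, Int(ipart))
        fpart == 0 && break
        x = 1 / fpart
    end
    return cf
end

cf_default = contfrac(big(π), 30)                                  # 256-bit default
cf_hiprec  = setprecision(() -> contfrac(big(π), 30), BigFloat, 1024)
cf_default == cf_hiprec  # 30 terms are well within the ~76 correct at 256 bits
```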


Well you could also use a reduce operation :slight_smile:

contfrac(x, n) = foldl(1:n, init=(Int[], x)) do (l, r), _
       f, i = modf(r)
       return [l; Int(i)], 1/f
end |> first

julia> contfrac(π, 5)
5-element Vector{Int64}:

This could go on one line too (sort of):

contfrac(x, n) = foldl(((l, r), _) -> let (f, i) = modf(r); ([l; Int(i)], 1/f) end, 1:n, init=(Int[], x))[1]

julia> contfrac(π, 5)
5-element Vector{Int64}: