What do you mean, “superslow”? You’re timing the runtime of @btime, that’s absurd.
Here’s the kind of thing I use a scripting language for most of the time, and the amount of effort it would take to get anything even similar in Matlab…
using Unitful, Latexify, Plots, UnitfulLatexify, UnitfulRecipes # Three packages + some glue...
default(fontfamily="Computer Modern")
W = rand(30)u"J"
v = sqrt.(2*W/3u"kg") .|> u"m/s"
plot(W, v; st=:scatter, legend=false, xguide="W", yguide="v", unitformat=latexify)
# I yield my remaining two lines as compensation for the extra package
(btw, I’m cheating a bit, had to ]dev Plots while waiting for “CompatHelper: bump compat for Latexify to 0.15” (JuliaPlots/Plots.jl@dbe9a2c) to be registered)
It’s not absurd. @btime measures only the execution time, not the compilation time. I’m using @time to demonstrate that the compilation time is huge. And you pay the price of recompilation every time you change n. Just try it. Using ComposedFunction for iterated functions is not the correct way to go.
You don’t have to believe me, try the following:
julia> using Statistics
julia> F(x) = [mean(x), prod(x)^(1/length(x)), median(x)]
F (generic function with 1 method)
julia> Fn(x, n) = foldl((y,_) -> F(y), 1:n; init=x)
Fn (generic function with 1 method)
julia> Base.:^(f::Function, n) = n==1 ? f : f ∘ f^(n-1)
julia> @time Fn([1.,1,2,3,5], 10^3)
0.000168 seconds (2.00 k allocations: 219.078 KiB)
3-element Vector{Float64}:
2.089057949736859
2.089057949736859
2.089057949736859
julia> @time (F^(10^3))([1.,1,2,3,5])
5.416531 seconds (3.22 M allocations: 222.397 MiB, 1.94% gc time, 98.99% compilation time)
3-element Vector{Float64}:
2.089057949736859
2.089057949736859
2.089057949736859
julia> @time (F^(10^4))([1.,1,2,3,5])
# will not complete in a long loooooong time
# and you cannot even interrupt with Ctrl-C
# use `pkill -9 julia` to kill it
I believe @btime always executes the function a single time and throws that timing away, then executes it a bunch more times to get a mean. So timing @btime will capture this first execution (and if the function was not yet compiled, it will therefore include compilation time), plus the time of an arbitrary number of further runs that you have no control over. So I have no idea why you are timing @btime.
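A quick way to see the compile-time/run-time distinction (a toy sketch, not from the thread; f and x are made-up names):

```julia
using BenchmarkTools

f(x) = sum(abs2, x)   # toy function, just for illustration
x = rand(1000)

@time f(x)     # first call: includes JIT compilation time
@time f(x)     # second call: (mostly) pure execution time
@btime f($x)   # many samples, reports the minimum execution time
```

The `$x` interpolation tells BenchmarkTools to treat x as a local value rather than a global, so the global-access overhead doesn’t pollute the measurement.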
Oh, now I see my mistake: I wanted to use @time instead of @btime, not in conjunction with it. I’ll leave the error as it is.
But still, see my previous answer (which is now correct, and my original intention). The version with ^ is enormously slower due to the insane compilation. With F^(10^4) I didn’t even have the patience to let it finish. foldl (or a manual implementation), on the other hand, is instantaneous to compile and faster to execute (around 4x on my PC). Hence, I see no good reason to do this wasteful fancy composition.
This is straying from the brief since it’s all under the hood… but it does show off how simple it is to install packages and do fun things with it.
(@v1.6) pkg> add https://github.com/IanButterworth/VideoInTerminal.jl
julia> using VideoInTerminal
julia> showcam()
(please report any camera issues to VideoIO)
I said that it’s unsafe and type piracy, so obviously, one shouldn’t do it. I just showed it because it’s cool.
What exactly is unsafe about it?
The ^ notation is actually pretty cool, so I’m not so much against the type piracy as against the implementation.
To reiterate (no pun intended), my main concern is efficiency, especially as regards compilation.
Notice how with your implementation ^ constructs a deeply nested object with a deeply nested type. Try dump(cos^10) to see what I’m referring to. After that, have a look at dump(cos^(10^3))… Basically, this is a wasteful representation which allocates 10^3 objects with pointers between them and to the same underlying cos function.
A better approach would be to create a specific type to represent iterated functions. See for instance this gist.
It doesn’t check for negative n and will go into an infinite loop.
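For concreteness, here is a minimal sketch of what such a type could look like (my own illustrative names, not the code from the gist; note the guard rejecting negative n):

```julia
# A flat representation of "f iterated n times": one object, one pointer to f,
# regardless of how large n is.
struct Iterated{F} <: Function
    f::F
    n::Int
    function Iterated(f, n)
        n >= 0 || throw(ArgumentError("n must be non-negative"))
        new{typeof(f)}(f, n)
    end
end

# Calling the object applies f repeatedly, n times.
(g::Iterated)(x) = foldl((y, _) -> g.f(y), 1:g.n; init = x)

Iterated(cos, 3)(1.0)  # same result as cos(cos(cos(1.0)))
```

Because the type is Iterated{typeof(cos)} no matter the count, the compiler specializes once instead of once per nesting depth.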
Since this little example annoys you so much, I’ll just delete it.
Please don’t do it. It has its good and bad sides, but it is a fine small snippet, and there are people who like it.
Please please please don’t do it! It doesn’t annoy me, I’m not trying to fight or anything like that. I just wanted to point out the efficiency aspect of it. It can be instructional to understand compile-time/run-time, creation of nested objects with complicated types, etc…
I hope you didn’t take any offence from my posts, and if so I apologize. I wish you a nice weekend
PS: Anyone interested in camouflage and digital camouflage? I find this interesting from a statistical perspective (you want to match the spectrum of the background) and the mathematical, almost fractal properties of good camouflage.
I used to work with camouflage in cephalopods. It’s a relatively complex field with a lot of interesting opinions… I haven’t published anything recently in the field, and I’m sure a lot of new stuff came along since, but you’d need to catch up on things to make sure you’re not redoing stuff that’s already been published. If you like I can put you in touch with some of the experts in the field.
It’s a cool problem that involves:
- detection versus recognition (being “invisible” versus being unrecognizable)
- what can the camouflager output in terms of (as you put it) spectrum (how varied is your display) (spatially, chromatically, temporally, etc)
- the composition of the background you’ll be viewed on (again, spatially, chromatically, temporally)
- what can the visual systems you’re hiding from resolve (yet again, spatially, chromatically, temporally)?
You can see how this becomes very complicated very fast when you consider that everything changes with time of day, day of year, behavior (being motionless, mimicking leaves moving in the wind), and more. The whole distinction between being detected and being recognized makes everything a lot more complicated too, because you can be perfectly detectable (e.g. bright flamboyant colors) but utterly unrecognizable.
I see that here the biologist kicks in and thinks about a catalogue of everything that is out there in nature… I am quite aware of that, but I want to think a bit like a physicist here and impose stronger invariance and symmetry on the problem: e.g. statistical spatial stationarity, scale invariance, or conformal invariance as above. More on Zulip.
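Purely as a toy sketch of the “match the spectrum of the background” idea (entirely my own invention, including the mismatch score; requires the FFTW package):

```julia
using FFTW, Statistics

# Stand-in images; in practice these would be a background photo
# and a candidate camouflage texture.
bg  = rand(64, 64)
tex = rand(64, 64)

# Power spectrum of a zero-mean image.
power(img) = abs2.(fft(img .- mean(img)))

# One crude notion of spectral mismatch:
# mean squared difference of log power spectra.
mismatch(a, b) = mean((log1p.(power(a)) .- log1p.(power(b))) .^ 2)

mismatch(bg, tex)  # smaller means the texture's spatial statistics match better
```

A score of exactly zero means the two images have identical power spectra; how to weight spatial frequencies by the observer’s visual acuity is precisely where the biology comes back in.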
This isn’t anything out of this world, but it showcases Julia’s ability to represent expressions and to easily perform arbitrary precision arithmetic with GMP/MPFR.
Continued fractions:
function contfrac(x::Real, n::Integer)
n < 1 && return Int[]
fpart, ipart = modf(x)
fpart == 0 ? [Int(ipart)] : [Int(ipart); contfrac(1/fpart, n-1)]
end
foldr((x, y) -> :($x + 1 / $y), contfrac(big(π), 25)) |> println
# 3 + 1 / (7 + 1 / (15 + 1 / (1 + 1 / (292 + 1 / (1 + 1 / (1 + 1 / (1 + 1 / (2 + 1 / (1 + 1 / (3 + 1 / (1 + 1 / (14 + 1 / (3 + 1 / (3 + 1 / (23 + 1 / (1 + 1 / (1 + 1 / (7 + 1 / (4 + 1 / (35 + 1 / (1 + 1 / (1 + 1 / (1 + 1 / 2)))))))))))))))))))))))
foldr((x, y) -> x + 1 // big(y), contfrac(big(π), 75))
# 13926567982265799805873939967043853503//4432964269365850039583559427823398144
If you put the resulting string
foldr((x,y) -> "$x+\\frac1{$y}", contfrac(big(π), 15)) |> println
# 3+\frac1{7+\frac1{15+...}}
in a LaTeX cell you get a nicely typeset continued fraction.
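Rendered, the truncated expression looks roughly like this (my own transcription of the generated string):

```latex
$3+\frac{1}{7+\frac{1}{15+\frac{1}{1+\dotsb}}}$
```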
Addendum
Of course in general an iterative implementation such as
function contfrac(x::Real, n::Integer)
cf = Int[]
for _ in 1:n
fpart, ipart = modf(x)
push!(cf, Int(ipart))
x = 1 / fpart
end
cf
end
would be preferred, because it reduces allocations and avoids stack overflows. This was, however, a bit too long for the seven-line restriction. Notice that in the case of continued fractions, actually overflowing the stack while performing meaningful computations requires working with quite a large precision. The default precision of 256 bits for BigFloat allows one to correctly compute only the first 76 terms of the continued fraction of π.
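To go beyond those 76 terms, one can raise the working precision; a self-contained sketch (re-stating the iterative version, with an early-exit guard added):

```julia
function contfrac(x::Real, n::Integer)
    cf = Int[]
    for _ in 1:n
        fpart, ipart = modf(x)
        push!(cf, Int(ipart))
        fpart == 0 && break    # x became exactly rational at this precision
        x = 1 / fpart
    end
    cf
end

# With 1024 bits instead of the default 256, many more terms come out correctly.
setprecision(BigFloat, 1024) do
    contfrac(big(pi), 10)
end
```

The do-block form restores the previous precision on exit, so it doesn’t leak a global setting into the rest of the session.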
Well you could also use a reduce operation
contfrac(x, n) = foldl(1:n, init=(Int[], x)) do (l, r), _
f, i = modf(r)
return [l; Int(i)], 1/f
end |> first
julia> contfrac(π, 5)
5-element Vector{Int64}:
3
7
15
1
292
This could go on one line too (sort of):
contfrac(x, n) = foldl(((l, r), _) -> let (f, i) = modf(r); ([l; Int(i)], 1/f) end, 1:n, init=(Int[], x))[1]
julia> contfrac(π, 5)
5-element Vector{Int64}:
3
7
15
1
292
Very nice! Back in the day when I needed continued fractions, I also added \struts to the TeX code, so that the numbers didn’t keep getting smaller.
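For reference, a sketch of that full-height trick (amsmath’s \cfrac achieves the same effect out of the box):

```latex
% Plain nested \frac shrinks each level; \cfrac from amsmath keeps
% every numerator and denominator at display size. A hand-rolled
% version inserts \strut into each level instead.
\[
  \pi \approx 3+\cfrac{1}{7+\cfrac{1}{15+\cfrac{1}{1+\dotsb}}}
\]
```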
Edit: It was very politely pointed out to me that it’s de rigueur in this thread to provide code. Although it’s not only Julia that lets you do this, I really like the swap semantics in Julia. The following code computes a random permutation of an input vector in six lines:
function random_permutation(a)        # shuffles a in place (Fisher–Yates)
    for m = 1:length(a)-1
        l = m + rand(0:length(a)-m)   # random index into the untouched tail a[m:end]
        a[l], a[m] = a[m], a[l]       # swap via tuple assignment
    end
end
As part of a project where I’m porting some FORTRAN (FORTRAN IV!) code to Julia, I really like that Julia is not only much more succinct but also much more general.
Dilbert comic strips do not need introduction:
using Gumbo, HTTP, FileIO, Dates, Plots; gr()
function dilbert(t::Date = today() - Day(1))
r = HTTP.get("https://dilbert.com/strip/"*string(t))
r_parsed = parsehtml(String(r.body))
webaddress = r_parsed.root[1][26].attributes["content"]
download(webaddress) |> load |> x -> display(plot(x,dpi=300, ticks=nothing))
end
dilbert(s::String) = let t = rand(Date(1990,1,1):Day(1):today() - Day(1)); dilbert(t); println(t) end
Then call: dilbert()
Or: dilbert(Date(2021,06,26))
Or better, do repeat calls to: dilbert("rand")
NB: edited random function, thanks to @Syx_Pek
I recommend the following improvement to the last line.
dilbert(s::String) = let t = rand(Date(1990,1,1):Day(1):today() - Day(1)); dilbert(t); println(t) end
Kuramoto model (ten lines)
The Kuramoto model is a mathematical model of synchronization. Consider oscillators with random periods. (In terms of the following code, the random periods are given by 2\pi/\omega[i], where the \omega[i] are generated from Normal(1, 2).) The model describes the synchronization of the oscillators:
- If the strength K of synchronization is less than a certain critical value K_c, then no synchronization will occur.
- Synchronization begins when the strength K of synchronization exceeds the critical value K_c.
The critical value K_c is determined by the probability distribution dist of the \omega[i]: K_c = 2/π/pdf(dist, mean(dist)).
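A quick numerical check of the critical coupling as it appears in the code below (the 2/π/pdf line; just a sketch, requires Distributions):

```julia
using Distributions

d = Normal(1, 2)                  # the ω[i] distribution used in the post
K_c = 2 / (π * pdf(d, mean(d)))   # critical coupling, ≈ 3.19 for Normal(1, 2)
```

Since pdf(Normal(μ, σ), μ) = 1/(σ√(2π)), this is just K_c = 2σ√(2π)/π, so wider frequency distributions need a proportionally stronger coupling to synchronize.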
The video below looks as if fireflies were synchronizing their rhythmic flashes.
a = 1.3; tmax = 25.0; using Distributions, DifferentialEquations, Plots
kuramoto!(dθ, θ, param, t) = (N = length(θ); (K, ω) = param;
for i in 1:N dθ[i] = ω[i] + K*mean(sin(θ[j] - θ[i]) for j in 1:N) end)
(m, n) = (32, 16); (d, v) = (Normal(0, 2), 1.0); K_c = 2/π/pdf(d, 0)
θ₀ = 2π*rand(m, n); tspan = (0.0, tmax); K = a*K_c; ω = rand(d, m, n) .+ v
sol = solve(ODEProblem(kuramoto!, θ₀, tspan, (K, ω)))
anim = @animate for t in [fill(0, 20); 0:0.1:tmax; fill(tmax, 20)]
plot(size=(400, 220), title="Kuramoto model K = $(a)K_c: t = $t", titlefontsize=10)
heatmap!(sin.(sol(t))'; c=:bwr, colorbar=false, frame=false, ticks=false)
end; gif(anim, "kuramoto$(a).gif")
Result
For readability, you can expand the above compressed code, in an appropriate environment, with
quote
a = 1.3; tmax = 25.0; using Distributions, DifferentialEquations, Plots
kuramoto!(dθ, θ, param, t) = (N = length(θ); (K, ω) = param;
for i in 1:N dθ[i] = ω[i] + K*mean(sin(θ[j] - θ[i]) for j in 1:N) end)
(m, n) = (32, 16); (d, v) = (Normal(0, 2), 1.0); K_c = 2/π/pdf(d, 0)
θ₀ = 2π*rand(m, n); tspan = (0.0, tmax); K = a*K_c; ω = rand(d, m, n) .+ v
sol = solve(ODEProblem(kuramoto!, θ₀, tspan, (K, ω)))
anim = @animate for t in [fill(0, 20); 0:0.1:tmax; fill(tmax, 20)]
plot(size=(400, 220), title="Kuramoto model K = $(a)K_c: t = $t", titlefontsize=10)
heatmap!(sin.(sol(t))'; c=:bwr, colorbar=false, frame=false, ticks=false)
end; gif(anim, "kuramoto$(a).gif")
end |> Base.remove_linenums! |> x -> display("text/markdown", "```julia\n$x\n```")
Expanded Code
begin
a = 1.3
tmax = 25.0
using Distributions, DifferentialEquations, Plots
kuramoto!(dθ, θ, param, t) = begin
N = length(θ)
(K, ω) = param
for i = 1:N
dθ[i] = ω[i] + K * mean((sin(θ[j] - θ[i]) for j = 1:N))
end
end
(m, n) = (32, 16)
(d, v) = (Normal(0, 2), 1.0)
K_c = (2 / π) / pdf(d, 0)
θ₀ = (2π) * rand(m, n)
tspan = (0.0, tmax)
K = a * K_c
ω = rand(d, m, n) .+ v
sol = solve(ODEProblem(kuramoto!, θ₀, tspan, (K, ω)))
anim = @animate(for t = [fill(0, 20); 0:0.1:tmax; fill(tmax, 20)]
plot(size = (400, 220), title = "Kuramoto model K = $(a)K_c: t = $(t)", titlefontsize = 10)
heatmap!((sin.(sol(t)))'; c = :bwr, colorbar = false, frame = false, ticks = false)
end)
gif(anim, "kuramoto$(a).gif")
end