It really just depends on the number of partials, since complex step will repeat the primal.
I’m (100-ε)% sure that this animation can be done better, but I have to go to bed now:
j=0; i=0; d=2; while true; sleep(0.05); global i = i + 0.02; global d; global j = j + 1; if floor(i)%d==1 print("\e[4D\e[K") end; if floor(i)%d==0 print("\e[K") end; print(Char(128512 + j%128)); if i%10==0 d = rand(2:5) end; end
Tomorrow I will write ForwardDiff as a one-liner . . . not
My favorite:
julia> fib(n) = (BigInt.([1 1; 1 0])^(n-1))[1]
fib (generic function with 1 method)
julia> fib.([1:10;])
10-element Array{BigInt,1}:
1
1
2
3
5
8
13
21
34
55
julia> fib(100)
354224848179261915075
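The one-liner works because of the matrix identity [1 1; 1 0]^n = [F(n+1) F(n); F(n) F(n-1)], so the top-left entry of the (n-1)-th power is F(n). A quick sanity check (a sketch, with the `n-1` exponent restored):

```julia
# Fibonacci via matrix powers: the top-left entry of
# [1 1; 1 0]^(n-1) is F(n); BigInt avoids overflow for large n.
fib(n) = (BigInt.([1 1; 1 0])^(n-1))[1]

@assert fib.(1:10) == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
@assert fib(100) == big"354224848179261915075"
```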
How about
rbinomial( p, size... ) = rand( size... ) .< p
?
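As a sketch of why this works: `rand(size...)` fills an array with uniform draws in [0, 1), and each draw falls below `p` with probability exactly `p`, so the comparison yields independent Bernoulli(p) trials:

```julia
# Each uniform draw in [0, 1) is < p with probability p,
# so this returns an array of independent Bernoulli(p) Bools.
rbinomial(p, size...) = rand(size...) .< p

flips = rbinomial(0.5, 1000)   # 1000 fair-coin flips
@assert eltype(flips) == Bool
@assert all(rbinomial(0.0, 100) .== false)   # p = 0: never true
@assert all(rbinomial(1.0, 100) .== true)    # p = 1: always true
```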
You mean like this one liner?
my_exp(x; n=1e10) = (1 + 1/n)^(n*x)
One of the first packages I made is Fatou.jl, which is intended to provide a one-liner interface for fractals, but I haven’t seen anyone post their own Fatou one-liner fractals anywhere yet.
Precisely! This is a nice example, as it exhibits all the problems I mentioned above:

it is absolutely unnecessary, as we have exp,
it is about ~~1000x~~ slower (this is incorrect, thanks @DNF):
julia> x = 12
12
julia> @btime exp($x);
  0.027 ns (0 allocations: 0 bytes)
julia> @btime my_exp($x);
  19.922 ns (0 allocations: 0 bytes)

it is of course inaccurate:
julia> my_exp(x) / exp(x)
1.0000009922849442
Actually, I’m not sure about that. I don’t know if I’m using this correctly, but I believe you may have to force the computation to happen during runtime, instead of letting the compiler figure out the answer:
julia> @btime exp(Ref($x)[]);
28.093 ns (0 allocations: 0 bytes)
julia> @btime my_exp(Ref($x)[]);
23.459 ns (0 allocations: 0 bytes)
I don’t think I did that quite right, but if you look at the broadcasted version:
julia> x = 100 * rand(1000);
julia> @btime exp.($x);
9.457 μs (1 allocation: 7.94 KiB)
julia> @btime my_exp.($x);
22.152 μs (1 allocation: 7.94 KiB)
it’s a lot closer than 1000x.
Is the benchmarking Ref trick documented anywhere? I couldn’t find it.
Good point. Sub-ns timings should have been a warning. Cf.
https://github.com/JuliaCI/BenchmarkTools.jl/issues/130
Yeah, that’s a good idea. But I frequently see this Ref($x)[] (or something similar) trick. Could that be automated? Or maybe it should just be explained in the manual? I cannot find it there.
I am glad that you liked my example! Hopefully I won’t derail the conversation again, but I believe that we have stumbled onto something simultaneously profound and silly.
A lot of people can be seduced by elegance. Here we see an example where one line of code can be used to approximate one of the most important/beautiful functions in mathematics. The approximation directly uses the definition
so the approximation is highly elegant. Furthermore, the approximation can get you 6 digits of accuracy in mere nanoseconds! (As long as you are using a fast language like Julia.)
Although, as you pointed out, the elegance conceals just how impractical the one-liner is. I would not want to see anybody using this in production code.
Also, as a side note, I have realized that a slightly better approximation exists, which would also work as a one-liner.
AFAIK Padé approximants (with some judicious scaling and squaring, if necessary) dominate this, but I may not remember correctly. As far as I am aware, actual implementations are table-driven, e.g.
Tang, Ping-Tak Peter. “Table-driven implementation of the exponential function in IEEE floating-point arithmetic.” ACM Transactions on Mathematical Software (TOMS) 15.2 (1989): 144–157.
@chakravala that is awesome!
Bound an angle to be between 0 and 2pi (or tau; hey, can we get tau as a constant?):
Wrap(x) = ( (x % 2pi) + 2pi ) % 2pi
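A quick sketch comparing this against Base’s `mod2pi` (which comes up later in the thread and is generally the more accurate choice near multiples of 2pi):

```julia
# Wrap any angle into [0, 2pi): `%` is `rem` in Julia and keeps the
# sign of its argument, so add 2pi and reduce once more.
Wrap(x) = ((x % 2pi) + 2pi) % 2pi

@assert Wrap(-pi/2) ≈ 3pi/2       # negative angles wrap forward
@assert Wrap(5pi) ≈ pi            # large angles wrap down
@assert Wrap(1.0) ≈ mod2pi(1.0)   # agrees with Base for typical inputs
```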
Hamilton Product
HamiltonProduct(a, b) = [ a[1]*b[1] - a[2]*b[2] - a[3]*b[3] - a[4]*b[4],
                          a[1]*b[2] + a[2]*b[1] + a[3]*b[4] - a[4]*b[3],
                          a[1]*b[3] - a[2]*b[4] + a[3]*b[1] + a[4]*b[2],
                          a[1]*b[4] + a[2]*b[3] - a[3]*b[2] + a[4]*b[1] ]
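With the signs restored, a minimal sanity check: treating quaternions as `[w, x, y, z]`, the product should satisfy `i*j == k` and be norm-multiplicative (`|ab| == |a||b|`):

```julia
using LinearAlgebra  # for norm

# Quaternion (Hamilton) product on [w, x, y, z] vectors.
HamiltonProduct(a, b) = [ a[1]*b[1] - a[2]*b[2] - a[3]*b[3] - a[4]*b[4],
                          a[1]*b[2] + a[2]*b[1] + a[3]*b[4] - a[4]*b[3],
                          a[1]*b[3] - a[2]*b[4] + a[3]*b[1] + a[4]*b[2],
                          a[1]*b[4] + a[2]*b[3] - a[3]*b[2] + a[4]*b[1] ]

i, j, k = [0,1,0,0], [0,0,1,0], [0,0,0,1]
@assert HamiltonProduct(i, j) == k            # i * j == k
a, b = randn(4), randn(4)
@assert norm(HamiltonProduct(a, b)) ≈ norm(a) * norm(b)
```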
In fact, I have a PR that does just that, but I don’t think it’ll be accepted. https://github.com/JuliaCI/BenchmarkTools.jl/pull/140
For those curious, the correct thing to do is generally $(Ref(x))[] instead of Ref($x)[], since you don’t want to accidentally measure the time it takes to apply Ref to x.
I think I’ll just continue to define tau myself rather than import this package. Maybe there’s some elaborate argument why it should be “tau ~ 2pi”, but for my uses, it’s just 2pi.
tau = 2pi
Cool to know about mod2pi. I didn’t know it was in the base library, because I had no idea there was a name for this function. The code for it is, well, interesting: https://github.com/JuliaLang/julia/blob/c6da87ff4bc7a855e217856757ad3413cf6d1f79/base/math.jl#L801-L987
Guess this is why I’d never call myself a software engineer!
Here’s another reason why I love Julia:
"""Convenience function for performing Redheffer Star Products."""
⋆( A, B ) = RedhefferStarProduct( A, B )
The above function won’t run on your machine, but you can define Unicode operators for shorthand! So if I wanna Redheffer-star some stuff:
F=A⋆B⋆C⋆D⋆E
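A minimal sketch of the syntax only: `RedhefferStarProduct` isn’t defined in this thread, so here `⋆` is bound to plain matrix multiplication purely to show that a Unicode symbol parses as an infix operator:

```julia
# ⋆ (\star) is one of Julia's parseable infix operators, so any
# two-argument function bound to it can be written A ⋆ B.
# Stand-in definition for illustration only; the real one would
# call RedhefferStarProduct.
⋆(A, B) = A * B

A, B, C = rand(2, 2), rand(2, 2), rand(2, 2)
F = A ⋆ B ⋆ C           # chains left-to-right, like *
@assert F ≈ A * B * C
```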
Here is a useful (maybe not so fun) one-liner that I use everywhere. Just put the following line after module MyPackage so that ?MyPackage shows the README:
@doc read(joinpath(dirname(@__DIR__), "README.md"), String) MyPackage
(It’s based on code I saw somewhere. Maybe it was @mbauman?)
There is also a multi-line version if you want to run doctests on the README:
@doc let path = joinpath(dirname(@__DIR__), "README.md")
include_dependency(path)
replace(read(path, String), "```julia" => "```jldoctest")
end MyPackage
Simulate the frog problem shown recently on standupmaths with one line:
frog(n,cur_lily,steps) = (cur_lily == n) ? steps : frog(n,rand(cur_lily+1:n),steps+1)
and estimate the mean number of steps needed to jump a 10-step-wide river with another:
mean(frog(10,0,0) for _ in 1:1_000_000)
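For the 10-step river, the exact expected number of steps is the harmonic number H₁₀ = 1 + 1/2 + … + 1/10 ≈ 2.929, so the Monte Carlo estimate should land near that. A runnable sketch (tolerance chosen loosely):

```julia
using Statistics  # for mean

# One crossing: from lily `cur_lily`, jump uniformly to any lily ahead;
# recursion counts the jumps until the far bank (lily n) is reached.
frog(n, cur_lily, steps) = cur_lily == n ? steps : frog(n, rand(cur_lily+1:n), steps+1)

est = mean(frog(10, 0, 0) for _ in 1:1_000_000)
H10 = sum(1/k for k in 1:10)      # exact expectation, ≈ 2.929
@assert abs(est - H10) < 0.05
```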
a = [1:3;]  # equal to collect(1:3)
Single-line grep:
grep(file, pattern) = collect(Iterators.filter(x -> occursin(pattern, x), eachline(file)))
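A quick sketch of using it with a throwaway temp file; note `pattern` can be a plain string or a `Regex`, since `occursin` accepts both:

```julia
# Lazily filter the lines of a file by a pattern, collecting the
# matches into a Vector{String}.
grep(file, pattern) = collect(Iterators.filter(x -> occursin(pattern, x), eachline(file)))

path, io = mktemp()
println(io, "alpha"); println(io, "beta"); println(io, "alphabet"); close(io)

@assert grep(path, "alpha") == ["alpha", "alphabet"]   # substring match
@assert grep(path, r"^beta$") == ["beta"]              # regex match
```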