How can I implement this summation using `reduce`, `sum`, `mapreduce`, array comprehensions and generators?
Here is a solution with `sum` and a generator:
```julia
julia> my_log(x, N) = 2*sum(((x-1)/(x+1))^(2n-1) / (2n-1) for n in 1:N)
my_log (generic function with 1 method)

julia> my_log(42., 100)
3.737662635661749

julia> log(42.)
3.7376696182833684
```
EDIT: I should mention that, like @tbeason, I don’t think you should use such techniques if you really want to compute ln(x). This is only meant to serve as an example of how to sum a series of terms using a generator.
It really depends on the purpose… My first answer would be to use `log(x)`, because you can see it is equivalent!
I’m also not certain why you listed all of those methods. If you want to implement the sum, just do it: write exactly what you see in the equation, but use a `for` loop and a temporary variable to accumulate the sum. Note that this naive method is not a good approach to solving the actual problem: 1) you can’t go to infinity, and 2) it suffers from some computational problems.
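As a minimal sketch of the loop-and-accumulator idea (illustrative names, not from the thread), one common way to deal with point 1 is to truncate the series once the next term drops below a tolerance instead of fixing N in advance:

```julia
# Sketch: accumulate terms of 2 * Σ r^(2n-1)/(2n-1) with r = (x-1)/(x+1),
# stopping when the next term is negligible rather than summing a fixed N terms.
# Assumes x > 0 so that |r| < 1 and the series converges.
function my_log_tol(x; tol = 1e-16)
    r = (x - 1) / (x + 1)
    term, n, acc = r, 1, 0.0     # term holds r^n for the current odd n
    while abs(term) / n > tol
        acc += term / n
        term *= r^2              # advance to the next odd power of r
        n += 2
    end
    return 2 * acc
end
```

This still suffers from the computational problems mentioned above (slow convergence for large x, rounding accumulation), so it is only meant to illustrate the structure of the loop.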
The actual implementation of functions like `log(x)` is rather complex, and there are multiple ways to do it depending on whether you want extreme precision or extreme speed.
The reason why I need all the methods is to compare their speed and memory allocation. Thanks!
In that case, here is a comparison between a few implementations:
```julia
julia> using BenchmarkTools

# sum + comprehension -> allocates
julia> log0(x, N) = 2*sum([((x-1)/(x+1))^(2n-1) / (2n-1) for n in 1:N]);

julia> @btime log0(42, 1000)
  22.461 μs (1 allocation: 7.94 KiB)
3.737669618283369

# sum + generator
julia> log1(x, N) = 2*sum(((x-1)/(x+1))^(2n-1) / (2n-1) for n in 1:N);

julia> @btime log1(42, 1000)
  23.045 μs (0 allocations: 0 bytes)
3.737669618283366

# reduce instead of sum
julia> log2(x, N) = 2*reduce(+, ((x-1)/(x+1))^(2n-1) / (2n-1) for n in 1:N);

julia> @btime log2(42, 1000)
  23.054 μs (0 allocations: 0 bytes)
3.737669618283366

# mapreduce instead of generator
julia> log3(x, N) = 2*mapreduce(+, 1:N) do n
           ((x-1)/(x+1))^(2n-1) / (2n-1)
       end;

julia> @btime log3(42, 1000)
  21.932 μs (0 allocations: 0 bytes)
3.7376696182833693

# Transducers
julia> using Transducers

julia> log4(x, N) = 2*foldl(+, Map(n -> ((x-1)/(x+1))^(2n-1) / (2n-1)), 1:N);

julia> @btime log4(42, 1000)
  22.244 μs (0 allocations: 0 bytes)
3.737669618283366

# plain loop
julia> log5(x, N) = let acc = 0.
           @simd for n in 1:N
               acc += ((x-1)/(x+1))^(2n-1) / (2n-1)
           end
           2*acc
       end;

julia> @btime log5(42, 1000)
  21.609 μs (0 allocations: 0 bytes)
3.737669618283369
```
- Apart from the one using an array comprehension, none of these implementations allocate.
- The result is not always exactly the same, because of floating-point issues: the implementations don't all perform the computations in the same order.
- Performance-wise, all implementations are in the same ballpark, with a slight advantage for a plain `mapreduce` on my machine.
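To illustrate the second point with a standalone example (not from the benchmarks above): floating-point addition is not associative, so summing the same terms in a different order can change the last few digits.

```julia
# The same three values, grouped differently:
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
a == b                  # false
```

This is why the various summation strategies above agree only to about 15 significant digits.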
A central trick is to avoid recomputing `((x-1)/(x+1))^(2n-1)` from scratch, which becomes more expensive for each iteration. You can avoid this with
```julia
function mylog(x, N)
    x1 = (x - 1) / (x + 1)
    x2 = x1^2
    s, n = x1, 1
    for i in 1:N-1
        x1 *= x2
        n += 2
        s += x1 / n
    end
    return 2*s
end
```
```julia
julia> @btime log5(42, 1000)
  24.793 μs (0 allocations: 0 bytes)
3.737669618283369

julia> @btime mylog(42, 1000)
  4.014 μs (0 allocations: 0 bytes)
3.737669618283366
```
How does this compare to the built-in `log` function? When I try to time it, I get a time << 1 ns.
Edit: I had an error both in my timing and my explanation for it, see posts below for the correct timings / explanations.
The correct way to benchmark that is like this:
```julia
julia> using BenchmarkTools

julia> x = 42.0
42.0

julia> @btime log($(Ref(x))[])
  4.791 ns (0 allocations: 0 bytes)
3.7376696182833684
```
It looked too good to be true, thanks!
Anyhow, the correct timing for `log(x)` is still a factor of 1000 faster than the manual sum implementation.
Is this a homework problem?
```julia
julia> x = 42.;

julia> @btime log($x)
  7.306 ns (0 allocations: 0 bytes)
3.7376696182833684
```
which gives the same results. For comparison:
```julia
julia> @btime log($(Ref(x))[])
  7.448 ns (0 allocations: 0 bytes)
3.7376696182833684
```
It’s a bit more complicated than that, since the logarithm does not depend on the exponent only: the mantissa also plays a role. However, mathematical libraries typically have optimized (piecewise) approximations for the logarithm, based on low-order polynomial or rational fractions. This helps keep the number of operations low, while guaranteeing tight error bounds.
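As a rough illustration of the mantissa/exponent decomposition (my own sketch, not how an actual math library does it): `frexp` splits `x` into `m * 2^e` with `0.5 ≤ m < 1`, so `log(x) = log(m) + e*log(2)`, and the series from earlier in the thread converges quickly because `m` is close to 1.

```julia
# Illustrative sketch only: real libm implementations use tuned minimax
# polynomial/rational approximations with guaranteed error bounds, not this series.
function log_frexp(x, N)
    m, e = frexp(x)                            # x == m * 2^e, with 0.5 <= m < 1
    r = (m - 1) / (m + 1)                      # small, since m is near 1
    s = sum(r^(2n-1) / (2n-1) for n in 1:N)    # the same series as above
    return 2s + e * log(2)                     # log(x) = log(m) + e*log(2)
end
```

Because `|r| < 0.18` here, a handful of terms already gives full double precision, unlike the direct series for large `x`.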
In the case of the log, we can see that the number of arithmetic operations performed is really not that high: around 20 flops for one `log` evaluation (plus some integer operations, which are not counted here):
```julia
julia> using GFlops

julia> @count_ops log(42.)
Flop Counter:
 add32: 0
 sub32: 0
 mul32: 0
 div32: 0
 add64: 7
 sub64: 2
 mul64: 7
 div64: 1
 sqrt32: 0
 sqrt64: 0
```
Thanks for the explanation!