# Looping through binary numbers

Hey,

In Copulas.jl, I want to compute the measure a copula assigns to a hyperbox [u,v].

Basically, the following function does what I want:

```julia
using Copulas

# Based on "Computing the Volume of n-Dimensional Copulas", Cherubini & Romagnoli 2009
function measure(C::CT, u, v) where CT <: Copulas.Copula
    # u and v should both have the same length as C;
    # u[i] < v[i] for all i is also assumed.
    for i in eachindex(u)
        if u[i] >= v[i]
            return 0.0
        end
    end

    # We need to compute the value of the cdf at each corner of the hypercube [u,v]
    eval_pt = similar(u)
    d = length(C)
    dgts = zeros(Int64, d)
    r = zero(eltype(u))

    for i in 1:2^d
        k_i = 0
        for k in 1:d
            if dgts[k] == 1
                eval_pt[k] = u[k]
            else
                eval_pt[k] = v[k]
                k_i += 1
            end
        end
        r += (-1)^k_i * Copulas.cdf(C, eval_pt)
        # increment the binary counter dgts
        for k in 1:d
            if dgts[k] == 0
                dgts[k] = 1
                break
            else
                dgts[k] = 0
            end
        end
    end
    return r
end

C = ClaytonCopula(4, 7.0)
u = zeros(4) .+ 0.25
v = ones(4) .- 0.25
measure(C, u, v)
```


But it is quite slow. The function has to loop through all binary sequences of length d, corresponding to the numbers i in the loop. I have a feeling the allocation of dgts is not needed and that the loops can be made smarter. Basically, the test `if dgts[k] == 1` could be transformed into a test directly on i and k, but I cannot find the right expression.

Thoughts?
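As a sanity check on the correspondence being sought, here is a minimal standalone sketch (using Base's `digits` to play the role of the `dgts` counter): on iteration i, entry k of the counter should equal bit k-1 of the integer i - 1.

```julia
# Check that the k-th entry of the binary counter dgts on iteration i
# equals bit k-1 of the integer i - 1.
d = 3
for i in 1:2^d
    # digits(i - 1, base = 2, pad = d) is the little-endian binary expansion
    dgts = digits(i - 1, base = 2, pad = d)
    for k in 1:d
        @assert dgts[k] == (i - 1) >> (k - 1) & 1
    end
end
println("bit test matches the digit counter for all i, k")
```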

You can just check `i & (1 << k) != 0`


I really like black magic

Edit: Does not work; it returns a negative value on my test case. This is probably the right direction, but maybe not exactly the right black spell.

`1 << k` shifts 1 left by k places, in binary. `i & (1 << k)` returns a non-zero value if the kth bit of the number is set, but counting from 0! The expression is not quite ready to drop into your code: I suspect that it should be

```julia
for k in 0:d-1
    if (i - 1) & (1 << (k - 1)) != 0
        eval_pt[k] = u[k]
    else
        eval_pt[k] = v[k]
        k_i += 1
    end
end
```


Edit: Yes, this almost worked; I just replaced a k - 1 by a k. Thanks to both of you!

```julia
using Copulas

# Based on "Computing the Volume of n-Dimensional Copulas", Cherubini & Romagnoli 2009
function measure(C::CT, u, v) where CT <: Copulas.Copula
    # Computes the value of the cdf at each corner of the hypercube [u,v]
    # to obtain the C-volume of the box.
    # This assumes u[i] < v[i] for all i.

    eval_pt = similar(u)
    d = length(C)
    r = zero(eltype(u))

    for i in 1:2^d
        parity = 0
        for k in 0:(d-1)
            if (i - 1) & (1 << k) != 0
                eval_pt[k+1] = u[k+1]
            else
                eval_pt[k+1] = v[k+1]
                parity += 1
            end
        end
        r += (-1)^parity * Copulas.cdf(C, eval_pt)
    end
    return r
end

C = ClaytonCopula(4, 7.0)
u = zeros(4) .+ 0.25
v = ones(4) .- 0.25
measure(C, u, v)

using BenchmarkTools
@btime measure($C, u, v) setup=(u = rand(4); v = rand(4))
```
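As a standalone sanity check (no Copulas dependency needed), the corrected bit test really does visit every corner of the box exactly once:

```julia
# Enumerate the corners selected by the corrected bit test for d = 3
# and check that all 2^d distinct corners show up.
u, v = [0.1, 0.2, 0.3], [0.9, 0.8, 0.7]
d = length(u)
corners = Set{Vector{Float64}}()
for i in 1:2^d
    pt = [(i - 1) & (1 << k) != 0 ? u[k+1] : v[k+1] for k in 0:(d-1)]
    push!(corners, pt)
end
@assert length(corners) == 2^d   # 8 distinct corners for d = 3
println("visited ", length(corners), " distinct corners")
```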


I wonder about efficiency though… Passing from one eval_pt to the next might not require reassigning all of its values; maybe we could do something smarter.

Note that another possibility is to use a gray code, which eliminates the inner loop (by flipping only one component at a time, giving the same set of points in a different order). Compare:

```julia
# your original ordering
function evalpts(u, v)
    eval_pt = similar(u)
    (d = length(u)) == length(v) || throw(DimensionMismatch())
    evalpts = typeof(eval_pt)[]

    for i in 1:2^d
        parity = 0
        for k in 0:(d-1)
            if (i - 1) & (1 << k) != 0
                eval_pt[k+1] = u[k+1]
            else
                eval_pt[k+1] = v[k+1]
                parity += 1
            end
        end
        push!(evalpts, copy(eval_pt))
    end
    return evalpts
end

# Gray-code ordering:
function evalpts_gray(u, v)
    eval_pt = copy(u)
    (d = length(u)) == length(v) || throw(DimensionMismatch())
    evalpts = typeof(eval_pt)[]

    # use a gray code to flip one element at a time
    graycode = 0
    which = fill(false, d) # false/true to use u/v for each component
    push!(evalpts, copy(eval_pt))
    for s = 1:(1<<d)-1
        graycode′ = s ⊻ (s >> 1)
        graycomp = trailing_zeros(graycode ⊻ graycode′) + 1
        graycode = graycode′
        eval_pt[graycomp] = (which[graycomp] = !which[graycomp]) ? v[graycomp] : u[graycomp]
        push!(evalpts, copy(eval_pt))
    end
    return evalpts
end
```


which gives:

```julia
julia> evalpts([1,2,3],[4,5,6])
8-element Vector{Vector{Int64}}:
 [4, 5, 6]
 [1, 5, 6]
 [4, 2, 6]
 [1, 2, 6]
 [4, 5, 3]
 [1, 5, 3]
 [4, 2, 3]
 [1, 2, 3]

julia> evalpts_gray([1,2,3],[4,5,6])
8-element Vector{Vector{Int64}}:
 [1, 2, 3]
 [4, 2, 3]
 [4, 5, 3]
 [1, 5, 3]
 [1, 5, 6]
 [4, 5, 6]
 [4, 2, 6]
 [1, 2, 6]
```
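A quick standalone check of the claimed property, using the same `s ⊻ (s >> 1)` construction: consecutive points in the Gray-code ordering differ in exactly one component.

```julia
# Successive Gray codes s ⊻ (s >> 1) differ in exactly one bit,
# so successive evaluation points differ in exactly one component.
d = 4
codes = [s ⊻ (s >> 1) for s in 0:2^d-1]
for s in 1:2^d-1
    @assert count_ones(codes[s] ⊻ codes[s+1]) == 1
end
println("each step flips exactly one of the ", d, " components")
```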


Thanks @stevengj, this is clearly an improvement when d becomes large. However, I have one question: graycode and graycode' seem to be two different variables, both integers, and not the transpose of one another, right?

Almost right: they are actually graycode and graycode′ (graycode\prime[TAB]), not graycode', if you look closely. (Primes, via tab-completing \prime or \pprime or \ppprime or so on, are valid identifier characters in Julia.)

(In practice, the improvement might be minimal, because probably your evaluation Copulas.cdf(C, eval_pt) has to look at all of the components anyway. You'd get a bigger improvement if you could somehow update the calculation using only the component that has changed from the previous iteration.)
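To make that last remark concrete: for a copula whose cdf factorizes, e.g. the independence copula Π(u) = prod(u), a single-component flip can update the running cdf value with one divide and one multiply instead of a full recomputation. A sketch using a toy Π (my own stand-in, not a Copulas.jl API):

```julia
# Toy independence copula: the cdf is just the product of the components.
Π(pt) = prod(pt)

u, v = [0.2, 0.3, 0.4], [0.6, 0.7, 0.8]
pt = copy(u)
val = Π(pt)                    # one full evaluation at the starting corner

# Flip component 2 from u to v, updating the product incrementally:
val = val / pt[2] * v[2]
pt[2] = v[2]

@assert val ≈ Π(pt)            # matches a full re-evaluation
```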

This \prime Unicode character is so close to the standard ' that it becomes really confusing. So these are indeed different identifiers? I suggest never doing this, and naming it graycode2 instead.

Since this is the generic version for any copula, no, we cannot use this fact to make the cdf computation update itself. But for some copulas this might be the case, so future specialized implementations are still possible via dispatch.
Maybe we will need some interface, but for the moment this is not really needed.

(If you search the Julia source code, you'll find that foo′ is quite popular and idiomatic as a name for "variant of foo", including in Julia itself.)


Gray codes are cool, but Generators are cooler, as storing the evaluation points is annoying:

```julia
u, v = [1,2,3], [4,5,6]

evalpts(u, v) = Iterators.product(zip(u, v)...)
```


Giving:

```julia
julia> evalpts(u,v) |> collect |> vec
8-element Vector{Tuple{Int64, Int64, Int64}}:
 (1, 2, 3)
 (4, 2, 3)
 (1, 5, 3)
 (4, 5, 3)
 (1, 2, 6)
 (4, 2, 6)
 (1, 5, 6)
 (4, 5, 6)
```


As for the rest of the calculation… having a look now.


In his original application @lmv doesn't store the evaluation points, or even store them as separate objects; they just call Copulas.cdf(C, eval_pt) for each point. I only stored the results in an array for illustration purposes.

Iterators.product(zip(u,v)...) is quite compact and pretty, though it might be less efficient since the length of the zip being splatted is not known until runtime, making the result type-unstable.
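If the dimension is known statically, one way to sidestep the runtime-length splat (a sketch, not part of the thread's code; `evalpts_static` is my own name) is to build the tuple of (u[k], v[k]) pairs with ntuple and a Val, so the splat length is known to the compiler:

```julia
# Build the product over statically-many (u[k], v[k]) pairs; with Val(d)
# the tuple length, and hence the splat, is known at compile time.
evalpts_static(u, v, ::Val{d}) where {d} =
    Iterators.product(ntuple(k -> (u[k], v[k]), Val(d))...)

u, v = [1, 2, 3], [4, 5, 6]
pts = collect(evalpts_static(u, v, Val(3)))
@assert length(pts) == 8
@assert first(pts) == (1, 2, 3)   # the all-u corner comes first
```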

Continuing with the Generatorification of the calculation:

```julia
evalpts(u, v) = zip(
    Iterators.map(x -> (-1)^isodd(count_ones(x)),  # parity
                  Iterators.countfrom(0)),
    Iterators.product(zip(u, v)...)                # eval point
)
```


Giving:

```julia
julia> evalpts(u,v) |> collect
8-element Vector{Tuple{Int64, Tuple{Int64, Int64, Int64}}}:
 (1, (1, 2, 3))
 (-1, (4, 2, 3))
 (-1, (1, 5, 3))
 (1, (4, 5, 3))
 (-1, (1, 2, 6))
 (1, (4, 2, 6))
 (1, (1, 5, 6))
 (-1, (4, 5, 6))
```


Parity now included!

UPDATE:
And now we can finish the calculation with:

```julia
measure(C, u, v) = sum(x -> x[1] * Copulas.cdf(C, x[2]), evalpts(u, v))
```


and for the example in the OP:

```julia
julia> measure(C, u, v)
0.33362237848930043
```


As for efficiency, less allocation should be good, but it had better be benchmarked by the OP against the original function (and compared for correctness).
It is definitely more efficient in keyboard typing, though.


This is pretty indeed. How about the type instability due to the zip, then, @stevengj? There is the possibility that the dimension d is known at compile time (it is a type parameter of many copulas right now, and could become a type parameter of the abstract type Copula itself). Would that help?

The Zip iterator is indeed type-unstable, but with SVectors (from StaticArrays.jl) it is stable.
So:

```julia
using StaticArrays
u = @SVector rand(4)
v = @SVector rand(4)
@btime measure($C, $u, $v)
```

gives:

```julia
julia> @btime measure($C, $u, $v)
  1.704 μs (0 allocations: 0 bytes)
0.0014735242727759758
```

Zero allocations! Needless to say, this is way faster.


Yes, that is what I thought. Then maybe there is the possibility to internally convert to static arrays in the library, since the size is statically known from the copula object. I'll think about it.

Thanks a lot !

Actually, it isn't so much faster (around 2x), so benchmark before deciding if this optimization is essential.
Just to recap, before it is buried in the thread:

```julia
evalpts(u, v) = zip(
    Iterators.map(x -> (-1)^isodd(count_ones(x)), Iterators.countfrom(0)),
    Iterators.product(zip(u, v)...)   # eval point
)
measure(C, u, v) = sum(x -> x[1] * Copulas.cdf(C, x[2]), evalpts(u, v))
```


is the close-to-one-line version of the OP's measure.
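As a standalone correctness spot-check of the recap, we can substitute the independence copula, whose cdf is just prod, for Copulas.cdf; its C-volume of a box [u, v] is simply prod(v .- u). (Using an even dimension, as in the OP's d = 4 example; `measure_indep` is my own name here.)

```julia
# Same recap construction, with prod playing the role of the cdf.
evalpts(u, v) = zip(
    Iterators.map(x -> (-1)^isodd(count_ones(x)), Iterators.countfrom(0)),
    Iterators.product(zip(u, v)...)
)
measure_indep(u, v) = sum(x -> x[1] * prod(x[2]), evalpts(u, v))

u, v = [0.2, 0.3], [0.7, 0.9]
@assert measure_indep(u, v) ≈ prod(v .- u)   # 0.5 * 0.6 = 0.3
```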