 # Opportunity to write some idiomatic Julia code

I want to get into Haskell - would you like to make a Haskell-attempt on this?

Also, what does the `Some` in `Some("fumble")` do?

Damn, how do I blockquote correctly here (with the person I reply to included)?

I’ll have a look, but I am currently overloaded with other stuff.

`Some` resembles an “option type”. Such types are quite common in functional languages to model data which might or might not be there/returned. This way you can write fluid, type-safe code (chaining functions into each other) without any explicit error-checking layers.
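For concreteness, a minimal sketch of how `Some` and its companion `something` behave in Julia (the values here are just illustrative):

```julia
# `Some` wraps a value so that a "present" result can be told apart from
# `nothing` ("no result"), even when the wrapped value is itself `nothing`.
wrapped = Some("fumble")

# `something` unwraps `Some` values and skips past `nothing`:
something(wrapped)              # "fumble"
something(nothing, "default")   # "default" (first non-nothing argument wins)
something(Some(nothing))        # nothing -- a *present* nothing, kept as-is
```

The distinction matters exactly when `nothing` is a legitimate value: `Some(nothing)` says “I found a value, and it is `nothing`”, while a bare `nothing` says “I found no value”.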


I implemented the probabilistic version, including the modifier being a geometric distribution, here: Opportunity to write some idiomatic Julia code . The question of whether a “fumble” or “critical” happens is formulated as the chance of flipping two heads in a row with a biased coin (1/3 for heads).

Since then, I reworked it to be a little smaller here: Opportunity to write some idiomatic Julia code

Using an Absorbing Markov Chain as its own type might be far more idiomatic, though.


Just select the text you want to quote, and click the “Quote” button that pops up. See the “quoting” section in these discourse tips.


Merged. Thanks.

I am impressed by the Julia community here. Thanks for the learning opportunity.


I am not sure; that’s just brute-forcing the problem. I did not find a derivation of the formula you use here, and currently I don’t have the bandwidth to play around with combinatorics (much as I love to), but if there is a neat closed form, then the whole thing can be done like this:

1. First draw: branch on 1, 6, or anything else.
2. 1 and 6 are symmetric, so only one of them needs to be handled. The number of additional draws n follows a geometric distribution with p = 1/2.
3. If there is a closed-form expression for critical/fumble conditional on n, just generate a flag with that probability.
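The three steps above can be sketched as follows. This is only an illustration, not the thread’s actual solution: the d6, the flag convention, and the counting convention for the geometric draw are assumptions, and `critical_given_n` is a placeholder for the hypothetical closed-form P(critical | n additional draws).

```julia
using Random

function sample_flag(critical_given_n; rng = Random.default_rng())
    first = rand(rng, 1:6)                 # step 1: branch on the first draw
    first == 1 || first == 6 || return false
    n = 0                                  # step 2: extra draws ~ Geometric(1/2),
    while rand(rng, Bool)                  #         counted as "successes" before
        n += 1                             #         the first failure
    end
    rand(rng) < critical_given_n(n)        # step 3: flag from the closed form
end
```

As a sanity check, with `critical_given_n = n -> 1.0` the flag fires exactly when the first draw is a 1 or a 6, i.e. with probability 1/3.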

In any case, this is just an interesting math exercise, and should not be considered a solution to the original one — it is my understanding that it is meant to display various languages coding the same algorithm.


Yeah, that’s pretty much what my version does. I would like to derive that expression, though, so that I can fully trust it (maybe that’s something I’ll get to today).

And that’s a fair point about the deviation from the main algorithm. I’m okay with it not being a “real” solution because it was an interesting thing to code, and some fun math. And I think it works out to be probabilistically the same.

I think the following should work: take a coin with probability p for H and 1 - p for T. Let

1. A_n denote the probability of having seen two consecutive H after n draws,
2. B_n denote the probability of not having seen two consecutive H after n draws, but the last draw being H.

Then you can calculate these recursively as

$$A_n = A_{n-1} + B_{n-1} \cdot p$$
$$B_n = (1 - A_{n-1} - B_{n-1}) \cdot p$$

The first one follows from drawing H after a sequence ending with H, the second just follows from the coin being IID. The base case is A_1 = 0, B_1 = p.

One can either iterate this (still orders of magnitude cheaper than MC, and the calculation can be done exactly with `Rational{BigInt}`), or obtain a closed form using standard techniques. Something like

```julia
function critical(p, n)
    A, B = zero(p), p
    for i in 2:n
        @show A, B
        A, B = A + p * B, (1 - A - B) * p
    end
    A
end
```

```julia
julia> critical(1//2, 20)
(A, B) = (0//1, 1//2)
(A, B) = (1//4, 1//4)
(A, B) = (3//8, 1//4)
(A, B) = (1//2, 3//16)
(A, B) = (19//32, 5//32)
(A, B) = (43//64, 1//8)
(A, B) = (47//64, 13//128)
(A, B) = (201//256, 21//256)
(A, B) = (423//512, 17//256)
(A, B) = (55//64, 55//1024)
(A, B) = (1815//2048, 89//2048)
(A, B) = (3719//4096, 9//256)
(A, B) = (3791//4096, 233//8192)
(A, B) = (15397//16384, 377//16384)
(A, B) = (31171//32768, 305//16384)
(A, B) = (7869//8192, 987//65536)
(A, B) = (126891//131072, 1597//131072)
(A, B) = (255379//262144, 323//32768)
(A, B) = (256671//262144, 4181//524288)
1030865//1048576
```


(This may have bugs or analytical mistakes; I didn’t spend a lot of time on it.)
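As a check on the p = 1/2 output above: for a fair coin, the count of HH-free H/T strings is a Fibonacci number, which yields a closed form (a known combinatorial identity; worth double-checking before relying on it):

```latex
% The number of length-n H/T strings containing no HH is F_{n+2}
% (Fibonacci numbers with F_1 = F_2 = 1), so for p = 1/2:
A_n = 1 - \frac{F_{n+2}}{2^n}
% e.g. n = 20: F_{22} = 17711 gives
% A_{20} = 1 - 17711/2^{20} = 1030865/1048576,
% matching critical(1//2, 20) above.
```

Note how the B numerators in the `@show` output (1, 1, 3, 5, ..., 89, ..., 4181) are themselves Fibonacci numbers up to powers of two, which is what suggests this form.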