The AI-generated code wasn't bad, but it insisted on a nested function:
```julia
function fib(n::Int64)::Int64
    function fib_helper(n::Int64, a::Int64, b::Int64)::Int64
```
I tried simply moving the nested function out, and the allocations went away; that version is the fastest yet (30% faster than the other Julia version, if I recall correctly; I found this out right away but forgot to post until now).
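For reference, a minimal sketch of what the moved-out version looks like; the helper body is my reconstruction of the usual accumulator scheme, not the verbatim AI output:

```julia
# Top-level helper instead of a nested one; with this arrangement the
# allocations disappeared. Body reconstructed, not verbatim.
function fib_helper(n::Int64, a::Int64, b::Int64)::Int64
    n == 0 ? a : fib_helper(n - 1, b, a + b)
end

fib(n::Int64)::Int64 = fib_helper(n, 0, 1)
```

Since the recursive call is in tail position, LLVM will usually turn it into a loop, so there should be no stack growth either.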
I wouldn't be surprised if this code matches the Mojo version, or even beats it. You could say that's unfair since it's a different algorithm, but I'm not so sure about that: the matrix version for sure is a different algorithm, while this seems like an optimization a "sufficiently clever" optimizer could do.
I usually wouldn’t expect the optimizer to do such transformation, i.e. add the helper function, nested or not, but it could, and if not if someone can be nerd-sniped to add a compiler step, that calls OpenAI API (already possible with a Julia package), that asks for “give me the verbatim version back of the following function, in case you can’t improve it, but otherwise a TCO version, which is allowed to have non-nested helper function”, then that would be great!
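To be concrete, a sketch of what such a step could look like with the OpenAI.jl package; I'm assuming its `create_chat` call and the usual response layout, and the function name, model choice, and exact prompt wording are all mine:

```julia
using OpenAI

# Hypothetical "compiler step": hand a function's source to an LLM and ask
# for a TCO rewrite, falling back to the verbatim input. llm_tco_rewrite and
# the prompt are illustrative, not part of any existing package.
function llm_tco_rewrite(src::AbstractString; model::AbstractString = "gpt-4o")
    prompt = "Give me the verbatim version back of the following function, " *
             "in case you can't improve it, but otherwise a TCO version, " *
             "which is allowed to have a non-nested helper function:\n\n" * src
    r = create_chat(ENV["OPENAI_API_KEY"], model,
                    [Dict("role" => "user", "content" => prompt)])
    return String(r.response[:choices][begin][:message][:content])
end

# One would then Meta.parse/eval the returned source, with due skepticism.
```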
It's not immediately obvious to me why having the helper nested means allocations; maybe that can be improved in Julia. The two versions are semantically equivalent otherwise, except that the name fib_helper gets added to the global scope, which is non-ideal but not too bad. I at least think that won't happen with a nested function (perhaps the AI's point?). [An anonymous function can't be used, I think, since the helper is recursive.]
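My guess at the "why": a recursive inner function has to capture its own binding, and Julia currently boxes that capture (the long-standing captured-variables performance issue), which shows up as `Core.Box` in `@code_warntype` and as heap allocations. A sketch of how to see it; exact output depends on the Julia version:

```julia
using InteractiveUtils  # for @code_warntype outside the REPL

function fib_nested(n::Int64)::Int64
    # Self-recursive inner function: it captures its own binding,
    # which Julia boxes, hence the allocations.
    function fib_helper(n::Int64, a::Int64, b::Int64)::Int64
        n == 0 ? a : fib_helper(n - 1, b, a + b)
    end
    return fib_helper(n, 0, 1)
end

@code_warntype fib_nested(30)  # look for fib_helper::Core.Box
fib_nested(30)                 # warm up, so @allocated excludes compilation
@allocated fib_nested(30)      # nonzero here; zero for the top-level version
```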