# How to return a self-recursive function?

I want to make a function that takes a function and vector of initial conditions as arguments.

``````function recseq(a::T, init::AbstractVector) where {T<:Function}
    return n -> begin
        if n in init
            return init[n+1]
        end
        a(n)
    end
end
``````

The idea is that `a(n)` is some recurrence relation like `a(n) = a(n-1) + a(n-2)`, and `recseq` would return a function that returns either the initial conditions or a combination of those initial conditions based on `a(n)`.

So the functionality would be something like:

``````a(n) = a(n-1) + a(n-2)
recseq(a,[0,1]).(0:10)
11-element Vector{Int64}:
0
1
1
2
3
5
8
13
21
34
55
``````

It’s not a specific answer to your question, but I wanted to try this as an exercise in using the `Val{N}` type:

``````julia> a(n) = n<2 ? a(Val(n)) : (a(n-1) + a(n-2))
a (generic function with 1 method)

julia> a(::Val{1})=1
a (generic function with 2 methods)

julia> a(::Val{0})=1
a (generic function with 3 methods)

julia> a.(0:10)
11-element Vector{Int64}:
1
1
2
3
5
8
13
21
34
55
89
``````

More concisely:

``````a(n) = n<2 ? a(Val(n)) : (a(n-1) + a(n-2))
a(::Val{n}) where n = 1
a.(0:10)
``````

Not sure whether there would be an easier approach, but recursive behavior is not too hard to obtain if you replace the closure with an explicit callable `struct` (which is what happens under the hood anyway, but by doing it yourself you gain more control).

Using a recurrence relation of the form specified in the OP seems tricky, though. In the code below, I “cheat” by defining the recurrence relation in an (IMO) more explicit way, in which the function arity corresponds to the order of the recurrence:

``````# This struct explicitly defines the state of your function object
# (previously, it was the variables that the closure captures)
struct RecSeq{N,T,U}
    a    :: U
    init :: NTuple{N,T}
end

# This defines the behavior of `RecSeq` as a callable object
# (previously, it was the closure body)
function (f::RecSeq{N})(n) where {N}
    n < N && return f.init[n+1]
    u = ntuple(Val(N)) do k # the function object is naturally named,
        f(n-k)              # which makes it easy for it to recursively
    end                     # "call" itself
    f.a(u...)
end
``````
``````julia> a(uₙ₋₁, uₙ₋₂) = uₙ₋₁ + uₙ₋₂
a (generic function with 1 method)

julia> fib = RecSeq(a, (0,1))
RecSeq{2, Int64, typeof(a)}(a, (0, 1))

julia> fib.(0:10)
11-element Vector{Int64}:
0
1
1
2
3
5
8
13
21
34
55
``````

(Of course you’d probably be much better off if you memoized the computation in this case, but I’m assuming your real use case is different…)
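As an aside, memoization is a small change on top of the callable-struct approach. The following is only a sketch of that idea, not code from the thread; `MemoRecSeq` and its `Dict` cache are hypothetical names:

```julia
# A memoizing variant of the callable struct: results are cached in a
# Dict so each index is computed only once (linear instead of
# exponential time for the Fibonacci-style recurrence).
struct MemoRecSeq{N,T,U}
    a     :: U
    init  :: NTuple{N,T}
    cache :: Dict{Int,T}
end

MemoRecSeq(a, init::NTuple{N,T}) where {N,T} =
    MemoRecSeq(a, init, Dict{Int,T}())

function (f::MemoRecSeq{N})(n) where {N}
    n < N && return f.init[n+1]
    get!(f.cache, n) do
        f.a(ntuple(k -> f(n-k), Val(N))...)
    end
end
```

With this, `MemoRecSeq((uₙ₋₁, uₙ₋₂) -> uₙ₋₁ + uₙ₋₂, (0, 1))(90)` finishes immediately, whereas the unmemoized version makes an exponential number of recursive calls.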


I totally agree that @ffevotte’s answer is the most reasonable solution… however that shouldn’t stop us from having some fun, should it?

So first of all, here is some obscure knowledge: Julia has a form of `self` that denotes the current function. The exact incantation is `var"#self#"`.
Secondly, we need the recursive definition (called `a` in your example) to call back the wrapper function defined in `recseq` that provides the initial conditions. So we need to pass that into `a`.

So here is a solution without structs, just because it is possible, not because you should use it:

``````julia> a(f, n) = f(n-1) + f(n-2)

julia> function recseq(a::T, init::AbstractVector) where {T<:Function}
           return n -> begin
               if n in init
                   return init[n+1]
               end
               a(var"#self#", n)
           end
       end

julia> recseq(a, [0, 1]).(0:10)
11-element Vector{Int64}:
0
1
1
2
3
5
8
13
21
34
55
``````

Ok, here is some extra fun: say we don’t like the black magic of `var"#self#"`. Then we can still do it by summoning a demon of functional programming called the Y combinator. Observe that the anonymous function inside `recseq` needs access to itself… so what if we just passed it to itself as an argument? Behold:

``````function recseq(a::T, init::AbstractVector) where {T<:Function}
    recfun = (self, n) -> begin
        if n in init
            return init[n+1]
        end
        a(n -> self(self, n), n)
    end
    return n -> recfun(recfun, n) # essentially the Y combinator applied to recfun
end
``````

Thank you both for these excellent solutions!
I never would have thought of @ffevotte’s solution and it took me a while to figure it out.
But @abraemer’s solution still feels more natural to me, since I’ve mostly done functional programming in JavaScript.
What’s so evil about the Y combinator or the `self` keyword for that matter?

I would like to point out that @abraemer’s solution may have problems for seeds other than `[0, 1]`.
I believe the problem could be overcome by replacing the expression `n in init` with `n < length(init)`:

``````
function recseq(I=[0,1])
    a(n, I=I) = n < length(I) ? a(Val(n)) : (a(n-1) + a(n-2))
    a(::Val{n}, I=I) where n = I[n+1]
end

recseq([2, 3]).(0:5)
recseq([5, 8]).(0:5)
recseq([1, 1]).(0:5)

recseq([1, 3]).(0:5)
recseq().(0:5)
``````

Nothing evil, if you are used to functional programming I guess. But it is probably a bit mind-boggling to the average (imperative) programmer and arguably a bit harder to understand. Also, it is likely a bit slower than the alternative with the struct, since Julia is not made for functional code (e.g. there is no tail-call optimization).

@rocco_sprmnt21 is also correct that the logic for the initial values is wrong. I just copied it from the OP.

Why would it be slower?
If I’m understanding @ffevotte’s code correctly, it calculates `f(n)` `length(init)` times just like yours does, and it does so with the same number of calls, I think. What would make it slower?

Let’s benchmark before we guess:

Code (above + some fixes and minor changes)
``````using BenchmarkTools

struct RecSeq{N,T,U}
    a    :: U
    init :: NTuple{N,T}
end

# This defines the behavior of `RecSeq` as a callable object
# (previously, it was the closure body)
function (f::RecSeq{N})(n) where {N}
    n < N && return f.init[n+1]
    u = ntuple(Val(N)) do k # the function object is naturally named,
        f(n-k)              # which makes it easy for it to recursively
    end                     # "call" itself
    f.a(u...)
end

fib_struct = RecSeq((uₙ₋₁, uₙ₋₂) -> uₙ₋₁ + uₙ₋₂, (0, 1))

function recseq_self(a::T, init) where {T<:Function}
    let N = length(init), init = init
        return n -> begin
            n < N && return init[n+1]
            return a(var"#self#", n)
        end
    end
end

fib_self = recseq_self((f, n) -> f(n-1) + f(n-2), (0, 1))

function recseq_functional(a::T, init) where {T<:Function}
    let N = length(init), init = init
        recfun = (self, n) -> begin
            n < N && return init[n+1]
            a(n -> self(self, n), n)
        end
        return n -> recfun(recfun, n) # essentially the Y combinator applied to recfun
    end
end

fib_functional = recseq_functional((f, n) -> f(n-1) + f(n-2), (0, 1))

@show fib_functional.(0:10) == fib_struct.(0:10)
@show fib_functional.(0:10) == fib_self.(0:10)

const testN = 20
println("Struct-based")
@btime $(fib_struct)($testN)
println("Self-based")
@btime $(fib_self)($testN)
println("Y-combinator-based")
@btime $(fib_functional)($testN)
``````

Output (tests with N=20):

``````Struct-based
71.939 μs (0 allocations: 0 bytes)
Self-based
42.185 μs (0 allocations: 0 bytes)
Y-combinator-based
42.395 μs (0 allocations: 0 bytes)
``````

So the `struct`-based approach actually appears to be the slowest, which I did not expect.
I would also have thought that the “Y-combinator” variant would be slower than the “self”-based solution, but the difference is only marginal. This tells us that the function calls themselves are not a major contributor to the total runtime, I think.

I want to add that this is a very simple case where Julia was able to properly infer all types. Once that fails, programs get much slower. As a rule of thumb, recursive programs are harder to infer than non-recursive ones, so for complex code a more direct style might lead to faster code due to imperfect inference. (Recent thread on inference problems in a recursive context: Failure to optimize due to type recursion.)
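To illustrate what such a direct (non-recursive) style could look like for this particular problem, here is a minimal iterative sketch for an order-2 recurrence; `recseq_iter` is a hypothetical name, not from the thread:

```julia
# Iterative evaluation of an order-2 recurrence: no recursion at all,
# so there is nothing for inference to struggle with, and each value
# is computed exactly once.
function recseq_iter(a, init::NTuple{2})
    function (n)
        n < 2 && return init[n+1]
        uₙ₋₂, uₙ₋₁ = init
        for _ in 2:n
            uₙ₋₂, uₙ₋₁ = uₙ₋₁, a(uₙ₋₁, uₙ₋₂)
        end
        return uₙ₋₁
    end
end
```

Of course this gives up the generality of the `RecSeq` approach (it is hard-wired to order 2), but it shows the trade-off: straight-line loops like this are trivially inferable and run in O(n).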
