Why is there type instability in this function?

I have the following toy function implementing a backward difference formula (homework):

function newton_iteration{T}(x0::T, F, J; tol=2*eps(), testfun=norm)::T
    x = x0 - J(x0) \ F(x0)
    while testfun(F(x)) > tol
        x -= J(x) \ F(x)
    end
    return x
end

function backdiff2(x0, t0, tf, n::Integer, F, J)
    h = (tf - t0) / n
    solution_vector = Vector{typeof(x0)}(n+1)
    solution_vector[1] = x0

    # First step uses Euler.
    solution_vector[2] = x0 .+ h.*F(x0, t0)

    # The rest of the steps use Newton's method to solve the
    # implicit formula.
    for i = 3:n+1
        t = t0 + (i-1)*h
        # f defines the mapping whose root is the next value of x.
        f = x -> x .- (4/3).*solution_vector[i-1] .+
            (1/3).*solution_vector[i-2] .- (2*h/3).*F(x, t)
        # j defines the Jacobian of the above:
        j = x -> eye(length(x)) .+ (2*h/3).*J(x, t)
        solution_vector[i] = newton_iteration(solution_vector[i-1], f, j)
    end
    return solution_vector
end

When inspected with @code_warntype, it seems the loop index i is being boxed, and wherever the uncertainty about i comes from, it propagates to t and to the indexing operations that use i (if I'm reading the output correctly). No annotation I tried got rid of the instability. What is the cause?

Sample call that generates the above described output:

using FixedSizeArrays
F(z, t) = Vec(z[2], -z[1])
J(z, t) = Mat{2,2,Float64}((0, 1), (-1, 0))
@code_warntype backdiff2(Vec(1.0, 0.0), 0., 9*pi/4, 100, F, J)

Hopefully no typos in that… I’ll check again when I’m not on the bus.

Please provide the calling code and the @code_warntype expression you are running.

i should be type-stable, since it's just an integer and nothing reassigns it. It may show up as boxed in the @code_warntype output without there being an actual instability. Could you check a little lower and see whether there's a real instability?

Also, could you post a full runnable example? Complete with the functions you’re calling this on?


You can make this a little better using the UniformScaling operator I:

j = x -> I + (2*h/3).*J(x,t)

This gets rid of a length(x) × length(x) identity-matrix allocation on every iteration, which can be quite large!

There's usually a transformation people apply here to avoid the N×N operation of multiplying J by a constant too, if you're looking for speed: divide the whole algorithm by the constant and adjust the other coefficients to match.
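To illustrate the allocation difference, here is a small sketch on current Julia (where I lives in LinearAlgebra); the matrix A and step size h are made-up stand-ins for J(x, t) and the real step:

```julia
using LinearAlgebra  # exports the UniformScaling object I

A = [0.0 1.0; -1.0 0.0]   # stand-in for J(x, t)
h = 0.1

# eye-style: materializes a full identity matrix before adding
M1 = Matrix{Float64}(I, 2, 2) + (2h/3) * A

# UniformScaling: the identity is never allocated; + just shifts
# the diagonal of the right-hand matrix
M2 = I + (2h/3) * A

M1 == M2  # true
```

The two results are identical, but the UniformScaling version skips building the identity matrix entirely.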

I added an example to my post. I think it's identical to the code in which I observed the behavior.

I added the minimal usage example I was using before.

Thanks for the performance tip - I’m not really worried about performance since it’s just a numerical verification of an order of accuracy proof. I’m always filing things away for later, though.

Very likely caused by
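A common cause of boxing like this in Julia is a variable that is both reassigned and captured by a closure (see "performance of captured variables" in the Julia performance tips). A minimal sketch with illustrative names; the usual workaround is a let block that gives the closure its own binding that is never reassigned:

```julia
# t is declared outside the loop, reassigned inside it, and captured
# by the closures: they share one (boxed) binding and all observe
# t's final value.
function boxed()
    t = 0.0
    fs = Function[]
    for i in 1:3
        t = i * 0.5
        push!(fs, x -> x + t)
    end
    return fs[1](1.0)   # 1.0 + 1.5 == 2.5
end

# let t = outer introduces a fresh binding per iteration that is
# never reassigned, so each closure captures a plain Float64
# instead of a Box.
function unboxed()
    fs = Function[]
    for i in 1:3
        outer = i * 0.5
        let t = outer
            push!(fs, x -> x + t)
        end
    end
    return fs[1](1.0)   # 1.0 + 0.5 == 1.5
end
```

In the code above, the closures f and j capture loop-scoped variables in exactly this pattern, which matches the boxing the asker sees on i and t.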
