Trying to implement Miller's recurrence algorithm and getting terrible results

I am currently trying to implement Miller's downward-recurrence algorithm for Bessel functions, but I'm getting terrible numerical results; the errors reach up to 80% of the true values. Is there an issue with my code?

function besselj(orderLim, arg::BigFloat, tol)
    N = orderLim * tol
    values = Vector{BigFloat}(undef, N+2)    
    values[N+1] = 1
    values[N+2] = 0
    for i in N+1:-1:2
        values[i-1] = (2 * i / arg) * values[i] - values[i+1]
    end 
    norm = values[1]
    for i in 2:N+2
        if (i-1) % 2 == 0
            norm += 2*values[i]
        end
    end
    norm = 1/norm
    values = values[1:orderLim+1]
    return norm.*values
end

This won’t help with accuracy, but your second loop could be written

    for i in 3:2:N+2
        norm += 2*values[i]
    end

Thanks! Changed it.

values[i-1] = (2 * i / arg) * values[i] - values[i+1]

does not account for the index offset: since `values[i]` holds the value for order `i - 1`, the recurrence coefficient must use `i - 1`, not `i`. It must be

values[i-1] = (2 * (i - 1) / arg) * values[i] - values[i+1]

Yep, that solved it! Dumbest of mistakes :laughing:, sorry to bother you, and thank you very much @Oscar_Smith and @Vasily_Pisarev
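For anyone landing here later, here is a sketch of the routine with both fixes from this thread applied. One assumption on my part: I've replaced the `orderLim * tol` starting-order calculation with an explicit offset `M` above the highest order, since choosing that offset properly is its own question (see below); `besselj_fixed` and `M` are my names, not from the original post.

```julia
# Sketch: Miller's downward recurrence for J_0(x) .. J_orderLim(x),
# with the index-offset fix and the odd-index normalization loop.
# Assumption: M is the number of extra orders above orderLim to start from.
function besselj_fixed(orderLim, arg::BigFloat, M)
    N = orderLim + M                  # starting order for the downward recurrence
    values = Vector{BigFloat}(undef, N + 2)
    values[N+1] = 1                   # arbitrary trial values; the
    values[N+2] = 0                   # normalization removes the scale
    for i in N+1:-1:2
        # values[i] holds the trial value for order i - 1, so the
        # recurrence J_{n-1} = (2n/x) J_n - J_{n+1} uses n = i - 1
        values[i-1] = (2 * (i - 1) / arg) * values[i] - values[i+1]
    end
    # Normalization identity: J_0(x) + 2*(J_2(x) + J_4(x) + ...) = 1;
    # the even orders sit at odd indices 3, 5, ...
    norm = values[1]
    for i in 3:2:N+2
        norm += 2 * values[i]
    end
    return values[1:orderLim+1] ./ norm
end
```

With `M` around 20 extra orders this reproduces the small-argument values well, e.g. J_0(1) ≈ 0.7651976866, though as discussed below the right `M` really depends on the target precision.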

I didn’t look too carefully at the code, but it’s possible to do the normalization with just one loop over the data: unroll the loop by a factor of two and add every second element to your accumulator. This is actually more accurate, because you end up summing the values from small to large.

The bigger issue is that you need to generate appropriate trial values of the ratio J_nu/J_{nu+1} first. That usually means running the downward recurrence from starter values like you did, but then actually throwing those trial values away. You need to figure out the appropriate starting order before the downward recurrence begins, which is some M above your highest order or largest argument.

The algorithm converges rather slowly for large arguments, and the catch is that the starter M is a function of your desired precision. For double precision it’s usually around 7-10, but for BigFloat it depends on the working precision. I don’t think this algorithm is all that amenable to arbitrary-precision calculation for that reason, though you can determine the factor for a fixed target precision, say Float128. And if your arguments are much larger than the order, you’ll probably want to generate starting values differently than with the trial-recurrence method.
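The small-to-large summation point can be sketched like this, assuming the same layout as the original code (`values[i]` holds the trial value for order `i - 1`, so the even orders sit at odd indices); `normalization` is a hypothetical helper name:

```julia
# Accumulate J_0 + 2*(J_2 + J_4 + ...) starting from the smallest
# (highest-index) terms, which loses less precision than summing
# large-to-small. N + 2 is the length of `values`, as in the original code.
function normalization(values::Vector{BigFloat}, N)
    s = zero(BigFloat)
    start = N + 2 - (N + 1) % 2   # highest odd index <= N + 2
    for i in start:-2:3           # every second element, tail first
        s += 2 * values[i]
    end
    return s + values[1]          # add the (largest) J_0 term last
end
```

The ordering matters because the downward recurrence makes the low-index entries enormous relative to the tail; adding the tiny tail terms first keeps them from being swallowed by rounding.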

Heeeey, you again :laughing:! First of all, every insight you give is awesome and SUPER helpful, thank you so much, seriously. I’ve found an article by R.E. Scraton on improving the trial-value calculation, which should help precision, and another by F. W. J. Olver on error analysis, so I can choose the right M instead of the nonsensical multiplication based on nothing that I was doing. I’ll try to implement both.