Using PreallocationTools with LinearSolve

I’m trying to solve a large 2D PDE using this example as a template. I’m effectively solving some simplified (though still nonlinear) pseudo-Boussinesq Navier-Stokes equations where I evolve u and temperature and solve for w using \frac{\partial u}{\partial x}+\frac{\partial w}{\partial z}=0. This becomes a system of equations that I have to solve for w at each time step (with my problem geometry I can assume variations in w are small, so the divergence of u should be the only thing driving w).
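
(Schematically, after discretizing on the grid this is a linear system of the form A_w w = -D_x u, where D_x is just a stand-in here for whatever horizontal-derivative operator is used; that A_w is the matrix that shows up in the code below.)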

I went through many iterations before realizing that the minimal non-working example is actually very straightforward. The problem below is trivialized for simplicity, but it highlights the line that is giving me the error.

using DifferentialEquations, LinearAlgebra, SparseArrays, LinearSolve, PreallocationTools

function ddt(dy, y, p, t)
    Nz,Nx,A_w,linsolve,w,divu = p

    ### I'm not certain I understand how get_tmp works, but I know it doesn't work without it.
    divu = get_tmp(divu,y)
    w = get_tmp(w,y)
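    ### (get_tmp returns the plain preallocated buffer on an ordinary call, and a
    ### Dual-compatible buffer when y carries ForwardDiff dual numbers, which is
    ### why it's needed for the AD Jacobian pass.)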
    

    ### Ultimately, I would be computing "divu" before and then doing something with "w"
    
    ### This approach works when I comment out the line that makes "A_w_" sparse.
    # w[2:end-1,2:end-1] = reshape(solve(LinearProblem(A_w,reshape(divu,Nx*Nz))).u,(Nz,Nx))


    ### I can't make this approach work regardless of sparisty
    w[2:end-1,2:end-1] = reshape(solve(LinearSolve.set_b(linsolve,reshape(divu,Nx*Nz))).u,(Nz,Nx))


    ### Set all to zero for simplicity
    dy[:,:,:] .= 0.
end

Nx_= 10
Nz_= 10

A_w_ = Matrix(1.0I, Nx_*Nz_, Nx_*Nz_)
A_w_ = sparse(A_w_)

### Always going to use the same A matrix, so LU factorize and cache.
bw = ones(Nz_*Nx_)
linsolve_ = init(LinearProblem(A_w_, bw))
sol1 = solve(linsolve_)

w_ = DiffCache(zeros(Nz_+2, Nx_+2))
divu_ = DiffCache(zeros(Nz_, Nx_))

p = Nz_,Nx_,A_w_,linsolve_,w_,divu_

y0 = zeros(Nz_+2, Nx_+2, 2)

prob = ODEProblem(ddt, y0, (0.0, 1.0), p);
sol = solve(prob, FBDF(), save_everystep=false);

I get an error that resembles:

First call to automatic differentiation for the Jacobian
failed. This means that the user `f` function is not compatible
with automatic differentiation. Methods to fix this include:...

As I’ve indicated in my comments, I can make the first method work if my A matrix is not sparse, but I cannot get the second approach (caching the LU decomposition of A) to work at all. If I allocate divu and w inside the function (instead of passing them in through p), the code also works fine, but is far slower.

I think my issue has something to do with a misuse of PreallocationTools.jl (likely that I’m not applying it to linsolve_?), but I’m having a hard time understanding how to do that. It looks like LazyBufferCache might be what I need, but I can’t wrap my head around what it does. Any help would be great.


Try forcing it to SparspakFactorization(), i.e.

w[2:end-1,2:end-1] = reshape(solve(LinearProblem(A_w,reshape(divu,Nx*Nz)), SparspakFactorization()).u, (Nz,Nx))

If that works I can update the automated algorithm to handle this case.

Yeah for that you should probably use the GeneralLazyBufferCache.

lbc = GeneralLazyBufferCache(function (y)
    init(LinearProblem(A_w, similar(y, Nx*Nz)), SparspakFactorization())
end)

Then prob = lbc[y] gives you a dual or non-dual prob as appropriate, and then

w[2:end-1,2:end-1] = reshape(solve(LinearSolve.set_b(linsolve,reshape(divu,Nx*Nz))).u,(Nz,Nx))

should work. Note I didn’t try all of this code and am just writing it on the fly since I’m short on time, but I thought you might want the untested pointers instead of nothing.
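
To make the wiring concrete, here is an equally untested sketch of how those pieces might sit inside ddt, assuming lbc is passed through p in place of linsolve_:

function ddt(dy, y, p, t)
    Nz, Nx, A_w, lbc, w, divu = p
    divu = get_tmp(divu, y)
    w = get_tmp(w, y)

    ### Indexing the cache with y builds (once per element type) a linear-solve
    ### cache that matches y, so the AD Jacobian pass gets a Dual-typed cache.
    linsolve = lbc[y]
    w[2:end-1,2:end-1] = reshape(solve(LinearSolve.set_b(linsolve, reshape(divu, Nx*Nz))).u, (Nz,Nx))

    dy[:,:,:] .= 0.
end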

I appreciate you taking the time. It’s giving me more avenues to try. Somewhat blindly permuting different parameters seems to have worked!

The first approach gave me the same error as before.

I wasn’t able to get the second approach to work as written, but SparspakFactorization() seems to have been the cause. Once I removed it and just defined lbc as follows, it worked.

lbc = GeneralLazyBufferCache(function (y)
    init(LinearProblem(A_w_,y))
end)
linsolve_ = lbc[bw]
sol1 = solve(linsolve_)

Thanks, Chris. Very much appreciated.