Solving the 2D Poisson equation in polar coordinates (finite differences)

Hello all,
I am looking to solve the following Poisson equation:
\nabla^2 \psi = -\omega \quad \text{with} \quad \nabla^2 = \frac{\partial^2}{\partial \rho^2} + \frac{1}{\rho}\frac{\partial}{\partial \rho} + \frac{1}{\rho^2}\frac{\partial^2}{\partial \phi^2},

where \omega is known on a rectangular N \times M grid of points. I am using finite differences to represent the derivatives of \psi.

I am curious whether there are any pre-written Poisson solvers. I have seen a related post (linked below) but didn’t know how to apply any of its recommendations. I have experimented a little with sparse matrices and made a solver of my own; however, I would much rather adapt something pre-written that has been verified to work correctly and efficiently.
Maybe something from ApproxFun.jl or DifferentialEquations.jl?

Any help is very much appreciated.
Many thanks

https://gridap.github.io/Tutorials/dev/pages/t001_poisson/

Many thanks @stevengj, I did have a read through the Gridap.jl documentation for Poisson solvers, but it was quite tricky for me to understand, especially since I haven’t used finite elements.

Do you happen to know of any matrix packages (for finite differences) that have already been written for solving Poisson equations?

Handling boundary conditions is the tricky part in finite differences. That is easy with finite elements (such as Gridap).

https://github.com/SciML/MethodOfLines.jl

MethodOfLines.jl should handle this automatically. It’s a little early in its development, so just open an issue if you run into problems, but there are already test problems with Poisson’s equation:

https://github.com/SciML/MethodOfLines.jl/blob/master/test/pde_systems/MOL_NonlinearProblem.jl

so in theory it should be fine.
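For this particular problem, the symbolic setup could look roughly like the sketch below. This is my own untested sketch, not one of the package’s test cases: the annulus radii, grid sizes, placeholder ω, and boundary conditions are all illustrative assumptions, and the exact API details may have shifted given the early stage of development.

```julia
using ModelingToolkit, MethodOfLines, DomainSets, NonlinearSolve

@parameters r ϕ
@variables ψ(..)
Dr  = Differential(r)
Drr = Differential(r)^2
Dϕϕ = Differential(ϕ)^2

ω(r, ϕ) = 1.0          # placeholder source term (assumption)
r1, r2  = 1.0, 2.0     # illustrative annulus radii (assumption)

eq  = Drr(ψ(r, ϕ)) + Dr(ψ(r, ϕ)) / r + Dϕϕ(ψ(r, ϕ)) / r^2 ~ -ω(r, ϕ)
bcs = [ψ(r1, ϕ) ~ 0.0,         # Dirichlet at the inner radius
       ψ(r2, ϕ) ~ 0.0,         # Dirichlet at the outer radius
       ψ(r, 0.0) ~ ψ(r, 2π)]   # periodicity in ϕ
domains = [r ∈ Interval(r1, r2), ϕ ∈ Interval(0.0, 2π)]

@named pde = PDESystem([eq], bcs, domains, [r, ϕ], [ψ(r, ϕ)])
disc = MOLFiniteDifference([r => 32, ϕ => 32])  # no time variable: steady problem
prob = discretize(pde, disc)                    # yields a NonlinearProblem
sol  = solve(prob, NewtonRaphson())
```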

2 Likes

It is possible in MultivariateOrthogonalPolynomials.jl, and Poisson was one of the examples in an older version that was built on top of ApproxFun.jl: https://github.com/JuliaApproximation/MultivariateOrthogonalPolynomials.jl/blob/master/examples/Disk%20PDEs.ipynb

However, the code has gone “stale” so it will be hard to get working, and it assumes one can sample ω anywhere, not just on a rectangular grid.

1 Like

Hi Aran!
I implemented the discretization of a 2D Poisson equation in polar coordinates with finite differences as an example for a paper on a new Krylov method specialized for nonsymmetric linear systems.
The code is available here with an example.

I based my implementation on the following article: https://onlinelibrary.wiley.com/doi/10.1002/num.1

3 Likes

If you don’t get a symmetric matrix with Poisson’s equation, you’re doing it wrong, because -\nabla^2 (with appropriate boundary conditions, e.g. Dirichlet, Neumann, or periodic) is a symmetric linear operator (and is also positive definite or semidefinite). (This is still true even for non-constant coefficients, e.g. if you have -a\nabla \cdot (b\nabla) for scalar functions a(x), b(x) > 0, as long as you use an inner product weighted by 1/a.) This symmetry (+ definiteness) has rather fundamental qualitative consequences for the physics of problems with \nabla^2, especially diffusion and wave problems, so you really want to preserve it in your discretization.

In particular, in cylindrical coordinates, \nabla^2 is symmetric with a weighted inner product (the r \, dr Jacobian factor of polar integrals), and so you should be able to get a symmetric matrix with a diagonal change of basis if you discretize correctly.

(Galerkin FEM discretizations automatically preserve the symmetry because the corresponding weak form is a symmetric bilinear form.)

3 Likes

Thanks for your explanations @stevengj !

The resulting linear system is nonsymmetric in my case because the second term of the polar Laplacian, \frac{1}{r}\frac{\partial u}{\partial r}, is discretized with centered differences, (u_{i+1,j} - u_{i-1,j}) / (2 \Delta r).
Is it also possible to keep the symmetry with finite-difference discretizations?

To make it easier to understand, you’ll want to write the first two terms together in a more symmetrical form:

\frac{\partial^2 u}{\partial r^2} + \frac{1}{r} \frac{\partial u}{\partial r} = \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial u}{\partial r} \right)

which is now clearly a symmetric operator under the weighted inner product \langle u, v \rangle = \int u v \, r dr with appropriate boundary conditions. (Understanding the symmetry of the infinite-dimensional operator is really critical to discretizing it properly.)
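To spell out the step behind that symmetry claim: integrating by parts, with boundary conditions that kill the boundary term (e.g. Dirichlet), gives

\left\langle v, \frac{1}{r}\frac{\partial}{\partial r}\left(r \frac{\partial u}{\partial r}\right) \right\rangle = \int_{r_1}^{r_2} v \, \frac{1}{r}\frac{\partial}{\partial r}\left(r \frac{\partial u}{\partial r}\right) r \, dr = -\int_{r_1}^{r_2} r \, \frac{\partial v}{\partial r} \frac{\partial u}{\partial r} \, dr,

which is unchanged when u and v are swapped, and is \le 0 when v = u: that is exactly the symmetry and negative-(semi)definiteness of the continuous operator.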

Now, suppose you discretize the \partial / \partial r first-derivative operator by an (n+1)\times n matrix D which takes in n values \vec{u} on a grid G, plus two points on the boundaries (I’m assuming Dirichlet boundaries for simplicity) and returns the centered differences at n+1 points G' (halfway between the original grid points including the boundaries). After a little algebra, the discretized second-derivative operator above is then the n\times n matrix operation:

\left. \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial u}{\partial r} \right) \right|_G \approx -R^{-1} D^T R' D \vec{u}

where R is a diagonal n \times n matrix of r values on G, R' is a diagonal (n+1) \times (n+1) matrix of r values on the grid G', and -D^T turns out to be the center-difference operator on G' that gives you the approximate derivative on G. But -R^{-1} D^T R' D is clearly similar to a symmetric matrix under a diagonal change of basis \vec{v} = R^{1/2}\vec{u} (negative-definite, so its negation, which discretizes the radial part of -\nabla^2, is positive-definite!), which corresponds exactly to the r Jacobian factor in the inner product for the continuous operator. Equivalently, it’s symmetric under a weighted inner product \langle \vec{u}, \vec{v} \rangle = \vec{u}^T R \vec{v}. (Any Krylov method, such as conjugate gradient, is easily adapted to support modified inner products.)
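Here is a minimal runnable sketch of this construction (my own code, not from any package; the grid size and radii are arbitrary illustrative values), including a numerical check of the symmetry:

```julia
using LinearAlgebra, SparseArrays

# Annulus r ∈ [r1, r2] with Dirichlet boundaries: n points on the grid G,
# n+1 midpoints on the staggered grid G′. Names follow the notation above.
n = 50
r1, r2 = 1.0, 2.0
Δr = (r2 - r1) / (n + 1)
r  = r1 .+ (1:n) .* Δr              # grid G (interior points)
r′ = r1 .+ (0:n) .* Δr .+ Δr / 2    # staggered grid G′ (midpoints)

# (n+1)×n difference matrix: u on G (Dirichlet ⇒ boundary values 0) → ∂u/∂r on G′
D = spdiagm(n + 1, n, 0 => fill(1 / Δr, n), -1 => fill(-1 / Δr, n))

R, R′ = Diagonal(r), Diagonal(r′)
L = -inv(R) * (D' * (R′ * D))       # ≈ (1/r) ∂/∂r (r ∂u/∂r) on G

# Similar to a symmetric negative-definite matrix via v = R^{1/2} u:
S = sqrt(R) * L * inv(sqrt(R))
@show norm(Matrix(S) - Matrix(S)')            # ≈ 0 (symmetric up to rounding)
@show maximum(eigvals(Symmetric(Matrix(S))))  # < 0, so -L is positive-definite
```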

The above works if your domain is an annulus with Dirichlet boundaries, but if your domain includes the origin, you need to be a little more careful in setting up finite differences because of the coordinate singularity. You typically want to impose Neumann and not Dirichlet boundaries at the origin, and you may want to choose your discretization so that the grid G does not include the origin (so that you don’t divide by zero), or otherwise you can work out the derivative rule at the origin by a limiting procedure.
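For example, one common choice (an illustrative convention on my part, not the only option) is a half-offset grid whose points sit at r_i = (i - 1/2)\Delta r, so that G never touches the origin, while the staggered grid G' includes r' = 0, where the r' weight automatically zeroes the flux through the origin:

```julia
# Half-offset grid for a disk of radius R₀ (illustrative values).
n  = 50
R₀ = 1.0
Δr = R₀ / n
r  = ((1:n) .- 0.5) .* Δr   # grid G: r_i = (i - 1/2)Δr ∈ (0, R₀), no division by zero
r′ = (0:n) .* Δr            # staggered grid G′ includes r′ = 0, so the first
                            # diagonal entry of R′ vanishes: zero flux through r = 0
```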

(This can be combined with a periodic center-difference discretization in θ, which is trivially symmetric, using a Kronecker product, and the Kronecker product of symmetric matrices is symmetric.)
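A sketch of that assembly (again my own illustrative code; nr, nθ, and the radii are arbitrary), storing u with the r index varying fastest in the flattened vector:

```julia
using LinearAlgebra, SparseArrays

nr, nθ = 40, 64
r1, r2 = 1.0, 2.0
Δr, Δθ = (r2 - r1) / (nr + 1), 2π / nθ
r  = r1 .+ (1:nr) .* Δr
r′ = r1 .+ (0:nr) .* Δr .+ Δr / 2

# Radial part, as above:
D  = spdiagm(nr + 1, nr, 0 => fill(1 / Δr, nr), -1 => fill(-1 / Δr, nr))
Lr = -inv(Diagonal(r)) * (D' * (Diagonal(r′) * D))

# Periodic (circulant) second difference in θ:
Dθθ = spdiagm(0 => fill(-2.0, nθ), 1 => fill(1.0, nθ - 1), -1 => fill(1.0, nθ - 1))
Dθθ[1, nθ] = 1.0
Dθθ[nθ, 1] = 1.0
Dθθ = Dθθ / Δθ^2

# Full 2D Laplacian: Lr acts on the r index, (1/r²)·Dθθ on the θ index.
Iθ  = sparse(I, nθ, nθ)
L2D = kron(Iθ, Lr) + kron(Dθθ, spdiagm(0 => 1 ./ r .^ 2))
```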

8 Likes

This also means that Poisson is probably a poor test case for non-symmetric iterative methods — even if you discretize it badly and get a non-symmetric matrix, it is close to being similar to a symmetric matrix (because it is converging to a symmetric operator as you refine the discretization). So its eigenvalues will be close to the real line and its eigenvectors will be close to orthogonal (under the r-weighted inner product), which often has a big impact on Krylov methods.

You may want to test with something that is much farther from Hermitian, e.g. a Helmholtz equation with PML absorbing boundaries. See this example Julia code for such an absorbing-Helmholtz finite-difference solver in 2d via Kronecker products.

4 Likes

I have experimented a little with sparse matrices and made a solver on my own, however, I would much rather adapt something that is pre-written and that has been verified to be working correctly and efficiently

Your problem seems simple enough that it could actually be easier to learn how to verify and optimize your own code than to try to adapt an existing one (which you would still need to verify anyway, to make sure you don’t have any wrong signs, aren’t passing data in the wrong order, etc.).

To verify your code, you need to set up a “test” against a known exact solution. If you don’t know any exact solution, you can “manufacture” one by yourself as follows:

  1. Pick any known analytical function that satisfies the boundary conditions for your problem
  2. Apply the Laplacian operator to it
  3. Set that as -\omega
  4. Apply your solver with the right-hand side you just created and compare the result against the exact solution you picked (see the sketch below).
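Here is a minimal sketch of these four steps for the radial problem on an annulus with homogeneous Dirichlet boundaries (my own code: the particular ψ, the radii, and the staggered-grid operator from earlier in the thread are just one convenient choice):

```julia
using LinearAlgebra, SparseArrays

# Staggered-grid radial operator on the annulus [r1, r2], as sketched above.
n = 200
r1, r2 = 1.0, 2.0
Δr = (r2 - r1) / (n + 1)
r  = r1 .+ (1:n) .* Δr
r′ = r1 .+ (0:n) .* Δr .+ Δr / 2
D  = spdiagm(n + 1, n, 0 => fill(1 / Δr, n), -1 => fill(-1 / Δr, n))
A  = -inv(Diagonal(r)) * (D' * (Diagonal(r′) * D))   # ≈ (1/r)(r ψ′)′

# 1. Pick ψ satisfying the BCs:        ψ(r)  = (r - r1)(r2 - r)
# 2. Apply the Laplacian analytically: ∇²ψ   = ψ″ + ψ′/r
ψ(r)  = (r - r1) * (r2 - r)
ψ′(r) = r1 + r2 - 2r
ψ″(r) = -2.0
# 3. Set that as -ω:
ω = -(ψ″.(r) .+ ψ′.(r) ./ r)
# 4. Solve A ψh = -ω and compare against the exact ψ:
ψh = A \ (-ω)
@show maximum(abs.(ψh .- ψ.(r)))   # should shrink ~O(Δr²) as n grows
```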

As for performance, as long as you are using a sparse matrix solver you should be doing fine.

Of course, there are more advanced methods in the literature, like spectral solvers or FEM, but second-order finite differences are also fine, as long as you use moderate values of N and M and don’t need to call this as part of an expensive fluid solver, etc.

Regards!

2 Likes

Hi @amontoison, this is absolutely great! Thank you so much for sharing the code. This will really help going forward. I have written a similar sort of sparse matrix structure using code based on this Cartesian version. Once I have a tested version up and running I will post back here. Have you tried the MethodOfLines.jl approach recommended by Chris in this thread?

This Kronecker product operator is absolutely amazing! Thank you so much for the notes link. Is this a commonly used operator? I have never seen it before and feel like a complete novice now that I have! I feel that my lecturer on tensor products never quite bridged the gap between theory and application.
Once again, thank you for your kind reply.

In the case of separable problems, like the Poisson equation on a Cartesian grid with constant (or separable) coefficients, you may consider direct solvers based on a tensorial decomposition of the Laplacian operator, like https://github.com/triscale-innov/LidJul.jl/blob/master/src/poisson2D_TT.jl

1 Like

Yes, it’s a standard way to combine linear operators acting along different dimensions of an n-dimensional grid and write multilinear tensor-product operators as ordinary matrices acting on flattened (vectorized) column vectors, and is incredibly useful for expressing finite-difference equations on tensor-product grids.
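A quick self-contained illustration of the key identity (the “vec trick”, checked here with arbitrary small matrices): a Kronecker product applied to a flattened matrix acts along each dimension separately.

```julia
using LinearAlgebra

# (B ⊗ A) vec(X) == vec(A * X * Bᵀ): A acts along columns of X, B along rows.
A, B, X = rand(3, 3), rand(4, 4), rand(3, 4)
@show kron(B, A) * vec(X) ≈ vec(A * X * transpose(B))   # true
```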

1 Like

If you write things in the correct form, you don’t need to limit yourself to separable coefficients to use tensor products. But then you can’t use tensor-product direct solvers (analogous to exploiting separable solutions analytically) and instead need to use generic sparse-direct solvers (which are quite fast in 2d) or more complicated methods (e.g. iterative methods, alternating direction preconditioners, etc).

In particular, you write down a sparse gradient operator G via Kronecker products of 1d finite-difference operators on your Cartesian/tensor-product mesh, then an operator like a \nabla \cdot (b\nabla) with arbitrary variable coefficients a(\vec{x}) and b(\vec{x}) can be written as a matrix -A G^T B G where A and B are diagonal matrices. As a bonus, writing the discretization in this form guarantees that your matrix is symmetric negative-definite (under a diagonal change of basis \sqrt{A}), like the original continuous operator.
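A sketch of that construction (my own illustrative code, not from the linked notebook: Dirichlet boundaries, a \equiv 1, and a random positive b, none of which are prescribed above):

```julia
using LinearAlgebra, SparseArrays

nx, ny = 30, 30
hx, hy = 1 / (nx + 1), 1 / (ny + 1)

# 1d difference matrix: n interior values (Dirichlet ⇒ 0 on the boundary)
# to derivatives at the n+1 interval midpoints.
d(n, h) = spdiagm(n + 1, n, 0 => fill(1 / h, n), -1 => fill(-1 / h, n))
Ix, Iy = sparse(I, nx, nx), sparse(I, ny, ny)

# Gradient G via Kronecker products (x varies fastest in the flattened vector):
G = [kron(Iy, d(nx, hx));   # ∂/∂x rows
     kron(d(ny, hy), Ix)]   # ∂/∂y rows

# b(x⃗) > 0 sampled at the staggered points of each block (random here):
B = Diagonal(rand((nx + 1) * ny + nx * (ny + 1)) .+ 0.5)

L = -G' * B * G              # discrete ∇⋅(b∇) with a ≡ 1
M = Matrix(L)
@show norm(M - M')           # ≈ 0: symmetric (and negative-definite)
```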

See e.g. this example notebook (which is for a pre-1.0 version of Julia … an updated version of the code is here) demonstrating a Kronecker-product construction of -\nabla \cdot (c\nabla) for arbitrary non-constant coefficients c.

3 Likes

I am trying to implement this equation with the finite-difference method as described, but after reading the explanation from @stevengj I am confused. I am not a physicist. Can you please clarify whether this method is correct and can be used as a test case?

Dear amontoison,

In this Julia implementation, what are the values of the functions f and g that appear in the differential equation? I don’t see them in the code; could you please clarify? Also, the code only returns A and b, but does not include any technique for solving Au = b for u.

Sorry for the delayed response @BJR,

f and g are defined here.
I only return A and b because I use this linear system to test the different methods in Krylov.jl.
For instance, BiLQ can solve this linear system.

With these functions f and g, we know the exact solution of the PDE.

1 Like