I’m writing some code that solves A*x = b, where A can be quite big (say 10000x10000, hopefully bigger in the future). A is not sparse (at least for now) and might be poorly conditioned; it might even be singular, with multiple solutions.
A\b is not working great so far, giving me results with fairly large errors: some elements of A*(A\b) come out as much as 3 times the corresponding elements of b. Directly inverting A (x = inv(A)*b) doesn’t use much more memory, is 3x slower (I expected worse), and is very accurate. However, that does not run on the GPU.
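For reference, this is roughly how I’m measuring the error. It’s a minimal sketch using a random, deliberately near-singular test matrix (not my actual A), just to show the relative-residual check:

```julia
using LinearAlgebra

n = 500
A = randn(n, n)
A[:, end] = A[:, 1] .+ 1e-10 .* randn(n)   # make A nearly singular on purpose
b = randn(n)

x1 = A \ b           # LU with partial pivoting
x2 = inv(A) * b      # explicit inverse (slower, and not GPU-friendly)

@show norm(A*x1 - b) / norm(b)   # relative residual of backslash
@show norm(A*x2 - b) / norm(b)   # relative residual of the explicit inverse
```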
People in my field use GMRES for these matrices, but it seems IterativeSolvers.gmres does not run on the GPU either.
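For context, here is a minimal CPU-only sketch of the kind of GMRES call I mean, using IterativeSolvers.jl on a well-conditioned random test matrix (the restart and reltol values are illustrative, and keyword names have changed between package versions):

```julia
using LinearAlgebra, IterativeSolvers

n = 1000
A = I + 0.1 * randn(n, n) / sqrt(n)   # well-conditioned test matrix
b = randn(n)

# restarted GMRES with a relative-residual stopping tolerance
x, history = gmres(A, b; restart = 50, reltol = 1e-8, log = true)

@show history.isconverged
@show norm(A*x - b) / norm(b)
```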
Does anyone know of a good way to solve these problems that works on both CPUs and GPUs? Speed is very important, but accuracy matters more: differences of 0.1% could be OK; the 10-300% I’m getting is not.
Thanks a lot!
EDIT: TL;DR: on the GPU, A\b is only accurate with Float64, not Float32. As long as there isn’t a simple and robust linear solver that runs on multiple CPUs/GPUs, A\b seems to be the way to go (and I’ll stick to 1 CPU/GPU for now).
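For anyone finding this later, a minimal sketch of the Float64 GPU solve via CUDA.jl (assumes an NVIDIA GPU; the dense backslash dispatches to CUSOLVER, and the diagonally dominant test matrix here is just for illustration):

```julia
using CUDA, LinearAlgebra

n = 2000
A = rand(Float64, n, n) + n * I   # diagonally dominant, hence well-conditioned
b = rand(Float64, n)

dA = CuArray(A)      # keep Float64 on the device; Float32 lost accuracy for me
db = CuArray(b)
dx = dA \ db         # dense LU solve on the GPU via CUSOLVER
x  = Array(dx)       # copy the solution back to the CPU

@show norm(A*x - b) / norm(b)
```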