NVIDIA Tensor Cores not useful for double-precision simulations?

Thank you for your answers! Especially about the MMA operation - I'll do some more reading (homework) to understand your answers better.

For the question of whether Tensor Cores are being used - my typical application involves: (1) load CUDA.jl or ArrayFire.jl; (2) convert some large CPU arrays into GPU device arrays defined by the library; (3) perform library-defined linear algebra functions and/or 1D FFT/IFFT on the GPU arrays, using the GPU; (4) convert the results back to CPU arrays; (5) rinse and repeat. All calculations are done in double-precision (64-bit, Float64) floating-point numbers.
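To make the question concrete, here is a minimal sketch of that workflow with CUDA.jl (assuming a CUDA-capable GPU is available; array sizes are made up for illustration):

```julia
using CUDA
using CUDA.CUFFT      # cuFFT wrappers implementing the AbstractFFTs API
using LinearAlgebra

# (1)-(2) Large CPU arrays, copied to GPU device arrays
A = rand(Float64, 1024, 1024)
x = rand(Float64, 1024)
dA = CuArray(A)
dx = CuArray(x)

# (3) Library-defined linear algebra (dispatches to cuBLAS)
#     and a 1D FFT (dispatches to cuFFT), all on the GPU
dy = dA * dx
dX = fft(dx)

# (4) Copy the results back to CPU arrays
y = Array(dy)
X = Array(dX)
# (5) ...rinse and repeat
```

Everything here stays in Float64 end to end, which is the case I am asking about.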

Do you think the library (CUDA.jl or ArrayFire.jl) would automatically make use of Tensor Cores in these double-precision calculations?