Mathematical calculations accelerated by GPU

Hello dear GPU-interested Julia members

Although I am still far from being sufficiently familiar with Julia, I am making plans to use a GPU for future number crunching. I want to underline: mining with graphics cards is not my goal, only number crunching for mathematical calculations accelerated by a GPU in conjunction with Julia.

What specifications, as an absolute minimum, should such a PCI-Express graphics card with a GPU have for the mentioned number crunching?

What is your input for me regarding this?

Michael

I can only point you to the GPU programming in Julia workshop from JuliaCon 2021 for more information.

If you go with the recommendation of NVIDIA & CUDA, then you can choose from one of hundreds of NVIDIA models:

https://juliagpu.org/

Or, for some applications, AMD’s ROCm on one of their supported cards.

But even some integrated Intel GPUs can be used:

https://juliagpu.org/oneapi/
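The Intel route looks much the same as the CUDA one from the user's side. A minimal sketch, assuming the oneAPI.jl package is installed and a supported integrated Intel GPU is present:

```julia
using oneAPI

# Move data to the Intel GPU; oneArray is oneAPI.jl's GPU array type
a = oneArray(rand(Float32, 1024))

# Elementwise arithmetic via broadcasting runs as a kernel on the GPU
b = 2f0 .* a .+ 1f0
```

The same broadcasting style carries over to CuArray (CUDA.jl) and ROCArray (AMDGPU.jl), so code written this way is fairly portable between backends.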


I would suggest, for example, an NVIDIA RTX 2060 or GTX 1050. I have a much cheaper NVIDIA graphics card (Quadro P400) which works with Julia, but it is slower than the CPU, and that’s probably not what you want. De facto the choice is very small because most cards are sold out these days…
The GTX 1050 might be the cheapest option that provides a significant acceleration compared to the CPU. But it also depends a lot on the amount of GPU memory your problem needs and on which floating-point size is required.
Also check: CUDA Benchmarks - Geekbench Browser
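Whether a given card beats your CPU is easy to measure directly. A minimal sketch, assuming CUDA.jl, BenchmarkTools.jl, and an NVIDIA GPU are available (sizes and the elementwise operation are arbitrary placeholders):

```julia
using CUDA, BenchmarkTools

n = 10^7
a_cpu = rand(Float32, n)
b_cpu = rand(Float32, n)

# Same data on the GPU
a_gpu = CuArray(a_cpu)
b_gpu = CuArray(b_cpu)

# Elementwise multiply-add on CPU vs GPU. On a small card like the
# Quadro P400 the CPU may well win; CUDA.@sync makes sure we time the
# actual kernel, not just the asynchronous launch.
@btime $a_cpu .* $b_cpu .+ $a_cpu;
@btime CUDA.@sync $a_gpu .* $b_gpu .+ $a_gpu;
```

Running this on your own workload is more informative than any synthetic benchmark list.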

CUDA.jl currently also requires a CUDA-capable GPU with compute capability 3.5 (Kepler) or higher, and an accompanying NVIDIA driver with support for CUDA 10.1 or newer.
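You can check whether a card and driver meet those requirements from within Julia itself. A minimal sketch, assuming CUDA.jl is installed:

```julia
using CUDA

if CUDA.functional()
    dev = CUDA.device()
    # Compute capability must be at least v"3.5" (Kepler) for CUDA.jl
    @show CUDA.capability(dev)
    # Prints driver and toolkit versions, among other details
    CUDA.versioninfo()
else
    @warn "No usable CUDA GPU / driver found on this machine"
end
```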


I am thinking about large numerical series, and the calculations to get the values involve only integer math.
All four basic arithmetic operations are done with integers only, and from division results only the integral part is kept.
How does this influence which graphics card with which GPU is recommended?
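For what it's worth, that kind of integer-only arithmetic maps directly onto GPU arrays. A minimal sketch, assuming CUDA.jl and an NVIDIA GPU (the values are arbitrary examples):

```julia
using CUDA

a = CuArray(Int32[10, 20, 30, 40])
b = CuArray(Int32[3, 7, 4, 6])

# All four basic operations broadcast elementwise on the GPU;
# ÷ (div) keeps only the integral part of each quotient.
s = a .+ b
d = a .- b
p = a .* b
q = a .÷ b   # Int32[3, 2, 7, 6]
```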

Integer operations are often slower than floating-point operations on consumer-grade GPUs.

And 64-bit is a lot worse than 32-bit. Which size of integers do you need?
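The difference is easy to measure on your own card. A minimal sketch, assuming CUDA.jl, BenchmarkTools.jl, and an NVIDIA GPU (the value range and reduction are arbitrary placeholders):

```julia
using CUDA, BenchmarkTools

n = 10^7
a32 = CuArray(rand(Int32(1):Int32(1000), n))
a64 = CuArray(rand(Int64(1):Int64(1000), n))

# Same elementwise multiply plus reduction in Int32 vs Int64.
# On most consumer GPUs 64-bit integer throughput is much lower,
# so prefer Int32 when the value range allows it.
@btime CUDA.@sync sum($a32 .* $a32);
@btime CUDA.@sync sum($a64 .* $a64);
```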