Mathematical calculations accelerated by GPU

Hello dear GPU-interested Julia members

Although I am still far from being familiar enough with Julia, I am making plans to use a GPU for future number crunching. I want to underline: mining with graphics cards is not my goal, only number crunching for mathematical calculations accelerated by a GPU in conjunction with Julia.

What specifications, as an absolute minimum, should such a PCI-Express graphics card with GPU have for the number crunching mentioned above?

What is your input for me regarding this?

Michael

I can only point you to the GPU programming in Julia workshop from JuliaCon 2021 for more information.

If you go with the recommendation of NVIDIA & CUDA, then you can choose from one of hundreds of NVIDIA models:

https://juliagpu.org/
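For a first impression of what GPU-accelerated number crunching looks like with CUDA.jl, here is a minimal sketch (assuming a working NVIDIA driver and CUDA.jl installed; names and sizes are just placeholders):

```julia
using CUDA

a = CUDA.rand(Float32, 10_000)   # arrays allocated directly on the GPU
b = CUDA.rand(Float32, 10_000)

c = a .+ 2f0 .* b                # broadcasting is compiled to a GPU kernel
total = sum(c)                   # reductions also run on the device
```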

Or, for some applications, AMD’s ROCm.

But even some integrated Intel GPUs can be used, on one of their supported cards:

https://juliagpu.org/oneapi/
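The array interface is similar across back ends, so a rough oneAPI.jl sketch for an integrated Intel GPU looks much the same (assuming a supported iGPU; data is just a placeholder):

```julia
using oneAPI

a = oneArray(rand(Float32, 10_000))   # copy host data to the Intel GPU
b = oneArray(rand(Float32, 10_000))

c = a .+ b                            # elementwise kernel runs on the device
println(sum(c))
```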


I would suggest, for example, an NVIDIA GeForce RTX 2060 or GTX 1050. I have a much cheaper NVIDIA graphics card (Quadro P400) which works with Julia, but it is slower than the CPU, and that’s probably not what you want. De facto the choice is very small because most cards are sold out these days…
The GTX 1050 might be the cheapest option that provides a significant acceleration compared to the CPU. But it also depends a lot on the amount of RAM needed for your problem and on which floating-point precision is needed.
Also check: CUDA Benchmarks - Geekbench Browser
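If you can borrow a card before buying, a rough way to compare CPU and GPU timings for your own workload is a sketch like this (sizes and the reduction are just placeholders):

```julia
using CUDA, BenchmarkTools

n = 10^7
x_cpu = rand(Float32, n)
x_gpu = CuArray(x_cpu)

@btime sum(abs2, $x_cpu)              # CPU baseline
@btime CUDA.@sync sum(abs2, $x_gpu)   # GPU; @sync makes the timing honest
```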

CUDA.jl currently also requires a CUDA-capable GPU with compute capability 3.5 (Kepler) or higher, and an accompanying NVIDIA driver with support for CUDA 10.1 or newer.
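Once a card is installed, CUDA.jl itself can report whether these requirements are met, roughly like this:

```julia
using CUDA

CUDA.versioninfo()                   # driver, runtime and toolkit versions
println(CUDA.capability(device()))   # compute capability, e.g. v"7.5" for Turing cards
```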


I am thinking about large numerical series, and the calculations to obtain the values involve only integer math.
All four basic arithmetic operations are done with integers only, and from the division results only the integral values are kept.
How does this influence the decision about which graphics card, with which GPU, is recommended?
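For concreteness, these integer-only operations would look roughly like this on GPU arrays (a minimal sketch; the series and the divisor are just placeholders):

```julia
using CUDA

a = CuArray(Int32.(1:1_000_000))     # placeholder series
b = CUDA.fill(Int32(7), 1_000_000)   # placeholder divisor

s = a .+ b                           # addition, subtraction, multiplication stay integer
q = div.(a, b)                       # integer division: only the integral value is kept
```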

Integer operations are often slower than floating-point operations on consumer-grade GPUs:

And 64-bit is a lot worse than 32-bit. Which size of integers do you need?
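If you get your hands on a card, a quick way to see the 32-bit vs 64-bit difference for your own integer workload is a sketch like this (placeholder data):

```julia
using CUDA, BenchmarkTools

n = 10^7
a32 = CuArray(rand(Int32(1):Int32(10^6), n))
b32 = CuArray(rand(Int32(1):Int32(100), n))
a64, b64 = Int64.(a32), Int64.(b32)

@btime CUDA.@sync div.($a32, $b32)   # 32-bit integer division
@btime CUDA.@sync div.($a64, $b64)   # 64-bit integer division
```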

1 Like

Thanks for providing that NVIDIA integer performance link :+1:!
I need at least 32-bit integers for the calculation, but a bigger size would be much more interesting, so 64-bit integers are very tempting.
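Just to see the ranges involved (plain Julia, no GPU needed): if intermediate values can exceed about 2.1 × 10^9, 32-bit integers will silently wrap around, which is the main argument for 64 bit:

```julia
julia> typemax(Int32)
2147483647

julia> typemax(Int32) + Int32(1)   # Int32 arithmetic overflows silently
-2147483648

julia> typemax(Int64)
9223372036854775807
```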

Just out of interest, and if it’s not too private for you: against which CPU did you compare the performance of the Quadro P400?

Intel® Core™ i7-7700K CPU

Thanks, Uwe.
My desktop PC has an Intel® Core™ i7-8700 CPU.

Regarding the graphics card, the GeForce RTX 2060 is, at least at the moment, a little too pricey. I could get hold of a GeForce GTX 1650, but I would have to check the two criteria you mentioned: the compute capability of the GTX 1650 and which CUDA version its driver supports.