How to choose a GPU - Please forgive the total noob question

So, I figured there would be some very well-informed opinions on here for this rather basic (but essential) question. I’ve had a quick look and can’t find a similar question.

I appreciate the answer is likely to be “it depends”. So here is what I’m trying to work out.

Let’s say my main desire is to have a “well balanced” GPU: good for ML, good for massively parallel custom-written stuff (say, game theory tree exploration), perhaps even (dare I say it) good for gaming.

I’ve been trying to make sense of FLOPS (SP and DP) per dollar. Is this a useful way to think about it? Why is the ratio between SP and DP usually so high? And what is it about the Nvidia 1660 that gives it a lower ratio?

What about the GPU’s on-board RAM - what sorts of tasks tax that? Should that form part of the decision?

Do ray tracing GPUs bring any useful features for GPGPU/computation?

Many thanks in advance for your thoughts, and if this has already been answered comprehensively elsewhere, please accept my apologies and point me the right way.

Stay safe!

R

Good questions. First, you need to look at the ‘compute capability’ of the GPU. This tells you which CUDA versions the GPU is compatible with.
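If you already have a card to hand, you can query this from Julia - a minimal sketch, assuming CUDA.jl is installed and the Nvidia driver works:

```julia
using CUDA  # ] add CUDA

dev = CUDA.device()          # the currently active GPU
@show CUDA.name(dev)         # e.g. "NVIDIA GeForce GTX 1660"
@show CUDA.capability(dev)   # compute capability as a VersionNumber, e.g. v"7.5"
```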

Next, the choice is between data-centre-grade GPUs like the V100 and A100, and consumer-grade cards - these being the Titans. Titan Black cards were very popular a year or two ago for deep learning.

I know I am not answering your questions directly. I work for Dell as an engineer and really should know better here… in my defence I deal with datacentre projects and have just done my Nvidia A100 training - so I would say the A100 features of new data types and Multi-Instance GPU partitioning are the cat’s pyjamas. I would guess you are not in this market!

Please ping me your email address and I can put you in touch with one of our product specialists.

3 Likes

thanks for the rapid response… I’m also constrained by cost… so as much as I’d love a multi-thousand-dollar card, I’m looking to find a “sweet spot” in the hundreds…

Will drop you an email!

Although I like your question, shouldn’t this be under Offtopic category? (It’s a great place to discuss anything, don’t worry.)

Nvidia seem to push the Titan RTX: compute capability 7.5, and they say it is a pretty good gaming card too. I’ll second that, with the disclaimer that a latest-generation card will cost $$$
£2,850 on Amazon UK - wooowoiieee…

sorry! I thought that the “GPU” tag was the place to discuss GPUs :slight_smile:

Not sure how to move it - happy to be told how to!

yeah, that would be lovely! But money can always buy an awesome solution, and I sadly don’t have that sort of budget… hence the question (sorry for not making that clear)

Interesting - I don’t see the 1660 series in the list of CUDA GPUs. Does that mean they are not CUDA compatible? Surely not?

I found this, which answers my own question here…

I got it, it’s alright! I forgot that you need the “Regular” trust level to change a post’s category, so I fixed it for you.

1 Like

The one and only place I go to for performance comparisons on these things is AnandTech. Here is a direct link to their GPU benchmarks: GPU 2019 Benchmarks - Compare Products on AnandTech

2 Likes

All that shiny data! Thank you!

Just so I don’t make the mistake again: how did you come to the conclusion that this was “off topic” and not worthy of the “GPU” tag?

2 Likes

I think the distinction is that the GPU tag is for specific questions about using Julia with GPUs, not about the hardware itself.

2 Likes

Thanks Oscar, I guess I thought that by going to that forum I’d be asking the people with the domain expertise (and getting their attention, since I was in the “right place” for GPUs)… anyway… thanks for the guidance!

It used to be: always go for the 70 series - 970, 1070, etc. - until the most recent generation, where the 2060 is the bang-for-the-buck king. It’s quite powerful but still good value. The 2070 is only a bit faster but costs a lot more, the 2080 is only a little faster still but costs a lot, lot more, and even the 2080 Ti is not that much faster but costs a ton more. Meanwhile, the 1660 is a lot slower.

4 Likes

Thanks so much for the suggestions so far. Having accidentally taken us down the rabbit hole of forum decorum, I wondered if anyone had any thoughts on my original questions?

Specifically:

How do you think about choosing a GPU (on a budget!) - i.e. less of the what (which is gratefully received!) and more of the how…

Why is the ratio between SP and DP usually so high? And what is it about the Nvidia 1660 that gives it a lower ratio? (I think I found the answer to this - it’s to do with it not having Tensor cores and having dedicated FP16 cores instead…)

What about the GPU’s on-board RAM - what sorts of tasks tax that? Should that form part of the decision?

Do ray tracing GPUs bring any useful features for GPGPU/computation?

Thank you! :grinning:

I think it is much simpler than that, although I don’t do any GPU computing currently (so my opinion might be useless). Get the best GPU that your budget allows for. And if your purpose is mainly compute rather than gaming, then get Nvidia, because that infrastructure is much more built-out (especially in Julia).

The bulk of GPU computing right now is machine-learning-type workloads, so if you are doing that, it is worth following the advice above. Right now I don’t even have a dedicated GPU and do only CPU compute - it all depends on your workloads.
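As a tiny illustration of how built-out that stack is in Julia - a minimal sketch, assuming CUDA.jl is installed - generic array code just runs on the GPU:

```julia
using CUDA

x = CUDA.rand(Float32, 10_000)
y = sin.(x) .+ 2 .* x   # the broadcast fuses into a single GPU kernel
sum(y)                  # GPU-side reduction
```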

2 Likes

To sell more expensive cards. Literally. At one point the ratio wasn’t so bad, but cheap cards with good FP64 cut into data center GPU sales, so Nvidia throttled FP64 computing on consumer cards through a driver setting. Everyone found out and disabled that setting to get the FP64 compute back, so Nvidia started physically disabling that part of the cheaper GPUs and selling the same chips with a really high markup for the FP64 (yes, the exact same GPU chip in both the $5000 card and the $500 card). Then people started using FP32 a bunch in ML, so they had to ban the consumer cards from data centers using licensing. Seriously though, if you didn’t live through this story, it’s mind-boggling.

The driver story is mentioned in: https://arrayfire.com/explaining-fp64-performance-on-gpus/
How the Titan Black was the king of FP64 from like 2013 to 2018: (Reddit thread)
The new data center ban: https://www.cnbc.com/2017/12/27/nvidia-limits-data-center-uses-for-geforce-titan-gpus.html

So that’s why FP64 is slow, and why people kept buying the Titan Black on the used market years later.
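If you want to see that SP:DP gap on your own card from Julia, here’s a minimal sketch using CUDA.jl and BenchmarkTools.jl (the matrix size is just an illustrative choice; assumes a working Nvidia card and driver):

```julia
using CUDA, BenchmarkTools

n = 4096
A32 = CUDA.rand(Float32, n, n)   # single precision (SP)
A64 = CUDA.rand(Float64, n, n)   # double precision (DP)

# CUDA.@sync blocks until the GPU actually finishes, otherwise
# we would only be timing the (asynchronous) kernel launch.
@btime CUDA.@sync $A32 * $A32
@btime CUDA.@sync $A64 * $A64
```

On a consumer card the Float64 matmul typically comes out far more than 2x slower - that’s the gimped SP:DP ratio in action.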

Everything. You will run out of RAM, and you will change your method so it just barely fits.
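For sizing things up before you buy, a quick sketch of how to interrogate GPU memory from Julia with CUDA.jl (the 20 000 x 20 000 matrix is just an illustrative assumption):

```julia
using CUDA

# How much memory does the current GPU have, and how much is free?
total = CUDA.total_memory() / 2^30       # GiB
free  = CUDA.available_memory() / 2^30   # GiB
println("GPU memory: $(round(free; digits=2)) GiB free of $(round(total; digits=2)) GiB")

# Back-of-the-envelope: an n x n Float32 array takes 4n^2 bytes,
# so a single 20_000 x 20_000 matrix already needs ~1.5 GiB.
n = 20_000
println("One $n x $n Float32 matrix: $(4n^2 / 2^30) GiB")
```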

Not right now, and it’ll be hard to program for unless you have to multiply 4x4 matrices a lot (which is what the Tensor cores on those cards do).
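For what it’s worth, the easiest way to touch those Tensor cores from Julia without writing kernels is to hand CUBLAS half-precision arrays - a hedged sketch, assuming an RTX-class card and CUDA.jl:

```julia
using CUDA

# CUBLAS can dispatch Float16 matmul to the Tensor cores on Volta/Turing+ GPUs.
A = CUDA.rand(Float16, 2048, 2048)
B = CUDA.rand(Float16, 2048, 2048)
C = A * B   # runs on the GPU; eligible for Tensor core execution
```

There is also a lower-level CUDA.WMMA interface for writing your own Tensor-core kernels, but that’s much more involved.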

5 Likes

One place where the Tensor cores on those cards are very useful is 3x3 convolutions (for neural networks). In some workloads, this can be an almost 2x speedup.
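A hedged sketch of what that looks like in practice from Julia, using Flux.jl (the layer and batch sizes are made-up assumptions; cuDNN can pick Tensor-core algorithms for half-precision convolutions where the hardware supports it):

```julia
using Flux, CUDA

# A single 3x3 convolution layer: 16 input channels -> 32 output channels,
# converted to half precision and moved to the GPU.
layer = Conv((3, 3), 16 => 32, relu) |> Flux.f16 |> gpu

# A batch of 8 "images": 64x64 pixels, 16 channels, Float16.
x = CUDA.rand(Float16, 64, 64, 16, 8)

y = layer(x)   # the convolution runs on the GPU via cuDNN
@show size(y)  # (62, 62, 32, 8) with the default "valid" padding
```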