Which GPU should I ask for?

Hello, we have a remote server (not VPS) and I would like to ask the company managing it to add a GPU for machine learning computations (using CUDA.jl I suppose).
Is there a list of supported GPUs?
How are they organised? Which entry/mid-range series would you suggest (but not so entry-level that it ends up worse than just using good CPUs)?

I did find this: https://timdettmers.com/2019/04/03/which-gpu-for-deep-learning/

In the end it depends on your budget, your tasks, and future Julia ecosystem updates.
If you're more in the image processing / deep learning / CNN camp, here are my (limited) thoughts:

My experience led me to buy a 12GB Titan X (second-hand, it was ~$200 :)) because I ran into a lot of out-of-memory issues with large CNNs (if you search around the forum or the Flux/Knet GitHub issues, you will see users having such problems).
The new 2xxx NVIDIA GPUs are cool, especially because of the 16-bit compute capability. As far as I know, no Julia deep learning library has 16-bit support, so at the moment they might be a bit overkill (they're also expensive). Knet does not support fp16, according to the manual. If you have the budget and want to be future-proof, you could buy such a card, hoping that 16-bit support will arrive later.
For me the memory issue was quite severe: I had problems even on an 8GB GTX 1070 when training networks like VGG16 or YOLO from scratch, so I bought the 12GB Titan.

In terms of software compatibility, CUDA.jl should support most NVIDIA GPUs from at least the last 6-7 years (mine is from 2015).
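Once a card is installed, a quick way to check what CUDA.jl actually sees is to query the device directly. A minimal sketch, assuming CUDA.jl and a compatible NVIDIA driver are installed (the printed name and capability are examples, not guarantees):

```julia
using CUDA  # assumes CUDA.jl and a working NVIDIA driver are installed

CUDA.versioninfo()             # driver/toolkit versions CUDA.jl detected

dev = CUDA.device()            # the currently active GPU
println(CUDA.name(dev))        # e.g. "GeForce GTX 1070"
println(CUDA.capability(dev))  # compute capability, e.g. v"6.1"
println(CUDA.totalmem(dev) ÷ 2^30, " GiB")  # total device memory
```

The compute capability is what determines feature support (e.g. fast fp16 needs a recent enough architecture), and `totalmem` tells you up front whether a large CNN will fit.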


What problems are you facing with 16-bit computations in Flux? As I understand it, everything should be generic with respect to the floating-point type; the default is just 32 bits.

Whoops, sorry, that was a mistake. It's Knet that supports only 64/32-bit.
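To illustrate the genericity point: plain Julia code is parametric over the element type, so `Float16` propagates end-to-end without any library-specific support. A minimal sketch using no deep-learning package (the toy "layer" here is my own illustration, not a Flux API):

```julia
# A toy dense layer written generically: the element type of the
# parameters and input determines the type of everything downstream.
W = rand(Float16, 2, 3)
b = rand(Float16, 2)
σ(z) = 1 / (1 + exp(-z))   # generic sigmoid, works for any float type

x = rand(Float16, 3)
y = σ.(W * x .+ b)
@show eltype(y)            # stays Float16 throughout
```

Whether the GPU kernels behind a library actually run in fp16 (rather than just storing fp16 and computing in fp32) is a separate question, which is what the discussion above is about.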