I’m kinda starting with all this CUDA stuff, and am working with different GPUs on different computers (home, work, etc…). I’m building my first kernels, and one of the things that I need to know is the size of the grids, blocks and threads my GPUs have available. Can I get that information from within my Julia code?
Yes, you can use CUDAdrv.jl for that, which exposes the CUDA driver API. For example: https://github.com/JuliaGPU/CUDAnative.jl/blob/d654cba72ce3b44d4bb33dcd5651638947eae7cc/examples/pairwise.jl#L92
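To give a flavour of what that looks like, here is a minimal sketch of querying a device limit through CUDAdrv.jl. It assumes CUDAdrv exports `CuDevice`, `name`, and `attribute` together with the driver-API attribute enums (the exact enum names may differ between versions, so check the docs of the version you have installed):

```julia
using CUDAdrv

dev = CuDevice(0)  # first GPU in the system

# `attribute` wraps the driver's cuDeviceGetAttribute; the enum name is
# assumed here and may carry a different prefix in your CUDAdrv version.
println("Device: ", name(dev))
println("Max threads per block: ",
        attribute(dev, CUDAdrv.MAX_THREADS_PER_BLOCK))
```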
Easier, though, is to use the occupancy API, which lets the driver tell you how many threads/blocks to use (check the above file on the master branch).
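Roughly, the occupancy-based pattern looks like the sketch below. It assumes CUDAnative's `@cuda launch=false` (to compile a kernel without launching it) and CUDAdrv's `launch_configuration`; both names are version-dependent, so treat this as a sketch and compare against the pairwise.jl example on master:

```julia
using CUDAnative, CUDAdrv, CuArrays

# Simple element-wise addition kernel.
function vadd(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

n = 2^20
a = CuArray(rand(Float32, n))
b = CuArray(rand(Float32, n))
c = similar(a)

kernel = @cuda launch=false vadd(c, a, b)   # compile, don't launch yet
config = launch_configuration(kernel.fun)   # suggested (blocks, threads)

threads = min(n, config.threads)
blocks  = cld(n, threads)                   # enough blocks to cover n
kernel(c, a, b; threads=threads, blocks=blocks)
```

The nice thing about this pattern is that you never hard-code a block size: the driver picks one that keeps the GPU well occupied for this particular kernel on this particular device.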
Ah, I see… that way I get the number of threads I can use. What about the maximum number of blocks and grids I can have?
Check out the CUDA documentation (or grep the CUDAdrv.jl source code); these enumerations come from there: https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__TYPES.html
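For example, the driver defines per-axis limits such as `CU_DEVICE_ATTRIBUTE_MAX_GRID_DIM_X` and `CU_DEVICE_ATTRIBUTE_MAX_BLOCK_DIM_X`. A sketch of querying all six, assuming CUDAdrv exposes these enums (possibly without the `CU_DEVICE_ATTRIBUTE_` prefix, as written here):

```julia
using CUDAdrv

dev = CuDevice(0)

# Maximum block and grid dimensions along each axis; the enum names
# below are assumptions, verify them against the CUDAdrv.jl source.
for attr in (:MAX_BLOCK_DIM_X, :MAX_BLOCK_DIM_Y, :MAX_BLOCK_DIM_Z,
             :MAX_GRID_DIM_X,  :MAX_GRID_DIM_Y,  :MAX_GRID_DIM_Z)
    println(attr, " = ", attribute(dev, getfield(CUDAdrv, attr)))
end
```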