Not necessarily, I was just confused by the output. memory_status() should probably show 16GiB too, but I’m not sure if the CUDA API exposes a way to query the full available memory range.
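For reference, the CUDA runtime does expose at least the free and total physical device memory via `cudaMemGetInfo` (whether that covers the "full available memory range" in the sense meant here is a separate question). A minimal sketch, assuming a working CUDA toolkit and device:

```c
// Sketch only: cudaMemGetInfo reports both free and total device
// memory, so the 16 GiB figure could in principle come from the
// `total` value.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    size_t free_bytes = 0, total_bytes = 0;
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("free:  %.2f GiB\n", free_bytes  / (1024.0 * 1024.0 * 1024.0));
    printf("total: %.2f GiB\n", total_bytes / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```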