NVIDIA Finally Adds Native Python Support to CUDA

At GTC, NVIDIA announced native support and full integration of Python in its CUDA toolkit. Developers will be able to use Python for direct execution of algorithmic-style computing on GPUs.

“We’ve been working hard to bring accelerated Python, first class, into the CUDA stack,” said Stephen Jones, CUDA architect, during a presentation at the recent GTC conference.

For programmers, the implications are massive. CUDA was born from C and C++, and now coders don’t need knowledge of those programming languages to use the toolkit.

“Python for CUDA should not look like C. It should look like Python,” Jones said.

Coders can use natural Python interfaces and the scripting model of calling functions and libraries to create AI programs for execution on NVIDIA GPUs.
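To make that scripting model concrete, here is a minimal sketch of the "call GPU libraries from Python" style, using CuPy purely as an illustration; CuPy is an existing third-party library, not the new NVIDIA stack described in the talk.

```python
# Illustrative only: a NumPy-like GPU library called from plain Python.
import cupy as cp

x = cp.random.rand(1_000_000, dtype=cp.float32)  # array allocated on the GPU
y = cp.sin(x) * 2.0                              # elementwise ops run as CUDA kernels
total = float(y.sum())                           # reduction on the GPU, scalar copied back to the host
print(total)
```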

“Python CUDA is not just C translated into Python syntax. It’s got to be something which is natural to a Python developer,” Jones said.
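As a rough illustration of what a kernel that is "natural to a Python developer" can look like, here is a sketch using Numba's existing numba.cuda JIT. The API of NVIDIA's new native Python stack may differ, so treat this as an example of the style rather than the announced toolchain.

```python
# Illustrative only: a CUDA kernel expressed as ordinary Python with Numba.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)      # global thread index
    if i < out.size:      # guard against threads past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # NumPy arrays are copied to/from the GPU automatically

assert np.allclose(out, a + b)
```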


NVIDIA’s strategy to ensure potential Mojo users choose their product—and only theirs. :blush:

From the same link:

" NVIDIA is pouring fuel into recruiting programmers and wants to support more programming languages, including Rust and Julia."


I don’t think there’s anything in Stephen Jones’s talk actually backing that statement. It looks like the author of the article just wanted to add an extra internal link to their own website.


Did you find any examples of such functionality? Benchmarks?


Has anyone here also tried Mojo? Not sure if it’s as good as they claim :slight_smile:


I don’t know: Mojo’s goal of staying close to Python’s syntax and semantics was not very attractive to me.