How to install PyTorch with CUDA support through CondaPkg

Hi All,

I would like to ask if anyone knows how to install PyTorch with CUDA support through CondaPkg. It so happens that I need to call some language models, and I have opted to use transformers from HuggingFace, as sad as that may be. But I have spent quite a few days figuring out how to install PyTorch with CUDA support, because if I just do `] conda add pytorch`, CondaPkg will install torch without CUDA support. I ended up manually editing .CondaPkg/pixi.toml, adding the lines

[system-requirements]
cuda = "12.4"

and then invoking

~/.julia/artifacts/cefba4912c2b400756d043a2563ef77a0088866b/bin/pixi add pytorch transformers accelerate

in the .CondaPkg directory. It is an ugly hack indeed, but at least it made things work. Still, if possible, I would like to know the correct solution.
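For anyone following along, the whole workaround can be sketched as a shell session. This is only an illustration of the steps described above: the pixi binary path is specific to my machine (the hashed artifact path from the post), and the CUDA version is whatever your driver supports.

```shell
# Workaround sketch: declare the CUDA virtual package by hand, then
# re-add the packages with pixi directly. Paths and versions are
# illustrative, not a blessed CondaPkg workflow.
cd .CondaPkg

# Append the system requirement that CondaPkg does not write itself.
cat >> pixi.toml <<'EOF'

[system-requirements]
cuda = "12.4"
EOF

# Re-resolve with pixi so it picks a CUDA-enabled pytorch build.
# (Use the pixi binary shipped in the Julia artifacts directory.)
pixi add pytorch transformers accelerate
```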

Thanks for any answer.
Best wishes.

There’s nothing in CondaPkg for this directly, no.

Looks like those settings are implemented as conda virtual packages. I wonder if you can add a virtual package in the same way, try CondaPkg.add("__cuda", version="12.4")?

If that doesn’t work, we could easily enough add settings to CondaPkg for CUDA versions etc.

You can set the environment variable CONDA_OVERRIDE_CUDA=12.1 which appears to be supported by conda, mamba AND pixi.
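For example, a minimal sketch of that approach, assuming the variable is set before CondaPkg resolves the environment (12.4 is just the version from the post above; substitute what your driver supports):

```shell
# Sketch: force the CUDA virtual-package version before resolving.
# CONDA_OVERRIDE_CUDA is read by conda/mamba/pixi when they detect
# the __cuda virtual package.
export CONDA_OVERRIDE_CUDA=12.4

# Then trigger a resolve so the override takes effect.
julia -e 'using CondaPkg; CondaPkg.resolve()'
```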

You can also just add the pytorch-gpu package to your project I think.

Hi,

thank you for the answers. I have tried pytorch-gpu, but that did not work. I will try the other solutions.