GPU and Thread-Parallel Support for Turing.jl

Hey all,

I’m a bit new to the Julia ecosystem and have a couple of questions that some searching around didn’t answer:

  1. Is there a way to use the GPU to sample from Turing.jl models? If there isn’t, is there a way to use a GPU to sample from probability distributions in general? (There is a small CUDA.jl sketch after this list showing the part I do understand.)
  2. Is there documentation on ways to distribute sampling across threads? Right now I’m running the multi-threaded chain sampler, which samples an independent chain on each thread (roughly what I’m doing is sketched below), but I was wondering whether there is more fine-grained parallelism I could get.
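To make question 1 concrete: I know plain random-number generation can run on the GPU via CUDA.jl, but I’m not sure how (or whether) that extends to sampling distributions or full Turing models. A minimal sketch of the part I do understand, assuming CUDA.jl is installed and a CUDA-capable GPU is available:

```julia
# Minimal sketch, assuming CUDA.jl and a CUDA-capable GPU.
# This only draws raw random numbers on the device; it is not sampling a Turing model.
using CUDA

z = CUDA.randn(Float32, 10^6)   # 10^6 standard-normal draws as a CuArray
u = CUDA.rand(Float32, 10^6)    # 10^6 uniform(0, 1) draws as a CuArray

# Reparameterize to Normal(μ, σ) entirely on the GPU via broadcasting
μ, σ = 2.0f0, 0.5f0
x = μ .+ σ .* z
```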
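For context on question 2, this is roughly what I’m running at the moment (the model and data here are just placeholders); each chain still samples serially on its own thread:

```julia
# Rough sketch of my current setup: independent chains, one per thread.
# Assumes Julia was started with multiple threads, e.g. `julia --threads 4`.
using Turing

@model function demo(y)
    μ ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1), 0, Inf)
    y .~ Normal(μ, σ)
end

y = randn(100)

# MCMCThreads() runs the chains in parallel across threads, but each
# individual chain is still sampled on a single thread.
n_chains = Threads.nthreads()
chains = sample(demo(y), NUTS(), MCMCThreads(), 1_000, n_chains)
```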

Thanks

Have a look at this reply
