I’m a bit new to the Julia ecosystem and have a couple of questions that searching around didn’t answer:
- Is there a way to use the GPU when sampling Turing.jl models? And if not, is there a way to use a GPU to sample from probability distributions in general?
- Is there documentation on ways to distribute sampling across threads? Right now I’m running the multi-threaded chain sampler, which samples independently on each thread, but I was wondering whether more fine-grained parallelism is available.
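For the GPU question, to clarify what I mean: I can generate raw draws on-device with CUDA.jl (sketch below, assuming a CUDA-capable GPU; the array size is arbitrary), but I don’t see how to connect something like this to Turing’s samplers.

```julia
using CUDA

# Draw a million standard normals directly on the GPU as a CuArray
x = CUDA.randn(Float32, 10^6)

# This stays on-device, but I don't know how (or whether) Turing
# can exploit this kind of on-GPU sampling during inference.
```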
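For context on the threading question, here’s roughly what I’m doing now (a toy model standing in for my real one; the actual model and sampler settings differ):

```julia
using Turing

# Simplified stand-in for my actual model
@model function demo(y)
    μ ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1); lower=0)
    y .~ Normal(μ, σ)
end

model = demo(randn(100))

# Current setup: 4 independent chains, one per thread, via MCMCThreads
chains = sample(model, NUTS(), MCMCThreads(), 1_000, 4)
```

This gives me chain-level parallelism, but each individual chain still runs serially, which is why I’m asking about anything more fine-grained (e.g. within-chain parallelism).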