I’m trying to run the file found here: https://github.com/FluxML/model-zoo/blob/master/vision/vae_mnist/vae_mnist.jl on newer versions of Flux and CUDA:
CUDA = v3.4.2
Flux = v0.12.6
Does anyone know what could be causing the errors when computing the loss/gradient? It seems odd that this step is broken.
When I run the example on a GPU with the latest versions of Flux and CUDA, I get the error “ERROR: this intrinsic must be compiled to be called” when the model loss is being calculated.
When I run the example on the CPU, the loss diverges instead.
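For reference, the failure happens inside the gradient call. Here is a minimal sketch of that step in case it helps reproduce the error; the layer sizes and the names `encoder_μ`, `encoder_logσ`, and `decoder` are stand-ins of mine, not the exact model-zoo definitions:

```julia
using Flux, CUDA
using Flux.Losses: logitbinarycrossentropy

# Hypothetical stand-ins for the model-zoo encoder/decoder, just to exercise
# the same loss/gradient path on the GPU.
encoder_μ    = Dense(28^2, 2) |> gpu
encoder_logσ = Dense(28^2, 2) |> gpu
decoder      = Chain(Dense(2, 200, tanh), Dense(200, 28^2)) |> gpu

function model_loss(x)
    μ, logσ = encoder_μ(x), encoder_logσ(x)
    # reparameterisation trick: z = μ + σ .* ε, with ε sampled on the CPU and moved over
    ε = gpu(randn(Float32, size(μ)))
    z = μ .+ exp.(logσ) .* ε
    x_rec = decoder(z)
    n = size(x, 2)
    # reconstruction term plus KL divergence against a unit Gaussian
    rec = logitbinarycrossentropy(x_rec, x; agg = sum) / n
    kl  = 0.5f0 * sum(@. exp(2f0 * logσ) + μ^2 - 1f0 - 2f0 * logσ) / n
    return rec + kl
end

x  = gpu(rand(Float32, 28^2, 16))        # dummy mini-batch
ps = Flux.params(encoder_μ, encoder_logσ, decoder)
gs = gradient(() -> model_loss(x), ps)   # the error is thrown here on the GPU
```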
Edit:
Updating Zygote to the latest version on GitHub fixes the GPU error, but the loss still diverges.
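(For anyone trying the same thing, the GitHub version can be installed with something like the following; the URL points at the main Zygote repo.)

```julia
using Pkg
Pkg.add(url = "https://github.com/FluxML/Zygote.jl")  # tracks the default branch
```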