I am programming in Float32 to improve speed. In my calculations, however, Inf or NaN values occasionally occur due to the use of `exp.(x)`. For example, with x = 90.30891f0, exp(x) yields Inf32, even though x itself is a perfectly ordinary value.
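A minimal reproduction of the overflow (the threshold is just `log(floatmax(Float32))`, about 88.72, which is a fact of the Float32 format rather than anything specific to my code):

```julia
x = 90.30891f0

println(exp(x))                  # Inf32: overflows in Float32
println(log(floatmax(Float32)))  # ≈ 88.72284, the largest x with a finite exp(x)
println(exp(Float64(x)))         # ≈ 1.66e39, finite in Float64
```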

I was wondering if there is a way to prevent this behaviour without increasing the floating-point precision? In general, I have no idea where these values might occur.

Thank you. Maybe I'll go back to Float64…
Just a bit curious why PyTorch uses Float32 as the default, since using exp in Float32 can so easily overflow.

If you need exp(x) as an intermediate value but you're going to combine it with another calculation, then there are often ways to do the entire calculation in a more numerically stable way.

An example: exp(x)/exp(y) = exp(x - y), and there are similar identities for other combinations. This is pretty standard numerical-methods material. If, on the other hand, you just need to output the value of exp(x) for a large x, then you'll have to switch to a higher precision, e.g. `exp(convert(Float64, x))`.
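To make the exp(x)/exp(y) = exp(x - y) rearrangement concrete, here is a sketch in Float32. The same idea is the standard max-subtraction trick used to stabilize softmax; the function name `stable_softmax` is made up here for illustration:

```julia
# The naive quotient overflows in the intermediates even though the ratio is modest:
x, y = 95.0f0, 93.0f0
println(exp(x) / exp(y))  # Inf32 / Inf32 = NaN32
println(exp(x - y))       # ≈ 7.389f0, the same quantity, no overflow

# Subtracting the maximum shifts every exponent to ≤ 0,
# so none of the exp calls can overflow.
function stable_softmax(v::AbstractVector{Float32})
    shifted = v .- maximum(v)
    e = exp.(shifted)
    return e ./ sum(e)
end

println(stable_softmax(Float32[90.3, 91.1, 89.9]))  # finite probabilities summing to 1
```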

You can work in Float32 but convert to Float64 before applying exp or other functions that grow large.