# How to prevent Inf or NaN at low floating-point precision

Hi

I am programming in Float32 to improve speed. In my calculations, however, Inf or NaN values occur occasionally due to the use of `exp.(x)`. For example, `x = 90.30891f0; exp(x)` yields `Inf32`, even though `x` itself is a perfectly ordinary value.

I was wondering if there is a way to prevent this behaviour without increasing the floating-point precision? In general, I have no idea in advance where these values might occur.

Thank you very much!

I mean…

```julia
julia> prevfloat(typemax(Float32))
3.4028235f38

julia> exp(90.30891)
1.6621158068743495e39
```

`x` is fine, but `exp(x)` is simply outside of `Float32`'s range; nothing Julia can do here, sorry.
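To make the cutoff concrete (a sketch, not from the thread): the largest input `exp` can handle in `Float32` is `log(floatmax(Float32))`, roughly `88.72f0`, so anything above that overflows to `Inf32`.

```julia
# Float32 overflow threshold for exp: log of the largest finite Float32.
threshold = log(floatmax(Float32))   # ≈ 88.72284f0

exp(88.0f0)       # still finite (≈ 1.6517e38 < floatmax(Float32))
exp(90.30891f0)   # Inf32: the true value ≈ 1.66e39 exceeds floatmax(Float32)
```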


Thank you. Maybe I'll go back to Float64…
Just a bit curious why PyTorch uses Float32 as the default, since using `exp` at Float32 can easily cause an overflow.

speed

because ML typically doesn't need Float64 for its "weights", and the data are "normalized"


If you need `exp(x)` as an intermediate value but you're going to mix it with another calculation, then there are ways to do the entire calculation in a more stable way.

An example: `exp(x)/exp(y) = exp(x-y)`, and similar identities. This is pretty standard numerical-methods stuff. If, on the other hand, you just need to output the value of `exp(x)` for a large `x`, then you'll have to switch to a higher precision, for example `exp(convert(Float64, x))`.
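A common instance of this idea (my example, not from the thread) is the log-sum-exp trick: subtracting the maximum before exponentiating keeps every `exp` argument non-positive, so nothing overflows even when the raw `exp.(x)` values would all be `Inf32`.

```julia
# Numerically stable log(sum(exp.(x))): shift by the maximum so every
# exponent is <= 0, then add the shift back outside the log.
function logsumexp(x::AbstractVector{Float32})
    m = maximum(x)
    return m + log(sum(exp.(x .- m)))
end

x = Float32[90.30891, 91.0, 89.5]
logsumexp(x)   # finite (≈ 91.5), even though each exp(xᵢ) alone is Inf32
```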

You can work with Float32 but convert it before applying exp or other functions that grow large.
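For instance (a minimal sketch of that conversion), you can keep your arrays in Float32 and widen only at the call site:

```julia
x = 90.30891f0        # stored and computed in Float32

exp(x)                # Inf32: overflows Float32's range
exp(Float64(x))       # ≈ 1.662e39, comfortably within Float64's range
```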
