Has anyone noticed anything odd with `lufact` on `BigFloat`s? Is there some theoretical instability I should know about? I ask because, while testing some algorithms, I am noticing a pattern: the same methods (ODE solvers) with the same coefficients (some methods are only specified to `Float64` precision) suddenly fail when the coefficients are converted to `BigFloat`s. But I can "save" the methods if I change the linear solve from `lufact` to `qrfact`.

This is odd and goes against my intuition: why would the same numbers give a bad `lufact` when converted to `BigFloat` but not as `Float64`s? If there's no theoretical reason for this and it's likely a problem with the generic fallback, I'll dig into it more and make an MWE. I just wanted to ask first, because it might take a while to pull out an actual example that isn't integrated with everything else, but I have many different tests showing that it's only `lufact`. I know `qrfact` is more stable, but this still seems bizarre to me.
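For reference, this is roughly the shape of comparison I plan to reduce things to for the MWE. The matrix `A64` below is just a hypothetical ill-conditioned example for illustration, not my actual ODE coefficient matrix, and the failure I see in the real code may not reproduce with it:

```julia
# Hypothetical sketch of the MWE (Julia 0.6-era names, matching
# lufact/qrfact above). A64 is a made-up nearly singular matrix,
# NOT the actual solver coefficients.
A64 = [1.0        2.0;
       1.0 + 1e-12 2.0]
b64 = [1.0, 2.0]

# Same numbers, promoted elementwise to BigFloat.
Abig = big.(A64)
bbig = big.(b64)

# Solve with LU in both precisions, and with QR in BigFloat.
x_lu_64  = lufact(A64)  \ b64
x_lu_big = lufact(Abig) \ bbig   # the generic LU fallback path
x_qr_big = qrfact(Abig) \ bbig   # the "rescue" path from the post

# Compare residuals to see whether the BigFloat LU solve degrades.
@show norm(A64  * x_lu_64  - b64)
@show norm(Abig * x_lu_big - bbig)
@show norm(Abig * x_qr_big - bbig)
```

If the residual from `lufact(Abig)` comes out much worse than the other two on real coefficient matrices, that would point at the generic LU fallback rather than at the numbers themselves.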