I pulled out an example. In this case, it looks like lufact! is the only one that gives the correct answer with BigFloat:
```julia
A = 92.317*eye(4)
b = [0.0970454, 0.00944241, 0.167562, 0.08518]
println("True solution")
println(b/92.317)

lufact!(A)
println("lufact")
println(A\b)

A = 92.317*eye(4)
qrfact!(A)
println("qrfact")
println(A\b)

A = 92.317*eye(4)
svdfact!(A)
println("svdfact")
println(A\b)

Abig = big.(A)
bbig = big.(b)
lufact!(Abig)
println("lufact bigfloat")
println(Float64.(Abig\bbig))

Abig = big.(A)
qrfact!(Abig)
println("qrfact bigfloat")
println(Float64.(Abig\bbig))

Abig = big.(A)
using GenericSVD
svdfact!(Abig)
println("generic svdfact bigfloat")
println(Float64.(Abig\bbig))
```
This prints out the following:
```
True solution
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
lufact
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
qrfact
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
svdfact
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
lufact bigfloat
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
qrfact bigfloat
[-0.00105122, -0.000102282, -0.00181507, 0.00092269]
generic svdfact bigfloat
[0.0485227, 0.00472121, 0.083781, 0.04259]
```
Now I’m getting really confused. There’s definitely something odd in GenericSVD though.
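One thing that might explain at least the qrfact! row: the `!` factorizations overwrite `A` with their packed internal storage (e.g. Householder reflectors for QR) and return a factorization object, so a later `A \ b` is not a solve with the original matrix — when it still gives the right answer, as it seems to here for LU on a diagonal matrix, that may just be a coincidence of the storage layout. A minimal sketch of using the returned factorization instead (written against the current names `lu!`/`qr!` in the LinearAlgebra stdlib, which replaced `lufact!`/`qrfact!` in Julia 0.7; `Matrix(I, 4, 4)` replaces the removed `eye(4)`):

```julia
using LinearAlgebra

# Same data as above; eye(4) is gone in current Julia, so build the
# scaled identity explicitly.
A = 92.317 * Matrix{Float64}(I, 4, 4)
b = [0.0970454, 0.00944241, 0.167562, 0.08518]

# qr! (formerly qrfact!) overwrites A with packed Householder storage
# and returns a QR factorization object. Solves should go through that
# object, not through the mutated array.
F = qr!(A)
x = F \ b   # solve via the factorization; A itself no longer holds the matrix
```

The same pattern should apply to the BigFloat case: solving with the factorization object returned by the in-place call, rather than backslashing the overwritten array, sidesteps the question of what the mutated storage happens to contain.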