So what exactly is the issue you’re seeing? Note that by default, the DataFrame display shows a “short” representation of Float64s, so it may just look like a rounding problem even though the full value is still there. For example:
```julia
julia> df = CSV.read("/Users/jacobquinn/Downloads/decimals.csv")
19×1 DataFrames.DataFrame
│ Row │ csv      │
│     │ Float64  │
├─────┼──────────┤
│ 1   │ -5.82697 │
│ 2   │ 16.0226  │
│ 3   │ 9.29517  │
│ 4   │ 16.0     │
│ 5   │ 221.891  │
│ 6   │ 12.7143  │
│ 7   │ 14.0344  │
│ 8   │ 4.51015  │
│ 9   │ 4.28868  │
│ 10  │ 68.4703  │
│ 11  │ 93.0806  │
│ 12  │ 3.7603   │
│ 13  │ 3.92983  │
│ 14  │ 6.65744  │
│ 15  │ 5.84855  │
│ 16  │ 44.4667  │
│ 17  │ 66.6438  │
│ 18  │ 74.6098  │
│ 19  │ 529.389  │

julia> df[1]
19-element CSV.Column{Float64,Float64}:
  -5.826973229
  16.0226433152842
   9.29517069756561
  16.0000465481671
 221.890958251314
  12.7143060136525
  14.0344227255913
   4.51014969063621
   4.28867745292715
  68.4703262835639
  93.0806006548047
   3.76030285138141
   3.92982841628694
   6.65743838672069
   5.84854811215371
  44.466724629254
  66.6437870206549
  74.6097502012099
 529.388621939906
```
Notice how looking at the raw array values shows the full precision of the Float64s; nothing is lost in parsing, only in the table display.
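To convince yourself that only the display is rounded and the stored value is intact, here’s a minimal sketch. The literal is copied from the full-precision output above, and the `%.6g` format is just an assumption standing in for however the table view truncates:

```julia
using Printf

# One of the values from the column above, at full precision.
x = parse(Float64, "16.0226433152842")

# A short, table-style rendering (assumed ~6 significant digits).
@printf("%.6g\n", x)   # prints 16.0226

# Printing the Float64 directly round-trips the full value.
println(x)             # prints 16.0226433152842
```

The underlying bits never change; the short form is purely a presentation choice made when rendering the DataFrame.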