Will imprecise float printing affect my optimization solving time?

I have a JuMP model, and when I show my objective function:

@show objective_function(model)
objective_function(model) = -0.17258400000000138 x[1,1] - 0.03807000000000031 x[2,1] - 0.07106400000000057 x[3,1] - 0.07360200000000058 x[4,1] - 0.9822060000000078 x[5,1] - 0.07106400000000057 x[6,1] - 0.11674800000000093 x[7,1] - 0.10913400000000087 x[8,1] - 0.025380000000000204 x[9,1] - 0.28171800000000224 x[10,1] - 0.10152000000000082 x[11,1] - 0.05837400000000047 x[12,1] - 0.030456000000000243 x[13,1] - 0.035532000000000286 x[14,1] - 0.035532000000000286 x[15,1] - 0.16497000000000128 x[16,1] - 0.17004600000000136 x[17,1] - 0.126900000000001 x[18,1] - 0.022842000000000178 x[19,1] - 0.0888300000000007 x[20,1] - 0.03299400000000026 x[21,1] - 0.03807000000000031 x[22,1] - 0.1776600000000014 x[23,1] - 0.035532000000000286 x[24,1] - 0.1776600000000014 x[25,1] - 0.0634500000000005 x[26,1] - 0.18019800000000144 x[27,1] - 0.030456000000000243 x[28,1] - 0.022842000000000178 x[29,1] - 0.06598800000000052 x[30,1] - [[...521124 terms omitted...]] + 211836.24000000002 y[337] + 27917.819999999996 y[338] + 57334.20000000001 y[339] + 27735.120000000003 y[340] + 25880.819999999996 y[341] + 41375.67 y[342] + 40612.530000000006 y[343] + 119816.54999999997 y[344] + 130246.62000000001 y[345] + 69785.09999999999 y[346] + 24123.959999999995 y[347] + 22560.510000000002 y[348] + 35610.12 y[349] + 62765.219999999994 y[350] + 47253.78 y[351] + 67898.46 y[352] + 25748.519999999997 y[353] + 19944.329999999998 y[354] + 22876.350000000002 y[355] + 23380.56 y[356] + 24405.15 y[357] + 20922.93 y[358] + 21087.36 y[359] + 18002.88 y[360] + 17254.440000000002 y[361] + 24359.58 y[362] + 26710.109999999997 y[363] + 24369.45 y[364] + 25466.699999999997 y[3

Some of the coefficients have imprecise floats. For instance, the first term should be:

-0.172584 x[1,1] while it is displayed as -0.17258400000000138 x[1,1]

I believe this is because I am reading data from CSV files and then multiplying floats together, performing computations before obtaining my objective’s coefficients.

Does this affect my JuMP model’s solving time and performance? Is it something I should care about, maybe by changing the coefficients’ types, or is it only a display and cosmetic matter?

It doesn’t matter (as long as you don’t have redundant constraints with close, but not quite the same, numbers).
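As a hypothetical illustration of that caveat (the values below are made up, not taken from the model above): rounding noise can make two constraints that were meant to be identical differ only in their last digits, so presolve may no longer recognize them as redundant.

julia> using JuMP

julia> model = Model();

julia> @variable(model, x >= 0);

julia> @constraint(model, 0.172584 * x <= 1);             # intended constraint

julia> @constraint(model, 0.17258400000000138 * x <= 1);  # "same" constraint after rounding noise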

I believe this is because I am reading data from CSV files and then multiplying floats together, performing computations before obtaining my objective’s coefficients.

Yes, this is the cause. I don’t know what exact numbers you used, but it should be expected:

julia> x = 0.172854
0.172854

julia> x * 10 * 0.1
0.17285400000000004
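The root cause is that most decimal fractions have no exact binary (Float64) representation, so each intermediate operation rounds to the nearest representable value. A classic, self-contained demonstration (unrelated to your CSV data):

julia> 0.1 + 0.2
0.30000000000000004

julia> 0.1 + 0.2 == 0.3
false

julia> isapprox(0.1 + 0.2, 0.3)   # compare with a tolerance instead of ==
true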

Does this affect my JuMP model’s solving time and performance? Is it something I should care about, maybe by changing the coefficients’ types, or is it only a display and cosmetic matter?

In most cases, no. It is just cosmetic.
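For a rough sense of scale (a back-of-the-envelope check, using the coefficient from your post): the noise appears around the 15th significant digit, which is many orders of magnitude below typical solver feasibility and optimality tolerances (commonly somewhere in the 1e-6 to 1e-9 range), so the solver effectively sees the same problem.

julia> c_clean = 0.172584;

julia> c_noisy = 0.17258400000000138;

julia> rel_err = abs(c_noisy - c_clean) / abs(c_clean);   # ≈ 8e-15, far below solver tolerances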

Where it usually does become a problem is going the other way: truncating your data by deleting some of the coefficients. Gurobi has a good article on this (and many other topics):