COBOL is still one of the more widely used languages (just not one of the most talked about), as are other ~50-year-old languages such as Fortran, C and MUMPS.
It may be decades before those fixed-width data formats actually disappear, and CSV is not really that great a data format either (there’s no real standard, just some RFCs, with tons of exceptions in practice).
Most sane data providers are not going to muck around with the output format of some old workhorse COBOL or Fortran code, just to make it easier to deal with in “new” languages.
From my point of view, UTF-8 is also from a different era than fixed-width data files.
Column autodetection could be very useful for semi-manual data analysis! I would like to see your package too!
But autodetection (unless it is very clever AI) is not suitable for a production environment in industry (government, banking, etc.) for data that is regularly imported in bulk from “external” sources.
Maybe it would be helpful for @RandomString123 to see your code… (But I fully respect it if you don’t want to show it before publishing the package!)
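For illustration only, a naive form of column autodetection for fixed-width files could look like the sketch below (my own toy heuristic, not the package being discussed): any byte position that is blank in every record is treated as a separator. It assumes ASCII data and records of equal width.

```julia
# Toy sketch of column autodetection for fixed-width records (ASCII assumed).
# A position that is a space in *every* line is treated as a separator;
# the maximal runs of non-blank positions are the detected columns.
function detect_columns(lines::Vector{String})
    width = maximum(length.(lines))
    padded = rpad.(lines, width)
    blank = [all(line[i] == ' ' for line in padded) for i in 1:width]
    ranges = UnitRange{Int}[]
    start = 0
    for i in 1:width
        if !blank[i] && start == 0
            start = i                      # a new column begins here
        elseif blank[i] && start != 0
            push!(ranges, start:i-1)       # the column just ended
            start = 0
        end
    end
    start != 0 && push!(ranges, start:width)
    return ranges
end

detect_columns(["Alice   34 Prague  ",
                "Bob      7 Brno    "])    # => [1:5, 9:10, 12:17]
```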
If you, for example, read data in CP-1250 and write it to a DB with LATIN2 encoding, you don’t need to convert the intermediate data to UTF-8, because that would hurt performance. (Here is very probably a place for a string implementation other than the current Base.String/Unicode.String.)
Also, you don’t want to show a UTF-8 encoding error if during the process you could not convert the euro sign (CP-1250 has it and LATIN2 doesn’t), because in this case your users are not dealing with UTF-8 at all.
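To make that concrete, here is a minimal sketch (the function name and table are hypothetical, and only the ASCII part of the mapping is filled in) of transcoding CP-1250 bytes directly to LATIN2 bytes through a 256-entry lookup table, with no intermediate UTF-8 string, and with failures reported in the encodings the user actually works with instead of as a UTF-8 error:

```julia
# Sketch: direct byte-to-byte transcoding CP-1250 -> LATIN2 (ISO-8859-2).
# Both are single-byte encodings, so a 256-entry table is enough and no
# intermediate UTF-8 String is ever built.
const CP1250_TO_LATIN2 = Vector{Union{UInt8, Nothing}}(nothing, 256)
for b in 0x00:0x7f
    CP1250_TO_LATIN2[Int(b) + 1] = b       # the ASCII range is identical
end
# The upper half (0x80 to 0xff) would be filled in from the published code
# charts; e.g. the euro sign (0x80 in CP-1250) stays `nothing`, since LATIN2
# has no euro sign.

function transcode_cp1250_to_latin2(input::Vector{UInt8})
    out = Vector{UInt8}(undef, length(input))
    for (i, b) in pairs(input)
        mapped = CP1250_TO_LATIN2[Int(b) + 1]
        if mapped === nothing
            # Report the problem in the user's encodings, not as a UTF-8 error:
            # UTF-8 never enters the picture here.
            error("CP-1250 byte 0x$(string(b, base = 16)) has no LATIN2 equivalent")
        end
        out[i] = mapped
    end
    return out
end
```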