It seems like people here in this conversation believe that Julia is really quite difficult and annoying to use for reading in data sets, transforming them, plotting them, and then fitting 5-20 different models on those datasets. I find this bizarre, because that's literally what I've been doing for the last few weeks, and it couldn't have been more enjoyable.
Are there a few warts? Yes. For example, when reading in all the Census ACS microdata for 5 years, I hit a bug/issue in memory management. My actual dataset was around 200MB after subsampling, but Julia was taking up 10GB, apparently due to memory fragmentation or something similar. But I worked around it in a few minutes, and I've had to work around issues in every language I've ever used, so it didn't feel any different from the general problem of working around bugs in any language.
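I didn't say above what the workaround was, so here is just a hypothetical sketch of the kind of thing that helps in this situation: drop references to the large intermediate and force a full collection, or cap the heap at startup.

```julia
# Hypothetical sketch of a memory-pressure workaround (not necessarily
# what I actually did). Drop the big intermediate, then force a full GC:
big = rand(10^8)          # large temporary array (~800MB)
small = big[1:10^5]       # keep only the subsample you need
big = nothing             # release the reference to the large array
GC.gc(true)               # request a full garbage collection

# Since Julia 1.9 you can also cap the heap at startup:
#   julia --heap-size-hint=4G script.jl
```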
But, at its core, Julia is a JOY to use because it's fast and lets me do things that just couldn't be done in R, and most likely couldn't be done in STATA either, though I admit to having zero experience with STATA. Still, I imagine it'd be hair-pulling to write, say, a spatial agent-based model or a differential equations model in STATA.
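To make the differential-equations point concrete, here's a minimal sketch using DifferentialEquations.jl (the model and parameters are just illustrative): a logistic growth ODE defined and solved in a few lines, something with no natural equivalent in a stats package.

```julia
using DifferentialEquations

# Logistic growth: du/dt = r*u*(1 - u/K), starting from u0 = 0.1
f(u, p, t) = p.r * u * (1 - u / p.K)
prob = ODEProblem(f, 0.1, (0.0, 10.0), (r = 1.0, K = 1.0))
sol = solve(prob)         # default algorithm chosen automatically
```

The solution converges toward the carrying capacity K = 1 over the time span.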
But even when I'm doing things that 100% could be done in R or STATA, the Julia version is just cleaner and better thought out, compared with, for example, all the BS in R/Tidyverse with its nonstandard evaluation.
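A small sketch of what I mean about avoiding nonstandard evaluation, using DataFrames.jl (column names here are made up): column names are ordinary values (Symbols), so they compose like any other data instead of being captured syntax.

```julia
using DataFrames, Statistics

df = DataFrame(x = 1:3, y = [10.0, 20.0, 30.0])

# A column name is a plain value you can store in a variable and pass around,
# no quosures or tidy-eval needed:
col = :x
transform!(df, col => (v -> v .^ 2) => :x_sq)

# Summaries use the same source => function => target minilanguage:
summary_df = combine(df, :y => mean => :y_mean)
```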
TTFX (time to first X, e.g. time to first plot) has 100% not been an issue for me, even when I'm not using my custom-made sysimage with all the plotting, CSV, DataFrames, etc. packages included.
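For anyone curious, a custom sysimage like the one I mentioned can be built with PackageCompiler.jl; this is just a sketch, and the package list and output filename are illustrative rather than my exact setup.

```julia
using PackageCompiler

# Bake frequently used data-analysis packages into a sysimage so they
# load precompiled at startup, cutting TTFX:
create_sysimage([:CSV, :DataFrames, :Plots];
                sysimage_path = "data_sysimage.so")

# Then launch Julia with:
#   julia --sysimage data_sysimage.so
```

The build itself takes a while, but it's a one-time cost per package set.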
I mean, to each his own I guess, but seriously, I'm going to start a new thread for head-to-head comparisons of Julia vs. X for data analysis… I think we need to dispel some of these myths. The difference between Julia today and even Julia in 2019 is huge, and that's not true for R, STATA, or Matlab, so I feel like people have an outdated idea of how annoying Julia is to use as of this actual moment.