I realize I should do one bulk reply rather than individual replies; I’m more used to chat apps than forums.
This could be. I agree that modularity is part of the ethos of Julia. The more I think on this, the more I see that most of my concerns fall under the “ability to seamlessly work with and translate between different spatial objects.” If terra were split into SpatVect, SpatRast, and SpatTranslate (translating between vector and raster objects), I would still call it a great suite of packages to learn and use. I’ll do some more digging on my own into what solutions JuliaGeo currently has for this. All this to say, I think it’s completely okay to have the individual modules, as long as there are wrapper packages for the core Geo types; a sketch of what I mean is below. R people juggled sp and sf for over a decade, and something similar happened in Python. It’s just part of ecosystem development.
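Purely as a sketch of the wrapper idea (SpatVect and SpatRast here are the hypothetical split-out packages from above, not real registered packages), an umbrella package could just re-export the pieces so users learn one namespace:

```julia
# Hypothetical umbrella package; SpatVect and SpatRast are the imagined
# split-out packages from the paragraph above, not real JuliaGeo packages.
module GeoSuite

using Reexport  # Reexport.jl makes another package's exports our own

@reexport using SpatVect  # vector types and operations
@reexport using SpatRast  # raster types and operations

end # module
```

Then `using GeoSuite` gives you everything, while the modules stay independently maintainable underneath.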
I’ll be honest, yesterday I was sort of shocked there was such strong interest in native Julia I/O. In my head, I had binned GDAL as “a C++ library that’s been optimized for decades and is the standard for geocomputing, sort of like BLAS.” Thinking on it with hindsight (and without knowing the whole history), GDAL was, in a way, designed to be driven from interpreted languages, which is something pure C++ or Julia does not need to account for. As a result, modify-in-place loop operations can be faster in native code; the sketch below shows what I mean.
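A minimal, unbenchmarked sketch of the kind of operation I mean: in pure Julia you can mutate raster values in a tight loop, with no FFI round-trips and no temporary copies, which is exactly the access pattern an interpreted front-end over a C library has to design around:

```julia
# Minimal sketch: scale every cell of an in-memory raster band in place.
# `band` is just a plain Julia matrix standing in for a raster band.
function scale_band!(band::AbstractMatrix{<:Real}, factor::Real)
    @inbounds for i in eachindex(band)
        band[i] *= factor  # mutate in place; no temporaries, no FFI calls
    end
    return band
end

band = rand(Float32, 256, 256)  # stand-in for a 256×256 raster band
scale_band!(band, 10)
```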
I can see the advantage of native Julia I/O, though. I’ve tried diving into terra’s source code, and I learned it is shockingly type-restrictive and uses some WEIRD data structures to work around that.
Reading the benchmarks and other material shared here, I’m stunned by the performance. Admittedly it’s not too shocking in hindsight, considering I myself have taken R code and cut its runtime by 90% in specific use cases, even when the R code was calling compiled libraries. This is a SIGNIFICANT factor that I value a lot, and it’s good to see.
That’s one weird thing I’ve found with Julia: its composability is amazing, but knowing how to actually get at it can be tricky. That’s why packages like GI (GeoInterface.jl) and GFT (GeoFormatTypes.jl) can be amazing but simultaneously feel limited. A part of me wants to just dive into tons of tests and examples and spend time writing documentation and guides on how to compose these types, something like the sketch below.
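For a taste of what those guides could show, here is a minimal sketch based on GeoInterface.jl’s documented implementer hooks (`MyPoint` is made up for illustration): once a type opts in to the traits, the generic accessors, and in principle any GeoInterface-aware package, can consume it.

```julia
using GeoInterface
const GI = GeoInterface

# A made-up point type, standing in for any package's own geometry.
struct MyPoint
    x::Float64
    y::Float64
end

# Opt in to GeoInterface's trait system (the documented implementer hooks).
GI.isgeometry(::Type{MyPoint}) = true
GI.geomtrait(::MyPoint) = GI.PointTrait()
GI.ncoord(::GI.PointTrait, ::MyPoint) = 2
GI.getcoord(::GI.PointTrait, p::MyPoint, i) = i == 1 ? p.x : p.y

# Generic accessors now work on MyPoint, without MyPoint's author and the
# consuming package ever knowing about each other:
p = MyPoint(1.0, 2.0)
GI.x(p), GI.y(p)    # (1.0, 2.0)
GI.coordinates(p)   # [1.0, 2.0]
```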
It does seem like the pool of contributors is very limited, which is to be expected for most OSS projects. I agree that strategy will be beneficial: essentially, bite off the important pieces first, then migrate to the more optimal stuff over time.
I’d wager that most scientific analysts I know are backwards learners too. They will use their tried-and-true method until it cannot do what they want; then they either look elsewhere or build their own tool. Literate programming will be essential, imo, for adoption. To be fair, I am very biased because I see Julia simply as “better R, so let’s PLEASE move to Julia.” That’s not 100% true yet, but without adoption it never will be.
Honestly, reading about the ecosystem differences and the competing organizations, it might be a good investment to just learn them all, find the benefits of each, and stay adaptable through this process. And for science work, maybe keep a core set of tools you’re comfortable with, but outside that, explore and contribute where you think you can.
I’m very glad I asked this question; it seems to be moving the discussion forward, plus it’s giving me inspiration.