As far as I can tell, this isn’t a missing Julia feature, but a missing package that anyone interested could implement. I don’t mean to say that it’s easy so anyone could do it, but that this simply isn’t a Julia-the-language problem.
The PR from StaticArrays.jl could perhaps be a starting point?
If programming languages had all the desired functionality for everyone built-in, then it wouldn’t be necessary to make packages, right?
Frankly, the solution to me is not promotion and marketing by the company, but those of us using it mentioning it in publications and presentations every time. Also, the indexes people refer to for ranking language popularity rely on mentions on third-party platforms like Quora and Stack Overflow. Julia has this Discourse site, which has a lot of activity that I do not believe is counted in those popularity indexes. Rightly or wrongly, someone looking for a new programming language to develop in is generally going to stop at the top 10, or the top 20 at the outside. Until Julia, which is making inroads on these indices, can crack the top 15 or 20, it will remain a back-burner programming language.
My (very limited) experience of parallelism in Julia is that badly written code tends to segfault, and segfaults in Julia are beyond my ability to debug: you can't just feed them into a debugger, because the faulting code is JIT-generated. This is (slightly) worse than C/C++/Rust, where I can at least drop into gdb and get backtraces.
Python avoids this problem by simply not doing language-level parallelism at all (because of the GIL), but one of my hopes with moving to Julia was easier parallelism without dropping to C and friends. I don’t have experience with Matlab and R.
I suspect life in Julia is much better if you use libraries, rather than creating threads yourself, but then if I’m only going to use library code rather than create my own threads, I can often get by using Python libraries already.
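To make "badly written" concrete, here is a minimal sketch (not from any real codebase) of the kind of thing I mean: `push!` on a shared `Vector` is not thread-safe, so this loop is a data race that can corrupt the vector's internal buffer and, with bad timing, crash the process.

```julia
# DANGER: data race — Vector is not thread-safe.
v = Int[]
Threads.@threads for i in 1:10^6
    push!(v, i)            # unsynchronized mutation from many threads
end

# One safe alternative: guard the shared state with a lock.
v2 = Int[]
lk = ReentrantLock()
Threads.@threads for i in 1:10^6
    lock(lk) do
        push!(v2, i)
    end
end
```

When the racy version doesn't crash, it silently drops or duplicates elements, which is arguably worse.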
I have not run into this, but it could be that my code is too excellent.
Or well, I’ve had my fair share of segfaults when using unsafe_* functions or ill-advised @inbounds, as well as deadlocks and race conditions when using threads. Segmentation faults from threading are outside my experience, though, so it would be interesting to see a (preferably small) example of when that happens. It would probably be better as a separate post with a link from this thread.
I do not know what you mean by static bitfields. If you are using arbitrarily large bit sequences, skip this.
If you mean bit-contiguous subfields held within a larger bitstype for fast stack allocation and utilization … BitsFields.jl is designed for that.
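For reference, a hand-rolled sketch of what such subfields amount to under the hood — this is plain shifting and masking, not the BitsFields.jl API:

```julia
# Pack an 8-bit "hi" field and an 8-bit "lo" field into one UInt16,
# and pull the subfields back out with shifts and masks.
pack(hi::UInt8, lo::UInt8) = (UInt16(hi) << 8) | UInt16(lo)
field_hi(x::UInt16) = UInt8(x >> 8)
field_lo(x::UInt16) = UInt8(x & 0x00ff)
```

The whole packed value stays an isbits type, so it lives on the stack and in registers rather than forcing a heap allocation.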
If you check out the Stack Overflow developer survey, there is good growth for Julia each year.
Here is my rationale for starting to learn Julia last week:
Actuarial science is a field with both high performance computing (for running actuarial models) and data science (for calculating model assumptions using data). Julia will allow actuaries to use a single language for all of their work. High performance computing in R or Python won’t be fun. Developing actuarial assumptions in C++ won’t be fun. I think using Julia will be fun for both tasks.
Working with multidimensional arrays seems much better in Julia than Python. I like being able to write loops. I think Tullio is way better than np.einsum. I was a frustrated Python user trying to use NumPy/JAX for high-performance actuarial computing that worked on both CPU and GPU. I am much happier writing Julia at the moment, even though the tooling is not what I am used to.
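As a concrete comparison (a sketch, assuming the Tullio package is installed): the contraction `np.einsum('ik,kj->ij', A, B)` can be written with explicit index notation as:

```julia
using Tullio

A = rand(3, 4)
B = rand(4, 5)

# Einstein-summation-style contraction: sum over the repeated index k.
@tullio C[i, j] := A[i, k] * B[k, j]   # equivalent to A * B here
```

The indices are ordinary loop variables, so reading it feels like reading the loop you would have written by hand.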
If you need all of the following things, then Julia is great:
- deep learning
- exploratory data analysis and data science
- flexible high performance computing
If you don’t really need all of these things, like if you only do deep learning, or only do exploratory data analysis and data science, or only do high performance computing… you might be happy where you are at.
I actually enjoy writing in the language. It is well thought out and not overly verbose.
It runs fast. My “prototype” code for research runs just as fast as our production C# code, and scales over multiple threads just as well. I have also toyed with distributing over multiple computers, which our C# code doesn’t do yet. I couldn’t believe how straightforward it was to convert my algorithms to take advantage of multiple threads, and then to distribute over multiple computers.
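A sketch of what that conversion looks like in practice (`work` is a hypothetical stand-in for the real computation):

```julia
using Distributed
addprocs(2)                        # hypothetical: launch two worker processes
@everywhere work(i) = i^2          # define the function on every worker

# Serial version
results = [work(i) for i in 1:1000]

# Threaded version (start Julia with `-t N`): independent iterations
# write to disjoint slots, so no locking is needed.
threaded = zeros(1000)
Threads.@threads for i in 1:1000
    threaded[i] = work(i)
end

# Distributed version: combine per-iteration results with a + reduction.
total = @distributed (+) for i in 1:1000
    work(i)
end
```

In each case the loop body is untouched; only the annotation around it changes.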
How effective it is to write maintainable code using Multiple Dispatch.
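For anyone unfamiliar, a minimal sketch of what that looks like: one generic function with methods chosen by the types of its arguments, so new cases are added without touching existing code.

```julia
abstract type Shape end
struct Circle <: Shape; r::Float64 end
struct Square <: Shape; side::Float64 end

# Each method extends the same generic function `area`.
area(c::Circle) = π * c.r^2
area(s::Square) = s.side^2

# Generic code works for any Shape, present or future:
total_area(shapes) = sum(area, shapes)
```

Adding a new shape later means defining one new struct and one new `area` method; `total_area` and everything built on it keep working unchanged.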