Thank you, everyone, for supporting me all along. Now I know what to do.
I've gotten pretty good at algorithms by solving around 200 problems on HackerRank and other websites. Now it's time for me to do the same with pull requests.
Let me be upfront here: my code quality might not be very clean, I might take up a lot of reviewers' time, and my pull request success rate might be low, but I will take the opportunity if given the chance.
I'm willing to implement small pieces of new functionality, optimize code, add new specialized methods, or fix other issues you find.
200 pull requests. Not sure if I'll be able to keep going, but if I don't get started, I miss all the shots I don't take.
I'm tired of having a bunch of ideas that I can't find a way to execute, and I'm pretty sure the Julia community is just as enthusiastic about fixing issues whenever they arise, even going to crazy lengths to fix them.
Optimizing things for performance would be my favorite. It should be doable by someone with an understanding of algorithms and some basic math (calculus level). Maybe autograd would be okay if it doesn't involve heavy computation. Implementing machine learning pieces like neural network layers would be good too. In any case, things that someone with CS101 knowledge and perhaps some data science knowledge can onboard to quickly would be a good fit.
I made my first Stockfish patch, Stockfish itself being quite a complicated program, and that encouraged me: I could do it. So maybe what I need is a package that's open for people to contribute to, and some developers to encourage me to jump in. I learn pretty quickly, so I should be fine with anything where the missing knowledge can be filled in quickly and doesn't take years of learning the basics to even get started (like finite element analysis, though maybe someone working on FEM might prove me wrong on that as well).
Lux is trying to move a lot of layers to Reactant; there is an issue tracking this in Lux.jl.
Also, a personal project that could help make Julia shine would be moving Transformers.jl to Lux.jl + Reactant.jl, but you will need a good enough CUDA GPU.
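If you haven't seen the workflow, here is a minimal sketch of compiling a Lux layer through Reactant, assuming the `reactant_device`/`@compile` pattern from recent Lux and Reactant versions (check the Lux docs for the current incantation; a Transformers.jl port would build on the same pattern):

```julia
using Lux, Reactant, Random

# Move parameters and data to the Reactant (XLA-backed) device.
dev = reactant_device()

model = Dense(10 => 5, tanh)                        # a simple Lux layer
ps, st = Lux.setup(Random.default_rng(), model) |> dev
x = rand(Float32, 10, 3) |> dev

# Trace and compile the forward pass once, then call the compiled function.
forward = @compile model(x, ps, st)
y, st_out = forward(x, ps, st)
```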
As for optimization, the ecosystem is still centered on Optimization.jl and JuMP.jl.
The underlying data structure for the sorted containers in DataStructures.jl is a balanced tree called a 2-3 tree. A 2-3 tree is not optimized for memory hierarchies, which hurts the performance of the sorted containers. Balanced-tree data structures optimized for the memory hierarchy are known in the literature but are much more complicated than 2-3 trees. If you want to take a shot at this, the following paper seems relevant:
Bender, Michael A., Erik D. Demaine, and Martin Farach-Colton. “Cache-oblivious B-trees.” SIAM Journal on Computing 35.2 (2005): 341-358.
I've been meaning to work on this for years but have never gotten to it.
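To see the starting point, here's a small sketch exercising the current `SortedDict`, the 2-3-tree-backed container in question; any cache-oblivious replacement would need to match this interface while laying nodes out contiguously. The timings are only illustrative:

```julia
using DataStructures

# SortedDict is backed by a 2-3 tree: balanced, but every descent is a
# chain of pointer hops, so large workloads are dominated by cache misses.
n = 10^6
ks = rand(1:10n, n)

sd = SortedDict{Int,Int}()
@time for k in ks
    sd[k] = k              # O(log n) insert, one node visit per tree level
end

@time for k in ks
    get(sd, k, 0)          # lookups walk the same pointer chains
end

# A cache-oblivious B-tree instead stores nodes in a recursive layout
# inside one array, so each cache line pulls in several tree levels.
```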
Specifically, we don't have many Statistics.jl and Combinatorics.jl kernels for the GPU.
(Comment: for the combinatorics work, I think we eventually want the physical layout to correspond to ArraysOfArrays.jl, because you want a contiguous layout for the GPU; see the sketch below.)
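To illustrate the layout point: a plain `Vector{Vector{T}}` scatters each inner vector across the heap, whereas ArraysOfArrays.jl keeps one flat buffer you could hand to a GPU kernel. A rough sketch, assuming the `VectorOfVectors`/`flatview` API shown in the ArraysOfArrays.jl README:

```julia
using ArraysOfArrays

# Ragged data stored as one contiguous buffer plus element offsets,
# instead of a Vector{Vector{Float64}} with one allocation per element.
vov = VectorOfVectors{Float64}()
push!(vov, rand(3))
push!(vov, rand(5))

vov[2]                  # a view into the shared buffer, no copying
flat = flatview(vov)    # the underlying contiguous Vector{Float64}

# `flat` is what you would upload and index from a GPU kernel;
# the per-element boundaries travel alongside as offset metadata.
```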
If you are interested in fast combinatorial functions, then SmallCombinatorics.jl may be relevant. The (few) functions it currently has are several orders of magnitude faster than their counterparts in Combinatorics.jl. (Disclaimer: I’m not familiar with kernels.)
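If you want to check that claim yourself, a micro-benchmark skeleton along these lines should work; the Combinatorics.jl side is standard, and you'd swap in the SmallCombinatorics.jl counterpart (see its docs for the exported names, which I haven't verified here):

```julia
using BenchmarkTools
using Combinatorics

# Baseline: Combinatorics.jl's lazy iterator, materialized for timing.
@btime collect(combinations(1:20, 5));

# Then time the SmallCombinatorics.jl equivalent the same way and compare.
```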