I’ve been posting several questions related to this, but the planning is still incomplete. I’m starting to think the planning will never be complete if I only keep planning, so maybe I should do what I can do first and plan from there. However, there are two main hurdles in the current plan that my implementation would depend on.
I’m planning to make KineOu (sovereign of movement), an entity component system (ECS) library that’s blazingly fast. However, there are two problems that need to be fixed before the library can be complete.
1) Garbage collection needs to be able to collect unused compiled code. (E.g.: I dynamically compile behavior for when a zombie is both on fire and poisoned; once that kind of zombie no longer exists, the compiled code currently lingers forever. A sketch of this pattern follows the list.) The Julia devs say that fixing this will be very hard (not that they don’t want to fix it, but it’s unlikely to happen in the near term). So if I released the library now, it would keep leaking memory until that fix lands.
2) LoopVectorization: it is currently deprecated and its successor is not out yet. I would likely rely on it (or its successor) for maximum performance.
3+) As I add more features, I might come to depend on even more things that are not implemented yet, in areas possibly including, but not limited to, GPU programming, differential equation solving, and collision detection.
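To make the first obstacle concrete, here is a minimal sketch of the dynamic-compilation pattern I have in mind. Everything in it (the component types, field names, and the update logic) is made up for illustration; the real library would generate such methods from whatever component combinations actually exist at runtime:

```julia
# Hypothetical component types; a real user would define their own.
struct OnFire;   damage_per_tick::Float64; end
struct Poisoned; damage_per_tick::Float64; end

# Generate (and, on first call, compile) an update method specialized
# for one particular combination of components.
function generate_update(name::Symbol, fields::Vector{Symbol})
    body = Expr(:block, (:(health -= entity.$f.damage_per_tick) for f in fields)...)
    @eval function $(name)(entity, health)
        $body
        return health
    end
end

generate_update(:burning_poisoned_update, [:on_fire, :poisoned])

# Example archetype: an entity that is both on fire and poisoned.
zombie = (on_fire = OnFire(1.0), poisoned = Poisoned(0.5))
burning_poisoned_update(zombie, 100.0)   # 98.5

# Once no entity with this component combination exists anymore, the
# generated method and its compiled code still linger; that is the leak.
```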
Should I wait for the ecosystem to be ready, or should I go ahead and implement it right now, trusting that these issues can and will get fixed? If I go ahead now, there is a risk I will run into problems I can’t fix. On the other hand, if I got my package running and it earned 100+ stars, my requests would no longer be cheaply worded wishes but issues blocking a major package, which would carry more weight; and if I improve and/or gain team support in the long run, there is a chance that I or my team could fix these issues ourselves.
This project, even though it hasn’t even started yet, has made me realize how much I am standing on the shoulders of giants.
This has been suggested to you several times before, but you should not set your hopes so high to start with. Program for yourself first, before imagining you will have the best product right away.
Why not just work incrementally on developing what you want and go from there? If you spend your time planning so that nothing is ever less than 100% perfect from the start, with no redesigns ever necessary, then nothing will ever get done.
The two obstacles you mention (GC, LoopVectorization) are large, and I doubt they are truly 100% necessary to get what you want. I doubt you can even properly assess their value until you have actually developed something. It just sounds like premature optimisation.
I agree! Start out prototyping things, reflect on how it turned out (what is good, what is missing, what needs (not merely can!) to be improved?), then go from there and iterate.
Development is best done stepwise, ideally with some kind of feedback mechanism involved (often most helpful if it comes from outside).
These two are almost certainly needed for the perfect product, and one of them is needed for what is, in my opinion, the simplest approach (leveraging dynamic compilation, which Julia makes very easy). As said, I could ship a library that slowly leaks memory over hours. Without LoopVectorization, I could cross my fingers for now and decorate the hot loops with a plain `@simd` macro, which is less than ideal but can work temporarily.
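To be concrete, here is roughly what that fallback would look like; the kernel and array names are made up for illustration:

```julia
# Fallback kernel using only Base macros. In the real library these
# arrays would be the ECS's component storage.
function update_positions!(positions::Vector{Float64},
                           velocities::Vector{Float64}, dt::Float64)
    @inbounds @simd for i in eachindex(positions, velocities)
        positions[i] += velocities[i] * dt
    end
    return positions
end

# With LoopVectorization (or its successor) the loop would instead be
# decorated with `@turbo`, which typically vectorizes more aggressively.
```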
The point being that you can most likely get close enough without these tools - hence my “100%” in that quote. I highly doubt you can truly assess the benefit of this without even a working prototype. If you truly want to make the product, the only way is to actually make it and go from there. No one can use something that doesn’t exist yet.
I doubt most packages in Julia (or any language) were built completely perfect the first time. I don’t think the developers of e.g. Makie.jl or DifferentialEquations.jl (the whole SciML ecosystem, really) would crucify me if I were to assume that, in their first iteration, they probably weren’t even close to 100% perfect. Over time they developed and became extremely important tools that people use and appreciate daily (and even then, they aren’t perfect; nothing is). There are other packages that could be given as examples, but those are my two most used.
As another example, one of my more useful packages is in an OK state currently with ~70 stars (I only mention stars since you bring them up as a metric; I wouldn’t suggest you focus on that metric so hard to begin with), but I wouldn’t say it’s all that close to perfect, and I’m still more than happy with it. I wouldn’t have gotten this far with it if I had insisted it be 100% perfect in the first iteration, and, as a developer, I am quite proud of it. Even if nobody but me ever used the package, I would probably be fine with the outcome, since I have learned a lot in developing it (and made many mistakes along the way).
You should not let these obstacles keep you from doing what you want to do; do it for your own satisfaction. Maybe you will find better ways, or find that they are not that much of an obstacle after all.
I once wrote a symbolic regression code that made use of dynamic compilation, presumably with much more frequent compilation than my ECS would need, and it still took several hours before it would crash. I think my ECS would run fine unless you ran it for more than ~10 hours straight.
If it’s truly a roadblock, then you probably just have a poor design (which, again, would be a lot clearer if you actually had a working prototype to iterate on).
I don’t think you are truly trying to take this advice or the other feedback from others in your previous threads so I’ll stop here.
I think I could start now… thanks. Now, the next problem is finding free, motivated time to start implementing my idea, which could indeed be a big issue.
It has been a difficult adjustment for me, coming from a world where I would often do big-O analysis and/or other kinds of mental analysis before writing code. Here, oftentimes it’s just trying things out, and I’ve had some failures where I wrote ~500 lines of code and it didn’t work, or didn’t end up as good as I expected.
Well no, Makie was of course perfect from the start!
Here are some old recordings from when I started adding the layout system.
One of my first proofs of concept looked like this, with an epilepsy-inducing updating bug in GLMakie:
and then after a couple of months it started looking like something:
and after that, the endless stream of bug fixes and feature requests really only began. So yeah, I wouldn’t advise trying to solve everything at once; you always learn things along the way.
Or sometimes you start with something good and then ruin it. See Twitter / X. You never know which direction you’re walking, but if you aren’t walking then you can be sure you aren’t improving.
Throughout my life, writing code has been quite a significant mental commitment. Seeing that I wrote ~1000 lines of code only for it to fail is discouraging, which is why I developed quite a defensive mental system where I would think a lot and resolve potential issues before I code.
The problem is the concept of “unused”. If you allocate a massive array and assign it to a live reference, or cache it in an object with a live reference, the garbage collector will never touch that array even if you never use it again. Only you know what you won’t use, so the GC looks for what is unreachable instead. This is how memory leaks still happen even under an ideal GC.
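A minimal sketch of that situation; the cache and the sizes here are made up:

```julia
# `huge` is never used again, but the cache keeps a live reference to it,
# so the GC can never reclaim it: the GC sees "reachable", not "used".
const CACHE = Dict{Symbol, Any}()

function load_level!()
    huge = zeros(10^8)              # ~800 MB
    CACHE[:old_level_data] = huge   # cached "just in case"
    return nothing
end

load_level!()
# The local `huge` is gone, but CACHE[:old_level_data] still holds the
# array; the memory stays allocated until we delete!(CACHE, :old_level_data).
```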
Compiled code CAN become unreachable via method invalidation. I don’t know enough about Julia’s implementation to say what happens to obsolete compiled code, but once you return to the global scope and the world ages line up, I don’t see a reason to keep it around, and I had assumed it was GCed. I do think it would be interesting for a JIT-compiled language to do something similar for temporary functions, but even closures and anonymous functions are implemented with globally scoped types and their method tables, and those references live forever. Only a closure’s captured data can be GCed once there are no more live references.
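A small example of the anonymous-function point, as far as I understand the lowering:

```julia
# Evaluating the "same" anonymous function twice produces two distinct,
# globally scoped closure types, each with its own method-table entry.
f = eval(:(x -> x + 1))
g = eval(:(x -> x + 1))

typeof(f) === typeof(g)   # false: two separately generated types
f(1) == g(1)              # true: they behave identically

# Those types (and compiled specializations of their methods) remain
# reachable from the module forever; only captured data can be freed.
```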
That said, ECSs have been implemented in AOT-compiled languages, so JIT+GCed compilation doesn’t seem necessary.
And runtime compilation does happen in the case of shader compilation, but it’s the only decent approach when a game has to work on a variety of hardware and the shader cache isn’t big enough for everything.
I don’t think lines of code is a helpful metric for quantifying effort. Writing out the code is the easy part; figuring out the design is the hard part. By the time you have a decently working implementation, you will probably have thrown away many more lines of code than the number of lines that remain. It’s not uncommon for the line count to go down as you iterate your way from a half-baked/non-functional attempt to something OK.
Some programmers advocate writing everything twice: not in the usual sense of not caring about the DRY principle, but literally, as in, for every unit of development (new feature/bugfix/etc.):
solve the problem once
stash that code away on a backup branch
solve it again, from scratch
From the link:
Rewriting the solution took only 25% of the time of the initial implementation, and the result was much better.
All that to say: don’t despair if you find yourself throwing away 1000 lines of code that didn’t work. That effort was a necessary and significant step towards a working solution.
That issue isn’t relevant to method invalidation. It’s about evaluating duplicate anonymous functions in the global scope, which, as I mentioned, are implemented as separate globally scoped types whose references live forever. At first glance, the more recent comments do talk about method overwriting and the difficulty of making older world ages unreachable. The silver lining is that compiled code can be removed even if the method isn’t, and Base.invoke_in_world being internal gives the devs leeway for bigger changes.
You’d typically be correct, but here it refers to something that would run not once or twice but roughly 60 times per entity per second, so the compilation time becomes irrelevant pretty fast.
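A back-of-the-envelope illustration, with every number made up:

```julia
# Hypothetical amortization: one-time compile cost vs. per-call savings.
compile_cost_s   = 0.05            # assume ~50 ms to compile one specialized method
calls_per_second = 60 * 10_000     # 60 updates/s for a hypothetical 10,000 entities
per_call_gain_s  = 50e-9           # assume the specialized code saves ~50 ns per call

break_even_s = compile_cost_s / (calls_per_second * per_call_gain_s)
# ≈ 1.7 s of runtime to pay back the compile cost, after which the
# specialization is pure gain for as long as that entity type exists.
```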
where I would think a lot and resolve potential issues before I code.
That’s great; it describes quite well the way I do development. But that’s what I do between coding sessions. The best advice I received before I started graduate work was:
the best place to start from is where you are
Don’t wait for other people, do what you can do, and learn as you go.
@Tarny_GG_Channie I offer a change of perspective: if you have written 1000 LOC and then see that something does not work out, you have not wasted time. Instead, you gained an insight that required you to write 1000 LOC, and that is a big step forward on the path to your goal.
Progress can appear very non-linear when you try to quantify it by objective measures. In particular, theoretical problems (such as designing a performant framework) may appear much simpler once you have understood how to solve them. A personal example: I could probably reproduce the core computations of my PhD thesis in theoretical quantum physics in a day or so. But that doesn’t make the computation trivial; in fact it was quite difficult to get the result the first time, and I made a lot of errors along the way. But every time I went the wrong way, I gained new insight into the problem. Every error was progress, in a sense.
So I am with the others in this thread: just start! Work towards your goal! Along the way you will encounter many things that don’t work (as well as they should), and by figuring out ways to overcome these you will come closer to your goal. Perhaps you will also inspire other people to join in on your efforts.