As I predicted, now you are just reinventing the wheel and creating a new distro within a distro that is completely incompatible with the host system and can’t be used to interact with the system at all.
There’s little problem with how the content is managed or how things are compiled; as long as the libraries can still call each other easily (i.e. without restrictions like Flatpak’s), it would be a good package manager.
However, this is something there should only be one of per system, instead of one for each programming environment. If each environment (Python, Julia, whatever …) has its own set of libraries to use (edit: and effectively requires you to use only those), then there’s no way to use them with each other. This is what, as I said before, makes such an assumption very selfish.
Compare that to, say, Python, where AFAICT installing a package from prebuilt binaries is supported, but integrating it with the rest of the system is equally well supported, isn’t treated as a second-class citizen, and its support isn’t constantly being deliberately broken.
This is nice; sometimes on a “badly managed” cluster it’s hard to install some software without the support of IT (especially for an extended period of use, and in a reproducible manner).
(also, a lowkey highlight for fzf; happy to see people enjoying it as much as I did when I added it)
Yes, I agree having a user package manager is nice when the system one is not usable (no root access, Windows, etc.). Even just MSYS2 has already saved me so much time on lab Windows computers = = … But it is also harmful when there is a usable and working system one.
And even in the case where a user package manager is useful, it should be environment/language neutral. Each library/app/language can certainly implement support/integration with any of these; it’s only the assumption that this integration has to exist, and the resulting breakage of other setups, that’s bad.
Ultimately users have to decide what’s the best setup for their context (in terms of usability, maintenance over time, etc.).
We’ve all been yelled at by the system package manager because we pip-installed some packages that the system happens to rely on; this is why people invented virtual environments, I guess.
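For comparison, here is a minimal sketch of Julia’s answer to the same problem (the project name is just a placeholder): per-project environments mean nothing is installed into a location the system owns.

```julia
# A per-project environment keeps package versions isolated from any system-wide install.
using Pkg
Pkg.activate("my-analysis")   # placeholder path; creates Project.toml/Manifest.toml there
Pkg.add("CSV")                # resolved and recorded only for this environment
```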
The same thing happens with binary software. This is such a big problem that different detector groups on the same collider need to maintain monstrous software environments [1] [2] just to babysit users. This creates HUGE headaches when some new software you need collides with the slowly evolving bigger environment.
So to me, since Julia has solved binary dependencies for its packages’ use, it’s fine to expose those binaries to users, giving them more choices.
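For concreteness, here is a hedged sketch of what that exposure looks like from the user’s side, using Zlib_jll as an example (the exact exported names can differ between JLLs):

```julia
# A JLL package simply exposes the prebuilt library it ships for this platform.
using Zlib_jll
println(Zlib_jll.libz_path)   # absolute path of the shipped libz inside ~/.julia/artifacts

# The library is directly usable, e.g. via ccall:
ver = unsafe_string(ccall((:zlibVersion, Zlib_jll.libz), Cstring, ()))
println("zlib version: ", ver)
```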
Not to mention that Windows users are “forced” to use cross-compiled builds instead of the much smaller VS binaries (when those exist, ofc), along with lots of files that are not needed at all (e.g. *.a, *.h, etc.).
Why? What harm does having more than one cause?
Again, who cares? Slightly bigger binaries
This is a very opinionated take (I’m an opinionated kind of guy): OSes have simply failed at providing for robust, reproducible installation of software. Worse: the brittle, irreproducible software installation they do support is wildly different on each platform, even across Linux distros. We tried using native package installers on each OS. It did not work. Moreover, Python, Ruby, R, etc have all been trying to do this for decades and also failed. This approach simply does not work. It is a dead parrot.
So we have taken the matter into our own hands and in a few short years succeeded far better than any OS besides NixOS (and more portably than NixOS) at delivering fully working binary stacks to people in a way that is fast, reliable, portable and reproducible.
Moreover, there is a mechanism for letting people use system binaries if they want to. It’s called “overrides”. It requires some tooling work to be done in order for it to be very useful, but the basic functionality has been there from the first artifacts release. But no one has done any of that tooling work. Guess what? It’s open source. Someone who cares about JLL overrides needs to implement that tooling.
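For reference, the override file lives at ~/.julia/artifacts/Overrides.toml; here is a hedged sketch of what entries can look like (the hash, UUID and paths below are placeholders, and the directory layout a JLL expects depends on the package):

```toml
# ~/.julia/artifacts/Overrides.toml
# Override a single artifact by its content hash (hash and path are placeholders):
78f35e59ff09795fe2d31def8eacc151fedf43b6 = "/opt/local/libbar"

# Or override every artifact of one JLL package, keyed by its UUID (placeholder):
[aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee]
libfoo = "/usr/lib/x86_64-linux-gnu"
```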
Anyway, all this complaining about how Pkg works (emphasis on the word “works”, because it does actually work) is misplaced, because this thread is about ygg, which you can simply choose not to use if you don’t want to. So I’m splitting it into a separate thread where the complaining can continue until morale improves.
LOVE this!
I am honestly not sure what the problem would be with every language doing this.
It’s not hard drive space:
We are down to about 2 cents per GB on a hard drive.
And about 25 cents per GB on an SSD.
I mean, it just doesn’t seem like a problem at all.
Years ago I worked out that my Windows machine had 5 separate complete installs of Perl.
I don’t think I had ever even programmed Perl on that computer.
It just was bundled with things to run some of their utilities.
Was it a problem? Not at all.
A few months back I removed 6 separate installs of Python from my Mac.
Not virtual environments: totally separate installs, because I was not really paying attention when using different package managers.
Was that a problem?
Only because I suck at managing my PATH.
That one is on me.
I legit don’t understand the problem with Julia maintaining an awesome collection of cross compiled software that can be installed into the user’s home directory.
It was readily apparent to anyone that this was where it was going.
It’s not like it was a deep revelation of some dark secret.
The whole process of the binary builder ecosystem looks a lot like
http://www.linuxfromscratch.org/
I’ve a couple of questions:
- Who is in charge of keeping up with security and updating packages?
- How can I see how each package is built and what patches are applied?
- What happens when two different packages require two different, incompatible versions of the same binary?
All the build scripts are contributed by the community and hosted in the Yggdrasil repository. Packages critical to Julia base infrastructure are primarily maintained by the Julia core team, so security issues in those should get fixed in a timely manner. Otherwise, use at your own risk of course.
JLL packages are just regular Julia packages, so having different versions in different environments should not be a problem. I don’t think it’s possible to have different versions in the same environment, but dynamically linking different versions of the same library will likely cause problems anyways, not just in Julia.
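To illustrate the “different versions in different environments” point, a hedged sketch (the package name and version pins are only illustrative):

```julia
# Two separate environments can resolve different versions of the same JLL package.
using Pkg
Pkg.activate("project-old")
Pkg.add(name="OpenSSL_jll", version="1.1")   # illustrative version pin
Pkg.activate("project-new")
Pkg.add(name="OpenSSL_jll", version="3.0")   # a different, incompatible major version
```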
- What happens if I want to debug a particular package using VS?
This is probably the most interesting part about the JLL system. As a developer, I look at the system and am like, won’t people need this? But we’ve been using it for now like what, 2 years? I haven’t seen a single user actually request it. Users requested information about it in BinDeps land all of the time, so what gives? What seems to have happened is that downloading pre-built binaries has worked so seamlessly that many users now don’t even know how many binaries they are using, so they don’t even question how to improve it. It even installs CUDA for users who don’t have it, even from R. It “just works”.
So I echo @oxinabox : hard drive space is much cheaper than man hours. Giving someone a binary that actually works is pretty great. Linux distributions don’t tend to do that, but Ygg does. I really didn’t like BinDeps, Python Conda deps, etc. but I’ve had an overall positive experience with Ygg.
Thanks, Simeon. I am amazed (and thankful) by the amount of work and dedication that has been put into this, but endlessly frustrated that it was necessary in the first place.
how else can this possibly be solved? It’s not like Python doesn’t do it.
It’s simply a fact now that a package pyX may need software X with a version that is not compatible with your system (for many good reasons, e.g. pyX is used in a production system or in a lab where updates happen much more slowly due to compatibility constraints), and then you need a way to ship your own copy of X.
(I want to point out that this even happens with your “system software”, which is why Ubuntu initiated the much-hated Snap store. I don’t like it because it bloats your system, but clearly this is a very common problem in any environment.
Another solution is to always live on the edge and use the ~newest versions of ALL the upstream dependencies, which is what distros like Arch picked. But this is almost impossible in a big production environment or a lab that heavily depends on legacy software.)
Julia does this much more consistently than Python because the _jll packages are managed and dependency-resolved just like regular Julia packages.
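A minimal sketch of what “managed just like regular packages” means in practice (HDF5 is only an example):

```julia
# JLL binary dependencies go through the same resolver and manifest as any other package.
using Pkg
Pkg.activate("demo")
Pkg.add("HDF5")                             # also pulls in HDF5_jll and its JLL dependencies
Pkg.status(; mode=Pkg.PKGMODE_MANIFEST)     # the JLLs show up in the manifest like regular packages
```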
Yeah. Apple takes a 200% margin on storage upgrades. They’re basically just taking a standard nvme SSD and upcharging you a ton. For comparison, Dell only charges $225 for a 1TB upgrade (from 256gb base).
I have a Dell too, but my point is: why do the JLLs install tons of files that are not needed at all (and that amount to GBs of disk space, plus slowness when Windows is indexing them)? All that is needed is a .dll/.so, but a Julia install with a moderately small number of packages puts on the order of a hundred thousand needless files in .julia/artifacts.
And there is my other point: what if I want to debug an artifact on Windows with VS? I can’t (even assuming the artifact could be built in debug mode), because Windows users have to use what the *nix world considers good enough for Windows users.
I know I’m being unfair with the above, because what is available is kindly offered by people who put a lot of work into making it work. But why not promote JLLs built with Visual Studio as well? Most of them now have CMake build setups that would allow producing VS binaries for Windows.
If you have a hankering to add support for building MSVC binaries to BinaryBuilder and Yggdrasil, I don’t think anyone would object. If you want to use native libraries, the overrides system exists — one “merely” needs the will to use it. Also, it’s not like this was possible before BinaryBuilder.