DLL problems, episode 2

I certainly didn’t mean to imply we should go back to depending on system libraries! I think the new binary builder project is a good start towards a better arrangement, but haven’t seen the issue of inter-package coordination addressed yet in this domain. You have done an astoundingly good job of handling the inter-relations at the level of pure Julia code, where module isolation makes things more practical. I hope there will also be a good solution for DLLs, which are currently managed by the system loader in a racy way. Maybe something like modifying library names would help? I think that’s what Matlab does, for example, to isolate their dependencies.

I’m not entirely clear what the concern is, but it is certainly feasible to allow each package with a library dependency to get its own version: the interfaces of C libraries are “plain data”, so it’s hard to think of major issues with that. It seems fairly straightforward to push the flexibility upward to the Julia package layer, since Julia code is dynamic and can adjust to even fairly major differences between library versions. The brittleness of “DLL hell” comes from very rigid requirements on library versions propagating all the way from the top; that kind of rigidity doesn’t seem to be a typical property of Julia packages, since it’s easy enough to code around fairly large differences in compiled library versions. If flat-out incompatibilities do somehow become a major problem, we can always explicitly build multiple versions that are designed to be loaded together, as we do with 32-bit and 64-bit numerical libraries.


Another way to look at it is that allowing sufficient flexibility in compiled library dependencies seems like an easier problem than doing the same for Julia package dependencies—C APIs tend to be very good abstraction boundaries. We haven’t had trouble motivating developers to support a broad enough range of pure Julia dependencies, so it’s hard to imagine why library dependencies would be worse. The expectation that incompatible requirements will be a common and intractable problem seems to come from prior experience with compiled languages and .so/.dll files, where the lock-in to a particular dependency version is extreme. It’s easy to have a situation where libY-1.2.so will only work with libX-2.3.so and no other version, which causes a massive problem if libZ can only work with libX-2.4.so. There’s no reason to expect the same with Julia packages, since it’s easy to write a single version of a Julia package that dynamically looks at the version of libX it gets and works around compile-time differences at load time, the same way we do when there are API differences in Julia itself or in packages (Compat.jl is an extreme example of this).
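The load-time adaptation described above can be sketched in a few lines of Julia. This is a minimal illustration, not a real API: the function name `pick_binding` and the symbols `:foo_frobnicate` / `:foo_frobnicate2` are hypothetical stand-ins for entry points that changed between library versions.

```julia
# Sketch: a single package version supports a range of library versions by
# selecting the appropriate entry point once, when the package loads.
# The symbol names here are hypothetical.
function pick_binding(libver::VersionNumber)
    # Suppose the function was renamed/extended in libfoo 2.4:
    libver >= v"2.4" ? :foo_frobnicate2 : :foo_frobnicate
end

pick_binding(v"2.3.0")  # → :foo_frobnicate
pick_binding(v"2.4.1")  # → :foo_frobnicate2
```

In a real package the chosen symbol would then be passed to `ccall`, so one package release works against whichever library version the resolver provides.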

That’s an example of what I fear: suppose Julia package A needs libY (which requires libX-2.3) and package B needs libZ (which requires libX-2.4); how can an application use both A and B? Doesn’t the resolution for libX always go to the first one loaded?

(The real libz is a special case because the maintainers are very careful about backwards compatibility, so one just picks the latest version.)

The plan is that library versions will be resolved by the package manager the same way that package versions are. It does not, therefore, depend on which package loads a library first. The compatibility issue is not any worse for libraries than for packages—and that has not been a problem so far.

In general I’m confused by why this is a concern now. This has not been a big issue in the Julia ecosystem to date, why would that change?

Hopefully I can add something here. Maybe not. I come from a background in HPC. Being horribly Linux-specific, I often see on mailing lists, for instance for OpenMPI or for batch-processing systems: “I installed package X using apt-get or yum (pick your tool of choice). Please give me help with it.”
Often you find that the distro-supplied version is well out of date, and that the features the person on the mailing list wants are only in the latest version, or there are bugs in the older one.
Yes, the distros do a superb job of packaging a mainstream operating system; for instance, I can’t remember the last time I custom-built a kernel. However, the distros cannot keep up with the pace of technical software libraries, and probably don’t have access to the Infiniband/Omnipath hardware needed (they probably do, actually).
So yes, I think you should depend on core OS-supplied libraries like glibc, but having a slavish dependency on what, say, Ubuntu ships by default in some LTS release several years old is going to end in disappointment.
I know it is difficult to keep everything in lockstep with OS-released packages. I hoped that Pkg3 would help us with all of this.

I’m really confused by the doom and gloom tone of this thread. Pkg3 currently only resolves Julia packages but it still is and has always been the plan to also have binary shared libraries as a first-class entity, versions of which are resolved and recorded in the manifest file in the same manner as Julia packages. That is not yet implemented, but it will be in the future.

BinDeps2 makes it possible to build libraries once and install a pre-built known-good binary on any system instead of having to make build scripts that work on every unique and special snowflake of a system. Currently BinDeps2 and Pkg3 are independent, but in the future, the build artifacts output by the BinDeps2 build system will be first-class “library” dependencies of packages and dependency resolution will handle them just as it currently does packages. This has many benefits:

  1. Frees Julia packages from relying on whatever the system package manager happens to provide, which is usually, as noted by @johnh, old, broken and often misconfigured for Julia’s usage.

  2. It avoids everyone having to build their own versions of these tricky-to-build numerical libraries, allowing a single expert party to get it working in a well-tooled VM setting and distribute reliable, portable (in the sense that there are comparable versions across multiple platforms), reproducible binaries instead.

  3. It allows libraries to take part in version resolution just like packages do, giving maximal flexibility to avoid DLL hell / library lockstep situations.


Doom and gloom gone!


With BinaryBuilder each library can incorporate its own “private” copy of libz if needed that does not affect other libraries or use the system libz at all…

The only thing I fear out of this is that maybe the binaries won’t be architecture-specific enough and some performance can be lost. We tested this with SundialsBuilder/Sundials.jl and found that not to be the case for that specific library, but I would be surprised if that holds for everything. Then again, maybe the easiest fix is to just allow BinaryBuilder to build a lot more binaries with differing levels of CPU-specific instructions?


Few libraries benefit much from CPU-specific instructions, in my experience. Of those that do, many use the cpuid instruction to selectively enable CPU-specific code at runtime…

See e.g. ZMQ.jl for how to optionally enable source builds with BinaryProvider.


Thanks, Stefan - that was the kind of plan I was looking for. (Sorry about the alarmism, I’ll ascribe it to a frustrating day.)

Software installation is frustrating—we’re trying to make that frustration go away. Not 100% there yet, but we’re getting there.


I wanted to note that I just ran into the exact issue raised in the OP while trying to update VideoIO.jl.
Loading VideoIO loads ffmpeg (or libav) libraries, which are generally linked to the system libz (e.g., on Travis).

During testing, we load a test image (to compare with the first image of a video), which on most systems uses ImageMagick.jl (through FileIO.jl).

ImageMagick.jl recently switched to using libz provided by BinaryBuilder.
When the libav libraries are loaded first, they bring in the system libz,
and ImageMagick.jl attempts to use the bindings resolved there.
On Travis, that libz is old (as in the OP), and ImageMagick loading fails.
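One way to see this kind of collision in action (a diagnostic sketch, assuming a standard dynamic loader) is to ask Julia which shared libraries the process has actually mapped, and look for which copy of libz “won”:

```julia
using Libdl  # stdlib: dynamic-linker introspection

# Libdl.dllist() returns the paths of every shared library currently
# loaded into the process; filtering on "libz" shows which copy was
# resolved first (e.g. a system /usr/lib libz vs. a BinaryBuilder one).
loaded_libz = filter(p -> occursin("libz", p), Libdl.dllist())
```

Running this after loading VideoIO versus after loading ImageMagick.jl first would show different paths, which is exactly the load-order sensitivity described above.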

A workaround, for now, is to make sure ImageMagick.jl is loaded first.
This works fine in testing, but we don’t have control over the order in which a user loads things.

I realize that the preferred solution right now is probably to switch to BinaryBuilder for VideoIO.jl (work on which has already started).

However, building these has proven to be tricky. In addition, for ffmpeg, building libraries with support for most codecs also makes the resulting binaries non-redistributable (they would contain non-free code mixed with GPL code). This might not be the end of the world (a GPL version still supports libx264), but it would be nice to still allow users to build and install a non-redistributable kitchen-sink version.

If anyone has suggestions on how to proceed here, I’m very open to suggestions.



Also an issue now for ImageView. I think the reason for any alarm in this thread is that the arrival of BinaryBuilder and its usage in part (not all) of the stack has made it suddenly become an issue for some packages that have not formerly experienced it. But I think the right way forward is through; however, if Pkg can help resolve the intermediate troubles it would be a wonderful thing.

Meanwhile, I’ll see if it’s possible to get ImageMagick to link zlib statically.


Hi @Ralph_Smith @StefanKarpinski ,

I feel your pain about:
“DLL problems, episode 2”
aka “DLL Hell I’ve seen many times BEFORE”
aka “NameSpace Collision”
aka “Real Coders ONLY work on the Command Line and Link Code with Deer sinews, Tobacco Juice and a smirk”
aka “No One wants to admit their new software baby has some ugly features that need fixing”
Yes, as a professional software engineer of many years I have heard and seen it all; oh, and I’ll add one more: “The CMM Level Zero (0) Hero who is FOREVER putting out EXACTLY as many fires as they start”.

Issue: at least as of v0.7 (circa 2018), Pkg.add is a crap-shoot spin of the wheel that often randomly breaks the build across random packages, which is a huge NO-NO for anyone using Julia in the real world with real commitments.

Ergo, here’s a good start on the REQUIRED FUNCTIONALITY if Julia wants to come play with the Big Boys: like ANY production database, we need DevOps build functionality that is BOTH ATOMIC and allows us to ROLLBACK to SAVEPOINTS easily and flawlessly.

But I like, yes maybe even love, Julia, so I won’t leave us hung up at “bug descriptions”.
So here is a start on the design requirements for a proposed solution:
I particularly like that DeclarativePackages.jl (jdp) is heavily inspired by the Nix package manager, whose design is described at https://nixos.org/nix/about.html under “Multiple versions”, “Atomic upgrades”, and “Rollbacks”:

“You can have multiple versions or variants of a package installed at the same time. This is especially important when different applications have dependencies on different versions of the same package — it prevents the “DLL hell”. Because of the hashing scheme, different versions of a package end up in different paths in the Nix store, so they do NOT interfere with each other.”
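The Nix idea quoted above can be illustrated with a small sketch: derive each package’s install prefix from a hash of its name and version, so distinct versions land in distinct directories and cannot collide. The `/store` layout and `store_path` helper here are hypothetical, in the spirit of the Nix store, not any real Julia mechanism.

```julia
using SHA  # stdlib: cryptographic hashing

# Hypothetical content-addressed store layout: the path encodes a hash of
# the package identity, so two versions never share a directory and can
# be installed side by side.
function store_path(name::AbstractString, version::VersionNumber)
    h = bytes2hex(sha256("$name-$version"))[1:8]
    joinpath("/store", "$h-$name-$version")
end

store_path("libz", v"1.2.8") == store_path("libz", v"1.2.11")  # false: distinct paths
```

Because the paths differ, an old libz and a new libz can coexist on disk; the remaining problem, which the loader discussion above is about, is making sure each consumer picks up the copy it was resolved against.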

Need MORE design requirements? Take a look at the Synaptic software package manager:

  • Install, remove, upgrade and downgrade single and multiple packages
  • System-wide upgrade
  • Package search utility

Or yes, CONDA or (HTTPS-secured) PIP is great too!

AND did I mention we REALLY NEED “Atomic upgrades and ROLLBACKS”? :sunglasses::grin:

Julia’s new package manager already has most if not all of these features.


Hi @StefanKarpinski

Thank You for your quick reply.

And Thank You! I have been looking all over for savepoint and restore/rollback for Julia package management and I can’t seem to find it; might you send me a link to the documentation or command syntax, please?

The exact state of all Julia dependencies is recorded in the Manifest.toml file which can be local to a project or shared (or some combination of both using the LOAD_PATH environment stacking mechanism). You can and should store your project’s manifest in version control along with the rest of its source code, which thereby automatically gives dependencies the same level of checkpointing and rollback as your source code. I’m not entirely sure what you want from “atomic”—the term is abused quite a lot—but if something goes wrong with any change to your project’s dependencies, you can just revert to the last committed state. You can read about the design here:
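To make the rollback story concrete, here is an illustrative `Manifest.toml` fragment; the entry shape follows the manifest format, but the UUID and tree hash shown are for illustration only. Because the file pins every dependency to an exact version and source tree, reverting the file in version control reverts the entire dependency state.

```toml
# Illustrative manifest entry: each dependency is pinned to an exact
# version and git tree, so this file fully determines the environment.
[[Example]]
git-tree-sha1 = "46e44e869b4d90b96bd8ed1fdcf32244fddfb6cc"
uuid = "7876af07-990d-54b4-ab0e-23690620f79a"
version = "0.5.1"
```

Restoring an earlier commit of this file and re-instantiating the project brings back exactly the recorded versions, which is the savepoint/rollback behavior asked about above.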



@StefanKarpinski Thank You for your excellent timely answer :+1: You answered all my questions here, and I’m still digging that Helmet :+1::sunglasses: !