I just bumped the Julia version to 1.6.0 in a Docker image we use in my organization. Now I see the same error message repeated several times whenever I do package operations. For instance:
(@v1.6) pkg> update
Updating registry at `/build/jl/depot/registries/General`
┌ Error: curl_easy_setopt: 48
└ @ Downloads.Curl /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Downloads/src/Curl/utils.jl:36
┌ Error: curl_easy_setopt: 48
└ @ Downloads.Curl /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Downloads/src/Curl/utils.jl:36
┌ Error: curl_easy_setopt: 48
└ @ Downloads.Curl /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Downloads/src/Curl/utils.jl:36
No Changes to `/build/jl/depot/environments/v1.6/Project.toml`
No Changes to `/build/jl/depot/environments/v1.6/Manifest.toml`
I would call the message “noise” because I don’t see any evidence that the operation actually failed.
I hesitate to suggest this is a bug in Julia or its packages because, while I can consistently reproduce the problem in instances of our Docker image, I can't reproduce it when running Julia 1.6.0 on bare metal or in the official julia:1.6.0 Docker image. So I figured I would post here on Discourse before opening an issue at Downloads.jl. I see there is already an open issue there related to the package manager in 1.6.0, but the workaround suggested there doesn't fix my problem, so I'm guessing it is unrelated.
For completeness:
julia> versioninfo()
Julia Version 1.6.0
Commit f9720dc2eb (2021-03-24 12:55 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: AMD Ryzen 7 3700X 8-Core Processor
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-11.0.1 (ORCJIT, znver2)
I suppose the next thing to try is commenting out RUN statements in my Dockerfile to see if I can isolate what makes my image different.
This looks like an error that was present in 1.6.0-beta and possibly 1.6.0-rc1. It should definitely be fixed in 1.6.0 final. Are you sure you are running 1.6.0 in docker? The official images are still on 1.6.0-rc3 (for me, at the time of writing).
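A quick sanity check, in case it helps: from a REPL inside the running container,
julia> VERSION
should print v"1.6.0" for the final release (the release candidates report an -rc suffix).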
Yes, the official image is still on RC3, but I am definitely using the official 1.6.0 release in my own image, which is where I see the problem. This bug is rather slippery, it seems. I had worked up an MWE Dockerfile that reproduced the problem, but then, as I was polishing it up, it stopped reproducing the error, and I couldn't get it to fail again. I'm starting over right now.
At one point it looked like I could isolate it exactly to the installation of libcurl4-gnutls-dev, while installing all of its dependencies on their own caused no issue.
Here are the contents of a Dockerfile containing an MWE:
# syntax = docker/dockerfile:1.0-experimental
# When based off these images, we don't see the failure
#FROM ubuntu:18.04
#FROM nvidia/cudagl:11.2.2-base-ubuntu20.04
# Based off this image, we see the failure.
FROM nvidia/cudagl:11.2.2-base-ubuntu18.04
RUN \
apt-get update \
&& apt-get upgrade -y \
&& DEBIAN_FRONTEND=noninteractive apt-get install --fix-missing --no-install-recommends -y \
ca-certificates \
curl \
# Culprit somehow? When the following package is commented out, there is no failure.
libcurl4-gnutls-dev \
# The only additional dependency brought in by libcurl4-gnutls-dev. By itself, it causes no problem.
libcurl3-gnutls \
&& rm -rf /var/lib/apt/lists/*
ARG JULIA_MAJOR_MINOR_VERSION="1.6"
ARG JULIA_PATCH_LEVEL="0"
ARG JULIA_DL_URL="https://julialang-s3.julialang.org/bin/linux/x64/1.6"
ARG JULIA_VERSION="${JULIA_MAJOR_MINOR_VERSION}.${JULIA_PATCH_LEVEL}"
ARG JULIA_PACKAGE="julia-${JULIA_VERSION}-linux-x86_64.tar.gz"
ARG JULIA_INSTALL_DIR="/usr/local/julia"
WORKDIR /usr/src/julia
RUN --mount=type=secret,id=netrc curl --netrc-file /run/secrets/netrc \
${JULIA_DL_URL}/${JULIA_PACKAGE} \
-o ${JULIA_PACKAGE} \
&& mkdir ${JULIA_INSTALL_DIR} \
&& tar -xf ${JULIA_PACKAGE} -C ${JULIA_INSTALL_DIR} --strip=1 \
&& rm ${JULIA_PACKAGE}
ENTRYPOINT [ "/usr/local/julia/bin/julia", "-e", "import Pkg; Pkg.update()" ]
I tried to unwind the base image definitions all the way down to ubuntu:18.04, but nvidia/cudagl:11.2.2-base-ubuntu18.04 was as far as I could get while still being able to reproduce the error.
I guess this suggests the issue is somewhere in the nvidia Docker images? That’s not super helpful for me, though, as we need our image to be based off of nvidia/cudagl…
I think the workaround I’m going to go with is to remove the dependency that ends up bringing in libcurl4-gnutls-dev, since I don’t believe we actually end up needing it for our purposes. Still, not the most satisfying resolution.
I don’t really see how the installation of another libcurl should matter. Julia should use the one that it comes bundled with. Is something setting LD_LIBRARY_PATH?
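If it helps narrow things down, here is a rough sketch of a check you could run from a REPL inside the container (loading Downloads forces its libcurl to be loaded, and Libdl.dllist() lists the shared libraries currently loaded into the process):
julia> using Downloads, Libdl
julia> get(ENV, "LD_LIBRARY_PATH", "unset")
julia> filter(p -> occursin("curl", p), Libdl.dllist())
If the curl entry that comes back lives outside the Julia installation (rather than somewhere like /usr/local/julia/lib/julia/), then a library on LD_LIBRARY_PATH is most likely shadowing the bundled one.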
Thanks for the lead. Yes, it gets set in the nvidia/cudagl images. I don’t know that it is an option to unset it in our use case, but it is at least much more satisfying to have a plausible explanation.