Julia's stability guarantee (e.g. precompilation and "incremental compilation may be fatally broken for this module")

As far as I know, Julia has 1) a syntax guarantee, and 2) an API guarantee (for documented, non-experimental functionality).

But that doesn’t seem to be enough in practice for packages. I see some precompilation issues, and a workaround in one package:

I’m not too worried about that package; it’s in good hands and fixed already. I’m just curious in general: since there was an issue for that package, could it happen to any package?

Precompilation is the default (it once wasn’t) and can be turned off. So what’s the problem with it, and could potential issues at least always be worked around by turning it off?
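For reference, the standard opt-out is at the package level (a minimal sketch; MyPackage is a made-up name):

module MyPackage  # hypothetical package, just to show the mechanism

# Putting this at the top of the module tells Julia not to precompile it;
# the usual workaround when precompilation itself misbehaves.
__precompile__(false)

greet() = println("hello")

end

And setting the environment variable JULIA_PKG_PRECOMPILE_AUTO=0 turns off the automatic precompilation that Pkg runs after package operations.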

Why didn’t PkgEval catch this? I’m guessing it doesn’t run on Windows, so is Linux a somewhat better-supported platform? Was this simply a bug, and are precompilation and packages supposed to be guaranteed on all platforms?

At least part of the reason was a dependency of Plots (or an older version of it), StatsPlots, and:

  [9] _show(io::IOStream, #unused#::MIME{Symbol("image/png")}, plt::
WARNING: importing deprecated binding Colors.RGB1 into PlotUtils.
WARNING: importing deprecated binding Colors.RGB1 into Plots.
WARNING: importing deprecated binding Colors.RGB4 into PlotUtils.
WARNING: importing deprecated binding Colors.RGB4 into Plots.

I also see in the logs of DUMMY: investigating Windows precompilation failure in 1.8.2 by BioTurboNick · Pull Request #4445 · JuliaPlots/Plots.jl · GitHub:

 WARNING: Method definition uv_readcb(Ptr{Nothing}, Int64, Ptr{Nothing}) in module Base at stream.jl:656 overwritten in module Plots at D:\a\Plots.jl\Plots.jl\src\backends\gr.jl:707.
  ** incremental compilation may be fatally broken for this module **

WARNING: Method definition create_expr_cache(Base.PkgId, String, String, Array{Pair{Base.PkgId, UInt64}, 1}) in module Base at loading.jl:1661 overwritten in module Plots at D:\a\Plots.jl\Plots.jl\src\backends\gr.jl:758.
  ** incremental compilation may be fatally broken for this module **

WARNING: Method definition create_expr_cache(Base.PkgId, String, String, Array{Pair{Base.PkgId, UInt64}, 1}, IO) in module Base at loading.jl:1661 overwritten in module Plots at D:\a\Plots.jl\Plots.jl\src\backends\gr.jl:758.
  ** incremental compilation may be fatally broken for this module **

WARNING: Method definition create_expr_cache(Base.PkgId, String, String, Array{Pair{Base.PkgId, UInt64}, 1}, IO, IO) in module Base at loading.jl:1661 overwritten in module Plots at D:\a\Plots.jl\Plots.jl\src\backends\gr.jl:758.
  ** incremental compilation may be fatally broken for this module **
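As I understand it, those warnings mean that a module being precompiled overwrote a method owned by another module (here Base), which invalidates the incremental cache. A minimal sketch that provokes the same warning (the module name and the overwritten method are mine, purely illustrative):

module OverwriteDemo  # hypothetical module name

# Redefining a method Base already owns, from inside a module being
# precompiled, prints during precompilation:
#   WARNING: Method definition sin(Float64) in module Base ... overwritten
#     ** incremental compilation may be fatally broken for this module **
Base.sin(x::Float64) = x  # don't do this; only here to demonstrate the warning

end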

The possibility of that warning, “incremental compilation may be fatally broken for this module”, was added on 29 Mar 2019, so I believe in Julia 1.2.

I don’t want to misinform people (nor be too positive). I think I know pretty much why/when Julia packages are stable, and the few exceptions (packages that didn’t work on newer Julia versions), e.g. Cxx.jl.


What is the question? There seems to be a regression in 1.8 on Windows, yes. There is an issue open to track it. When it gets fixed, the fix can be backported to 1.8.x.

This has nothing to do with any stability guarantee since that is about intentional changes.


Ok, it was a simple bug (a “regression”), and no guarantee guards against that, so the stability guarantee is more of a commitment to fix bugs and try to prevent them (and it is of course meant for package code too). That’s why “that is about intentional changes” got me confused.

Are you saying the changes were in Plots.jl or some of its dependencies (e.g. StatsPlots)? Or were there some “intentional changes” in Julia, since the workaround in the package is for the still-open issue in Julia (note Plots.jl has since been fixed):

So I repeat my (second) question:

Why didn’t PkgEval catch this?

Since this can happen, it can happen to other (Windows), possibly non-registered, packages. Then of course you should file a bug. I think the bug would be obvious (or maybe not, since the only sign is “WARNING: […]
** incremental compilation may be fatally broken for this module **”).

The stability guarantee is as complete as I want it. One more exception is that the stream of RNG numbers isn’t guaranteed across minor versions (that’s documented; the default RNG changed from the 1.6 LTS to 1.7, and could change again).
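For example (a sketch; the exact numbers are only reproducible within one Julia version):

using Random

rng = Xoshiro(1234)  # the default RNG type since 1.7 (MersenneTwister before)
rand(rng, 3)         # same values on every run with this Julia version,
                     # but the stream may differ between e.g. 1.6 and 1.7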

And Julia can’t be more stable than IEEE allows. +, -, *, and / are correctly rounded, and so is sqrt (IEEE 754 requires it); ^ is only recommended to be correctly rounded, and sin isn’t guaranteed either. Computed values of sin have improved in more recent Julia versions, so you can’t expect bit-identical results for floats across versions (the difficulty of correctly rounding such functions is the “table-maker’s dilemma”), while you can for ints (except involving rand). I think that would also apply to e.g. the Posit standard, which specifies:

sin(posit) returns sin(posit), rounded.
sinPi(posit) returns sin(𝜋 × posit), rounded.
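A quick way to see the slack left for sin in Julia (a hypothetical check; BigFloat serves as a high-precision reference):

x = 1.0e10
sin(x)                # fast Float64 kernel; this value may change between versions
Float64(sin(big(x)))  # high-precision reference, correctly rounded in practice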

Expecting stability for floating point is already not correct. Different computers might give different results for the same operations due to SIMD and FMA differences.


Expected by some, not by me; I’m just clarifying, for others reading, what I believe to be true, and as I wrote, it “is as complete as I want it”.

Some people might disagree with not having stability of floating point (for the major operations), e.g. Kahan, which is why strictfp was added to Java (it has since become a no-op):

https://news.ycombinator.com/item?id=13739793

If the author of the IEEE 754 standard says that the way you do floating-point arithmetic is obviously a bug, you’re probably doing it wrong.

That’s interesting, because it seems to contradict his attitude toward FLT_EVAL_METHOD.

Kahan is a strong supporter of FLT_EVAL_METHOD==2—in fact, he’s quite possibly the single person who’s most responsible for its existence—and FLT_EVAL_METHOD==2 has been resoundly rejected by compilers on newer architectures due to being totally unintuitive. (For those who don’t know, FLT_EVAL_METHOD==2 means that “x∗y+z” can have a different result from “double tmp = x∗y; tmp + z”, even if x, y, and z are doubles. It’s a very similar situation to the bug, just with expression temporaries vs. explicit storage locations instead of register-memory spills.)

In particular, Kahan wrote a widely-shared, hyperbolic essay called “Why Java Floating Point Hurts Everyone Everywhere” (which is still sometimes shared around today as a way to attack Java, despite being obsolete) castigating Java for having FLT_EVAL_METHOD==0. Java added the little-used “strictfp” in response, but history proved Java’s original behavior right—with the advent of SSE on x86-64, GCC switched to FLT_EVAL_METHOD==0 by default, matching Java.

At FLT_EVAL_METHOD - cppreference.com

#define FLT_EVAL_METHOD /* implementation defined */ (since C++11)

Does Julia have something similar to FLT_EVAL_METHOD?

I imagine LLVM does support something like this. That could maybe be exposed the way @fastmath is, but that’s just a guess.
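For what it’s worth, @fastmath is the closest existing knob I know of (a sketch; not claiming it’s equivalent to FLT_EVAL_METHOD):

# @fastmath locally relaxes IEEE semantics; among other things it lets
# LLVM contract x * y + z into a single fused multiply-add.
f(x, y, z) = @fastmath x * y + z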

This is partly true and partly false. To clarify for other readers, Julia does not apply such transformations unless expressly permitted (via @fastmath, @simd, or muladd, for example – although I recall muladd sometimes leaking to adjacent operations). Code like

# not inside a @fastmath block
w = x * y + z

will always give exactly the same result for the same x, y, and z on any IEEE754-compliant architecture. It will not apply fma without an appropriate annotation. The operations +, -, *, /, sqrt, fma, and rem are always correctly rounded except when transformed by a flag. Note that functions like sum may apply such flags, although foldl(+, ...) will not.
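A concrete illustration (values checked for Float64):

0.1 * 10.0 - 1.0        # 0.0: the product rounds to exactly 1.0 first
fma(0.1, 10.0, -1.0)    # 5.551115123125783e-17: one rounding at the very end
muladd(0.1, 10.0, -1.0) # may give either result, depending on the hardware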

However, nonprimitive operations like exp, sin, etc. are not strictly defined by IEEE754. Julia makes no guarantee that they will produce the same values between versions, nor that they will match the values produced by any other library or language. If you discover a substantial error in one of these operations, you can submit an issue and someone smart will eventually take a look to see if it can be improved.
