So, it’s a bit complicated to explain the setup here, but it’s rare (and refreshing!) to get an error that pulls no punches and says “don’t do this”:
ERROR: LoadError: Evaluation into the closed module `anonymous` breaks incremental compilation because the side effects will not be permanent. This is likely due to some other module mutating `anonymous` with `eval` during precompilation - don't do this.
Stacktrace:
The offending code is below; I am attempting to precompile it (as part of a longer workflow) in a `@compile_workflow` block:
"""
exprs_to_AA_polys(exprs, vars)
Convert each symbolic expression in `exprs` into a polynomial in an
AbstractAlgebra polynomial ring in the variables `vars`. This returns
both the ring `R` and the vector of polynomials in `R`.
"""
function exprs_to_AA_polys(exprs, vars)
# Create a polynomial ring over QQ, using the variable names
M = Module()
Base.eval(M, :(using AbstractAlgebra))
#Base.eval(M, :(using Nemo))
# Base.eval(M, :(using RationalUnivariateRepresentation))
# Base.eval(M, :(using RS))
var_names = string.(vars)
ring_command = "R = @polynomial_ring(QQ, $var_names)"
#approximation_command = "R(expr::Float64) = R(Nemo.rational_approx(expr, 1e-4))"
ring_object = Base.eval(M, Meta.parse(ring_command))
#println(temp)
#Base.eval(M, Meta.parse(approximation_command))
a = string.(exprs)
AA_polys = []
for expr in exprs
push!(AA_polys, Base.eval(M, Meta.parse(string(expr))))
end
return ring_object, AA_polys
end
What does this function do? It uses string parsing to convert between competing polynomial formats. I don’t love it, but it’s much better off in an anonymous module than in Main or somewhere else.
Questions:
Is this in general bad? It seems to break precompilation, but does it break other stuff and should never be done?
Is it only coincidental that it breaks precompilation, and I can just wait a few Julia versions and it’ll maybe just work?
What should I do in the meantime? (my package compiles for >50 seconds)
Precompilation is essentially per-package AOT compilation, and there are quite a few things it can’t do. You’re running into one of the more complicated ones, but a simpler example is per-process `rand` values: a value generated during precompilation would be baked into the cache. While dedicated AOT-compiled languages basically give us no choice but to cleanly separate compilation from runtime initialization, in Julia we have to be more careful about what we put into `__init__`, because code that runs just fine at runtime (e.g. REPL includes) can easily break precompilation. `@compile_workflow` does execute its expression; one way around that is to use `precompile` on method signatures instead of making normal calls.
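For instance, a minimal sketch of that pattern (the function here is a toy stand-in, not from the original post):

```julia
# Toy stand-in for the real workload function:
f(x::Int) = x + 1

# Inside the workload block, request compilation of the signature
# without executing f (and thus without running its side effects).
# precompile returns true if the method compiled successfully.
ok = precompile(f, (Int,))
```

The trade-off is that `precompile` only compiles the given signature; it doesn’t warm up anything that would only be reached by actually running the call.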
However, in this case you have a function that parses strings (which, unlike expressions, carry no guarantee of being evaluable code) for the sole purpose of calling a macro on runtime objects instead of on source code as intended. The documentation doesn’t take me to the implementation (likely due to metaprogramming), but it strongly implies that the function `polynomial_ring` is the better option, especially if you don’t want more global names polluting the modules you care about.
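A small sketch of the function form (the variable names are just illustrative) — no macro, no string parsing, and no new globals anywhere:

```julia
using AbstractAlgebra

# polynomial_ring takes the coefficient ring and a vector of variable
# names, and returns the ring together with a vector of its generators.
R, (x, y, z) = polynomial_ring(QQ, ["x", "y", "z"])

p = x + y^2 + z^3   # an element of R, built from ordinary local bindings
```

Because the generators come back as return values, you can bind them to whatever local names you like instead of having a macro inject them into a module.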
By the way, write `@` names between Markdown backticks `like this` (escaping the backticks with backslashes if you need literal ones), so you’re not writing a user tag.
OK, to explain in more detail what this does: it takes a (vector of) polynomials built up from JuliaSymbolics types and converts them to polynomials built up from types in the AbstractAlgebra package. I’ve thought about this a bit; here are some pros and cons of two different solutions:
Solution 1: Walk the tree of the internal representation in JuliaSymbolics and grow the AbstractAlgebra polynomial by “replaying” it on AbstractAlgebra variables.
Pros: no security issues, no modules, no eval, can precisely control what’s going on.
Cons: need to know the internals of both packages, which change pretty frequently and are very detailed and complicated. Lots of edge cases.
Solution 2: Cast to a string, then eval the string against variables from the other package.
Pros: works for anything built to be used in the Julia REPL, code is very simple, makes very few assumptions about internal representations.
Cons: uses eval, can lose or gain precision, subject to simplification with no control, module pollution.
I tried but never got solution (1) to work, so that settled that. It would be really nice if package developers provided these conversions, but they don’t have all day.
Having said all that, is there a way I can isolate this offending code so that everything else in my package precompiles and gets cached? Would a submodule do the trick? (Would separating it into yet another package help? That is a sad solution.)
I was also hoping to hear that anonymous scratch modules (the module is sort of being used as a “local variable” here) aren’t prone to the sorts of side effects the error is trying to protect against, and that a future, theoretical version of the precompiler could see that. But I’m not 100% sure about that point.
Try `substitute`-ing the symbolic variables in Symbolics expressions with the generators from `polynomial_ring`; it seems plausible on a cursory attempt:
julia> using Symbolics, AbstractAlgebra

julia> symp, aap = let
           @variables x y z
           symp = x + y^2 + z^3
           S, aaxyz = polynomial_ring(ZZ, Symbol.([x, y, z]))
           symp, substitute(symp, Dict(zip((x, y, z), aaxyz)))
       end
(x + y^2 + z^3, x + y^2 + z^3)

julia> typeof.((symp, aap))
(Num, AbstractAlgebra.Generic.MPoly{BigInt})
Evaluating arbitrary Julia expressions at runtime has to be done in the global scope, and that’s not something you want to do indefinitely because functions and modules just stick around and build up (because of references, so I’m not sure if an anonymous module is an exception to that). Of course, if you want to prevent globally scoped code from polluting the global scope with names, you could use a let block, which evaluates to its last line so it can serve as a “return” from eval calls.