This can show up in many ways, though. If you have two modules which overload the same Base function, but with different implementations for the same signature, the final method that you get now depends on the DAG. If you want to hash all of the imported code to, say, track code changes and invalidate precompilation caches, you can have invalidation occur simply because the DAG’s heuristics change. One major global that is hit in all languages is I/O: if two modules write to a file at load time, the ordering of the contents in the file is no longer deterministic. I could keep going, but you probably get the point: this causes a ton of practical issues.
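To make the first case concrete, here’s a minimal sketch (hypothetical modules `A` and `B`; `Base.length(::Nothing)` is just a signature Base doesn’t define, picked for illustration). Both modules add a method with the same signature, so whichever one happens to load last in the DAG silently wins:

```julia
module A
Base.length(::Nothing) = 0   # A pirates Base.length for Nothing
end

module B
Base.length(::Nothing) = 1   # same signature, different behavior
end

# B was evaluated after A, so its method overwrote A's in the global
# method table; flip the load order and the answer flips too.
length(nothing)  # == 1
```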
That of course doesn’t make being explicit better; it’s just the trade-off that’s being made. One way to avoid these issues is to declare that anything which would be non-deterministic under this behavior is simply bad coding style. Sure, but you can see what happened to the Python ecosystem with this: many packages simply do not work with alternative Python implementations because they depend on internal details of the CPython implementation (relying on reference counting to finalize objects promptly, for example). This has caused many issues; notably, it pretty much halted the adoption of alternative interpreter implementations like PyPy.
With Julia, one of the core concepts being explored is having a very small compiler surface so that alternative execution engines (JuliaInterpreter.jl, the new Abstract Interpreter, Mjolnir, etc.) can all take Julia code, easily “know what to do”, and compose. The more that code execution order and output are tied to specific heuristics of the compiler, the harder it is to support such an ecosystem in a way where using these alternatives on a package can be expected to “just work”. Of course, some compiler heuristics are genuinely helpful, so it’s really a question of the degree of dependence (one important axis being how reliant these tools are on type-inference heuristics).
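As a small illustration of that composability, ordinary Julia code can be run through JuliaInterpreter.jl instead of the native compiler with no changes to the code itself (a minimal sketch using JuliaInterpreter’s exported `@interpret` macro):

```julia
using JuliaInterpreter

# An ordinary function with no interpreter-specific annotations.
f(x) = 2x + 1

f(3)             # == 7, compiled and run natively
@interpret f(3)  # == 7, executed by the interpreter instead

# Because the semantics don't hinge on compiler heuristics, both
# execution engines are expected to agree.
```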