FWIW, I agree with your approach to understanding: Mentally compile your code into unoptimized assembly; then let LLVM work its magic to give you the assembly you should have written in the first place. The meaning of assembly is sufficiently unambiguous that all these questions evaporate (replace assembly with LLVM IR as appropriate).
Always asking “how would I compile that” also allows you to guess how things work: They work in a way that can be compiled in a reasonable way. @yuyichao is right that “low level compiler detail has not been intuitive since even before I was born” (he is always right, even though it is sometimes necessary to meditate on his words).
Luckily, compilers go to great lengths to preserve the illusion that they are simple; i.e. complicated optimizing compilers emit code that has the same observable behavior as naive "platonic" compilers.
Assembly is a high-level abstraction as well: Whatever you wrote, the CPU will probably do something else (much of it at the same time). Whenever you care to look, it will then go to great pains to retcon an "architected state" that is compatible with the high-level abstraction that people can understand (i.e. assembly). Sometimes this abstraction leaks, and hilarity ensues (Spectre, Meltdown, etc.). Sometimes the key to fast code is to play "if I were a superscalar processor, how should I execute this assembly?" (i.e. not one instruction after the other). It is turtles all the way down.