I can see that Dict is implemented in terms of Memory. But what is this?
It doesn’t appear to have a documentation page. Perhaps that is because Memory is an abstraction layer over blocks of memory obtained from the OS via malloc/free, and is therefore an internal implementation detail not intended to be exposed to users.
But I couldn’t actually find where this line of code is.
A quick scan of this file suggests GenericMemory might be an abstraction over CPU, GPU and possibly other types of device memory.
I found this line too:
`kind` can currently be either `:not_atomic` or `:atomic`. For details on what `:atomic` implies, see [`AtomicMemory`](@ref)
That raises a further question: what do `:not_atomic` and `:atomic` mean here?
I know what atomic and non-atomic operations are in terms of things like the atomic add operation which is implemented as a CPU instruction on modern CPUs.
But what does it mean for *memory* to be atomic? Is GPU memory access atomic, perhaps? I did some CUDA a long time ago and have a hunch that memory access in a CUDA program might be atomic, or synchronized in some way. I could be wrong.
Exactly that. In 1.10 and earlier, the most primitive abstraction for a block of memory was Array, but it has more metadata and machinery associated with it. The main driver for adding Memory as a thin wrapper over a malloced block was that some high-level operations like push! or pop! previously had to be implemented as a ccall, preventing compiler optimizations.
There are some other pretty major drivers. Size mutability also prevents some optimizations (e.g. eliding bounds checks should be easier when the size can’t change), and the ability to support AtomicMemory (and possibly ConstMemory) is pretty nice.
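To see the layering concretely, here’s a sketch for Julia 1.11+. Note that `ref` and `mem` are internal fields of `Vector`, shown only to illustrate the relationship, not as stable API:

```julia
# Sketch (Julia 1.11+): a Vector is now a thin wrapper (size + offset)
# around a Memory buffer. The `ref`/`mem` fields are internals.
v = [1, 2, 3]
mem = v.ref.mem                    # the backing Memory{Int}
@assert mem isa Memory{Int}
@assert length(mem) >= length(v)   # the buffer can be over-allocated
push!(v, 4)                        # may swap in a new, larger Memory;
                                   # no ccall needed, so it can inline
@assert length(v.ref.mem) >= 4
```

Because `Memory` itself has a fixed size, all the grow/shrink logic lives in `Vector`, and the compiler can reason about the buffer much more easily.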
Side gripe about Julia: I really wish our ecosystem would shift from “don’t document internals” towards “document them but mark them as internals”, or better yet, “document them because everything not marked public (in v1.11 and later) is internal by default”. Especially for something as important as the new Memory.
The documentation for Memory sure is anemic, though. From just reading the docstrings, it’s hard to know that Memory is an alias of nonatomic CPU GenericMemory, that it’s the lowest-level abstraction of a chunk of memory owned by the Julia runtime, that it may store the data inline, as pointers, or with the isbits-union optimisation, how to construct new instances of Memory, etc.
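To partially fill that gap, here’s a minimal usage sketch (Julia 1.11+); the alias check reflects the definition in Base as I understand it:

```julia
# Minimal Memory usage sketch (Julia 1.11+).
m = Memory{Int}(undef, 4)   # fixed-size, uninitialized buffer
fill!(m, 0)
m[2] = 42
@assert length(m) == 4      # the size is immutable, unlike Vector
@assert m[2] == 42

# Memory is an alias for nonatomic, CPU-addressed GenericMemory:
@assert Memory{Int} === GenericMemory{:not_atomic, Int, Core.CPU}

# Converting to a Vector copies the contents:
v = collect(m)
@assert v == [0, 42, 0, 0]
```

None of this is obvious from the docstrings alone; you have to go read `boot.jl` and `genericmemory.jl` to piece it together.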
I used to advocate for (and implement) systems to manage a bit of metadata in language documentation. For example, a tag that marks a docstring as developer documentation, not part of the API. Cross-references could be put in tags so that you don’t have to remember the correct markup. I implemented a “see-also group”: a list of names that caused each member of the group to cross-reference the others in the documentation. This would also allow documentation to be very regular, as it is in Mathematica, which clearly uses a metadata system even though it’s not visible. For example, the “see also” section is always rendered in the same form and in the same place.
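The see-also-group idea can be sketched in a few lines. Everything here is hypothetical (invented names, not any existing doc system); the point is only that the renderer derives each entry’s “see also” list from shared group membership, so the output is regular by construction:

```julia
# Hypothetical sketch of a "see-also group": registering a group makes
# each member's docs cross-reference the other members automatically.
const SEE_ALSO_GROUPS = Dict{Symbol, Vector{Symbol}}()

see_also_group!(name::Symbol, members::Vector{Symbol}) =
    SEE_ALSO_GROUPS[name] = members

# What a doc renderer would emit for one member: every other name that
# shares a group with it, deduplicated and in a fixed order.
function see_also(member::Symbol)
    out = Symbol[]
    for g in values(SEE_ALSO_GROUPS)
        member in g || continue
        for m in g
            m != member && push!(out, m)
        end
    end
    sort!(unique!(out))
end

see_also_group!(:array_fill, [:fill, :fill!, :zeros, :ones])
see_also(:zeros)   # => [:fill, :fill!, :ones]
```

With this scheme, `fill`, `zeros`, and `ones` could never drift into the irregular cross-references we see today, because no one writes the “see also” lists by hand.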
For example, searching for 30 seconds in Julia I find this: the docstring for `fill` says “See also: `fill!`, `zeros`, `ones`, `similar`.” For `similar` it says “See also: `undef`, `isassigned`.” For `zeros` it says “See also: `fill`, `ones`, `zero`.” And so on. Notice the irregularity.
To me it seems obvious that this is the way to go. But I never, or almost never, see it implemented for a language. Sure, there are markup elements understood by the documentation systems in Julia, Python, Rust, etc. But they are cumbersome, syntax-laden, boilerplate-heavy, and unsystematic. I suspect I don’t see it because there are reasons to reject the metadata approach that have not occurred to me.
(If I searched Discourse, I’d probably discover that I’ve already written this comment before.)