Memory representation of wrapper types

I am positively amazed at the following capability:

julia> struct Wrapper
           val::Float64
       end

julia> w = [Wrapper(randn()) for _ in 1:3]
3-element Vector{Wrapper}:
 Wrapper(0.071)
 Wrapper(0.029)
 Wrapper(-0.252)

julia> reinterpret(Float64, w)
3-element reinterpret(Float64, ::Vector{Wrapper}):
  0.071
  0.029
 -0.252

where reinterpret “changes the type interpretation of a block of memory. [It] constructs a view with the same binary data as the given array, but with the specified element type”.

We thus get an identical memory representation for a vector of numbers and a vector of wrapped numbers.

This behaviour seems to be what allows working with Arrays of Unitful Quantities to be just as fast as working with plain numeric Arrays (after compilation at least, since the units are handled at compile time).
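For example, the same trick works on a vector of Unitful quantities (a rough sketch, assuming the Unitful package is installed): the unit lives entirely in the element type, so the stored bits are just the underlying Float64 values.

julia> using Unitful

julia> q = [1.5u"m", 2.5u"m"];                 # each element wraps a single Float64

julia> reinterpret(Float64, q) == [1.5, 2.5]   # the binary data is just the raw numbers
true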

How does this behaviour work behind the scenes?

The tl;dr is that Julia stores a type tag with each value whose type the compiler doesn’t know. When inference is able to figure out the type at compile time, the tag simply isn’t stored.
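One way to see the tag overhead (a rough sketch; the exact byte counts depend on the Julia version and platform):

julia> a = [Wrapper(1.0), Wrapper(2.0)];      # concrete eltype: payloads stored inline, no tags

julia> b = Any[Wrapper(1.0), Wrapper(2.0)];   # abstract eltype: each element is boxed with a type tag

julia> sizeof(a)                              # just the two 8-byte payloads
16

julia> Base.summarysize(b) > Base.summarysize(a)
true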


The storage of a vector with an isbits element type doesn’t really have anything to do with what inference can figure out. It’s just the way the array is allocated: if the element type of an array is an isbitstype, the objects are stored inline.
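A minimal check of the inline storage, reusing w from the original post: the reinterpreted view shares the array’s memory, so writing through it mutates the original.

julia> r = reinterpret(Float64, w);

julia> r[1] = 1.0;          # write through the Float64 view

julia> w[1]                 # same memory, so the Wrapper array sees the change
Wrapper(1.0)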


My understanding is that Julia objects are stored in memory as metadata + value. The value part is a blob of memory that stores the bits, and the metadata part stores additional info about the Julia object. That metadata can be added or removed through boxing/unboxing.

In your example, both Wrapper and Float64 (the value part) have the same size, sizeof(Wrapper) == sizeof(Float64), so it’s OK to drop the datatype info from Wrapper and reinterpret the value part as Float64. @kristoffer.carlsson explained above why this also works for an array.
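A quick sanity check of that size argument: both types are isbits and occupy the same 8 bytes, so the reinterpreted view maps element for element.

julia> isbitstype(Wrapper), isbitstype(Float64)   # plain bits, no boxes or pointers inside
(true, true)

julia> sizeof(Wrapper) == sizeof(Float64) == 8
true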
