Multithreaded broadcast on a grouped DimensionalData array

Hello:
I am using the Rasters.jl package to read and work with data that has one spatial and one time dimension, so it is a Raster with two dimensions.

I have grouped the raster with the groupby() function, like this:

using Rasters, DimensionalData, Statistics, Dates  # for groupby, mean, and dayofyear

time_name = :Ti
datavar = dataset[var_name]
datavar_grouped = groupby(datavar, time_name => dayofyear)
datavar_mean = mean.(datavar_grouped, dims=time_name)

But I want to parallelize the last line. I searched for a way to do this, and it seems the best option could be this one. Following it, I do:

julia> @avxt var_mean = mean.(datavar_grouped, dims=time_name)
ERROR: LoadError: TypeError: in typeassert, expected Expr, got a value of type GlobalRef
Stacktrace:
 [1] substitute_broadcast(q::Expr, mod::Symbol, inline::Bool, u₁::Int8, u₂::Int8, v::Int8, threads::Int64, warncheckarg::Int64, safe::Bool)
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/constructors.jl:70
 [2] turbo_macro(mod::Module, src::LineNumberNode, q::Expr, args::Expr)
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/constructors.jl:295
 [3] var"@tturbo"(__source__::LineNumberNode, __module__::Module, args::Vararg{Any})
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/constructors.jl:415
in expression starting at REPL[44]:1
Some type information was truncated. Use `show(err)` to see complete types.

julia> show(err)
1-element ExceptionStack:
LoadError: TypeError: in typeassert, expected Expr, got a value of type GlobalRef
Stacktrace:
 [1] substitute_broadcast(q::Expr, mod::Symbol, inline::Bool, u₁::Int8, u₂::Int8, v::Int8, threads::Int64, warncheckarg::Int64, safe::Bool)
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/constructors.jl:70
 [2] turbo_macro(mod::Module, src::LineNumberNode, q::Expr, args::Expr)
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/constructors.jl:295
 [3] var"@tturbo"(__source__::LineNumberNode, __module__::Module, args::Vararg{Any})
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/constructors.jl:415
in expression starting at REPL[44]:1

Could anybody help to solve this error or to suggest another method for parallelizing the broadcast?
Thanks in advance.

My main suggestion would be to try something like the tmap function from OhMyThreads.jl rather than LoopVectorization.jl, since the latter’s development is unfortunately being phased out and it will likely stop working with the next release of Julia.

You can use tmap like this:

using OhMyThreads

[...]
datavar_mean = tmap(group -> mean(group; dims=time_name), datavar_grouped)

If you would still like to use LoopVectorization.jl, here’s what I think you need to do (I’m using the current name @tturbo rather than the former, deprecated name @avxt; they do the same thing):

mymean(group) = mean(group; dims=time_name)
datavar_mean = @tturbo mymean.(datavar_grouped)

Thanks a lot, @danielwe, I didn’t know about OhMyThreads.jl. I have tested both of your suggestions, but I still get errors:

julia> datavar_mean = tmap(group -> mean(group; dims=time_name), datavar_grouped)
ERROR: MethodError: no method matching resize!(::DimensionalData.DimGroupByArray{Raster{…}, 1, Tuple{…}, Tuple{}, Vector{…}, Symbol, Dict{…}}, ::Int64)

Closest candidates are:
  resize!(::LoopVectorization.LoopOrder, ::Int64)
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/modeling/graphs.jl:439
  resize!(::SparseArrays.ReadOnly, ::Any)
   @ SparseArrays ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/SparseArrays/src/readonly.jl:33
  resize!(::SparseArrays.UMFPACK.UmfpackWS, ::Any, ::Bool; expand_only)
   @ SparseArrays ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/SparseArrays/src/solvers/umfpack.jl:214
  ...

Stacktrace:
  [1] _append!(a::DimensionalData.DimGroupByArray{…}, ::Base.HasShape{…}, iter::DimensionalData.DimGroupByArray{…})
    @ Base ./array.jl:1196
  [2] append!(a::DimensionalData.DimGroupByArray{…}, iter::DimensionalData.DimGroupByArray{…})
    @ Base ./array.jl:1187
  [3] _append!(dest::DimensionalData.DimGroupByArray{…}, src::DimensionalData.DimGroupByArray{…})
    @ BangBang ~/.julia/packages/BangBang/KwWuG/src/base.jl:142
  [4] may
    @ ~/.julia/packages/BangBang/KwWuG/src/core.jl:9 [inlined]
  [5] __appendto!!__
    @ ~/.julia/packages/BangBang/KwWuG/src/base.jl:139 [inlined]
  [6] __append!!__
    @ ~/.julia/packages/BangBang/KwWuG/src/base.jl:128 [inlined]
  [7] append!!(xs::DimensionalData.DimGroupByArray{…}, ys::DimensionalData.DimGroupByArray{…})
    @ BangBang ~/.julia/packages/BangBang/KwWuG/src/base.jl:118
  [8] _mapreduce(f::typeof(fetch), op::typeof(BangBang.append!!), ::IndexLinear, A::Vector{StableTasks.StableTask{Any}})
    @ Base ./reduce.jl:440
  [9] _mapreduce_dim(f::Function, op::Function, ::Base._InitialValue, A::Vector{StableTasks.StableTask{Any}}, ::Colon)
    @ Base ./reducedim.jl:365
 [10] mapreduce(f::Function, op::Function, A::Vector{StableTasks.StableTask{Any}})
    @ Base ./reducedim.jl:357
 [11] _tmapreduce(f::Function, op::Function, Arrs::Tuple{…}, ::Type{…}, scheduler::DynamicScheduler{…}, mapreduce_kwargs::@NamedTuple{})
    @ OhMyThreads.Implementation ~/.julia/packages/OhMyThreads/PtzLw/src/implementation.jl:96
 [12] #tmapreduce#21
    @ ~/.julia/packages/OhMyThreads/PtzLw/src/implementation.jl:68 [inlined]
 [13] _tmap(::DynamicScheduler{…}, ::Function, ::DimensionalData.DimGroupByArray{…})
    @ OhMyThreads.Implementation ~/.julia/packages/OhMyThreads/PtzLw/src/implementation.jl:435
 [14] #tmap#103
    @ ~/.julia/packages/OhMyThreads/PtzLw/src/implementation.jl:357 [inlined]
 [15] tmap(::Function, ::DimensionalData.DimGroupByArray{…})
    @ OhMyThreads.Implementation ~/.julia/packages/OhMyThreads/PtzLw/src/implementation.jl:323
 [16] top-level scope
    @ REPL[56]:1
Some type information was truncated. Use `show(err)` to see complete types.

And with LoopVectorization.jl:

julia> mymean(group) = mean(group; dims=time_name)
mymean (generic function with 1 method)

julia> datavar_mean = @tturbo mymean.(datavar_grouped)
ERROR: MethodError: no method matching vmaterialize!(::DimVector{…}, ::Base.Broadcast.Broadcasted{…}, ::Val{…}, ::Val{…}, ::Val{…})

Closest candidates are:
  vmaterialize!(::Any, ::Any, ::Val{Mod}, ::Val{UNROLL}) where {Mod, UNROLL}
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/broadcast.jl:753
  vmaterialize!(::Union{Adjoint{T, A}, Transpose{T, A}}, ::BC, ::Val{Mod}, ::Val{UNROLL}, ::Val{dontbc}) where {T<:Union{Bool, Float16, Float32, Float64, Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8, SIMDTypes.Bit}, N, A<:AbstractArray{T, N}, BC<:Union{Base.Broadcast.Broadcasted, LoopVectorization.Product}, Mod, UNROLL, dontbc}
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/broadcast.jl:682
  vmaterialize!(::AbstractArray{T, N}, ::BC, ::Val{Mod}, ::Val{UNROLL}, ::Val{dontbc}) where {T<:Union{Bool, Float16, Float32, Float64, Int16, Int32, Int64, Int8, UInt16, UInt32, UInt64, UInt8, SIMDTypes.Bit}, N, BC<:Union{Base.Broadcast.Broadcasted, LoopVectorization.Product}, Mod, UNROLL, dontbc}
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/broadcast.jl:673
  ...

Stacktrace:
 [1] vmaterialize(bc::Base.Broadcast.Broadcasted{…}, ::Val{…}, ::Val{…})
   @ LoopVectorization ~/.julia/packages/LoopVectorization/tIJUA/src/broadcast.jl:776
 [2] top-level scope
   @ REPL[59]:1
Some type information was truncated. Use `show(err)` to see complete types.

Perhaps an incompatibility with the DimensionalData type?

Yes, the errors you’re getting now are specific to the DimensionalData type. The invocation of tmap/@tturbo is correct, but both libraries make assumptions about their argument that do not hold for DimensionalData.DimGroupByArray. Unfortunately, I’m not familiar with DimensionalData, so I can’t really help there.

A third approach you might try is FastBroadcast.jl. It may or may not run into the same issues. It would look like this:

using FastBroadcast

[...]
mymean(group) = mean(group; dims=time_name)
datavar_mean = @.. thread=true mymean(datavar_grouped)

Great! This works!
I have also tested the thread=false option, as well as the default (non-FastBroadcast) calculation, in order to compare them:

julia> @btime $datavar_mean = mymean($datavar_grouped)
  29.959 s (125344 allocations: 90.73 GiB)
julia> @btime $datavar_mean = @.. thread=false mymean($datavar_grouped)
  8.580 s (136092 allocations: 3.12 GiB)
julia> @btime $datavar_mean = @.. thread=true mymean($datavar_grouped)
  9.055 s (136092 allocations: 3.12 GiB)

So it seems that the FastBroadcast.jl optimization without parallelization is better than with parallelization. Perhaps parallelization is not worthwhile for my problem, but the other optimizations made by FastBroadcast.jl are.

Anyway, I am still curious whether parallelization could really reduce the execution time or whether it is simply not worth it. That is still not clear to me.

You forgot the broadcasting dot in the first benchmark. Try redoing it with @btime mymean.($datavar_grouped) (note the dot). Also, I don’t think it makes sense to interpolate datavar_mean; that name is being assigned by this computation, no?
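
(In case it helps: the $ in @btime is BenchmarkTools’ interpolation, which splices the value into the benchmark so you don’t also measure the cost of looking up a global variable. A minimal sketch with a made-up array x, not your data:)

using BenchmarkTools

x = rand(1_000);
@btime sum($x)      # interpolate the input so the benchmark sees a concrete value
@btime y = sum($x)  # the assigned name y needs no interpolation; it is created by the expression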

Wow, silly me! Sorry for the mistake. Here it is, with the dot:

julia> @btime datavar_mean = mymean.($datavar_grouped)
  9.291 s (136092 allocations: 3.12 GiB)
julia> @btime datavar_mean = @.. thread=false mymean.($datavar_grouped)
  4.128 ns (0 allocations: 0 bytes)
julia> @btime datavar_mean = @.. thread=true mymean.($datavar_grouped)
  4.057 ns (0 allocations: 0 bytes)

It is a great improvement, but the difference between the threaded and non-threaded versions is very small.

Oh, but when you’re using @.. you should not put a dot on the function call. That is, the 2nd and 3rd lines were correct your original benchmark. Sotty about the confusion! (I don’t know what you’re computing when you combine both @.. and the dot on the call, but it’s likely not what you intend to compute.)

I.e., here are the benchmarks you want to run:

julia> @btime datavar_mean = mymean.($datavar_grouped)
julia> @btime datavar_mean = @.. thread=false mymean($datavar_grouped)
julia> @btime datavar_mean = @.. thread=true mymean($datavar_grouped)

Sorry again, and thank you for your guidance. This is hard for me, so thanks for your patience. I am new to Julia and have not yet mastered many of the things I see in posts. Regarding the variable interpolation, I do not know much about it; I mostly mimic code that I see.

Well, here are the benchmarks (let’s see if this is the final attempt):

julia> @btime datavar_mean = mymean.($datavar_grouped)
  8.817 s (136092 allocations: 3.12 GiB)
julia> @btime datavar_mean = @.. thread=false mymean($datavar_grouped)
  8.586 s (136092 allocations: 3.12 GiB)
julia> @btime datavar_mean = @.. thread=true mymean($datavar_grouped)
  8.626 s (136092 allocations: 3.12 GiB)

So, not a lot of improvement. What could be the explanation?

I’m not surprised that you’re not seeing much of a speedup with thread=false. The entire computation takes about 10 seconds, so any overhead from the broadcasting itself is negligible, and that’s the only thing @.. thread=false can help with.

It’s more surprising that there’s no difference at all with thread=true. Are you sure you started Julia with multiple threads? (Use julia --threads=n, replacing n with the number of threads you want to use.)
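
You can verify from within the session; for example (Threads is part of Base, so there is nothing extra to load):

Threads.nthreads()  # should report the number you passed to --threads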

By the way, to help understand a bit more of what’s going on here, note that you could rewrite the first benchmark using the macro @. from Julia base, like this:

julia> @btime datavar_mean = @. mymean($datavar_grouped)
julia> @btime datavar_mean = @.. thread=false mymean($datavar_grouped)
julia> @btime datavar_mean = @.. thread=true mymean($datavar_grouped)

That should make the symmetry a bit clearer: instead of inserting a dot on the function call, you can use @. to insert dots for you. FastBroadcast.jl provides a drop-in replacement @.., which does the same thing, but sometimes with better performance, and optionally with multithreading enabled.

Thanks for your clarification, @danielwe .

When I check the number of threads from within Julia, I get:

julia> ENV["JULIA_NUM_THREADS"]
"14"

But when I exited Julia, JULIA_NUM_THREADS in the shell was “1”.
I have now started Julia as julia --threads=14 (I have 16 physical cores) and run the threaded version again:

julia> @btime datavar_mean = @.. thread=true mymean($datavar_grouped)
  8.605 s (136092 allocations: 3.12 GiB)

So, no difference. I have also been watching gnome-system-monitor to see whether any CPU threads are busy. Only one CPU thread has been running at 100%; the others stay at no more than 0.5% or 1%.

So, it seems that the threads option does not work with my data for some reason.

Just to make sure, can you also report what Threads.threadpoolsize() returns?

But my guess here is that @.. is ignoring the thread argument and falling back to the regular @., because it sees the argument type DimensionalData.DimGroupByArray and doesn’t know what to do about it.

In that case, I’m afraid I’m not able to help you any further. Hopefully, someone who’s familiar with DimensionalData can chime in and advise on how to mimic its broadcast overloads in a loop or map call, which would then be easy to parallelize using OhMyThreads.jl.
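
In the meantime, one workaround that might be worth trying is to materialize the groups into a plain Vector before the threaded map, so that OhMyThreads never has to build a DimGroupByArray as its output container. This is only a sketch under that assumption; whether collect gives you what you need here, and whether reattaching the dims like this is appropriate, is for someone who knows DimensionalData to confirm:

using OhMyThreads, DimensionalData, Statistics

# Turn the grouped container into an ordinary Vector of group arrays
plain_groups = collect(datavar_grouped)

# Threaded map over the plain Vector; the result is a plain Vector of reduced groups
results = tmap(g -> mean(g; dims=time_name), plain_groups)

# Optionally reattach the group dimension afterwards
datavar_mean = DimArray(results, dims(datavar_grouped))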

Paging @Raf, who looks to be the main developer of DimensionalData.

Here it is:

julia> Threads.threadpoolsize()
14
julia> ENV["JULIA_NUM_THREADS"]
"14"
julia> @btime datavar_mean = @.. thread=true mymean($datavar_grouped)
  8.606 s (136092 allocations: 3.12 GiB)

On the other hand, I am reading the documentation for FastBroadcast.jl, and it says that it does not support “‘dynamic broadcast’, i.e. when the arguments are not equal-axised or scalars”. Perhaps this also applies to reduction functions like mean().

datavar_grouped is an array of arrays, and mean.() is applied to each of those arrays, performing a dimension reduction. The outer array does not change its shape, but the inner ones do.

Anyway, I think that it is possible that FastBroadcast.jl doesn’t support reduction functions (like mean()) at all.

In general, FastBroadcast.jl has no problem multithreading the broadcasting of a function like mean over an array of arrays. Here’s with julia --threads=4 on my laptop:

julia> groups = [rand(10000) for _ in 1:1024];

julia> @btime @. mean($groups);
  6.536 ms (1 allocation: 8.12 KiB)

julia> @btime @.. thread=false mean($groups);
  6.644 ms (1 allocation: 8.12 KiB)

julia> @btime @.. thread=true mean($groups);
  3.168 ms (2 allocations: 8.17 KiB)

With thread=false, FastBroadcast only helps in certain cases where the workload for each element is small and the compiler struggles to optimize the generated code due to things like defensively checking for aliasing. This is not relevant here since the workload per element is quite large, which is why we don’t see a difference between the first and second lines. However, thread=true provides more than a 2x speedup.

Dynamic broadcasting is different, it’s when you do things like [1 2] .+ [3; 4] == [4 5; 5 6], that is, you add a row vector and a column vector and get a matrix in return. More generally, it’s when singleton dimensions are repeated to make the shapes of different arrays match. Since you only have one array in your broadcasting expression, datavar_grouped, this cannot possibly be a dynamic broadcast.

However, it’s clear from the errors in LoopVectorization.jl and OhMyThreads.jl that your datavar_grouped is not a regular array of arrays, but a special DimGroupByArray type created by DimensionalData.jl’s groupby. DimensionalData also seems to overload the broadcasting mechanism such that mean.(datavar_grouped) returns a DimArray instead of a plain array, as shown here: Group By | DimensionalData.jl. I think this is why FastBroadcast falls back to regular broadcasting and refuses to apply any optimizations, including multithreading. It’s supposed to be a drop-in replacement and has to be defensive to ensure that the results of @. mean(...) and @.. mean(...) are the same.


OK, I understand. So I think the next step is simply to wait for @Raf to look into the problem.

I just gave DimensionalData.jl a spin, combining the tutorial at Group By | DimensionalData.jl with the reduction you’re trying to do, and this works perfectly with OhMyThreads.jl:

using DimensionalData, Dates, Statistics, OhMyThreads

tempo = range(DateTime(2000); step=Hour(1), length=365*24*2)
A = rand(X(1:0.01:2), Ti(tempo))
groups = groupby(A, Ti => month)

tmean(g) = mean(g; dims=:Ti)
tmap(tmean, groups)

Have you checked that all the packages are up to date in your environment? DimensionalData is currently at 0.27.5.
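
To check, something like this in the Pkg REPL should do (the environment name is just an example, and the extra package names are simply the ones mentioned in this thread):

(@v1.10) pkg> status DimensionalData Rasters OhMyThreads
(@v1.10) pkg> update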

I suppose I may just have gotten lucky with this synthetic dataset.


I tested your code and I can run it with no problems. I also tried to do the same thing, but creating a Raster instead of a DimensionalData array:

using Rasters, DimensionalData, Dates, Statistics, OhMyThreads

tempo = range(DateTime(2000); step=Hour(1), length=365*24*2)
B = Raster(rand(Float64,101,length(tempo)), (X(1:0.01:2), Ti(tempo)))
groups = groupby(B, Ti => month)

tmean(g) = mean(g; dims=:Ti)
tmap(tmean, groups)

And it also runs without any problem. I think the key may be in the type.
The type that works (the type of groups, the grouped raster B):

julia> typeof(groups)
DimensionalData.DimGroupByArray{Raster{Float64, 1, Tuple{X{DimensionalData.Dimensions.Lookups.Sampled{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Regular{Float64}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, Tuple{Ti{DimensionalData.Dimensions.Lookups.Sampled{DateTime, StepRangeLen{DateTime, DateTime, Hour, Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Regular{Hour}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Symbol, DimensionalData.Dimensions.Lookups.NoMetadata, Nothing}, 1, Tuple{Ti{DimensionalData.Dimensions.Lookups.Sampled{Int64, Vector{Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Irregular{Tuple{Nothing, Nothing}}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, Tuple{}, DimensionalData.OpaqueArray{Raster{Float64, 1, Tuple{X{DimensionalData.Dimensions.Lookups.Sampled{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Regular{Float64}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, Tuple{Ti{DimensionalData.Dimensions.Lookups.Sampled{DateTime, StepRangeLen{DateTime, DateTime, Hour, Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Regular{Hour}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Symbol, DimensionalData.Dimensions.Lookups.NoMetadata, Nothing}, 1, DimensionalData.DimSlices{Raster{Float64, 1, Tuple{X{DimensionalData.Dimensions.Lookups.Sampled{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Regular{Float64}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, Tuple{Ti{DimensionalData.Dimensions.Lookups.Sampled{DateTime, StepRangeLen{DateTime, DateTime, Hour, Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Regular{Hour}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Symbol, DimensionalData.Dimensions.Lookups.NoMetadata, Nothing}, 1, Tuple{Ti{Vector{Vector{Int64}}}}, Raster{Float64, 2, Tuple{X{DimensionalData.Dimensions.Lookups.Sampled{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Regular{Float64}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}, Ti{DimensionalData.Dimensions.Lookups.Sampled{DateTime, StepRangeLen{DateTime, DateTime, Hour, Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Regular{Hour}, DimensionalData.Dimensions.Lookups.Points, 
DimensionalData.Dimensions.Lookups.NoMetadata}}}, Tuple{}, Matrix{Float64}, Symbol, DimensionalData.Dimensions.Lookups.NoMetadata, Nothing}}}, Symbol, Dict{Symbol, Any}}

The type that does not work (the type of datavar_grouped, the grouped raster datavar):

julia> typeof(datavar_grouped)
DimensionalData.DimGroupByArray{Raster{Union{Missing, Float64}, 1, Tuple{Dim{:values, DimensionalData.Dimensions.Lookups.NoLookup{SubArray{Int64, 1, Base.OneTo{Int64}, Tuple{Vector{Int64}}, false}}}}, Tuple{Ti{DimensionalData.Dimensions.Lookups.Sampled{DateTimeNoLeap, SubArray{DateTimeNoLeap, 1, Vector{DateTimeNoLeap}, Tuple{UnitRange{Int64}}, true}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Irregular{Tuple{DateTimeNoLeap, DateTimeNoLeap}}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, SubArray{Union{Missing, Float64}, 1, Matrix{Union{Missing, Float64}}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Symbol, DimensionalData.Dimensions.Lookups.Metadata{Rasters.GRIBsource, Dict{String, Any}}, Missing}, 1, Tuple{Ti{DimensionalData.Dimensions.Lookups.Sampled{Int64, Vector{Int64}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Irregular{Tuple{Nothing, Nothing}}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, Tuple{}, DimensionalData.OpaqueArray{Raster{Union{Missing, Float64}, 1, Tuple{Dim{:values, DimensionalData.Dimensions.Lookups.NoLookup{SubArray{Int64, 1, Base.OneTo{Int64}, Tuple{Vector{Int64}}, false}}}}, Tuple{Ti{DimensionalData.Dimensions.Lookups.Sampled{DateTimeNoLeap, SubArray{DateTimeNoLeap, 1, Vector{DateTimeNoLeap}, Tuple{UnitRange{Int64}}, true}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Irregular{Tuple{DateTimeNoLeap, DateTimeNoLeap}}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, SubArray{Union{Missing, Float64}, 1, Matrix{Union{Missing, Float64}}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Symbol, DimensionalData.Dimensions.Lookups.Metadata{Rasters.GRIBsource, Dict{String, Any}}, Missing}, 1, DimensionalData.DimSlices{Raster{Union{Missing, Float64}, 1, Tuple{Dim{:values, DimensionalData.Dimensions.Lookups.NoLookup{SubArray{Int64, 1, Base.OneTo{Int64}, Tuple{Vector{Int64}}, false}}}}, Tuple{Ti{DimensionalData.Dimensions.Lookups.Sampled{DateTimeNoLeap, SubArray{DateTimeNoLeap, 1, Vector{DateTimeNoLeap}, Tuple{UnitRange{Int64}}, true}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Irregular{Tuple{DateTimeNoLeap, DateTimeNoLeap}}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, SubArray{Union{Missing, Float64}, 1, Matrix{Union{Missing, Float64}}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Symbol, DimensionalData.Dimensions.Lookups.Metadata{Rasters.GRIBsource, Dict{String, Any}}, Missing}, 1, Tuple{Ti{Vector{Vector{Int64}}}}, Raster{Union{Missing, Float64}, 2, Tuple{Dim{:values, DimensionalData.Dimensions.Lookups.NoLookup{SubArray{Int64, 1, Base.OneTo{Int64}, Tuple{Vector{Int64}}, false}}}, Ti{DimensionalData.Dimensions.Lookups.Sampled{DateTimeNoLeap, Vector{DateTimeNoLeap}, DimensionalData.Dimensions.Lookups.ForwardOrdered, DimensionalData.Dimensions.Lookups.Irregular{Tuple{DateTime, DateTime}}, DimensionalData.Dimensions.Lookups.Points, DimensionalData.Dimensions.Lookups.NoMetadata}}}, Tuple{}, Matrix{Union{Missing, Float64}}, Symbol, DimensionalData.Dimensions.Lookups.Metadata{Rasters.GRIBsource, Dict{String, Any}}, Missing}}}, Symbol, Dict{Symbol, Any}}