I have the following simplified example of a data type (the real type has more fields):
```julia
mutable struct Worker
    id::Int             # name
    T::Int              # number of periods observed
    t::Int              # current period
    w::Vector{Float64}  # vector of wages in each period
end
```
where the data is such that T is potentially different for each Worker. The computational task is the evaluation of a likelihood function; conceptually, it amounts to summing over w for each worker in an array W of Worker, and then summing those results across all workers:
```julia
julia> typeof(W)
Vector{Worker}

julia> result = sum(sum(worker.w) for worker in W)
```
Given that w has a different length for each worker, I cannot store the wage data in a rectangular N × T matrix, which would make it easier to parallelize the workload. In short, I have a list W and a differently-sized computational task for each element, and the list is long: several million elements (Workers).
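One workaround I can think of is to flatten all the w vectors into a single contiguous vector plus an offsets array (a CSR-style ragged layout). A rough sketch of the idea, assuming length(worker.w) == worker.T; the names flat and offsets are just placeholders I made up:

```julia
# flat holds all wages back to back; offsets marks each worker's segment,
# so flat[offsets[i]+1:offsets[i+1]] is worker i's w.
flat    = reduce(vcat, (worker.w for worker in W))
offsets = cumsum([0; [worker.T for worker in W]])

# Same result as the nested sum above, but on the flat data:
result = sum(sum(@view flat[offsets[i]+1:offsets[i+1]]) for i in eachindex(W))
```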
I would like to offload the computational task (sum(w)) to a GPU. I have been looking around and found the StructArrays package. From the last section of the readme I gather that it could handle a non-standard data structure like this one, but I'm not sure it's the best solution.
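For concreteness, here is the kind of thing I imagine writing by hand on top of the flattened layout above: a rough CUDA.jl sketch that assigns one thread per worker (presumably load-imbalanced when the T's vary a lot), not working code from my project:

```julia
using CUDA

# One thread per worker: thread i sums flat[offsets[i]+1:offsets[i+1]].
function worker_sums!(out, flat, offsets)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(out)
        s = zero(eltype(out))
        @inbounds for j in (offsets[i] + 1):offsets[i + 1]
            s += flat[j]
        end
        @inbounds out[i] = s
    end
    return nothing
end

d_flat    = CuArray(flat)
d_offsets = CuArray(offsets)
d_out     = CUDA.zeros(Float64, length(W))

threads = 256
blocks  = cld(length(W), threads)
@cuda threads=threads blocks=blocks worker_sums!(d_out, d_flat, d_offsets)

result = sum(d_out)  # final reduction, also on the GPU
```

Is a hand-written kernel like this the way to go here, or does StructArrays (or some other package) handle this more cleanly? Any advice on this greatly appreciated!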