I would like to perform millions of parallel discrete optimization computations loosely like the following:
```julia
later = [9, 8, 4, 1, 0]   # continuation value of each choice
today = [1, 2, 3, 4, 5]   # immediate value of each choice
best_value = -Inf
best_index = 0            # placeholder; overwritten on the first iteration
for index in 1:5
    value = today[index] + 1/2 * later[index]
    if value > best_value
        best_index = index
        best_value = value
    end
end
```

which prints

```julia
julia> print(best_index)
2
```
In other words, each parallel task is to look at a collection of possible continuation values from a choice and a collection of immediate values from making the same choice, select the pair that maximizes the total value, record that choice, and repeat. (This is part of a backwards-recursive optimization of a utility function for an application in economics.)
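To make the batched structure concrete (and assuming for the moment that every task has the same number of choices, which my real problem does not; see below), each parallel task can be thought of as one column of a matrix, roughly like this on the CPU:

```julia
# Purely illustrative batched layout: column j holds the candidate values for task j.
n_choices, n_tasks = 5, 1_000_000
later = rand(n_choices, n_tasks)   # continuation values
today = rand(n_choices, n_tasks)   # immediate values

best_index = zeros(Int, n_tasks)
best_value = fill(-Inf, n_tasks)
for j in 1:n_tasks                 # each column is an independent task
    for i in 1:n_choices
        value = today[i, j] + 1/2 * later[i, j]
        if value > best_value[j]
            best_index[j] = i
            best_value[j] = value
        end
    end
end
```

Each column's work is independent of every other column's, which is what makes me hope this maps onto a GPU.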
On the one hand, at a low level all of the computations are just arithmetic (+, -, *, /) and some exponentiation, so it seems like this might be GPU-able.
On the other hand:
- It’s not clear to me whether GPUs can handle this type of logic even in the example I gave
- The actual problem involves more complicated data structures that are not fixed-size arrays; e.g., the map from the today choice to the continuation value is not just a matching index, but requires some computation to figure out which entry in the collection of continuation values corresponds to a given today choice (see the sketch after this list).
- The actual problem involves custom types that are somewhat involved.
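For instance, instead of `later[index]`, the real code does something closer to the following, where `continuation_index` and the grid are made-up stand-ins for the actual, more involved lookup:

```julia
# Made-up stand-in for the real mapping: the matching continuation entry has to
# be computed from the choice (here via a small nearest-point search), rather
# than read off at the same index.
grid = [0.0, 0.5, 1.0, 1.5, 2.0]                         # hypothetical state grid
continuation_index(choice) = argmin(abs.(grid .- sqrt(choice)))

later = [9.0, 8.0, 4.0, 1.0, 0.0]
today = [1.0, 2.0, 3.0, 4.0, 5.0]
value = today[3] + 1/2 * later[continuation_index(3)]
```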
I have no prior GPU programming experience, so I am wondering whether a GPU could tackle a problem like this and, if so, how one would set up an analogous MWE on a GPU to start adapting the code.
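For reference, my naive guess at what such an MWE might look like, using CUDA.jl (which I have not actually tried), is the sketch below; I have no idea whether this is on the right track:

```julia
using CUDA  # assumes an NVIDIA GPU; I gather AMDGPU.jl / Metal.jl are analogous

# One independent task per column, as in the batched CPU version above.
function best_choice_kernel!(best_index, best_value, today, later)
    j = (blockIdx().x - 1) * blockDim().x + threadIdx().x   # one thread per task
    if j <= size(today, 2)
        bv = -Inf32
        bi = 0
        for i in 1:size(today, 1)
            v = today[i, j] + 0.5f0 * later[i, j]
            if v > bv
                bi = i
                bv = v
            end
        end
        best_index[j] = bi
        best_value[j] = bv
    end
    return nothing
end

n_choices, n_tasks = 5, 1_000_000
today = CUDA.rand(Float32, n_choices, n_tasks)
later = CUDA.rand(Float32, n_choices, n_tasks)
best_index = CUDA.zeros(Int32, n_tasks)
best_value = CUDA.fill(-Inf32, n_tasks)

threads = 256
blocks = cld(n_tasks, threads)
@cuda threads=threads blocks=blocks best_choice_kernel!(best_index, best_value, today, later)
```

Possibly the more idiomatic route is to avoid writing a kernel at all and express this with broadcasting and reductions over `CuArray`s, but I was not sure which approach is standard.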
Grateful for any thoughts; thank you!