Back and forth transfer of data between GPU neural net and CPU simulation

Hello,

I have a neural network that interacts with a simulation. To accelerate its training, the network is transferred to my GPU. I implemented my simulation as a structure that holds other structures. In my toy example below, think of SimEnvironment as the simulation of some sort of factory: I want to produce something to increase the level (currentlevel), but production is limited by the capacities of a set of machines. information is a vector the network is supposed to use to compute the production quantity.

To make this work I have to transfer the information vector to the GPU so that my network can read it and compute an output (the production quantity). To execute the action decided by the network, I have to transfer that output vector back to the CPU. This back-and-forth transfer is not efficient, but I don't think it would work better to run the simulation itself on the GPU, would it? Are there any guidelines for improving these sorts of interactions?

using Flux

nn = Chain(Dense(3, 1)) |> gpu

mutable struct Machine
    capacity::Float32
end

mutable struct SimEnvironment
    machines::Vector{Machine}
    currentlevel::Float32
    information::Vector{Float32}  # concrete type; Array{Float32} leaves the dimension abstract
end

sim = SimEnvironment([Machine(5.0f0), Machine(7.0f0)], 0.0f0, [4.0f0, 90.0f0, 10.0f0])

function produce!(sim::SimEnvironment, production::Vector{Float32})
    # clamp the requested production to the tightest machine capacity
    production[1] = min(production[1], minimum(m.capacity for m in sim.machines))
    sim.currentlevel += production[1]
end

input = sim.information |> gpu      # CPU → GPU transfer
production = nn(input) |> cpu       # GPU → CPU transfer (renamed to avoid shadowing Base.prod)
produce!(sim, production)
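To make the transfer cost concrete, here is a minimal CPU-only sketch (no Flux; `Env`, `W`, and `b` are hypothetical stand-ins for the real simulations and the Dense(3, 1) layer) of batching the information vectors of several independent environments into one matrix, so that a single transfer and a single forward pass would serve all of them:

```julia
# Hypothetical stand-in for SimEnvironment, reduced to its observation.
struct Env
    information::Vector{Float32}
end

envs = [Env(rand(Float32, 3)) for _ in 1:4]

# Stack the per-environment vectors into a 3×4 matrix: one column per
# environment, so one (batched) forward pass covers all of them.
batch = reduce(hcat, (e.information for e in envs))

# A Dense(3, 1)-style layer is just W * x .+ b.
W = rand(Float32, 1, 3)
b = rand(Float32, 1)
productions = W * batch .+ b   # 1×4 matrix: one production per environment
```

With a real GPU model the only changes would be `batch |> gpu` before the forward pass and `|> cpu` on the result, so the transfer cost is paid once per batch instead of once per environment; whether that amortization outweighs keeping each simulation step synchronous is exactly what I'm unsure about.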