I’m conducting a set of simulations where each scenario has a set of parameters that can vary. These are scalars (e.g., random number seed, sample size, dropout rate, number of simulations), vectors (e.g., a set of evaluation times), and matrices (e.g., event type by survival time per stage of event type). The scenarios are not necessarily regular in organization, and the number and type of parameters are evolving over time.
Currently, the scenarios are indexed in a CSV file, one scenario per line. This is read in and parsed as needed (e.g., reading a matrix in based on a filename in the CSV file). I have been using a DataFrame to hold each parsed row, but I think it would be useful to switch to a structure. Then, instead of `do_one(param1, param2, param3, etc.)` I can call `do_one(scenario)`. This will make it easier to pass all the parameters around to each function, and I think it will be a lot more flexible as things change.
The current loop looks like:
```julia
for i in 1:n        # number of scenarios
    for j in 1:m    # number of simulations
        result = do_one(param1, param2, param3, ...)  # to become do_one(scenario)
        # do stuff with result
    end
end
```
The next step will be to parallelize this effort, probably at the first loop as a starting point.
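A minimal sketch of what parallelizing that outer loop might look like with threads (start Julia with `julia -t auto` or `-t N`). Here `scenarios` and `do_one` are placeholders standing in for the parsed scenario structs and simulation function; this assumes `do_one` is thread-safe and that one result per scenario fits in a preallocated array:

```julia
# Placeholder scenarios and simulation function for illustration only.
scenarios = [(seed = i, n = 100i) for i in 1:4]
do_one(s) = s.n + s.seed

# Preallocate, then thread over scenarios (the outer loop).
results = Vector{Int}(undef, length(scenarios))
Threads.@threads for i in eachindex(scenarios)
    # each iteration writes to a distinct slot, so no locking is needed
    results[i] = do_one(scenarios[i])
end
```

Since each scenario writes to its own slot of `results`, there is no shared mutable state between iterations, which keeps the threaded version a drop-in change from the serial loop.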
Generally, how can I set up an array of parameter structures (using, e.g., Parameters.jl), and how can I access it correctly inside the first loop? For example, could I do something like `scenario = scenarios[i]`, where `scenarios` is my array of parameter structures assembled from the CSV input?
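To make the question concrete, here is one possible shape of the answer, sketched with `Base.@kwdef` (in Base, so self-contained; Parameters.jl's `@with_kw` works much the same way and adds unpacking macros). The field names, the semicolon-delimited vector encoding, and `parse_scenario` are all hypothetical stand-ins for whatever the real CSV columns look like; a CSV.jl row iterator would slot in where the raw strings are:

```julia
# Hypothetical scenario struct; @kwdef gives a keyword constructor with defaults.
Base.@kwdef struct Scenario
    seed::Int                    = 1
    n_subjects::Int              = 100
    dropout_rate::Float64        = 0.0
    eval_times::Vector{Float64}  = Float64[]
end

# Parse one CSV line into a Scenario (fields assumed: seed, sample size,
# dropout rate, then evaluation times encoded as "t1;t2;t3").
function parse_scenario(line::AbstractString)
    f = split(line, ',')
    Scenario(seed         = parse(Int, f[1]),
             n_subjects   = parse(Int, f[2]),
             dropout_rate = parse(Float64, f[3]),
             eval_times   = parse.(Float64, split(f[4], ';')))
end

lines = ["1,100,0.05,0.5;1.0;2.0",
         "2,250,0.10,1.0;2.0"]
scenarios = parse_scenario.(lines)   # Vector{Scenario}

scenario = scenarios[1]              # exactly the scenarios[i] pattern asked about
```

With that in place, `do_one(scenario)` can pull out whatever fields it needs via `scenario.eval_times` etc., and adding a new parameter means adding one field (with a default) rather than threading another argument through every call.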