Unfortunately space will become an issue if the models are large and run for millions of time-steps. My current ad-hoc solution is to use a dictionary of dataframes. Each dictionary key is a column name from the original dataframe (i.e. “A”, “B”, and “C”), and each value is a new dataframe that looks like this:
 Row │ time   position  value
     │ Int64  Int64     Int64
─────┼────────────────────────
   1 │   21        12     500
   2 │   27         3     400
   3 │   52        12     500
   4 │   55         9       0
   5 │   59         9     500
   6 │   71         5     400
   7 │   93        11     500
where “time” is when the change occurred, “position” is where in the column the change occurred, and “value” is the new value, matching the type of the original column.
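For context, the log is built incrementally as the model runs. A minimal sketch of the setup (the recordchange! helper is just illustrative, and I'm assuming Int64 columns as in the example above):

using DataFrames

# One change-log dataframe per column of the original state.
columnDict = Dict(
    name => DataFrame(time = Int[], position = Int[], value = Int[])
    for name in ["A", "B", "C"]
)

# Illustrative helper: whenever the model mutates a cell, append one row
# to that column's log instead of snapshotting the whole dataframe.
function recordchange!(columnDict, column, t, position, value)
    push!(columnDict[column], (time = t, position = position, value = value))
end

recordchange!(columnDict, "A", 21, 12, 500)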
To get the dataframe back at any given time step, I use this function:
function getState(t, columnDict, initialState)
    out = deepcopy(initialState)
    # Replay each column's change log up to time t.
    # Note: each log must be sorted by time, or the early break below is wrong.
    for (property, df) in pairs(columnDict)
        for row in eachrow(df)
            if row.time > t
                break    # every later row is also past t, so stop here
            end
            out[row.position, property] = row.value
        end
    end
    return out
end
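For example, to rebuild the state at t = 60 (assuming, just for illustration, a 12-row state initialized to zeros):

# Hypothetical initial state; in practice this is the dataframe at t = 0.
initialState = DataFrame(A = zeros(Int, 12), B = zeros(Int, 12), C = zeros(Int, 12))
stateAt60 = getState(60, columnDict, initialState)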
It’s a little messy, but it works.