I’m transitioning from MATLAB to Julia and I’m wondering if there are any best practices for optimizing memory use and computational time when writing simulations.
Specifically, I’m designing a simple Verlet integration for the motion of N particles, where current_positions
is an N-by-3 matrix in which each row represents particle n and each column represents the x, y, and z positions, respectively.
A simplified form looks like this:
# Initialize a 3D array to hold number_of_steps slices of positions
number_of_steps = 100            # some integer value
N = 10                           # number of particles
positions = Array{Float64}(undef, N, 3, number_of_steps)
current_positions = rand(N, 3)   # N-by-3 matrix of initial positions
update_positions(x) = x          # stub standing in for the actual Verlet scheme
steps = Int[]                    # concrete element type; a bare [] would give a Vector{Any}
for current_step in 1:number_of_steps
    # Update the positions
    current_positions = update_positions(current_positions)
    # Record the step and store this slice of positions
    push!(steps, current_step)   # push! mutates steps in place; reassigning is redundant
    positions[:, :, current_step] = current_positions
end
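As a side note, I’ve read that Julia optimizes loops inside functions much better than loops at global scope, so here is a minimal sketch of how I think the same loop would look wrapped in a function (update_positions is still just a stub standing in for the real Verlet step):

function run_simulation(current_positions::Matrix{Float64}, number_of_steps::Integer)
    N = size(current_positions, 1)
    # Preallocate the full history up front
    positions = Array{Float64}(undef, N, 3, number_of_steps)
    for current_step in 1:number_of_steps
        current_positions = update_positions(current_positions)  # stub for the real scheme
        positions[:, :, current_step] = current_positions
    end
    return positions
end

positions = run_simulation(rand(10, 3), 100)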
What’s left is an array of N-by-3 matrices where each “slice” is the data at a timestep. This works for what I need, as my intent is to have access to the positions of all the particles at every time step. But I’m wondering: is there a way I could optimize how I store the positions at each step?
The advantage of this layout is that I can slice along any dimension to get the positions at any step with simple indexing. But I assume that as my simulation gets large, the array can consume a significant amount of memory, and since, as I understand it, a Julia Array is contiguous, the whole thing has to live in one large block of memory.
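For example, this is roughly how I pull data out now (the step and particle indices below are just arbitrary examples), and how I’ve been estimating the memory footprint:

# Positions of all particles at one step (this copies the slice)
step_k = positions[:, :, 42]

# The same slice without copying, via a view into the parent array
step_k_view = @view positions[:, :, 42]

# Trajectory of one particle across all steps: a 3-by-number_of_steps matrix
trajectory_n = positions[7, :, :]

# Rough footprint: one Float64 is 8 bytes
N * 3 * number_of_steps * 8   # == sizeof(positions)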
The other option I could think of is making a vector of matrices, Vector{Matrix{Float64}}(),
where each matrix in that vector is a separate allocation, so the matrices are not contiguous with one another in memory. Would that prevent any single block of memory from becoming too large?
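A minimal sketch of what I mean, assuming the positions get copied into the vector at each step (push! and sizehint! are from Base, and update_positions is the same stub as above):

# Alternative: one independently allocated N-by-3 matrix per step
positions_per_step = Vector{Matrix{Float64}}()
sizehint!(positions_per_step, number_of_steps)  # optional: reserve space for the matrix references
current_positions = rand(N, 3)
for current_step in 1:number_of_steps
    current_positions = update_positions(current_positions)
    # copy, so an in-place update scheme can't overwrite the stored history
    push!(positions_per_step, copy(current_positions))
end
step_k = positions_per_step[42]   # positions at step 42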
Thank you for your patience in educating me on this.