Are you certain this is exactly what's happening? To me (and unless TensorOperations is somehow messing with this), it looks like `Trm1 = nothing` should be legal, and the problem would be with `g_vvoo = nothing`, because you type-declared the variable with `g_vvoo::Array{Float64,4} = deserialize("g_vvoo.jlbin")`. A typed local declaration constrains every later assignment to that variable, so assigning `nothing` throws a conversion error. If you instead wrote `g_vvoo = deserialize("g_vvoo.jlbin")::Array{Float64,4}`, the `::` only asserts the type of that one value, and it would be legal to later write `g_vvoo = nothing`.
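Here's a minimal sketch of the difference (using `zeros` in place of `deserialize` so it runs standalone; the shape is a placeholder):

```julia
function typed_local()
    x::Array{Float64,4} = zeros(2, 2, 2, 2)   # the *variable* x is constrained to Array{Float64,4}
    x = nothing                               # throws: Nothing cannot be converted to Array{Float64,4}
end

function asserted_value()
    x = zeros(2, 2, 2, 2)::Array{Float64,4}   # only the returned *value* is asserted; x itself stays untyped
    x = nothing                               # fine: the array is now unreachable and can be GC'd
    return x
end
```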
If you're concerned about peak memory usage, why do you load all your data up-front? You should be able to reduce your peak memory usage noticeably by loading each variable immediately before you use it, doing all your operations with it, and then disposing of it afterward (either by reaching the end of a function or by reassigning it). That actually suggests an easy way to organize this into multiple subfunctions: make each subfunction compute the contribution from a single `g_XXXX`. For example,
```julia
using TensorOperations

function compute_voov(T2::AbstractArray{<:Any, 4}, g_voov::AbstractArray{<:Any, 4})
    @tensor begin
        Trm[a_1,a_2,i_1,i_2] := - g_voov[a_1,i_3,i_1,a_3] * T2[a_2,a_3,i_3,i_2] # Trm2 term
        Trm[a_1,a_2,i_1,i_2] += 2*g_voov[a_1,i_3,i_1,a_3] * T2[a_2,a_3,i_2,i_3] # Trm5 term
    end
    return Trm
end
```
```julia
# add the voov component
@tensor R2u[a_1,a_2,i_1,i_2] += compute_voov(T2, deserialize("g_voov.jlbin")::Array{Float64,4})[a_1,a_2,i_1,i_2]
```
By the time this line has completed, both the deserialized `g_voov` array and the intermediate `Trm` are no longer reachable and can be GC'd.
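Carried through for every integral block, the top level might look something like the sketch below (the function name `build_R2u` and the commented-out `compute_vvoo` helper are hypothetical; only `compute_voov` is defined above, but the rest would follow the same pattern). Since `compute_voov` already returns its result in `[a_1,a_2,i_1,i_2]` order, a plain broadcasted add is equivalent to the `@tensor` accumulation line:

```julia
using Serialization

function build_R2u(T2::Array{Float64,4})
    R2u = zeros(size(T2))    # residual accumulator
    # Each integral block is deserialized right before use; once its line finishes,
    # both the block and the temporary it produced are unreachable and can be GC'd,
    # so only one g_XXXX array needs to be held in memory at a time.
    R2u .+= compute_voov(T2, deserialize("g_voov.jlbin")::Array{Float64,4})
    # R2u .+= compute_vvoo(T2, deserialize("g_vvoo.jlbin")::Array{Float64,4})  # hypothetical, same pattern
    # ... one call per remaining integral block ...
    return R2u
end
```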
You could also make a version that updates `R2u` directly, like
```julia
function add_voov!(R2u::AbstractArray{<:Any, 4}, T2::AbstractArray{<:Any, 4}, g_voov::AbstractArray{<:Any, 4})
    @tensor begin
        R2u[a_1,a_2,i_1,i_2] += - g_voov[a_1,i_3,i_1,a_3] * T2[a_2,a_3,i_3,i_2] # Trm2 term
        R2u[a_1,a_2,i_1,i_2] += 2*g_voov[a_1,i_3,i_1,a_3] * T2[a_2,a_3,i_2,i_3] # Trm5 term
    end
    return R2u
end
```
but I don’t anticipate this making a notable difference to performance.
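If you do go the in-place route, the call site stays just as compact (same file name as in the example above):

```julia
add_voov!(R2u, T2, deserialize("g_voov.jlbin")::Array{Float64,4})
```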