How can I force the release of the memory allocated for X here? (The dimensions are chosen to bomb if the memory allocated for X does not get freed.)
function testme()
    X = rand( 14_000_000_000 )
    Y = sum( X )
    X = nothing
    GC.gc() # why does the memory not get freed here?
    Z = rand( 14_000_000_000 )
    Y += sum( Z )
    return Y
end
function tester()
    Y = testme()
    return Y
end
println( tester() )
The following variant does seem to release memory, but only at the top-level GC.gc() call,
function testme()
    X = rand( 2_000_000_000 )
    Y = sum( X )
    X = nothing
    GC.gc() # why does the memory not get freed here?
    Z = rand( 2_000_000_000 )
    Y += sum( Z )
    return Y
end

function tester()
    Y = testme()
    return Y
end
println( tester() )
GC.gc()
as visible in Task Manager. So is it a scope problem?
Yes, there are a number of changes one could make that would cause the memory to be released, but I would like to understand more fundamentally what would work, what wouldn't, and why.
(Other examples include sticking @sync in front of GC.gc(), and calling garbage collection in the calling function.)
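For readers following along, one of the variants just mentioned can be sketched as follows. This is my own hedged reconstruction, not code from the thread: the function names are mine, and the array sizes are shrunk so the example runs anywhere (scale them up to observe the effect in Task Manager / top).

```julia
# Sketch: collect in the CALLER rather than inside testme itself.
# Once testme_inner has returned, its frame (and thus any lingering
# reference to X) is gone, so the array is clearly unreachable.

function testme_inner()
    X = rand( 2_000_000 )
    Y = sum( X )
    X = nothing
    return Y
end

function tester_gc_in_caller()
    Y = testme_inner()
    GC.gc()  # runs after testme_inner's frame is dead
    Z = rand( 2_000_000 )
    return Y + sum( Z )
end

# The other variant from the thread would be `@sync GC.gc()` inside
# testme itself, which reportedly also released the memory.
println( tester_gc_in_caller() )
```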
Thanks @Ronis_BR. Unfortunately, in my actual program there are a lot of other objects defined within your inner scope that I would want to survive, and returning all of that is ugly.
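For context, the inner-scope workaround being referred to presumably looks something like the sketch below (the let block and names are mine, and the sizes are shrunk to keep it runnable):

```julia
function testme_scoped()
    # Confine X to a let block so no binding to it survives the block.
    Y = let
        X = rand( 2_000_000 )
        sum( X )
    end
    GC.gc()  # X never escaped the let block, so it is collectible here
    Z = rand( 2_000_000 )
    return Y + sum( Z )
end
```

The objection above still stands: every value created inside the block that must survive has to be threaded out through the block's return value.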
Looking at the result of @code_typed testme(), it doesn’t appear that X = nothing actually does anything. It seems that it is removed by the optimizer because (aside from GC) it has no observable effect.
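The observation can be reproduced on a small copy of the function (the name and size below are mine, chosen only to make the inspection cheap):

```julia
function testme_small()
    X = rand( 10 )
    Y = sum( X )
    X = nothing  # does this assignment survive optimization?
    return Y
end

# Inspect the optimized, typed IR and look for whether the
# `X = nothing` assignment is still present.
display( @code_typed testme_small() )
```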
@aviatesk is this something that EA (escape analysis) could help with? Or is this more an issue of the optimizer not recognizing (intentionally or accidentally) that X = nothing has an effect w.r.t. GC?