Why are there allocations in my loop depending upon how I set a variable?

A little demo:

The thing I am trying to wrap my head around is that I observed that a broadcast to combine three vectors would allocate in a loop:

@time for step in 1:nsteps
    @. U1 = U0 + dt*V0 + ((dt^2)/2)*A0
end

yields

  3.080494 seconds (130.00 k allocations: 4.883 MiB)    

The number of allocations grows linearly with the upper bound of the range (nsteps).
The code is not global; the loop lives inside a function.

The problem is that this doesn’t show up in trivial code, only in my original code.

I made these observations. Recall that the loop refers to the variable dt. The trigger is this: if I compute omega_max from the solution of an eigenvalue problem, the loop allocates.

    evals, evecs, nconv = eigs(K, M; nev=1, which=:LM, explicittransform=:none)
    @show omega_max = sqrt(evals[1])
    # omega_max = 2.76450e+06 
    @show dt = Float64(0.99 * 2 / omega_max)
    @show typeof(dt)

On the other hand, if I set omega_max to a fixed value, the loop does not allocate.

    evals, evecs, nconv = eigs(K, M; nev=1, which=:LM, explicittransform=:none)
    # @show omega_max = sqrt(evals[1])
    omega_max = 2.76450e+06 
    @show dt = Float64(0.99 * 2 / omega_max)
    @show typeof(dt)

Did I mention that it was weird?


Are any of the variables in the loop globals?

In general, please post minimal working examples (runnable code), not snippets.

No globals anywhere. The problem with an MWE is that the behavior I observe is very sensitive: I tried to create a synthetic MWE, but the behavior disappeared.

Start with a working example and keep deleting things until the problem disappears…

Unless you make something that others can reproduce it’s going to be extremely hard to help you.


Here is the minimal working example. Unfortunately, it can’t be totally minimal, or the problem would disappear.

Clone https://github.com/PetrKryslUCSD/FinEtoolsFlexStructuresTutorials.jl.
I use Julia 1.6.5 here, but the same problem manifests with 1.7.1.
Activate and instantiate the project, then run the file https://github.com/PetrKryslUCSD/FinEtoolsFlexStructuresTutorials.jl/blob/main/docs/src/tutorials/puzzle.jl

This isn’t so strange: it looks like a type instability. I bet @code_warntype would show omega_max::Any when you’re computing it from sqrt(evals[1]) (likely because evals::Any, which in turn is likely because one of the arguments to eigs is unstable).

When you just hardcode it to be omega_max = 2.76450e+06, of course Julia knows it’s a Float64 and thus can optimize it appropriately.
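
Here is a toy sketch of that mechanism (hypothetical names and values, not the tutorial’s code): the same loop allocates only when dt’s inferred type inside the function holding the loop is Any.

function demo_stable(U1, U0, V0, A0, nsteps)
    dt = 1.0e-6                 # dt::Float64, inferred concretely
    for step in 1:nsteps
        @. U1 = U0 + dt*V0 + ((dt^2)/2)*A0
    end
end

function demo_unstable(U1, U0, V0, A0, nsteps)
    dt = Ref{Any}(1.0e-6)[]     # dt::Any, mimicking an uninferred eigs result
    for step in 1:nsteps
        @. U1 = U0 + dt*V0 + ((dt^2)/2)*A0   # boxes dt on every iteration
    end
end

U1, U0, V0, A0 = zeros(1000), zeros(1000), zeros(1000), zeros(1000)
demo_stable(U1, U0, V0, A0, 1); demo_unstable(U1, U0, V0, A0, 1)  # compile first
@time demo_stable(U1, U0, V0, A0, 10_000)    # ~0 allocations
@time demo_unstable(U1, U0, V0, A0, 10_000)  # allocations grow with nsteps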


Wouldn’t it be nice if it were so simple. evals’s type is Vector{Float64}.

Are you looking at @code_typed, or at the typeof that you print out? I’m talking about the former.

I checked @code_warntype:

    %216 = Main.typeof(evals)::DataType                                                                  
│    %217 = Core.tuple(%216, evals)::Tuple{DataType, Any}                                                 
│           (value@_11 = %217)                                                                            
│    %219 = Base.repr(%217)::String                                                                       
│           Base.println("(typeof(evals), evals) = ", %219)                                               
│           value@_11                                                                                     
│           (evals = Base.vect(7.64248e+12))                                                              
│    %223 = Base.getindex(evals::Vector{Float64}, 1)::Float64                                             
│    %224 = Main.sqrt(%223)::Float64                                                                      
│           (omega_max = %224)                                                                            
│           (value@_10 = %224)                                                                            
│    %227 = Base.repr(%224)::String                                                                       
│           Base.println("omega_max = sqrt(evals[1]) = ", %227)                                           
│           value@_10                                                                                     
│    %230 = (9.90000e-01 * 2)::Core.Const(1.98000e+00)                                                    
│    %231 = (%230 / omega_max)::Float64                                                                   
│    %232 = Main.Float64(%231)::Float64                                                                   
│           (dt = %232)                                                                                   
│           (value@_9 = %232)                  

dt’s type is known.

This is all just shots in the dark because there’s not an MWE. Does @code_warntype give you any red flags?

It still may be something similar with constant propagation, wherein Julia can optimize with a known constant and remove a downstream type instability.
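
A toy illustration of that effect (hypothetical functions, not from this code): a literal constant lets inference fold a branch that would otherwise make the return type a Union.

g(x) = x > 0 ? 1.0 : 1    # in general infers as Union{Float64, Int64}

h_const() = g(2.76450e6)  # literal argument: constant propagation folds the
                          # branch, so h_const() infers as Float64
h_arg(x) = g(x)           # no constant: h_arg(::Float64) infers as the Union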

Sorry, the “minimal” working example is linked above. I have nothing leaner. And yes, you were right about evals: %217 = Core.tuple(%216, evals)::Tuple{DataType, Any}.

using Arpack  # provides eigs

function f()
    evals, evecs, nconv = eigs(rand(10,10), rand(10,10); nev=1, which=:LM, explicittransform=:none)
end

@code_warntype optimize=true f()
Variables
  #self#::Core.Const(f)
  @_2::Int64
  nconv::Any
  evecs::Any
  evals::Any

The return type of eigs only infers as Tuple; that is your culprit.
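
If you want to cut the instability off at the source, one option (my sketch, not from the thread; K and M stand for the tutorial’s matrices) is a type assertion where the eigenvalue is extracted:

using Arpack

evals, evecs, nconv = eigs(K, M; nev=1, which=:LM, explicittransform=:none)
# Assert the scalar: omega_max and dt then infer concretely downstream.
# This throws at runtime if eigs returned complex eigenvalues.
omega_max = sqrt(evals[1]::Float64)
dt = 0.99 * 2 / omega_max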


Hmm, not convinced. dt is used inside the loop, and its type is known.

What happens if you introduce a function barrier, i.e. separate the eigenvalue solving from the loop?
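
For example (a sketch with hypothetical names; the solver call and update formula are taken from the posts above):

using Arpack

# The inner function is compiled for the concrete runtime type of dt at the
# call, so the loop does not allocate even though the caller cannot infer dt.
function integrate!(U1, U0, V0, A0, dt, nsteps)
    for step in 1:nsteps
        @. U1 = U0 + dt*V0 + ((dt^2)/2)*A0
    end
end

function solve!(K, M, U1, U0, V0, A0, nsteps)
    evals, evecs, nconv = eigs(K, M; nev=1, which=:LM, explicittransform=:none)
    dt = 0.99 * 2 / sqrt(evals[1])          # infers as Any here...
    integrate!(U1, U0, V0, A0, dt, nsteps)  # ...but is concrete inside integrate!
end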

Created a function barrier: the allocations are gone. But how they originate in the first place is a mystery to me.

Please see above.

You see a similar type instability in a function that mimics your code; a type assertion (commented out below) or a function barrier helps:

using Arpack  # provides eigs

function f()
    evals, evecs, nconv = eigs(rand(10,10), rand(10,10); nev=1, which=:LM, explicittransform=:none)
    dt = sqrt(evals[1])
    # dt::ComplexF64 = sqrt(evals[1])  # a type assertion here fixes the instability
    s = zero(dt)
    for i in 1:1000
        s += dt
    end
    s
end

@code_warntype optimize=true f()
Variables
  #self#::Core.Const(f)
  @_2::Union{Nothing, Tuple{Int64, Int64}}
  @_3::Int64
  s::Any
  dt::Any
  nconv::Any
  evecs::Any
  evals::Any
  i::Int64

Body::Any

Also, the element type of evals returned by eigs will be either Float64 or ComplexF64 depending on whether the matrix is symmetric. So unless the symmetry information is communicated correctly in the type system, evals cannot infer concretely, and it doesn’t.
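
A quick check of that (assumed Arpack behavior; the symmetry is detected at runtime via issymmetric, not in the type system, so inference cannot distinguish the two calls):

using Arpack

A = rand(10, 10)
S = A + A'                   # numerically symmetric Matrix{Float64}
typeof(eigs(S; nev=1)[1])    # Vector{Float64}: symmetric ARPACK path
typeof(eigs(A; nev=1)[1])    # Vector{ComplexF64}: nonsymmetric path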

Hope this helps…


Good point. Not sure why, in my case, dt was reported as a Float64 by @code_warntype…?

Edit: I must have confused the code versions I was running: when dt is evaluated from the eigenvalue solution, @code_warntype does deduce the problematic Any type. @raminammour, @mbauman were right.

Thank you all for helping me get to the bottom of the issue.