Slow deletion loop on huge DataFrame

I want to delete certain rows from a sorted DataFrame of size (1.5 million, 14). When a row meets the deletion condition, the code that checks whether the next row is also up for deletion (i.e. whether the string in column 1 is identical) looks like this:

Code

```julia
# Fragment — this runs inside an outer loop (not shown).
# `l` counts deletions so far, so `k - l` is the current row index.
while df[k-l, 1] == df[k-l+1, 1]
    deleteat!(df, k-l+1)
    l += 1
    k += 1
    if k == upperBoundary - 1
        break
    end
end
deleteat!(df, k-l)
k += 1
l += 1
end  # closes the outer loop
```

I checked with println calls that this part takes about 1 s (that was quicker for me than looking up the @time syntax again, my bad).
This isn’t scalable.
Suggestions?
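(For reference, since `@time` came up: it takes any expression or `begin ... end` block and prints the elapsed time and allocations. A trivial example:

```julia
# `@time` reports wall time, allocations, and GC time for the expression.
@time begin
    x = rand(10^6)
    sum(x)
end
```

)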

Edit: My indents are gone. Sorry :frowning:

Try calculating all the indices you want to delete and do the deletion in one call.
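Both `deleteat!` on a Base `Vector` and `deleteat!` on a `DataFrame` accept a sorted collection of row indices, so the whole pass can be one comprehension plus one call. A minimal sketch of the idea, using a plain vector standing in for the DataFrame column (the data here is made up for illustration):

```julia
# Collect every index whose entry duplicates the previous one,
# then delete them all in a single O(n) call instead of one call per row.
v = ["a", "a", "a", "b", "c", "c", "d"]

dups = [i for i in 2:length(v) if v[i] == v[i-1]]   # [2, 3, 6]
deleteat!(v, dups)
# v is now ["a", "b", "c", "d"]
```

Each `deleteat!(df, i)` in the loop shifts every later row down by one, which is what makes the row-at-a-time version quadratic; deleting all indices in one call does a single pass.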

That worked: the job was done within seconds.