It did work. replace! changes the array in place, which means it doesn't change its type. Use replace (with no !) to make a new array that is just Array{Float32, 3}.
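For example, with a small stand-in array (assuming the original one has element type Union{Missing, Float32}; the actual array isn't shown in this thread):

A = Array{Union{Missing, Float32}}(missing, 2, 2, 2)  # hypothetical 3-d array containing missings
A[1] = 1.0f0

B = replace(A, missing => NaN32)   # new array; the Missing part is dropped from the eltype
typeof(B)                          # Array{Float32, 3}

replace!(A, missing => NaN32)      # in place; A keeps eltype Union{Missing, Float32}
typeof(A)                          # Array{Union{Missing, Float32}, 3}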
Strange, I benchmarked it here to be about 2 times faster (Julia 1.7, Win11). Will check again later; it might have to do with how in-place functions are benchmarked.
@roflmaostoc, as a parenthesis to this post, and in order to check the efficiency of the proposed for loop against MATLAB-style logical indexing, I performed the following benchmarks of the in-place functions using a setup expression:
capnan0!(A, v) = A[A .> v] .= NaN   # MATLAB-style logical indexing

function capnan1!(A, v)             # explicit loop over all elements
    for i in eachindex(A)
        (A[i] > v) && (A[i] = NaN)
    end
end
using BenchmarkTools
@btime capnan0!(A, 100) setup=(A=200*rand(360,180,1200)) evals=1 # 180 ms (8 allocations: 305 MiB)
@btime capnan1!(A, 100) setup=(A=200*rand(360,180,1200)) evals=1 # 44 ms (0 allocations: 0 bytes)
As you can see, the for loop is ~4x faster, assuming the benchmarks are done correctly.
versioninfo() details:
julia> versioninfo()
Julia Version 1.7.0
Commit 3bf9d17731 (2021-11-30 12:12 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core™ i7-1065G7 CPU @ 1.30GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-12.0.1 (ORCJIT, icelake-client)
Environment:
JULIA_PKG_USE_CLI_GIT = true
JULIA_STACKFRAME_FUNCTION_COLOR = blue
JULIA_WARN_COLOR = cyan
JULIA_EDITOR = code.cmd -g
JULIA_NUM_THREADS = 8
I see what was going on:
The previous benchmarks were affected by branch prediction (or something like it), because without evals=1 several evaluations reuse the same array: from the second pass onwards the in-place function effectively has nothing left to update, since the array was already capped on the first pass.
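A quick way to see the effect, reusing the capnan1! defined above (timings are illustrative, not from the runs above):

using BenchmarkTools
A = 200 * rand(360, 180, 1200)
@btime capnan1!($A, 100)                                             # same A reused across evaluations: after the first pass nothing is left to cap
@btime capnan1!(A, 100) setup=(A = 200*rand(360,180,1200)) evals=1   # fresh array for every evaluation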
This agrees with my finding that the for-loop solution is much faster. For example, in this function I left a note that it's 5x faster (searching for zeros).
I see now, sorry for the misunderstanding. The pattern suggests some lazy behaviour: maybe copy isn't really doing the copy until the object is forced to? Or the OS hands out a pointer and a promise of memory but doesn't actually allocate it until use? (I had seen this behaviour a lot in C++ with very large arrays of undefined values.) The problem with this last hypothesis is that the Array already has values inside, no?
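A rough way to probe the "pages are only committed on first touch" idea (purely illustrative; whether a difference shows up depends on the OS and allocator):

@time A = Array{Float64}(undef, 360, 180, 1200)   # allocating the buffer itself should be cheap
@time fill!(A, 0.0)                               # first touch: the OS may commit the pages here
@time fill!(A, 1.0)                               # second touch: pages are already committed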
Yes, but the question is: why does the first run take a lot of memory again when you refill the Array with random values? What is the extra 300 MiB allocation?