Weird. I can copy and paste it into a REPL just fine:
```julia
julia> using BenchmarkTools

julia> v = rand(100);

julia> f(v) = sum(v .* v')
f (generic function with 1 method)

julia> g(v) = mapreduce(identity, +, Broadcast.broadcasted(*, v, v'))
g (generic function with 1 method)

julia> h(v) = mapreduce(identity, +, Base.Broadcast.materialize(Broadcast.broadcasted(*, v, v')))
h (generic function with 1 method)

julia> @benchmark f($v)
BenchmarkTools.Trial:
  memory estimate:  78.20 KiB
  allocs estimate:  2
  --------------
  minimum time:     7.319 μs (0.00% GC)
  median time:      9.342 μs (0.00% GC)
  mean time:        12.233 μs (18.48% GC)
  maximum time:     8.865 ms (99.77% GC)
  --------------
  samples:          10000
  evals/sample:     4

julia> @benchmark g($v)
BenchmarkTools.Trial:
  memory estimate:  64 bytes
  allocs estimate:  3
  --------------
  minimum time:     13.595 μs (0.00% GC)
  median time:      14.507 μs (0.00% GC)
  mean time:        15.796 μs (0.00% GC)
  maximum time:     47.980 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark h($v)
BenchmarkTools.Trial:
  memory estimate:  78.20 KiB
  allocs estimate:  2
  --------------
  minimum time:     6.870 μs (0.00% GC)
  median time:      9.014 μs (0.00% GC)
  mean time:        11.420 μs (19.74% GC)
  maximum time:     8.638 ms (99.73% GC)
  --------------
  samples:          10000
  evals/sample:     4
```
Pasting it into the REPL removes the `julia>` prompts, as well as the old output (replacing it with freshly computed output). I'd been taking advantage of that to make results easy to share, while still letting others reproduce them long afterwards.
However, until this thread I wasn't aware that many people were unfamiliar with that feature.
If it isn’t working for you, that sounds like a bug.
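
Tangentially, for anyone puzzling over the three variants in the snippet: here is a quick sketch of my own (using only functions that already appear above) of what the lazy `Broadcast.broadcasted` object is and what `materialize` does with it:

```julia
v = rand(100)

# `broadcasted` builds a lazy representation of `v .* v'`; nothing is
# allocated for the 100×100 result at this point.
bc = Broadcast.broadcasted(*, v, v')
bc isa Base.Broadcast.Broadcasted  # true

# `materialize` actually runs the broadcast and allocates the full
# matrix; the literal `v .* v'` inside `f` lowers to exactly this.
M = Broadcast.materialize(bc)
M == v .* v'  # true

# Sanity check on the reduction itself: summing the outer product of
# v with itself is just (sum(v))^2.
sum(M) ≈ sum(v)^2  # true
```

That is the trade-off the timings show: `g` skips the 78.20 KiB allocation by reducing over the lazy object directly, but its minimum time is roughly twice as slow (13.595 μs vs 7.319 μs), presumably because the generic fallback reduction cannot vectorize as well as summing a dense `Matrix`.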