Here is what I want to do:
```julia
using Logging
using Statistics
using BenchmarkTools  # for @btime

mutable struct MyLogger <: AbstractLogger
    log
    min_level
end

MyLogLevel = LogLevel(-1)
MyLogger() = MyLogger([], MyLogLevel)

function Logging.handle_message(logger::MyLogger, level, message, _module, group, id, file, line; kwargs...)
    push!(logger.log, kwargs.data)
end
Logging.shouldlog(logger::MyLogger, level, _module, group, id) = level == MyLogLevel
Logging.min_enabled_level(logger::MyLogger) = MyLogLevel

logger = MyLogger()

function compute()
    data = rand(10000)
    @logmsg MyLogLevel "Compute" mean=mean(data)
end

function no_log_compute()
    data = rand(10000)
end

with_logger(logger) do
    println("With Logger")
    @btime compute()
end

println("No logger")
@btime compute()
println("No log compute")
@btime no_log_compute()
```
Here I have a custom logger that logs something requiring computation and pushes it into the logger. I would like that, when I don't wrap the call in `with_logger(logger)`, it performs the same as if I had never written the `@logmsg` in the first place, i.e. nothing in the key-value pairs gets executed.
But in fact, I get:
```
With Logger
  8.000 μs (8 allocations: 78.45 KiB)
No logger
  9.000 μs (11 allocations: 78.62 KiB)
No log compute
  6.340 μs (2 allocations: 78.20 KiB)
```
Which means that even when the message level fails the filter and nothing gets logged anywhere, the key-value pairs are still evaluated. And the no-logger version is even slower than the `with_logger` version…
Are you sure about that? I think you are just seeing the overhead of checking whether the logger should log or not. Try e.g.:
```julia
julia> @debug "hello" error("not called") # no error

julia> @info "hello" error("not called") # error
┌ Error: Exception while generating log record in module Main at REPL[5]:1
```
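The same laziness can be made visible without exceptions. Here is a small sketch of my own (a side-effect counter, not from the thread) showing that the key-value expression only runs when the record is actually generated:

```julia
using Logging

# bump!() increments a counter as a visible side effect; it should
# only run when the log record is actually generated.
counter = Ref(0)
bump!() = (counter[] += 1)

with_logger(SimpleLogger(devnull, Logging.Info)) do
    @debug "hi" n = bump!()   # level too low: bump!() is never called
end
@show counter[]               # still 0

with_logger(SimpleLogger(devnull, Logging.Debug)) do
    @debug "hi" n = bump!()   # enabled: bump!() runs once
end
@show counter[]               # now 1
```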
And here is a measure of the overhead, which seems to align with your measurements:
```julia
julia> using BenchmarkTools, Logging

julia> infologger = SimpleLogger(devnull, Logging.Info);

julia> debuglogger = SimpleLogger(devnull, Logging.Debug);

julia> function f()
           @debug "hello"
       end;

julia> global_logger(infologger);

julia> @btime f()
  65.839 ns (0 allocations: 0 bytes)

julia> global_logger(debuglogger);

julia> @btime f()
  2.247 μs (22 allocations: 1.22 KiB)
```
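As an aside (my addition, using the standard `Logging` API, not something from the measurements above): if even the ~65 ns of the disabled path matters, `Logging.disable_logging` raises a global cutoff that is checked before the logger is even consulted, so a disabled `@debug` reduces to roughly a single integer comparison:

```julia
using Logging

# disable_logging sets a global minimum level checked first of all:
# messages at or below it are discarded before the logger, the message
# string, or any key-value pair is looked at.
Logging.disable_logging(Logging.Debug)

@debug "hello" error("still not called")  # discarded almost for free
```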
It is not a constant overhead: you can make the `mean(data)` computation more complicated and see whether the time grows:
```julia
function compute()
    data = ones(10000)
    @logmsg MyLogLevel "Compute" mean=mean(data .* data .* exp.(data) .^ 2)
end

function no_log_compute()
    data = ones(10000)
end

with_logger(logger) do
    println("With Logger")
    @btime compute()
end

println("No logger")
@btime compute()
println("no log compute")
@btime no_log_compute()
```
And it does:
```
With Logger
  55.500 μs (10 allocations: 156.66 KiB)
No logger
  57.500 μs (13 allocations: 156.83 KiB)
no log compute
  6.233 μs (2 allocations: 78.20 KiB)
```
```julia
julia> @time with_logger(SimpleLogger(devnull, Logging.Info)) do
           @debug "sleeping..." sleep(5)
       end
  0.069389 seconds (97.56 k allocations: 5.215 MiB, 99.07% compilation time)

julia> @time with_logger(SimpleLogger(devnull, Logging.Debug)) do
           @debug "sleeping..." sleep(5)
       end
  5.053603 seconds (97.60 k allocations: 5.217 MiB, 0.99% compilation time)
```
Can you explain why my code shows that the overhead does grow with the computation?
My real-world case is that I want to log a CuArray, which means copying it from device to host; that takes around 300 µs.
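One workaround I can sketch for that (my suggestion, not from the thread, with a plain array standing in for the CuArray): check the current logger's minimum level yourself, and only pay the expensive device-to-host copy when the record could actually be accepted.

```julia
using Logging, Statistics

const MyLogLevel = LogLevel(-1)

# Hypothetical guard: the body runs only when the active logger's
# minimum level would let a MyLogLevel record through.
function compute_guarded(data)
    if Logging.min_enabled_level(Logging.current_logger()) <= MyLogLevel
        host = copy(data)  # stand-in for Array(cu_data), the ~300 µs copy
        @logmsg MyLogLevel "Compute" mean = mean(host)
    end
end
```

With a default `Info`-level logger the guard fails and the copy never happens; inside `with_logger(logger)` it behaves as before.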
Can you try it without modifying the global logger or using `with_logger`?
After some testing I found the problem: the default logger in VS Code is a `VSCodeServer.VSCodeLogger()`, and its `min_enabled_level` is just -1.
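Given that, one possible fix (my suggestion, using only the standard `Logging` API) is to replace the global logger with one whose minimum level sits above `LogLevel(-1)`. Outside `with_logger`, the `@logmsg` then fails the `min_enabled_level` check and its key-value pairs are never evaluated, while `with_logger(logger)` still works as intended:

```julia
using Logging

# Install a global logger that rejects LogLevel(-1) messages, so their
# key-value pairs are skipped outside an explicit with_logger block.
global_logger(ConsoleLogger(stderr, Logging.Info))
```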