Is println type unstable?

See, there are Any's popping up in the middle of the journey.

julia> using JET

julia> @report_opt println(["Bla", "Bla"])
═════ 4 possible errors found ═════
┌ println(xs::Vector{String}) @ Base ./coreio.jl:4
│┌ println(io::IO, xs::Vector{String}) @ Base ./strings/io.jl:75
││┌ print(::IO, ::Vector{String}, ::String) @ Base ./strings/io.jl:46
│││┌ print(io::IO, x::Vector{String}) @ Base ./strings/io.jl:35
││││┌ show(io::IO, X::Vector{String}) @ Base ./arrayshow.jl:486
│││││┌ show_vector(io::IO, v::Vector{String}) @ Base ./arrayshow.jl:515
││││││┌ show_vector(io::IO, v::Vector{String}, opn::Char, cls::Char) @ Base ./arrayshow.jl:515
│││││││┌ typeinfo_prefix(io::IO, X::Vector{String}) @ Base ./arrayshow.jl:588
││││││││┌ kwcall(::NamedTuple{(:context,), <:Tuple{IO}}, ::typeof(sprint), f::typeof(Base.show_type_name), args::Core.TypeName) @ Base ./strings/io.jl:107
│││││││││┌ pairs(nt::NamedTuple) @ Base.Iterators ./iterators.jl:279
││││││││││┌ (Base.Pairs{Symbol})(data::NamedTuple, itr::Tuple{Vararg{Symbol}}) @ Base ./essentials.jl:343
│││││││││││┌ eltype(::Type{A} where A<:NamedTuple) @ Base ./namedtuple.jl:237
││││││││││││┌ nteltype(::Type{NamedTuple{names, T}} where names) where T<:Tuple @ Base ./namedtuple.jl:239
│││││││││││││┌ eltype(t::Type{<:Tuple{Vararg{E}}}) where E @ Base ./tuple.jl:208
││││││││││││││┌ _compute_eltype(t::Type{<:Tuple{Vararg{E}}} where E) @ Base ./tuple.jl:231
│││││││││││││││┌ afoldl(op::Base.var"#54#55", a::Any, bs::Vararg{Any}) @ Base ./operators.jl:544
││││││││││││││││┌ (::Base.var"#54#55")(a::Any, b::Any) @ Base ./tuple.jl:235
│││││││││││││││││┌ promote_typejoin(a::Any, b::Any) @ Base ./promotion.jl:172
││││││││││││││││││┌ typejoin(a::Any, b::Any) @ Base ./promotion.jl:127
│││││││││││││││││││ runtime dispatch detected: Base.UnionAll(%403::Any, %405::Any)::Any
││││││││││││││││││└────────────────────
│││││││││││││││┌ afoldl(op::Base.var"#54#55", a::Any, bs::Vararg{Any}) @ Base ./operators.jl:545
││││││││││││││││┌ (::Base.var"#54#55")(a::Type, b::Any) @ Base ./tuple.jl:235
│││││││││││││││││┌ promote_typejoin(a::Type, b::Any) @ Base ./promotion.jl:172
││││││││││││││││││┌ typejoin(a::Type, b::Any) @ Base ./promotion.jl:127
│││││││││││││││││││ runtime dispatch detected: Base.UnionAll(%398::Any, %400::Any)::Any
││││││││││││││││││└────────────────────
│││││││││┌ sprint(f::typeof(Base.show_type_name), args::Core.TypeName; context::IO, sizehint::Int64) @ Base ./strings/io.jl:112
││││││││││┌ show_type_name(io::IOContext{IOBuffer}, tn::Core.TypeName) @ Base ./show.jl:1042
│││││││││││ runtime dispatch detected: Base.isvisible(%32::Symbol, %89::Module, %83::Any)::Bool
││││││││││└────────────────────
┌ println(xs::Vector{String}) @ Base ./coreio.jl:4
│ runtime dispatch detected: println(%1::IO, %2::Vector{String})::Nothing
└────────────────────

I suppose that’s by design, because specializing every print call for the types of its arguments seems like a waste. Not sure though.

EDIT: the Vararg print method, which println is based on, does not specialize for the types of its arguments. One related thread: Would it be interesting to optimize print(io, xs...) to be type stable?

For those who aren’t aware: Be aware of when Julia avoids specializing
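
To see that heuristic in action, here is a small sketch (my own passthrough example, not Base code; Base.specializations needs Julia 1.10 or newer):

using InteractiveUtils  # for @which outside the REPL

# Vararg arguments that are merely forwarded to another call are not
# specialized on, which is exactly the situation in print(io, xs...).
passthrough(io, xs...) = print(io, xs...)
passthrough(IOBuffer(), "Bla", "Bla")

# Inspect which method instances were actually compiled; the Vararg part
# is typically left abstract here because xs is only passed through.
collect(Base.specializations(@which passthrough(IOBuffer(), "Bla", "Bla")))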

Thanks for the pointer. Though I find it a bit strange that even for a plain string there is still one dynamic dispatch:

julia> @report_opt println("BlaBla")
═════ 1 possible error found ═════
┌ println(xs::String) @ Base ./coreio.jl:4
│ runtime dispatch detected: println(%1::IO, %2::String)::Nothing
└────────────────────

This simple case doesn’t have run time dispatch, FWIW:

julia> f = open("/tmp/oijsad", "w")
IOStream(<file /tmp/oijsad>)

julia> @report_opt print(f, "s")
No errors detected

Even replacing f with stdout will cause run time dispatch, at least when it’s a TTY.
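
For what it's worth, the println(%1::IO, %2::String) entry in the report above is the coreio.jl forwarding method picking up the stdout global, which is only known abstractly as IO at that point; the TTY case then adds its own dispatch deeper in the write path. A rough sketch (not the exact Base source):

# println(xs...) forwards to println(stdout, xs...), and stdout is only
# typed as IO there, hence the one dynamic call JET reports for
# println("BlaBla"). Passing a concretely-typed stream yourself avoids
# that first hop, as with the IOStream above:
io = open("/tmp/oijsad", "w")   # concrete IOStream
println(io, "BlaBla")           # the forwarding dispatch is gone here
close(io)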

It would be nice if JET had an option to filter out common type instabilities like the ones in println, which are unlikely to be of interest to users. Currently JET can only omit type instabilities from entire modules, rather than from individual functions or individual line numbers specified by the user.


Perhaps.

I kind of disagree. When I’m calling @report_opt on a function, there are two things that I might be interested in:

  1. Is there any run time dispatch? In this case I definitely do want to know about any print calls that cause run time dispatch; they’re no different from the others.
  2. What/where is the cause of the run time dispatch (on the call stack)? In this case I’m probably not interested in print calls, so it would be good to collapse them, but show an entry anyway, as I still want to know that there was some run time dispatch.

I am not sure how bad it is to have runtime dispatch in println in the context of performance. I guess nobody really wants to do thousands of println calls in performance-critical code, and outputting to a terminal prompt already has a heavy footprint compared to the performance “loss” of runtime dispatch.

Of course, your mileage may vary :wink:
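
If one wanted to put a rough number on it, something along these lines would do (a sketch assuming BenchmarkTools is installed; results obviously vary by machine). Writing to devnull keeps the terminal out of the measurement, and the Ref{IO} trick makes the stream type abstract in the same way the stdout global is:

using BenchmarkTools

io_any = Ref{IO}(devnull)   # hides the concrete type, like the stdout global

@btime println($devnull, "Bla", "Bla")    # concrete IO: the outer call resolves statically
@btime println($io_any[], "Bla", "Bla")   # abstract IO: the outer call is a dynamic dispatch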

Note,

@time println(["Bla", "Bla"])
["Bla", "Bla"]
  0.000212 seconds (32 allocations: 864 bytes)

It's an array, which isn't constrained to hold only 2 entries (some alternative fixed-size array type might be better). Vs.:

julia> @time println("Bla", "Bla") # Not strictly the same
BlaBla
  0.000092 seconds (6 allocations: 112 bytes)

julia> @time println("[Bla, Bla]") # One fewer allocations, not because I added brackets:
Bla
  0.000081 seconds (5 allocations: 80 bytes)

I tracked down what println does: it calls print, which has one fewer allocation, and so on until it calls write:

julia> @time write(stdout, "[\"Bla\", \"Bla\"]\n")  # Adding a bit more to emulate your output; that's not why allocations are reduced:
["Bla", "Bla"]
  0.000055 seconds (1 allocation: 16 bytes)

I haven’t tracked down the last allocation, but compare with:

julia> using StaticCompiler, StaticTools

julia> @time println(c"Hello, world!")
Hello, world!
  0.000021 seconds

Not that bad, but take this print_kwarg_opts of mine, which is used in a help function that prints a summary of the kwargs of GMT.jl modules. For example:

julia> gmthelp(plot)
Option: R, or region, or limits => GMTgrid | NamedTuple |Tuple | Array | String
Option: J, or proj, or projection => NamedTuple | String
Option: B, or frame, or axes, or axis, or xaxis, or yaxis, or zaxis, or axis2, or xaxis2, or yaxis2 => NamedTuple | String
Option: JZ, or Jz, or zscale, or zsize => String | Number
Option: a, or e, or f, or g, or p, or t, or w, or params => (Common options)
Option: l, or legend => (text=?(), hline=?(+D), vspace=?(+G), header=?(+H), image=?(+I), line_text=?(+L), n_cols=?(+N), ncols=?(+N), ssize=?(+S), start_vline=?(+V), end_vline=?(+v), font=?(+f), fill=?(+g), justify=?(+j), offset=?(+o), frame_pen=?(+p), width=?(+w), scale=?(+x), )
Option: f, or colinfo, or coltypes, or coltype => (Common option not yet expanded)
Option: D, or shift, or offset => Tuple | String | Number | Bool [Possibly not yet expanded]
Option: I, or intens => Tuple | String | Number | Bool [Possibly not yet expanded]
Option: N, or no_clip, or noclip => Tuple | String | Number | Bool [Possibly not yet expanded]
Option: i, or incols, or incol => (Common option not yet expanded)
Option: b, or bi, or binary, or binary_in => (ncols=?(), type=?(), swapp_bytes=Any(w), little_endian=Any(+l), big_endian=?(+b), )
Option: di, or nodata_in => nodata_in=val
Option: h, or header => (Common option not yet expanded)
Option: yx, or swap_xy => (Common option not yet expanded)

But every time I do a @report_opt on a function that uses it, it adds 4 runtime dispatches and a not-so-small stack trace. Furthermore, I’ve been noticing that the more of those we have, the larger the precompile cache file gets, and those files are not small at all.

││┌ print_kwarg_opts(symbs::Matrix{Symbol}, mapa::String) @ GMT C:\Users\j\.julia\dev\GMT\src\common_options.jl:4691
│││┌ collect(itr::Base.Generator{Matrix{Symbol}, GMT.var"#240#242"}) @ Base ./array.jl:832
││││┌ iterate(::Base.Generator{Matrix{Symbol}, GMT.var"#240#242"}) @ Base ./generator.jl:47
│││││┌ (::GMT.var"#240#242")(x::Symbol) @ GMT ./none:0
││││││┌ format(f::Printf.Format{Base.CodeUnits{UInt8, String}, Tuple{Printf.Spec{Val{'s'}}}}, args::Symbol) @ Printf C:\workdir\usr\share\julia\stdlib\v1.10\Printf\src\Printf.jl:940
│││││││┌ computelen(substringranges::Vector{UnitRange{Int64}}, formats::Tuple{Printf.Spec{Val{'s'}}}, args::Tuple{Symbol}) @ Printf C:\workdir\usr\share\julia\stdlib\v1.10\Printf\src\Printf.jl:904
││││││││┌ plength(f::Printf.Spec{Val{'s'}}, args::Tuple{Symbol}, argp::Int64) @ Printf C:\workdir\usr\share\julia\stdlib\v1.10\Printf\src\Printf.jl:864
│││││││││┌ rmdynamic(spec::Printf.Spec{Val{'s'}}, args::Tuple{Symbol}, argp::Int64) @ Printf C:\workdir\usr\share\julia\stdlib\v1.10\Printf\src\Printf.jl:306
││││││││││┌ Printf.Spec{Val{'s'}}(leftalign::Bool, plus::Bool, space::Bool, zero::Bool, hash::Bool, width::Symbol, precision::Int64, dynamic_width::Bool, dynamic_precision::Bool) @ Printf C:\workdir\usr\share\julia\stdlib\v1.10\Printf\src\Printf.jl:30
│││││││││││ runtime dispatch detected: convert(::Int64, width::Symbol)
││││││││││└────────────────────
││││││││││┌ Printf.Spec{Val{'s'}}(leftalign::Bool, plus::Bool, space::Bool, zero::Bool, hash::Bool, width::Int64, precision::Symbol, dynamic_width::Bool, dynamic_precision::Bool) @ Printf C:\workdir\usr\share\julia\stdlib\v1.10\Printf\src\Printf.jl:30
│││││││││││ runtime dispatch detected: convert(::Int64, precision::Symbol)
││││││││││└────────────────────
││┌ print_kwarg_opts(symbs::Matrix{Symbol}, mapa::String) @ GMT C:\Users\j\.julia\dev\GMT\src\common_options.jl:4707
│││┌ println(xs::String) @ Base ./coreio.jl:4
││││ runtime dispatch detected: println(%1::IO, %2::String)::Nothing
│││└────────────────────
  • If a function needs to be performant, why print from it?

  • If a function doesn’t need to be performant, why are you calling @report_opt on it?

  • Maybe you want to separate your functions into high-level wrappers that call performant kernel functions?

  • I don’t know your code, but, in a language like Julia, a “usage” function seems like the wrong approach. Either doc strings or custom exceptions are usually the better choice, I think.

  • If the runtime dispatch really bothers you for printing, you could probably write specialized printing functions based on print(::IO, ::String); see the sketch after this list.
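
To illustrate that last point, a minimal sketch (illustrative names and structure, not GMT.jl code): build the help text in an IOBuffer with concretely-typed print calls, and push it out with a single write at the end.

# Keep the per-line printing on a concretely-typed IOBuffer, so
# print(::IO, ::String) is statically dispatched, and pay for the
# abstract stdout global only once.
function print_help(lines::Vector{String})
    buf = IOBuffer()
    for l in lines
        print(buf, l)       # concrete IOBuffer: no runtime dispatch here
        print(buf, '\n')
    end
    write(stdout, take!(buf))   # a single call through the abstract stdout
    return nothing
end

print_help(["Option: R, or region, or limits => ...",
            "Option: J, or proj, or projection => ..."])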


What I am trying to do is reduce the total number of dynamic dispatches, in an attempt to reduce compile times and the size of the compiled cache files. println came up because it produces a long stack trace in @report_opt that I would rather not have in there, but otherwise it poses no performance issue. Code using it normally only triggers that call under a specific user request, not always; to JET, however, it is always visible.