Towards broadcast over combinations of sparse matrices and scalars

In #19239, @pabloferz, @stevengj and I began exploring the following approach to broadcast over combinations of sparse matrices and scalars: construct a closure incorporating the scalar arguments, and then broadcast that closure over the sparse matrix arguments.
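To make the approach concrete, here is a minimal sketch (my illustration with hypothetical names, not the code from #19239; it shows only the shape of the idea, not the sparse-result-preserving implementation, and assumes the 0.5-era sparse API with SparseMatrixCSC and speye in Base):

# Hypothetical sketch: bake the scalar arguments into a closure, then
# broadcast that closure over the remaining sparse-matrix arguments.
function scalar_capturing_bc(f, A::SparseMatrixCSC, s::Number)
    g = x -> f(x, s)        # closure capturing the scalar argument s
    return broadcast(g, A)  # broadcast now sees only the sparse matrix
end

# e.g. scalar_capturing_bc(*, speye(3), 2.0) scales a sparse identity by 2.0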

I now have inference-friendly code that constructs that closure and collects the sparse matrix arguments. But I’ve hit an impasse: either broadcast’s promotion mechanism defies inference when the broadcast operation is a closure (true for both the existing mechanism, i.e. _default_eltype-first-generator-zip, and the new mechanism, i.e. _promote_op(f, typestuple(args...)) from #19421), or (hopefully!) I’m missing something (e.g. a manifestation of #15276?).

To illustrate,

julia> foo() = let a = 1.0; Base.Broadcast.broadcast_c(x -> x + a, Array, [1.0, 2.0, 3.0]); end
foo (generic function with 1 method)

julia> foo()'
1×3 Array{Float64,2}:
 2.0  3.0  4.0

julia> @inferred foo()
ERROR: return type Array{Float64,1} does not match inferred return type Any
 in error(::String) at ./error.jl:21
 in error(::String) at /Users/sacha/pkg/julia/usr/lib/julia/sys.dylib:?

The above seems to reduce to (for the new broadcast promotion mechanism)

julia> foo() = let a = 1.0; Base._promote_op(x -> x + a, Tuple{Float64}); end
foo (generic function with 1 method)

julia> foo()
Float64

julia> @inferred foo()
ERROR: return type Type{Float64} does not match inferred return type Any
 in error(::String) at ./error.jl:21
 in error(::String) at /Users/sacha/pkg/julia/usr/lib/julia/sys.dylib:?

julia> @code_warntype foo()
Variables:
  #self#::#foo
  #1::##1#2{Float64}
  a::Float64

Body:
  begin
      a::Float64 = 1.0 # line 1:
      SSAValue(0) = Base._promote_op
      #1::##1#2{Float64} = $(Expr(:new, ##1#2{Float64}, :(a)))
      SSAValue(1) = #1::##1#2{Float64}
      SSAValue(2) = Tuple{Float64}
      $(Expr(:inbounds, false))
      # meta: location promotion.jl _promote_op 226
      # meta: location inference.jl return_type 4068
      # meta: location inference.jl Type 20
      SSAValue(4) = (Core.getfield)($(QuoteNode(Core.Box(Core.Inference.#call#110))),:contents)::Any
      # meta: location inference.jl inlining_enabled 1545
      SSAValue(3) = (Core.getfield)((Core.Inference.pointerref)((Core.Inference.cglobal)(:jl_options,Core.Inference.JLOptions)::Ptr{Core.Inference.JLOptions},1,1)::Core.Inference.JLOptions,:can_inline)::Int8
      # meta: pop location
      # meta: pop location
      # meta: pop location
      # meta: pop location
      $(Expr(:inbounds, :pop))
      return ((Core.getfield)(Core.Inference,:return_type)::Core.Inference.#return_type)(SSAValue(1),SSAValue(2),(SSAValue(4))(((Core.Inference.box)(Int64,(Core.Inference.sext_int)(Int64,SSAValue(3)))::Int64 === 1)::Bool,15,4,16,4,Core.Inference.InferenceParams)::Any)::Any
  end::Any

Note that _promote_op just wraps Core.Inference.return_type:

_promote_op(f::ANY, t::ANY) = Core.Inference.return_type(f, t)

Am I missing something? Can we fix or work around this issue somehow? (If we can’t or this approach is fundamentally misguided, I have a plan for a generic broadcast method that should provide the desired functionality. But that approach would be more code and possibly less efficient.)

Thoughts? Thanks and best!

Relevant to: Status of .+ fusion? - #7 by stevengj

Original discussion: more generic broadcast over a single sparse matrix by Sacha0 · Pull Request #19239 · JuliaLang/julia · GitHub

@sacha That sounds like #15276 to me. Unfortunately, I don’t think we can circumvent this easily for the time being; the only workaround I know for something like this is

foo() = let
    a = 1.0
    global f(x) = x + a
    broadcast(f, [1.0, 2.0, 3.0])
end

Passing a as an argument to the anonymous function also allows inference to work. But I guess you can’t do that in your application?
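For reference, a minimal sketch of that alternative (my illustration, not code from the thread): pass a as an explicit broadcast argument, so the anonymous function no longer captures anything.

foo() = let a = 1.0
    # a is an explicit argument rather than a captured variable,
    # so the function being broadcast is not a closure
    broadcast((x, y) -> x + y, [1.0, 2.0, 3.0], a)
end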

Unfortunately correct. Thanks!

This is not #15276. The return_type special case does not handle closures.

Thanks @yuyichao! Is that limitation fundamental or something that should improve?

At first glance it shouldn’t be fundamental.

Status: https://github.com/JuliaLang/julia/pull/19633#issuecomment-268158362

@yuyichao, regarding the potential Core.Inference.return_type-and-closures improvement, how much effort would that change require, and on what timescale might it occur? In other words, would putting substantial effort into working around the existing limitation be worthwhile, or might that limitation lift in, e.g., the next few months? Thanks!

Ref. Core.Inference.return_type and closures · Issue #19641 · JuliaLang/julia · GitHub