What is the future of `sin(::Vector)` when there is `sin.(::Vector)`?

I’m not sure this is always good advice. Lots of things in Base are messy because people wanted to make generic code work for scalars and vectors. It seems like it may be too late to fix that decision, but I’m not totally sure we should advocate a design pattern that depends on functions on vectors happening to also work on numbers.

I think the behaviour that `sin.(x::Number) == sin(x)` should not be seen as the result of numbers acting like vectors, but rather as following from the rule that `broadcast(f, x::Number)` should return a `Number`.

This behaviour of `broadcast(f, ::Number)::Number` would actually allow the removal of numbers-as-vectors without any obvious drawback.
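
That is already how the dot syntax behaves for a scalar argument; for example:

julia> broadcast(sin, 1.0)   # scalar argument, scalar result
0.8414709848078965

julia> sin.(1.0) == sin(1.0)
true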

More generally, this is the behaviour I would expect from broadcast:

broadcast{T}(f, ::T)                           # returns a scalar of type typeof(f(::T))
broadcast{T}(f, ::AbstractVector{T})           # returns an AbstractVector{typeof(f(::T))}
broadcast{T}(f, ::AbstractMatrix{T})           # returns an AbstractMatrix{typeof(f(::T))}
broadcast{T}(f, ::MyAbstractContainerType{T})  # should be overloaded to return a MyAbstractContainerType{typeof(f(::T))}

So the result has the same “shape” as the second argument, where the definition of “shape” is dictated by the type. E.g., if I had a ragged array type, broadcast should return a ragged array of the same shape. The shape of a number is a “scalar”, so the current behaviour is natural to me.
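
To sketch that expectation with the built-in containers (a deterministic example, so the output is reproducible):

julia> broadcast(abs2, 2.0)         # the "shape" of a Number is a scalar
4.0

julia> broadcast(abs2, [1.0, 2.0])  # Vector in, Vector out
2-element Array{Float64,1}:
 1.0
 4.0

julia> broadcast(abs2, [1 2; 3 4])  # Matrix in, Matrix out
2×2 Array{Int64,2}:
 1   4
 9  16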

@johnmyleswhite This does get to the heart of the matter. Julia should have a predilection either for considering organized multiplicities as distinguished from their constituent 1-plicities (elements), or for considering a whole as a complete part and vice versa. It would be better for Julia’s ease of adoption (unsharpening one more edge) and would allow us to advance a coherent cognitive mereotopological model [wholes, parts, melds, adjacencies, encompassing|embeddable, superveniences – that’s what it means to me]. This seems a good place not to take both forks.

Another reason for this is that, in 0.6, operators like `.+` are planned to be broadcast calls. By longstanding convention, such “dot operators” behave like the ordinary scalar operators when acting on scalars.
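
For example, the same operator is then expected to give a scalar for scalar operands and an array for array operands:

julia> 1 .+ 2        # scalar operands, scalar result
3

julia> [1, 2] .+ 3   # array operand, elementwise result
2-element Array{Int64,1}:
 4
 5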


A question to ask is: why are you trying to write a method that works for both scalars and arrays in the first place? With the new `f.(args...)` syntax, it’s easy (and more efficient) to take a scalar function and vectorize it “at the last moment” – instead of the old style of letting the vectorization flow through the code. Now that we’re moving to actively deprecate the vectorized forms of standard library functions that just apply their core scalar versions elementwise, the only case I can think of where you would want to let both scalars and arrays “flow through” the same method is something like polynomial evaluation, where the same definition is equally sensible for scalars or matrices. Something like this:

julia> p(x) = x^2 - 2x + I
p (generic function with 1 method)

julia> p(3)
4

julia> A = rand(3,3)
3×3 Array{Float64,2}:
 0.442283  0.839404  0.917475
 0.377142  0.815185  0.640569
 0.320895  0.487319  0.730314

julia> p(A)
3×3 Array{Float64,2}:
  0.922036   -0.176181   -0.221426
 -0.074485    0.662893    0.0548796
 -0.0817207   0.0478732   0.679305

But in the analogous case for the `sin` function, you’d want to call `sinm` rather than vectorized `sin` anyway.
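
For plain elementwise application, the dot call is all you need; for example:

julia> sin.([0.0, π/2, π])   # elementwise sin via broadcast
3-element Array{Float64,1}:
 0.0
 1.0
 1.22465e-16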


Why are you trying to write a method that works for both scalars and arrays in the first place?

Maybe because the most important distinguishing quality is not ‘collectivity’, probably because ‘collectivity’ is not an organizing principle of the type design in play. For example, Gestalt as an implemented type: here, a whole is more than the sum of its parts and an unsummed part is less than the whole … yet still, as a perceptual aspect, a gestalt.

All I’m saying is that writing functions that could operate on scalars or elementwise on arrays of them was always a symptom of language limitations, not an inherent need. The actual need was always to be able to efficiently vectorize scalar functions, which we can now do directly and more efficiently. So why hang on to what was always a symptom of a language limitation and not an inherently desirable feature in the first place?
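
To make the contrast concrete (an illustrative sketch; f and g are just example names):

# old style: relies on vectorized methods of sin and cos existing,
# and allocates a temporary array for each intermediate result
g(xs::Vector) = sin(xs) + cos(xs)

# new style: write the function for scalars once...
f(x) = sin(x) + cos(x)

# ...and vectorize it "at the last moment": one loop, one output array
f.(rand(1000))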


Hopefully the secret plan is to deprecate `sinm` and use `sin(::Matrix)` in its place…


I would hope the same for the matrix exponential, `expm`?


I think Jeff has indicated his desire to do both of those in the past.


That could be a longer-term plan, but it’s pretty dangerous to deviate silently from what people expect from MATLAB, Python, etc. I’m not sure it’s that necessary, since writing code for scalars or matrices that uses `exp` or `sin` is pretty rare – and you can always just use `expm` and `sinm` in those cases. Of course, getting rid of a whole parallel universe of `???m` functions would be rather nice.


My concern would be the difference between `sin.(matrix)` and `sin(matrix)`: miss the dot, and you have a bug which will be pretty hard to pin down, as both return matrices of the same size.

I can see the allure of mathematical elegance though.
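
For illustration, the existing pair of elementwise `exp.` and matrix `expm` already shows how silent such a bug would be: same shape, completely different values.

julia> A = [1.0 2.0; 3.0 4.0];

julia> exp.(A)   # elementwise exponential
2×2 Array{Float64,2}:
  2.71828   7.38906
 20.0855   54.5982

julia> expm(A)   # matrix exponential: same shape, different values
2×2 Array{Float64,2}:
  51.969    74.7366
 112.105   164.074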

@Tamas_Papp: Since dot vectorization is a syntax-level feature, you could imagine this being indicated via syntax highlighting.

Ah, that’s why `fft(::Matrix)` behaves the same as in MATLAB 😜
