Is there a defined minimal interface for a type to work with SparseArrays?

Hi!

I’ve been trying to use Symbolics + SparseArrays, but I get errors because SparseArrays checks for zeros with != 0, which doesn’t work with Num.

I learned about issue 30573, which discusses this exact problem (the use of sparse arrays with algebraic data). It talks about requiring only the minimal interface of iszero and zero to use the library, but there doesn’t seem to be a final word on it.

I need this interface to be settled, because only then could I make a PR to Julia changing the respective places to use iszero and zero and make my project work.

Any thoughts?


This should be fixed in #30580. What is your Julia version?

My Julia version is 1.6.3, but even testing my code with 1.7 (or even master) it does not work. It seems the interface of zero and iszero is accepted, but in some other places != zero and == zero are used instead of !iszero and iszero.

In this case that makes SparseArrays not work with Num, given that (if a is a symbolic variable) a != zero(a) returns an expression instead of a Bool.

MWE (Julia 1.6.3, latest Symbolics)

using Symbolics
using SparseArrays

@variables a

v = sparsevec([1], [a])

v + v # here throws an error

Right… there are some missing cases in sparsevector, I can try to address that as soon as I have some free time (feel free to submit a PR yourself though, should be easy).
Note that it does work with sparse matrices.
On a more philosophical note though, it is not so clear why iszero should help here. The current problem is that the code assumes that a != 0 is true or false, which is not the case for Symbolics (it is an expression). One could argue that iszero(a) should itself return an expression too, and maybe someone will introduce that change in the future. Note that this is not pure speculation; something similar happened with missing: at some point it was decided that iszero(missing) should return missing, but the sparse array code is not prepared for this ternary logic. Honestly it is not very clear to me how to proceed…
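
To make the problem concrete, here is a small sketch of the behaviours being discussed (per the behaviour described in this thread; the exact return types may depend on the Symbolics version):

using Symbolics
@variables a

a != 0           # returns a symbolic expression, not a Bool, so it cannot drive an if-statement
iszero(a)        # returns a plain Bool (false here), which is what the sparse code can rely on

# The ternary-logic problem with missing mentioned above:
iszero(missing)  # returns missing, not false, so `if iszero(x)` throws as well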

This is very similar to an issue that came up in https://github.com/JuliaIntervals/IntervalArithmetic.jl/issues/165 — algorithms which use if-statements simply cannot handle more general representations at the moment.

In the original issue there are some ideas about how to overcome these issues. The one I think is best is to define a nullable and isnullable (or another name) that define which elements fill the empty slots of sparse vectors and matrices, with fallbacks to zero and iszero.

This allows extending the set of supported types by defining what an empty slot in the sparse representation means, without affecting the types that already work.

Some people talked about working with Strings (or other arbitrary types) in sparse vectors. Maybe the best idea is not to define nullable and isnullable for those types, but to accept the type piracy and document this in the manual.
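
A rough sketch of what this could look like (hypothetical names and signatures following the nullable/isnullable idea above; nothing like this exists in SparseArrays today):

# SparseArrays would own these generic definitions and call them,
# instead of zero/iszero, when deciding what an empty slot holds.
nullable(::Type{T}) where {T} = zero(T)   # fallback: the empty slot is zero
isnullable(x) = iszero(x)                 # fallback: "is this the empty value?"

# A package (or, for Base types like String, a pirating user) could then opt in:
# nullable(::Type{String}) = ""
# isnullable(s::String) = isempty(s)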

What I don’t like about this solution is that one needs to make any new type aware of sparse matrices in a sense (by defining these functions), or else there will be some type piracy involved.
Honestly, I think (but I’m not sure) that the best solution could be to allow the background element to be passed at construction. This would make the code much more generic, and it would again be possible to work with sparse matrices of matrices and other types for which there’s no sensible way of defining zero. Of course, for performance the code could be fast-tracked to the normal path when the background element is zero for standard numerical types. The additional cost on standard types would be just an if statement, which should be negligible.
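
As a purely hypothetical sketch of what "background element at construction" could mean (made-up type and field names, not an existing SparseArrays API):

struct BackgroundSparseVector{Tv,Ti<:Integer}
    n::Int              # logical length
    nzind::Vector{Ti}   # indices of the stored entries
    nzval::Vector{Tv}   # stored values
    b::Tv               # background (unstored) element, not necessarily zero(Tv)
end

# The "is this the unstored value?" test compares against the stored background;
# for standard numeric types with a zero background this could be fast-tracked to iszero(x).
isbackground(A::BackgroundSparseVector, x) = isequal(x, A.b)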

Passing the underlying value seems like a better option, but I don’t see how it would work with Symbolics, given that they overwrite ==.

They overwrite isequal so as to give the “correct” answer to isequal(a, zero(t)) (given that a is a symbolic variable). Also, in the generic case isequal calls ==, so there is no need to worry about defining isequal.

Still, there is the problem that == and isequal do not follow the same semantics for floating point (isequal does not follow the IEEE 754 standard), but maybe that is not a problem given that we just care about equality in a strong sense (I guess) against a single value (I’m thinking here about matrices of floating-point numbers and the like).

Agreed, it would be impossible to do without a breaking change. Another drawback would be that for some types iszero(x) could be more efficient than isequal(x,x0). Still… I would be very much in favor of doing this…

I don’t think it would be a big breaking change given that, semantically, NaN != 0.0 already holds, so it would not change the semantics much.

Your second point is more worrisome. Take the case of M::Matrix: zero(M) returns a zeroed matrix of the same size, so it is a lot more costly to do isequal(M, zero(M)) than iszero(M).
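
A small illustration of the cost argument (ordinary dense Matrix, no sparse or symbolic types involved):

M = zeros(1000, 1000)

iszero(M)             # scans M once and allocates nothing
isequal(M, zero(M))   # first allocates a second 1000×1000 matrix, then compares elementwise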

An even more general approach is in

https://github.com/JuliaLang/julia/issues/41036

I need to use sparse arrays with an unstored element that is not 0. There is a chance to clean this up conceptually and make SparseArrays more flexible. The idea is based on introducing the following functions:

neutral(x) = zero(x)                                        # the non-stored element; defaults to zero
isneutral(x) = iszero(x)                                    # is x the non-stored element?
neutrals(::Type{T}, args...) where T = zeros(T, args...)    # array filled with the non-stored element

Then only neutral, etc. would be called in SparseArrays, never zero. These functions could be defined in SparseArrays and methods added in other packages for types when necessary. If neutral were more broadly useful, it could be moved elsewhere.
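
For example, a package whose unstored element is a multiplicative identity rather than an additive zero could then opt in roughly like this (hypothetical placeholder types; neutral/isneutral are the proposed functions above, not part of SparseArrays):

abstract type MyOp end        # placeholder for an operator type
struct IdOp <: MyOp end       # the multiplicative identity
struct XOp  <: MyOp end       # some non-identity operator

neutral(op::MyOp) = IdOp()    # the unstored element is the identity, not zero
isneutral(op::MyOp) = op isa IdOp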

The fact that Symbolics returns an expression where everyone else returns a Bool is a problem with the Symbolics model of intimately mixing a symbolic language with a numerical one. It comes up in other places. I never took the time to think about it enough to open an issue somewhere. But it leads to hacks like this:

https://github.com/jlapeyre/QuantumOps.jl/blob/5f511c2da09f05cc6ffe7e972d2e8a1060740e69/src/QuantumOps.jl#L65-L70

Even if symbolic expressions are a minor use case in a library, we have to make obtrusive hacks to support them. There may be a satisfactory solution; I have not yet looked for one.

Yeah, that is one of the main causes of this problem. But it is also really useful to have it. :v

I like this solution; I think it is the most general one and could be enforced as an interface.


NaN != 0.0 was never a problem, I think (NaN == 0.0, iszero(NaN), and isequal(NaN, 0.0) are all false). The problem is e.g. that isequal(-0.0, 0.0) is false but iszero(-0.0) is true. So this would be a breaking change.
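
Spelled out, the cases mentioned are (standard Julia semantics, no Symbolics involved):

NaN == 0.0            # false
iszero(NaN)           # false
isequal(NaN, 0.0)     # false  -> NaN was never an issue

isequal(-0.0, 0.0)    # false
iszero(-0.0)          # true   -> switching the test to isequal would change behaviour here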

For the second part, note that we would not do isequal(M, zero(M)), we would do isequal(M, A.b) where A.b would be the stored background value. Still, I agree that iszero could be more efficient.


You’re right, that is a problem.

This is the same as, or very similar to, @suavesito’s nullable/isnullable solution above, right? I.e. define a parallel zero/iszero system, almost always identical to zero/iszero except for some corner cases. I don’t like it so much, for the reasons explained above; namely, that this system needs to be maintained. The creator of any new type will need to be aware of the SparseArrays library and define these functions. Otherwise, a user of the new type would have to resort to type piracy (adding a method to a SparseArrays function for a type defined in NewType) in order to define it…

But then, what is the real difference in semantics between isneutral and iszero?

Note that Symbolics does not really seem to be a problem right now (modulo some overdue corrections to sparsevector.jl), because we don’t (shouldn’t) use == 0 in the sparse arrays code; we use iszero, and iszero(a::Num) returns a Bool. But we do have a problem with missings, because iszero(missing) is missing.

I don’t have details of the nullable - isnullable solution, so I can’t really say. I’m not sure if they are addressing exactly the same problem. I say this because I don’t understand how the questions of type piracy and maintenance burden arise.

Regarding the isneutral proposal, I see only advantages and no disadvantages:

  1. No type piracy
  2. No burden on anyone to think about or maintain anything if they don’t want their type to be usable with SparseArrays.
  3. An opportunity for some types to be supported by SparseArrays where before there was no chance.
  4. No efficiency penalty.
  5. Little increase in complexity. Arguably a decrease in complexity, because it adds clarity to the semantics of the special element.

Regarding (5): currently, one finds zero, iszero, 0, etc. in the SparseArrays code. Some of these appearances refer to the neutral element, some do not. Those that do refer to the neutral element are not used uniformly (e.g. sometimes 0, sometimes zero). With my proposal, all references to the neutral element, and only those, use neutral and isneutral.

Note that I am glossing over the difference between == and isequal. How much of a breaking change this would be is not clear to me. I’d argue if at all possible (with regards to breakage), the semantics should be cleaned up.

My understanding of iszero is that it asks whether a value is zero in an algebraic sense: is it the additive identity (in a group) of a type, or, more broadly, does it play the role of zero in a field (i.e. multiplying by it sends all elements to zero), or is it the zero vector. By contrast, isneutral asks only whether the value is the non-stored value in a sparse array, and says nothing about its algebraic properties. Much of the linear algebra code assumes both the algebraic properties and the non-stored value property. So only part of the SparseArrays interface would be supported by isneutral in general: the part that does not rely on the algebraic properties of zero. Using isneutral would make it clear (and require that the coder be clear about) when only the non-stored property is important.
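
To make the storage-vs-algebra distinction concrete, here is a rough sketch (made-up helper names, not the actual SparseArrays code):

# Storage-only: looking up an element just needs to know the non-stored value bg.
function getvalue(nzind, nzval, bg, i)
    k = searchsortedfirst(nzind, i)
    return (k <= length(nzind) && nzind[k] == i) ? nzval[k] : bg
end

# Algebraic: skipping the unstored entries of a sparse vector in a dot product
# silently assumes that the unstored value times anything contributes nothing,
# i.e. it relies on the algebraic properties of zero.
function sparsedot(nzind, nzval, y)
    s = zero(eltype(y))
    for k in eachindex(nzind)
        s += nzval[k] * y[nzind[k]]
    end
    return s
end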

In the issue that I linked above is a link to a repo where I have copied the sparse arrays code and implemented the isneutral approach.
https://github.com/jlapeyre/SparseArraysN.jl

The sparse array code should use only zero and iszero (this is true for SparseMatrixCSC, for some reason not true for SparseVector but easily fixable – all uses of == 0 should be replaced by iszero).

The isneutral method has to be defined together with the type, so the neutral element must be some special element of that type. Then why not the additive 0? (If addition is not defined for your type, then defining iszero for it should not be a problem.)

1 and 2) What I meant by type piracy is that it forces the creator of a type (that in principle has nothing to do with sparse stuff) to define a method explicitly designed to work with sparse arrays. If they don’t do it, then a third party that wants to use the type with sparse arrays needs to do it through type piracy… In my opinion it’s bad interoperability to have this crosstalk between libraries.

  3. Which ones? I can only think of Union{T,Missing} ATM. My take is that it is a bug in iszero(missing).

  4. OK.

  5. I don’t think it is much clearer (it actually adds another not-well-defined concept).

Honestly, I don’t see what neutral/isneutral solves better than fixing the remaining iszeros in the SparseArrays library.

On the other hand, storing the element would really enlarge the range of types (and my guess is that the penalty would be negligible for numerical types).

I still don’t understand. If I author a package defining a type, why am I obliged to write methods from the outset to ensure that it works with various other packages?

I don’t see how this is different from the generic situation in the Julia ecosystem: I write package A with TypeA. Then the author of package B wants to use TypeA with another package C. But, package A lacks some necessary method definitions to be used with C. So all parties have to work together to find a solution (who “owns” a function? do we use “glue” packages?, etc.). It is a well-known problem that will probably become more difficult as Julia grows. But, it doesn’t seem to me to be unique to the problem at hand. I suspect there is something particular about the missing case that I don’t understand.

I argue that it separates a confusion between two concepts and adds a precisely defined concept. In the sparse code, zero and 0 are sometimes used to refer to algebraic properties, sometimes to the non-stored value, and sometimes to both. If we use neutral, then always and only when you see neutral in the sparse code it is referring to the non-stored value. When you see zero or 0 it is referring to an algebraic property and not to the non-stored value (probably exclusively, I don’t recall exactly). You can see this by actually doing it, as I did. Go through all the sparse code implementing the isneutral scheme and you find that sometimes you have to replace zero by neutral and sometimes not. As a byproduct, you have separated the semantic concepts.

I think this is clear to me and fuzzy to others because I have a concrete example in mind. I have elementary operators AbstractOp that can be combined by tensor products to produce compound operators. Often, almost all of the factors in the product are the multiplicative identity (not zero, the additive identity). I could write my own sparse storage code, but this would involve unnecessary duplication, debugging, and optimization.

Here is the implementation for AbstractOp

The other half of this implementation is rewriting SparseArrays to support neutral. Here are several (not all) of the replacements of zero with neutral.

One small illustrative example from the above diff is

count(f, x::SparseVector) = count(f, nonzeros(x)) + f(neutral(eltype(x)))*(length(x) - nnz(x))

This does not depend on the algebraic properties of zero.
Note that I have not renamed any functions defined in SparseArrays, so nonzeros really means nonneutrals. But what does the function nonzeros do? Is it referring to algebraic or storage properties, or both? Reading the docstring, you see it refers to “structural nonzero” values. So nonneutrals would be a more precise and less confusing name. I’m not suggesting changing these function names, as that would be a breaking (and very disruptive) change. However, it would be a better design choice, should the question arise in the future.
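
As a usage note, with the standard SparseArrays (where the neutral element is simply zero) the same formula gives, for example:

using SparseArrays
v = sparsevec([2, 5], [3.0, -1.0], 10)
count(iszero, v)      # 8: no stored value is zero, plus the 8 unstored slots
count(x -> x < 0, v)  # 1: only the stored -1.0; the unstored value 0.0 is not < 0

Since f(neutral(eltype(x))) is a Bool, multiplying it by the number of unstored slots adds them exactly when the predicate holds for the neutral element.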

Note that third parties are not required to use neutral. They should, but don’t have to. It’s akin to using for i in 1:length(a) rather than for i in eachindex(a). If you use zero and iszero in your third party code, then it will still work for floats or whatever is already supported by SparseArrays.

How easy is it for sparse array libraries in other languages to support different non-stored values? It’s easy in Julia and is a great opportunity to showcase the strengths of the language.

I understand that the change is not hard; I actually went through more or less the same changes replacing == 0 with iszero in #30580 (apparently some changes still need to be done in SparseVector; I think the reason is that the PR went a bit stale before getting merged and I was not around).
Sorry, I did not check SparseArraysN: do matrix addition and multiplication work correctly if the type has both + and * defined?
BTW, I would love to have a really arbitrary non-stored value (one that could change from instance to instance), just not one fixed arbitrarily with the type.

No. So there are a lot of linear algebra functions that will give the wrong answer without new methods. In my example of AbstractOp above, I had to define a multiplication function. So why bother? There is a lot of functionality that does not depend on the algebraic properties of the non-stored element, and you would have to reproduce this if you start from scratch. I suppose if you wanted to do this really cleanly, you would have to separate the code in SparseArrays into the parts that do and do not depend on the algebraic properties of zero.

Yes, I support this kind of improvement. There are a few important related things, for example moving SparseArrays to a standalone package. In my repo above I note some of the problems (there are issues in the main Julia repo about this as well). Namely, SparseArrays defines methods on Base-only types, i.e. commits piracy. These need to be moved elsewhere. It would be much easier to experiment with these design ideas if SparseArrays could be cleanly separated.

Exactly. This is perhaps an argument against accepting an isneutral PR. I only thought about it a little; it seems much more complicated to implement, especially in an efficient way.