Add varargs methods of `any`, `all`, and `count`

OR and AND are bool-only versions of max and min. Likewise, any and all are bool-only versions of maximum and minimum.
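This correspondence already holds in today's Julia and is easy to check (a small illustrative snippet, not from the original post):

```julia
# On Bools, max/min coincide with OR/AND, and maximum/minimum with any/all:
@assert max(true, false) === (true | false)   # OR
@assert min(true, false) === (true & false)   # AND
@assert maximum([true, false, true]) === any([true, false, true])
@assert minimum([true, false, true]) === all([true, false, true])
```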

any and all cannot be applied to collections of collections naively. Doing so requires a specific overload, which is of course the point of this thread. In exactly the same way, the maximum of a collection of collections could be the maximum of the tuples generated by zip. If they’re all equal in the tuples[1] case, move on to tuples[2], …
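For concreteness, here is a minimal sketch of the zip-based vararg semantics under discussion; `all_vararg` is a hypothetical name, not an existing Base method:

```julia
# Hypothetical vararg `all`: apply f across the zipped collections, then
# reduce with AND. Base.splat(f) turns f into a tuple-accepting function.
all_vararg(f, iters...) = all(Base.splat(f), zip(iters...))

@assert all_vararg(==, [1, 2, 3], 1:3)        # elementwise equality holds
@assert !all_vararg(<, [1, 2, 3], [2, 3, 3])  # fails at 3 < 3
```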

I’m also concerned about the fact that Bools are currently collections, which makes this interface harder to understand.

On the contrary, I feel this way most naturally. Perhaps I love broadcasting sooooo much!


Is there a reason LazyArrays are so slow? It seems like a consistent ~2x penalty, which I wouldn’t have thought would be necessary:

```julia
julia> @btime all(a .== b) setup=(a=[1:3;]; b=1:3)
       @btime all(==, a, b) setup=(a=[1:3;]; b=1:3)
       @btime all(@~ a .== b) setup=(a=[1:3;]; b=1:3)
  30.652 ns (2 allocations: 96 bytes)
  3.600 ns (0 allocations: 0 bytes)
  8.216 ns (0 allocations: 0 bytes)
true

julia> @btime all(a .== b) setup=(a=[1:1000;]; b=1:1000)
       @btime all(==, a, b) setup=(a=[1:1000;]; b=1:1000)
       @btime all(@~ a .== b) setup=(a=[1:1000;]; b=1:1000)
  691.111 ns (3 allocations: 4.41 KiB)
  406.736 ns (0 allocations: 0 bytes)
  710.526 ns (0 allocations: 0 bytes)
true
```

Hm, I hadn’t thought of it from a mathematical perspective; I’d been thinking from a linguistic perspective. It’s pretty common to say, “all of these should equal all of those” (while performing some hand-waving ritual to indicate the collections), so having to say instead “take these and those, zip them, and splat them into an equality test” feels mildly awkward.

Mathematically though, my instinct is that or and and are more similar to addition and multiplication, so if anything, any and all might be analogous to sum and prod. I’ll have to think more about the implications of this.

(max and min are instead L_\infty and L_{-\infty} norms respectively)

Can you explain what you mean by this?

Because true is iterable, you can do any(f, [false, true], false). I'm not sure what the implications of this are. One is that it could hide bugs like any(f, x, y, z) where z is accidentally a Bool instead of a collection of Bools.
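To make the hazard concrete (a quick illustration, not from the original post), a Bool really does behave as a length-1 collection of itself:

```julia
# Numbers, including Bool, iterate as single-element collections:
@assert length(true) == 1
@assert first(true) === true
@assert any(identity, true)   # no MethodError, despite `true` not being a "real" collection
```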

I enjoyed looking at these “unified algebra” essays 1, 2 that discuss related issues. They treat booleans more like the binary version of the numbers -inf/+inf than like 0/1.

Why not? That is the most natural and obvious thing to me. I also like this for clarity and performance:

```julia
julia> all(a == b for (a,b) in zip([1,2,3], 1:3))
true
```

Because it screams “Slow!” so loudly it splits my ears. It’s unbearable. A lazy broadcast is possible, but would take some getting used to.

While this is technically true, they are not good as names for any and all. It also seems rather tangential to the OP’s proposal.

Better. It’s just quite verbose, and I’ve never liked the variable repetition that is necessary.


(On-topic:)

This unified algebra has some major differences from Julia’s semantics that I don’t see a way to resolve anytime soon. For example, his notion of a function tightly packages the domain with the expression that operates on it—in some ways it’s conceptually closer to a Julia Generator than to a Function, a fact which he leverages aggressively. He also expresses multiple-argument functions via currying. Both of these are difficult to fit into Julia’s multiple-dispatch paradigm, and currying is incompatible with vararg functions altogether.

That said, even though he uses \uparrow (max) and \downarrow (min) for \vee (|) and \wedge (&), I still see the OP proposal as not opposed to his algebra (aside from the currying thing, of course).

Instead of writing \sum_{x\in D}f(x) and \forall x\in D, p(x), he promotes overloading binary operators such that when taking a function as their argument they mapreduce over its domain—so we would instead write +\langle x : D\rightarrow f\,x\rangle and \downarrow\langle x : D \rightarrow p \, x\rangle. In-so-doing, by ejecting the former notations from algebra, he relegates them to informal (or at least, less-formal) uses such as common language—where we are free to do as we wish.

In other words, because instead of sum(f, D) and all(p, D) he promotes something more akin to mapreduce(f, +, D) and mapreduce(p, min, D), by his proposal the former functions become somewhat untethered from the rigors of algebraic discipline and are free to take on common-language behaviors outside of algebraists’ jurisdiction. So he should likely offer no objection to vararg methods of any and all—he shouldn’t care.
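Spelled out in Julia (an illustrative sketch; min stands in for his \downarrow, since min is AND on Bools):

```julia
D = 1:5
f(x) = x^2
p(x) = x > 0

# sum/all as the common-language names; mapreduce as the algebraic core:
@assert sum(f, D) == mapreduce(f, +, D)     # 55 either way
@assert all(p, D) == mapreduce(p, min, D)   # min plays the role of AND
```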

(Off-topic rant:)

Wow, I wish I had seen these essays years ago; thanks for introducing me to them! This is a refreshing take on how algebra should be formulated, especially (for me) Boolean algebra. From the start, I disliked Boolean algebra—learning it in college was a chore because it felt unintuitive and clumsy; somehow it felt like the professor must be doing something wrong. Since then I learned to accept and embrace it, but that’s likely just a symptom of learned helplessness and Stockholm syndrome.

The proposed binary algebra, using \top for \infty and truth, \bot for -\infty and falsity, and \uparrow and \downarrow for \vee and \wedge, is much more intuitive (and the operator precedences make more sense, considering their distributivity). To an analog circuit designer, it’s basic knowledge that making a binary signal out of a continuous one requires infinite gain, so it should’ve been obvious. And for anyone who’s uncomfortable with infinities, it should also cause discomfort with the idea of attainability of certain truth—a development which I think is a net positive. I also find myself attracted to the idea of obsoleting Kleene algebra, and a unified syntax for sets, unions, intersections, and the like.

I would like to learn more about this algebra. For example: How does he propose to write integrals? How shall I visualize a binary probability mass function? What all has been reimagined from this perspective in the world of probability? It’s almost a complete rewrite of mathematics from the ground up, so there’s a lot to think about. I want to see some familiar derivations (of, say, some Fourier transforms) rewritten with its syntax.

Can I entice you to start a thread imagining a future Julia that incorporates some of these ideas? Perhaps a Julia 2.0 could throw InexactErrors when attempting to convert an Int to a Bool, and have Base.isless(::Integer, b::Bool) = b ? true : false? Treating true as numerical 1 does feel like a C-ism that maybe should be a relic of a distant past when compilers weren’t as smart, same as 0-based indexing (:wink:). And reducing functions like Base.:+(g::Generator) and Base.:↑(g::Generator) are pretty easy to do.
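Those reducing methods really are only a line each. Sketched here with local names rather than pirating Base (`Σ` and `up` are hypothetical stand-ins for his + and \uparrow on generators):

```julia
# Hypothetical reducing functions over generators; local names, since
# adding methods to Base.:+ for Base.Generator would be type piracy.
Σ(g::Base.Generator) = reduce(+, g)
up(g::Base.Generator) = reduce(max, g)   # ↑ in Hehner's notation

@assert Σ(x^2 for x in 1:3) == 14
@assert up(x > 1 for x in 1:3) === true
```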

Also: I have a newborn daughter. Do you know how to find schools that teach this algebra, so that by the time she reaches puberty she’s well-versed in Hehner’s binary algebra? She shouldn’t have to suffer through the insults of Boole.
