Comments on "Transducers & Effects – Mike Innes"

In a Slack channel, Ari Katz pointed me to @MikeInnes’s blog post Transducers & Effects.

It’s nice that there is more attention on transducers! So, I wanted to write some comments somewhere, but I didn’t want them to get lost in the unsearchable Slack history. Discourse seems to be a good place for this, although the post itself does not mention Julia at all :slight_smile:

Anyway, first, let me comment on the limitations of transducers mentioned in the post. I think there are a few misconceptions (some of them common ones, I think), partially due to conflating transducers as an idea with transducers as implemented in Clojure:

  • For example, there is nothing stopping you from implementing stateful transducers using pure functions. That’s done in C++, and that’s how I implemented them in Julia.

  • I think the double-wrap problem is just a problem with the Clojure implementation. IIRC, Rich Hickey commented somewhere that he wished he had never done that. (So I avoided this in my implementation.)

  • Implementing a transducer-compatible foldl is not hard. It’s actually much easier than implementing iterate. With a bit of syntax sugar, it’s as easy as Python’s yield. See:
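To make the first point concrete, here is a sketch in Python (standing in for the Julia/C++ implementations mentioned above) of a “stateful” transducer written with pure functions only: the private state is threaded through the accumulator rather than hidden in a mutable closure. The names `take`, `foldl_xf`, etc. are mine, for illustration.

```python
# A "stateful" transducer (take n) built from pure functions: the
# counter lives inside the accumulator, not in mutable closure state.
from functools import reduce

def take(n):
    """Transducer: pass through at most n items, with no mutable state."""
    def transform(step):
        def new_step(acc_and_count, x):
            acc, count = acc_and_count
            if count < n:
                return (step(acc, x), count + 1)
            return (acc, count)         # past the limit: leave acc unchanged
        return new_step
    return transform

def foldl_xf(xf, step, init, xs):
    """A foldl that understands transducers; the private counter starts at 0."""
    acc, _ = reduce(xf(step), xs, (init, 0))
    return acc

result = foldl_xf(take(3), lambda acc, x: acc + x, 0, [10, 20, 30, 40, 50])
# sums only the first three items: 10 + 20 + 30 = 60
```

(A real implementation would also support early termination, à la Clojure’s `reduced`, instead of skipping the remaining items; that is omitted here for brevity.)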

One thing I wish had been mentioned as a limitation of the foldl approach (≈ “transducer approach”) is that it does not compose “horizontally” (terminology I just made up), in the sense that you can’t write zip with it. OTOH, it’s very easy to write zip given iterate. (Generally, some things that are hard with foldl are easy with iterate, and vice versa.)
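To illustrate the asymmetry, here is a sketch in Python, whose iterator protocol plays the role of Julia’s iterate: zip is a few lines of pull-based code, whereas given only each collection’s foldl there is no way to pause one fold to advance the other.

```python
# zip is straightforward when each source supports step-at-a-time
# iteration (next/StopIteration standing in for iterate/nothing).
def my_zip(xs, ys):
    it_x, it_y = iter(xs), iter(ys)
    while True:
        try:
            x = next(it_x)
            y = next(it_y)
        except StopIteration:
            return              # either source is exhausted: stop
        yield (x, y)

print(list(my_zip([1, 2, 3], "abc")))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```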

It’d be very interesting to see how zip would be handled by the effect handler framework.

In general, I’m skeptical that you can do something essentially different from iterate and foldl in this space. They are pretty minimal and they are duals of each other (see Catamorphism - Wikipedia). To emphasize this, let me comment on the first sentence (emphasis mine):

Clojure has introduced a very interesting idea called ‘transducers’, which decouple sequence transformations (mapping, filtering etc) from sequence implementations like vectors, lazy lists and channels.

I actually wouldn’t say “decoupling sequence transformations” is the defining property of transducers. Transducers transform reducing step functions. It’s also possible to define what I call iterator transforms, which transform iterators (this is just a curried form of what we have in Iterators; e.g., itr -> Iterators.filter(p, itr) is an iterator transform). So, data-independent composition of the transformations can also happen on the “iterator side.”
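The distinction can be sketched in a few lines of Python (names are mine, for illustration): the same data-independent transformation packaged once as an iterator transform and once as a transducer.

```python
# Two data-independent ways to package "keep the even numbers":
# as an iterator transform (itr -> itr) and as a transducer
# (a transform of the reducing step function).
from functools import reduce

def filter_it(pred):           # iterator transform, like Iterators.filter
    return lambda itr: (x for x in itr if pred(x))

def filter_xf(pred):           # transducer: step -> step
    def transform(step):
        return lambda acc, x: step(acc, x) if pred(x) else acc
    return transform

is_even = lambda x: x % 2 == 0
xs = [1, 2, 3, 4, 5]

print(list(filter_it(is_even)(xs)))                              # [2, 4]
print(reduce(filter_xf(is_even)(lambda a, x: a + [x]), xs, []))  # [2, 4]
```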

My impression is that the effect handler is essentially syntax sugar for defining iterate (or, in some cases, an iterator transform). I do agree this direction is very powerful, though. This is because it’s just so hard to create a state machine by hand, and it does not perform well (at least with the current Julia/LLVM compiler). That’s why I’m exploring something similar in GeneratorsX.jl (based on Mike’s IRTools.jl!).


Nice! GeneratorsX indeed looks pretty similar to what I had in mind. The main motivation for into is that if you know what data structure you’re producing, you can avoid the state machine in some cases (e.g. building an array via collect). I see that GeneratorsX does that by producing both iterate (state machine) and foldl (loop). This is quite a different way of thinking about it, but the core idea is the same: use the same syntax to define a sequence which can either eagerly (and efficiently) produce values upfront, or lazily produce them on demand.
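A minimal Python analogy of that duality (GeneratorsX.jl generates the real state machine from one @generator body; here the two flavors are simply written out by hand):

```python
# The same "yield"-style body consumed two ways: passing a step function
# in (foldl flavor) runs one plain loop, while a Python generator
# (iterate flavor) suspends between elements.
def squares_foldl(step, acc):
    # foldl flavor: an ordinary loop, easy for a compiler to optimize
    for i in range(3):
        acc = step(acc, i * i)
    return acc

def squares_iterate():
    # iterate flavor: the suspendable state-machine version of the same body
    for i in range(3):
        yield i * i

print(squares_foldl(lambda a, x: a + [x], []))  # [0, 1, 4]
print(list(squares_iterate()))                  # [0, 1, 4]
```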

The key difference is whether this decision is driven syntactically (the choice of foldl vs. iterate in code) or by dispatch (doing the right thing for the data structure). I think the dispatch version would be totally doable in a GeneratorsX-like setting. (Some of the syntax in the blog relies on language-level effects, but I think you can mostly get by without that.)

Here’s how I’d write zip:

fn zip(xs, ys) {
  into empty(xs) {
    while true {
      x = next(xs)
      y = next(ys)
      yield((x, y))
    }
  }
}

(next is an abstract stand-in for the iteration protocol; of course you’d want to check for nothing and break out of the loop, etc. This is how for loops would expand, too.) One thing I didn’t mention in the post is that in the case of multiple inputs you probably want a promotion system of some kind (e.g. if xs and ys are both arrays, make an array; if ys is a channel, produce an iterator or channel; etc.)
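A toy Python sketch of such a promotion rule (the name `zip_into` and the rule itself are hypothetical illustrations, not a real API): if every input is a concrete array, build an array eagerly; if any input is lazy (a generator standing in for a channel), stay lazy.

```python
# Toy promotion for multiple inputs: both arrays -> array;
# anything lazy in the mix -> lazy iterator.
import types

def zip_into(xs, ys):
    pairs = ((x, y) for x, y in zip(xs, ys))
    if isinstance(xs, list) and isinstance(ys, list):
        return list(pairs)     # eager: both inputs are arrays
    return pairs               # lazy: at least one input is a stream

print(zip_into([1, 2], [3, 4]))                   # [(1, 3), (2, 4)]
lazy = zip_into([1, 2], (y for y in [3, 4]))
print(isinstance(lazy, types.GeneratorType))      # True
```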


It’s nice to know we arrived at similar solutions from very different angles :tada:

The corresponding API I use in BangBang.jl (one of the spin-off/dependency libraries of Transducers.jl) is append!!(dest, src) -> dest′. By default, it is driven by the foldl of src. But it’s possible for dest to take control and drive the iteration of src via iterate.
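A rough Python sketch of that control-flow inversion (BangBang.jl’s actual mechanism is Julia dispatch; the class and method names here are invented for illustration): by default the source’s foldl pushes items into the destination, but a destination type can override this to pull from the source via iteration, e.g. to stop early.

```python
# append!!(dest, src) -> dest': src-driven by default (foldl pushes),
# but dest can take control and pull via the iteration protocol.
def foldl_src(step, init, src):
    acc = init
    for x in src:               # stand-in for src's own foldl
        acc = step(acc, x)
    return acc

class VectorDest:
    def __init__(self):
        self.items = []
    def append_from(self, src):                 # src-driven (default)
        def step(dest, x):
            dest.items.append(x)
            return dest
        return foldl_src(step, self, src)

class BoundedDest(VectorDest):
    def __init__(self, limit):
        super().__init__()
        self.limit = limit
    def append_from(self, src):                 # dest-driven: pulls, stops early
        it = iter(src)
        while len(self.items) < self.limit:
            try:
                self.items.append(next(it))
            except StopIteration:
                break
        return self

print(VectorDest().append_from([1, 2, 3]).items)    # [1, 2, 3]
print(BoundedDest(2).append_from([1, 2, 3]).items)  # [1, 2]
```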

I’d even say Julia is already partially doing this, as of Transducer as an optimization: map, filter and flatten by tkf · Pull Request #33526 · JuliaLang/julia. There, foldl dispatches on the lazy transformed iterator types and applies the corresponding transducers to the reducing step function. OK, this still works only with functions that call foldl somewhere in their call chain. However, it should be possible to lower for x in xs to foldl, as I demonstrated in FLoops.jl, so that the native for loop syntax is lowered to appropriate code via the dispatch mechanism.
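The lowering can be mimicked by hand; roughly, an accumulating loop becomes a call to foldl with the loop body extracted as the step function (a Python sketch of the idea, not FLoops.jl’s actual output):

```python
# Hand-written illustration of lowering a for loop to foldl: the loop
# body becomes the reducing step function, so a data structure (or a
# transformed iterator) can supply its own optimized foldl instead.
from functools import reduce

# original loop:
#   acc = 0
#   for x in xs:
#       acc += x * x

def step(acc, x):          # the loop body, extracted
    return acc + x * x

xs = range(1, 5)
acc = reduce(step, xs, 0)  # the loop, re-expressed as foldl
print(acc)                 # 1 + 4 + 9 + 16 = 30
```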

What I wasn’t sure about was whether there might be some magical way to do it directly with handle. But the effect handler seems to need a corresponding scope, and using next(xs) looks like a clean solution to me.


If I understand this correctly (and I’m new to transducers / streaming libraries), I think this is solved by streamly? It seems to strike a nice balance of high performance / operator fusion, while also permitting zips, concurrency, and reactive programming. I haven’t used it myself, but it seems to be well thought-of in the Haskell community. For example, here’s some code that demonstrates zipping and parallelism within a stream:

import Streamly
import qualified Streamly.Prelude as S

main = do
    s <- S.sum $ asyncly $ do
        -- Each square is performed concurrently, (<>) is concurrent
        x2 <- foldMap (\x -> return $ x * x) [1..100]
        y2 <- foldMap (\y -> return $ y * y) [1..100]
        -- Each addition is performed concurrently, monadic bind is concurrent
        return $ sqrt (x2 + y2)
    print s

I’m not sure how much of its optimizations are GHC-specific, but perhaps there are lessons that are applicable to Julia too.

Streamly employs two different stream representations (CPS and direct style) and interconverts between the two to get the best of both worlds on different operations. It uses both foldr/build (for CPS style) and stream fusion (for direct style) techniques to fuse operations.


This is extremely interesting. It seems like a very general and composable set of abstractions for writing safe and predictably parallelizable programs with all kinds of control flow, without worrying about contorting algorithms into maps, broadcasts, scans, etc.

This is “just write a loop,” but for parallel hardware and AD.

User-defined effects are first class, so you can write a safe parallel RNG effect and compose it with other effects in loops.
