Transducers, parallelism and feedback loops

After watching the JuliaCon talk on Transducers.jl, I started to wonder about cases where iterators are not optimal. One instance that seems well suited for transducers is a system with a feedback loop.

Let’s consider an intelligent algorithm (or a person ;D) which, by learning the unknown function f(x) from its evaluations, schedules optimal points at which to evaluate the function next. The algorithm could also take unresolved (not yet evaluated) points into account when scheduling new ones, thus enabling parallelism [1].
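
For concreteness, here is a minimal mock of such a Learner, so the snippet below has something to run against. The names and behaviour (ask! just counts up; npoints and unresolved inspect the bookkeeping) are my own placeholders, not an existing package API:

# Hypothetical minimal Learner: schedules points 1, 2, 3, ... and
# records (x, f(x)) pairs as they are reported back via tell!.
mutable struct Learner
    asked::Int
    results::Dict{Int,Any}
end
Learner() = Learner(0, Dict{Int,Any}())

ask!(l::Learner) = (l.asked += 1)                  # next point to evaluate
tell!(l::Learner, (x, y)) = (l.results[x] = y; l)  # record a result
npoints(l::Learner) = l.asked                      # points scheduled so far
unresolved(l::Learner) = l.asked - length(l.results)

# Make the Learner iterable (ask! = iterate) so it can feed a Channel.
Base.iterate(l::Learner, state = nothing) = (ask!(l), nothing)
Base.IteratorSize(::Type{Learner}) = Base.IsInfinite()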

So far, while learning Transducers.jl, I came up with this code:

using Transducers

l = Learner() # state of the intelligent algorithm; it has ask! (= iterate) and tell! methods

jobsch = Channel(TakeWhile(x -> begin
    while unresolved(l) > 4   # wait while too many points are unresolved
        sleep(1)
    end
    npoints(l) < 10           # keep scheduling until 10 points are taken
end), l)

resch = Channel(Map(x -> (x, x^2)), jobsch) # evaluate f(x) = x^2 on each scheduled point

foreach(Map(identity), resch) do input
    @show input
    tell!(l, input)  # feed the result back to the learner
end

It is unclear to me how the evaluation from jobsch to resch happens. Does Transducers.jl create a separate loop for this transformation? If so, is it threaded? How can I execute it in parallel instead?

I also wonder if there is a better way to express the waiting, which is currently done with that dirty while loop.

[1]: Some time ago, I implemented this model with iterators in TaskMaster.jl.

Unfortunately, telling foldl or reduce functions “what to do next” is impossible with transducers. The only possible interaction is reduced, which is something like break in a for loop. It may be possible to extend transducers to implement feedback loops, but nailing down a class of possible feedbacks with good compositionality sounds like a very challenging task.
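
As a minimal sketch of reduced acting as break: the fold below stops summing as soon as the running total exceeds 100.

using Transducers

# reduced terminates the fold early, like break in a for loop.
total = foldl(Map(identity), 1:1000; init = 0) do acc, x
    acc += x
    acc > 100 ? reduced(acc) : acc
end
@show total  # 105 (= 1 + 2 + ... + 14)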

If you can come up with a way to formalize your computation as an associative binary operator, you can use reduce to parallelize it (especially now that Transducers.jl 0.4 supports efficient early termination based on reduced). Turning a complex computation into an associative operator requires creativity. Guy Steele has given many talks on this topic and I highly recommend searching for them on YouTube etc.; for example, this Google Tech Talk.
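
As a minimal sketch of what that buys you, with + standing in for whatever associative operator you come up with: because the operator is associative, reduce can split the input into chunks, fold them on separate threads, and combine the partial results.

using Transducers

# Serial and thread-parallel folds of the same associative operator.
xs = 1:10_000
sum_serial   = foldl(+, Map(x -> x^2), xs)
sum_parallel = reduce(+, Map(x -> x^2), xs; basesize = 1_000)
@assert sum_serial == sum_parallel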

Having said that, my impression is that what you are doing is hard to formulate as an associative operation, because it looks like you are “adding elements” to the set to be processed. It may be possible to do what you want by violating all the purity assumptions of transducers and reducing functions (i.e., by letting a transducer or reducing function mutate the Learner object). The Learner object could then implement these (totally undocumented internal) interfaces to control “what to do next.” But this not only violates the purity assumption; it also relies heavily on the particular depth-first scheduling nature of Julia’s parallel runtime. So I’m not sure if I should even be mentioning this :slight_smile:
