Multithreading and fetch() type stability

I was poking around at the new @spawn macro and multithreading in general.
I did a @code_warntype on a function where I’d set up a task with t = @spawn fun(args).
The signature of fun is (n::Int)::Int in the definition, but on the line where I fetch the result with fetch(t) I see Main.fetch(t)::Any.

Doesn’t that mean that the whole function is not treated as type stable by the compiler, and hence not optimised?
I would have expected the task’s return type to be inherited from the return type of fun(args). Why is that not the case? Is it possible for fetch to actually return something else?

I tried (naively) to assert the type of fetch(t) back to Int, but the problem remains: the value still gets boxed.


The Task type is not parametrized at all

julia> typeof(Threads.@spawn nothing)
Task

so there is no way to infer exactly what fetch returns. I suppose the return type of the expression you pass to @spawn would have to be encoded in Task for fetch to be inferrable. However, since Julia’s interfaces generally try to avoid having a returned value (a task here) depend on inference details, my guess is that it’s unlikely this would be “fixed.” Besides, given that launching a task is rather expensive, I wonder if there is much benefit to making fetch inferrable.
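To make this concrete, here is a minimal sketch (work and g are made-up names for illustration): even though the spawned function has a concrete return type, inference loses it across the Task.

```julia
using Base.Threads

work(n::Int) = n + 1      # concretely typed: always returns an Int

function g(n::Int)
    t = Threads.@spawn work(n)
    return fetch(t)       # @code_warntype reports this as ::Any
end
```

Base.return_types(g, (Int,)) confirms the inferred return type is Any, even though work can only ever produce an Int.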


As overhead times get lower, I wonder if inferability matters more. I don’t know enough about how the compiler handles type instability to come up with a real example where this could be a problem.

I find it strange that the Task type, present in Julia since version 0.1, does not have any kind of type parameterization. Is there a specific theoretical reason that makes it unfeasible, or is it just a design choice?
My doubt arises from the fact that when I spawn a task I already know what the task is going to return, and losing that information could be problematic.

Just to give an example: without a type assertion, the multithreaded Fibonacci function returns ::Any from fetch, and Any is then passed down through the rest of the operations. Since that implementation is recursive, it has a kind of natural barrier. But if a function performed more complex operations, the ::Any type would be carried along at every step, generating even more type instability.
I’m certainly playing devil’s advocate here, but in that scenario I think the impact could be much bigger.

I’m interested in this, too. My guess is that, since you have to put all the tasks in a single queue (which presumably wraps something like a Vector), the task scheduler has to forget about the type at some level. Also, if you just wait(task) on a task it’s not desirable to pay the dynamic dispatch cost for each typeof(task). So, it is desirable to pay the cost only when necessary; e.g., when calling f in f(fetch(task)).

When waiting for a task to finish, I don’t think you actually need the task’s concrete subtype. If you don’t want to pay the dynamic dispatch price, you could probably prevent it with @nospecialize, but I’m not really sure how that works yet, so I might be saying something stupid. The only place where the type would be useful is when fetching the result of the task, since that is when you actually work with the value.
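That last idea can be sketched as a function barrier (process and run_task are illustrative names): the dynamic dispatch is paid exactly once, at the call on the fetched value, and everything inside the barrier is type-stable.

```julia
using Base.Threads

process(x::Int) = x * 2     # compiled for a concrete Int

function run_task(n::Int)
    t = Threads.@spawn n + 1
    x = fetch(t)            # inferred as ::Any
    return process(x)       # one dynamic dispatch at this barrier;
end                         # the body of process runs fully optimised
```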

I tried to look for the Task definition in the source code. The fetch function simply waits and then returns the inner value of the task (which is set by the function it ran). What I don’t understand is why the compiler can’t infer the type of the t.result value and propagate it through fetch.

I couldn’t find the definition of the Task type in the source. The docs don’t point to it.

I was told a few times on this forum that, as long as we have function boundaries in the code, these kinds of type instabilities are not necessarily a bottleneck.

I’m not sure if that applies here, but even the docs caution us not to treat every “type instability” as a performance bottleneck. For example, Base.getindex is type-unstable, and so is using Distributions; sum(rand(Uniform(), 10)), but these are typically not flagged as performance bottlenecks to worry about.

Yeah, makes sense. I fully realise that the performance loss here is negligible compared with spawning a new task on a new thread, and that I am just nitpicking.
It just piqued my curiosity. For example, I can see why Base.getindex is a type-unstable operation, but I can’t really see why fetch is too, and I hoped to get some insight into it, that’s all :tada:

For one-off operations like fetch I think I agree. But since Task is responsible for task-local storage such as the logger, a hypothetical parametrized task type could actually be beneficial for, e.g., compiling away @debug (“zero-cost” debug logging). But then again, that can also be handled with function barriers.

I’m not really sure at this point how a parametrized task would be defined. From what I could infer from looking at the source, the ::Any of fetch comes from the inner function task_result, which is just a call to Base.getproperty, which is not type-stable.
The good thing is that if you run

for p in propertynames(t)
  println(p, ' ', typeof(Base.getproperty(t, p)))
end

the types of the stored values are printed correctly; the only problem is that at compile time the compiler gets stuck on getproperty and can’t infer further. I don’t know enough about Julia to come up with a workaround, to be honest.
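For what it’s worth, part of the answer can be checked directly in the REPL: the result field of Task is declared as Any, so getproperty cannot be inferred any more precisely than that.

```julia
# Not just an inference limitation: the field's declared type is Any,
# so even perfect inference of getproperty could not narrow it.
fieldtype(Task, :result)
```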

Also, I found out that setting up a Task is actually a ccall, and that the “tasked” function is passed with @nospecialize. Is this just standard practice for arguments to ccalls, to prevent unnecessary code generation, or is there more behind it?

If by “inferred correctly” you mean that actual types are printed, that’s not what inference means. In Julia, a runtime object always has a concrete type.

Maybe you can try something like this

thunk = () -> begin
    # ... the actual code here ...
end

T = Base._return_type(thunk, Tuple{})

ans = fetch(@spawn thunk()) :: T

This doesn’t do anything about the type instability of fetch itself, but at least the code accessing ans can be statically dispatched.
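A fully runnable version of that sketch, with the thunk body filled in by a placeholder computation of my own (sum(1:10)) just for illustration:

```julia
using Base.Threads

thunk = () -> sum(1:10)                 # placeholder workload

T = Base._return_type(thunk, Tuple{})   # inferred return type (Int here)

ans = fetch(Threads.@spawn thunk())::T  # assertion lets later code on
                                        # ans be statically dispatched
```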

If by “inferred correctly” you mean that actual types are printed, that’s not what inference means. In Julia, a runtime object always has a concrete type.

Good correction, my bad. Then most of my observations probably don’t hold, since I didn’t know that.

This workaround, while it makes sense, feels much heavier compilation-wise.

Edit: I got curious and ran a test on the fib function:

function fib(n::Int64)
  if n > 25
    t = @spawn fib(n-2)
    return fib(n-1) + fetch(t)
  end
  return serial_fib(n)
end

function fib_(n::Int64)
  if n > 25
    return fib_(n-1) + fetch(@spawn fib_(n-2))
  end
  return serial_fib(n)
end

the second version gains no speed at all from threading (both return the correct result):

@btime fib(46)    # 2.802 s (301411 allocations: 20.78 MiB)
@btime fib_(46)  # 20.232 s (343674 allocations: 21.42 MiB)

It turns out that fetch(@spawn f()) does not really parallelize anything.

Moreover, i find this interesting:

julia> @code_warntype fib(1)
... etc

2 ─ %6  = Main.:(var"#18#19")::Core.Compiler.Const(var"#18#19", false)
│   %7  = Core.typeof(n)::Core.Compiler.Const(Int64, false)
│   %8  = Core.apply_type(%6, %7)::Core.Compiler.Const(var"#18#19"{Int64}, false)
│         (#18 = %new(%8, n))
│   %10 = #18::var"#18#19"{Int64}

... yadda yadda

couldn’t %10 be used as the return type of fib(n-2) before it gets wrapped in the task t by @spawn? If so, then it is even stranger that

%19 = Main.fib(%18)::Any

the whole function is seen as returning ::Any

At this point, the type instability of fetch can probably be felt more in recursive functions, due to the repeated calls to a function that the compiler sees as returning ::Any, which therefore has to be checked each time.
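If that diagnosis is right, the assertion workaround from earlier in the thread should break the chain; a sketch (with a trivial serial_fib stand-in so the snippet is self-contained):

```julia
using Base.Threads

serial_fib(n) = n < 2 ? n : serial_fib(n - 1) + serial_fib(n - 2)

function fib(n::Int64)
    if n > 25
        t = Threads.@spawn fib(n - 2)
        # the ::Int64 assertion stops ::Any from flowing into the sum
        return fib(n - 1) + fetch(t)::Int64
    end
    return serial_fib(n)
end
```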

P.S: Sorry if i’m being so pedantic :stuck_out_tongue:

That’s because fetch(@spawn ...) just runs in serial.
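The difference is easy to see in a sketch (heavy and both wrapper names are made up for illustration): fetch(@spawn ...) blocks the caller immediately, so the task never overlaps with any other work, whereas spawning first and fetching later lets the two run concurrently.

```julia
using Base.Threads

heavy(x) = (sleep(0.1); x)   # stand-in for real work

function serial_style()
    a = fetch(Threads.@spawn heavy(1))  # blocks here before anything else
    b = heavy(2)
    return a + b                        # total: ~0.2 s
end

function parallel_style()
    t = Threads.@spawn heavy(1)  # task starts in the background
    b = heavy(2)                 # overlaps with the task
    return fetch(t) + b          # total: ~0.1 s
end
```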