RFC: Reverse Communication (ask/tell) adapter for function-style optimizers in Julia?

Hi all,

I’d like to start a discussion on a generic Reverse Communication Interface (RCI), i.e., an ask/tell-style session API, for Julia’s optimization ecosystem. The idea is for it to sit on top of existing “function-style” optimizers (where the solver owns the loop and calls f(x) internally).

Motivation

A lot of optimizers in Julia (including many wrappers) follow the usual pattern:

  • you provide f(x) (and maybe constraints g(x)),
  • the solver runs the loop and repeatedly calls f(x).

That’s perfect for in-process optimization. But it gets awkward when objective evaluations are expensive or externally managed, e.g.:

  • HPC job queues / distributed simulation pipelines
  • lab experiments or licensed simulators
  • strict evaluation budgets, retries, or failure handling

In these cases, it’s often much easier if the solver can be driven externally, along the lines of the loop sketched below:

  • x = ask!(session)
  • evaluate externally (possibly async)
  • tell!(session, x, fx, gx...)

This “reverse communication” pattern shows up in a lot of numerical libraries.
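To make the shape concrete, here’s what the driver loop could look like. This is purely hypothetical: `recast`, `ask!`, `tell!`, `done`, and `result` are the API surface being proposed (nothing like this exists yet), and `some_function_style_solver` / `run_expensive_job` stand in for your solver and your external evaluation machinery:

```julia
# Hypothetical driver loop -- none of these names exist today.
session = recast(some_function_style_solver; x0 = zeros(2))

while !done(session)
    x = ask!(session)           # the solver proposes the next point
    fx = run_expensive_job(x)   # evaluate wherever and however we like
    tell!(session, x, fx)       # hand the value back; the solver resumes
end

xopt, fopt = result(session)
```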

Prior art

Nevergrad (Python) has a “recaster” subsystem that wraps function-style third-party optimizers into ask/tell by running the solver in the background and proxying objective calls through a message channel (recastlib.py / recaster.py). I’m not suggesting we copy it, but it’s a concrete example of the pattern.
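For flavor, that trick maps quite naturally onto Julia’s tasks and channels. Below is a rough sketch of the mechanism, not a proposed implementation; every name (`recast`, `RecastSession`, …) is made up, and for brevity this version matches answers to requests by order, so `tell!` drops the `x` argument:

```julia
# Sketch: run a function-style solver in a background task and proxy its
# objective calls through a pair of channels. Illustrative names only.
struct RecastSession
    asks::Channel{Vector{Float64}}   # solver -> driver: points to evaluate
    tells::Channel{Float64}          # driver -> solver: objective values
    task::Task
end

function recast(solve!, x0)
    asks  = Channel{Vector{Float64}}(1)
    tells = Channel{Float64}(1)
    # The "objective" handed to the solver publishes x, then blocks for f(x).
    proxy_f = x -> (put!(asks, copy(x)); take!(tells))
    task = @async try
        solve!(proxy_f, x0)      # the solver owns its loop, as usual
    finally
        close(asks)              # signal the driver: no more points coming
    end
    RecastSession(asks, tells, task)
end

# Next requested point, or `nothing` once the solver has finished.
function ask!(s::RecastSession)
    try
        take!(s.asks)
    catch err
        err isa InvalidStateException || rethrow()  # closed and drained
        nothing
    end
end

tell!(s::RecastSession, fx::Real) = put!(s.tells, Float64(fx))
```

Driving it is then just `while (x = ask!(s)) !== nothing; tell!(s, f_external(x)); end`. A real version needs more than this, of course: error propagation out of the solver task, matching tells to asks if evaluations come back out of order, and tearing the task down when the driver abandons the session — which is exactly the lifecycle concern in the proposal below.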

Proposal (discussion-level)

A small, generic adapter layer that can “recast” function-style solvers into an RCI session:

  • minimal API: ask!, tell!, done, result
  • a clear lifecycle: cancellation / shutdown semantics to avoid deadlocks (e.g. a solver task blocked forever on an evaluation that never arrives)
  • could live in Optimization.jl, or start as an experimental add-on (e.g. OptimizationRCI.jl)
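To give the lifecycle point some shape, here’s one way the session contract could read. Again purely illustrative; none of these generic functions exist, and all the names are up for debate:

```julia
# Hypothetical session contract; every name here is up for discussion.
abstract type AbstractRCISession end

"Next point the solver wants evaluated; blocks until one is available."
function ask! end

"Report an objective value (and optionally constraint values) to the solver."
function tell! end

"True once the underlying solver has terminated (converged, budget, failure)."
function done end

"Final solver output, valid once `done(session)` is true."
function result end

"""
Abandon the session: unblock and tear down the background solver.
Without an explicit `close!`, a driver that stops calling `tell!` leaves
the solver task blocked forever on its proxied objective call -- the
deadlock the lifecycle rules are meant to rule out.
"""
function close! end
```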

Limitations / expectations

  • This won’t magically make sequential solvers batch-parallel. Many will still request one evaluation at a time.
  • The main value is externalizing evaluation and plumbing (budgeting, caching, job scheduling, fault tolerance), not solver speedups.
  • Backend coverage would likely be incremental; some wrappers may be tricky due to threading/FFI constraints.

I’d love to hear thoughts, references, or “this is a bad idea because …” feedback.


My suggestion, if you’re interested in this, is just to prototype it!

Write it three different ways. Try it out. It’s a lot easier to have a discussion about a concrete implementation than about an abstract idea.