Modeling scale-up and economic factors in DYAD (bioprocess / fermentation)

Hi all,

I’m new to DYAD, so apologies if this is a basic question. I’m exploring whether DYAD is a good fit for my purposes, particularly modelling and analyzing the feasibility of fermentation systems.

My background is in biological fermentation, where we model reactors (batch/fed-batch/continuous) with mass and energy balances, along with kinetics (e.g., Monod-type growth). From what I can tell, that part seems reasonably compatible with DYAD’s existing physical types and component libraries.
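
To make that concrete, here is the kind of balance I mean — a minimal fed-batch Monod model integrated with forward Euler (plain Python, all parameter values illustrative rather than from any real process):

```python
# Minimal fed-batch Monod balances, integrated with forward Euler.
# All parameter values below are illustrative, not from a real process.
mu_max, Ks, Yxs = 0.4, 0.1, 0.5   # 1/h, g/L, g biomass / g substrate
F, Sf = 0.05, 100.0               # feed rate [L/h], feed substrate conc. [g/L]

def step(X, S, V, dt):
    mu = mu_max * S / (Ks + S)            # Monod specific growth rate
    dX = mu * X - (F / V) * X             # growth minus dilution by feed
    dS = -(mu / Yxs) * X + (F / V) * (Sf - S)
    return X + dX * dt, max(S + dS * dt, 0.0), V + F * dt

X, S, V = 1.0, 10.0, 1.0   # initial biomass [g/L], substrate [g/L], volume [L]
for _ in range(1000):      # 10 h at dt = 0.01 h
    X, S, V = step(X, S, V, 0.01)
```

In DYAD I would expect these to become acausal component equations rather than an explicit loop; the sketch is just to show the structure of the physics side.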

What I’m less clear about is how to handle scale-up and economic considerations within this framework.

In practice, we often care about things like:

  • Scaling reactor volume and feed strategies from lab to industrial scale
  • Utility usage (e.g., cooling, agitation, aeration)
  • Cost contributions from raw materials, energy inputs, and downstream processing
  • Overall process efficiency and cost per unit product
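
For the last two bullets, the arithmetic I have in mind is simple — roughly this (all numbers hypothetical), just tied to the simulated flow rates:

```python
# Back-of-envelope cost per unit product from flow rates (numbers hypothetical).
feed_rate = 50.0        # substrate feed [kg/h]
substrate_cost = 0.8    # [$/kg]
power = 120.0           # agitation + aeration + cooling [kW]
electricity_cost = 0.1  # [$/kWh]
product_rate = 12.0     # product formation [kg/h]

cost_per_hour = feed_rate * substrate_cost + power * electricity_cost  # [$/h]
cost_per_kg = cost_per_hour / product_rate                             # [$/kg product]
```

The question is whether quantities like `cost_per_hour` belong inside the model as extra variables or outside it as post-processing.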

So my questions are:

  1. Is DYAD typically used only for physical/process modeling, or is it reasonable to incorporate economic calculations (e.g., costs tied to material and energy flows) within the same model?
  2. If so, what’s the recommended approach—would you define additional variables (e.g., cost flow rates) alongside physical quantities, or keep economics in a separate layer?
  3. Are there established patterns for handling scale-up effects (e.g., changing transport limitations, mixing, heat transfer) within DYAD models, and can such models be simulated directly?
  4. Any examples or best practices for combining process simulation with other metadata or parameters (e.g., economic data to be used in a feasibility analysis)?
  5. Would I need to design a new component library for this?
    From what I can tell, Dyad has a rich component library for modelling physics, but is it overengineering to build an industrial-scale bioprocess component library on top of it, with an economic layer embedded as well?
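
As an example of the scale-up relations I’d want to encode (question 3): oxygen transfer is often correlated with power input and gas velocity via a van ’t Riet-type expression. The coefficient and exponents below are illustrative only:

```python
# Oxygen-transfer scale-up via a van 't Riet-type correlation:
# kLa = C * (P/V)^a * vs^b. Coefficient and exponents here are illustrative.
def kla(p_per_v, vs):
    """kLa [1/s] from power input P/V [W/m^3] and superficial gas velocity vs [m/s]."""
    return 0.026 * p_per_v**0.4 * vs**0.5

lab   = kla(2000.0, 0.005)  # bench scale
plant = kla(2000.0, 0.020)  # same P/V, higher superficial gas velocity at scale
```

Even at matched P/V, the predicted kLa shifts with superficial velocity — that is the kind of scale-dependent behavior I’d want the components to capture.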

Thanks in advance for any insights!

I have recently been pondering similar usage patterns (either Dyad or MTK), but for a different application. I’m only just familiarizing myself with these tools, so just sharing my two cents:

  1. If the “physics” of the components you need is already defined in a library (it might even be your own library!), it might help to have “wrapper” components that extend the originals with additional parameters for practical details, i.e., a separate layer, as you propose. That way, these additional factors are less likely to interfere with whatever strictly physics-based analyses you might have already implemented.
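
As a language-agnostic sketch of that layering idea (Python classes as stand-ins for components — not Dyad syntax, and all names and numbers are hypothetical):

```python
# Layering pattern: a costing wrapper around a physics-only component.
# Python stand-in, not Dyad syntax; names and numbers are hypothetical.
class Reactor:
    """Physics-only component: power draw at an operating point."""
    def power_kw(self, rpm):
        return 1e-4 * rpm**3  # toy agitation-power law

class CostedReactor(Reactor):
    """Wrapper adding economic parameters without touching the physics."""
    def __init__(self, electricity_cost=0.1):  # [$/kWh]
        self.electricity_cost = electricity_cost

    def energy_cost_per_hour(self, rpm):
        return self.power_kw(rpm) * self.electricity_cost

r = CostedReactor()
cost = r.energy_cost_per_hour(100.0)  # 100 kW at 100 rpm
```

The physics class stays untouched, so any physics-only analysis still runs against `Reactor` alone.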

  2. The key question is what you want to accomplish through this modeling, and whether the MTK “analyses” support those derivations. If the things you care about are modeled by DAEs (or, even simpler, just algebraic relations, like cost or energy consumption), then I imagine it should be quite easy.

  3. Here’s how I would frame the problem: use Dyad to set up the acausal system description, and an “analysis” to predict the quantities of interest (e.g., per-unit cost for a certain design + combination of parameters). You can then use this “simulator” as a black-box function evaluator connected to an optimizer, and optimize for your objective (e.g., lowest cost per unit product, subject to a cutoff on the minimum viable scale).
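
The loop can be as simple as the following — a toy stand-in for the “simulate, then optimize” idea, with the simulator reduced to a hypothetical cost function:

```python
# Black-box search over designs: toy stand-in for "simulate, then optimize".
def unit_cost(volume_m3):
    # Stand-in for a full simulation: amortized capital falls with scale,
    # transport penalties grow with it. Numbers are hypothetical.
    return 100.0 / volume_m3 + 0.5 * volume_m3

min_viable = 20.0                                # cutoff on minimum viable scale
candidates = [v / 10 for v in range(10, 1001)]   # 1.0 .. 100.0 m^3
best = min((v for v in candidates if v >= min_viable), key=unit_cost)
# Here the viability cutoff binds: the unconstrained optimum sits below it.
```

In practice you would replace `unit_cost` with the Dyad/MTK simulation call and the grid with a proper optimizer, but the interface is the same: design in, scalar objective out.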

  4. It’s really easy to try this approach on a simple system of your choice using the Dyad agent, and that might give you a more concrete feel for what it will look like.

Do share your observations if you try it out!