Basic Molly questions

One of the issues is that results from molecular simulations depend on the interatomic interaction model as well. For the solubility of gases, chances are the results will be all over the place unless the models for water and the gases are tailored specifically to reproduce the experimental data.
For example, see the comparison between TraPPE and OPLS force fields in Fig. 9 here.


Just out of curiosity: how well can liquid water currently be simulated, in general, and by Molly.jl specifically? I mean, water is both very well studied experimentally and a pretty complex liquid, with hydrogen bonds, dissociation, etc. And if some reasonable simulation (i.e., one fitting all reliable experimental data available) is possible, what computational resources are necessary?

Non-polarisable 3-point molecular mechanics water models are cheap but struggle to match all properties. 4-point and other virtual site water models can match more properties. Here are some relevant papers:

Generally the properties are matched over conventional temperature and pressure ranges. Water models that match, for example, all known ice structures are beyond my expertise, but if they exist they are expensive.

All 3-point water models are compatible with Molly. We don’t have support for virtual sites yet, which means we don’t support 4-point water models. We also don’t support polarisable models like AMOEBA yet.

Out of curiosity, when simulating water – is it realistic to use Lennard-Jones potential?

I looked into McQuarrie and Simon’s 1999 book, and found some values for \sigma and \varepsilon. These data do not include \sigma and \varepsilon for water.

They also discuss the origins of attractive forces, with potentials for London dispersion, dipole–dipole interactions and induced-dipole interactions. From these, it is possible to find the product \varepsilon\cdot \sigma^6. I have computed estimates of \varepsilon when (i) assuming that \sigma is known, and (ii) when I replace \sigma with the collision diameter.

It seems like \hat{\varepsilon} computed from London dispersion, etc. is decent for some substances, but deviates considerably from the listed values of \varepsilon for other substances.

Does it make sense to use my estimates of \sigma \approx d_\mathrm{coll} (collision diameter) and \hat{\varepsilon} for water?
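For what it's worth, the C6-route described above can be sanity-checked numerically. A minimal Python sketch, assuming illustrative argon-like values for \sigma and \varepsilon (not water parameters):

```python
def lj(r, sigma, epsilon):
    """Lennard-Jones potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def epsilon_from_c6(c6, sigma):
    """Invert C6 = 4*eps*sigma**6 to estimate eps from a dispersion coefficient."""
    return c6 / (4.0 * sigma ** 6)

# Illustrative argon-like numbers (assumed for demonstration only)
sigma = 3.40      # Angstrom
epsilon = 0.996   # kJ/mol
c6 = 4.0 * epsilon * sigma ** 6   # the combination discussed above
est = epsilon_from_c6(c6, sigma)  # recovers epsilon if sigma is exact
```

The round trip is exact by construction here; the deviations you see for real substances come from the C6 value and \sigma being estimated independently.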

Out of curiosity, when simulating water – is it realistic to use Lennard-Jones potential?

No, this wouldn’t lead to satisfying results.

For modeling the solubility of relatively common systems like water-O2, you should get good results with EOS models (or similar) using Clapeyron.jl. Using molecular simulations here would be overkill (without getting better results). How did you calculate the solubility in Clapeyron (which model and method)?


You might find this paper relevant: doi:10.1016/S0304-386X(98)00007-3

But where is water there?

I mean, in the “classical” case it’s a pretty concentrated alkaline solution, and that’s not water.

In the “modern” case: the water is in the polymer membrane, and maybe also as a surface layer. That’s also not water.


As mentioned previously, I used the same EoS for both phases. I tried Peng-Robinson, some empirical Helmholtz EoSs (MultiFluid, GERG2008), as well as various SAFT models. They give a good fit to experimental data for the vapor phase, but a relatively poor fit for the mole fraction of O2, etc., in the liquid phase; off by a factor of 30-50 or so, I would guess.

I assume this is because these models are not “trained” in particular for the liquid phase, while a Henry constant model is. Perhaps it is possible to get better results if I use different models for the liquid and vapor phases?

There is lots of water and oxygen in the anode separation tank of an electrolyzer. I’m not thinking of the membrane.


In a previous post, I showed the result I found for the temperature evolution in the introductory example in Molly.jl:

In this example, the Andersen thermostat is used. To me, it was a little surprising that the temperature varied so much around the specified reference temperature of 100 K.

My understanding of how the Andersen thermostat works is as follows. Particles are assumed to collide with a heat bath at a certain (user-specified) rate. At each time step, for each particle, a random number is drawn to determine whether that particle has collided with the heat bath; the collision probability is set by the coupling time, e.g. 1.0u"ps" in coupling=AndersenThermostat(100.0u"K", 1.0u"ps"). Particles deemed to have collided have their velocities reset at random from the Maxwell-Boltzmann distribution at the given temperature; those deemed not to have collided keep their velocities.
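The collision sweep described above can be sketched in a few lines. A minimal Python illustration (not Molly's implementation; masses and units here are arbitrary assumptions):

```python
import math
import random

def andersen_step(velocities, masses, dt, coupling_time, T_ref,
                  kB=1.380649e-23):
    """One Andersen-thermostat collision sweep (illustrative sketch).

    Each particle 'collides' with the heat bath with probability
    dt / coupling_time; colliding particles draw fresh velocities from the
    Maxwell-Boltzmann distribution at T_ref, the rest keep theirs.
    """
    p_collide = dt / coupling_time
    for i, m in enumerate(masses):
        if random.random() < p_collide:
            # Std dev of each Cartesian velocity component at T_ref
            sigma_v = math.sqrt(kB * T_ref / m)
            velocities[i] = [random.gauss(0.0, sigma_v) for _ in range(3)]
    return velocities
```

With coupling_time equal to dt every particle is resampled each sweep, and the kinetic temperature lands near T_ref up to the expected \sqrt{2/(3N)} fluctuation.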

Being an engineer, I wonder whether this could have been done differently (I know there are other thermostats). I would think the goal must be to steer the system to the specified temperature T_ref? If so, why not express the velocities v as (1+u)*v instead of as v, and use a PI controller to find u so that T → T_ref (100 K in this case)? Thus, instead of having u=0 and using the Andersen thermostat, why not compute u from

u = Kp*(1 + Ti*s)/(Ti*s) * (T_ref - T)

or rather a discrete time equivalent?

I would guess that with such a “PI-thermostat”, tuning Kp and Ti would make T approach T_ref. [Of course, in this case, the velocities would be (1+u)*v, and not v.]
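As a toy illustration of that idea (not an established MD thermostat; Kp and Ti below are made-up tuning values, and T is treated as a kinetic temperature that simply gets rescaled):

```python
def pi_thermostat_factor(T, T_ref, state, Kp=0.002, Ti=50.0, dt=1.0):
    """Discrete-time PI controller producing a velocity scale factor (1 + u).

    state['integral'] accumulates the temperature error; Kp and Ti are
    hypothetical tuning values for this sketch.
    """
    err = T_ref - T
    state["integral"] += err * dt
    u = Kp * (err + state["integral"] / Ti)
    return 1.0 + u

# Toy demonstration: drive T from 150 towards T_ref = 100.
state = {"integral": 0.0}
T, T_ref = 150.0, 100.0
for _ in range(2000):
    f = pi_thermostat_factor(T, T_ref, state)
    T *= f * f  # scaling all velocities by f scales the kinetic temperature by f**2
```

The controller does steer T to T_ref, which illustrates the point of the reply below it in spirit: steering the mean temperature is easy, but generating the correct canonical fluctuations is the harder requirement.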

Thermostats that just rescale the velocities work, but they generate the wrong ensembles. See for example the discussions about the Berendsen thermostat, which is frequently used for thermal equilibration in MD simulations.


The canonical (pun intended) equation for the energy fluctuations at a constant temperature is
\left\langle \Delta E^2 \right\rangle = k_BT^2C_V. For the kinetic part \left\langle \Delta K^2 \right\rangle = k_BT^2(\frac{3}{2}Nk_B), or \left\langle \Delta T^2 \right\rangle = \frac{2}{3}T^2 / N. So, for 100 molecules you should expect ~10% temperature fluctuation.

Keep in mind that MD software typically gives you scaled kinetic energy, not temperature. The only “true” temperature is of the thermostat the system is coupled to.


Suppose I do an MD simulation with a fixed volume (fixed boundary box) and a fixed number of molecules and a specified thermostat temperature, and the result is a fluctuating pressure and a fluctuating temperature.

If I consider a pVT-relationship, should that be between the computed (fluctuating) pressure, the fixed volume per particle, and the fluctuating temperature? And – since these should not really be fluctuating if N → infinity, and I average over some time: should I choose the mean pressure and the mean temperature in the simulation for a pVT relationship, or should I choose the pressure at the end time and the temperature at the end time?

The pVT relations assume ensemble-averaged values. For instantaneous values, they only hold on average, and there are cross-fluctuations as well, like \langle \Delta P\Delta T\rangle.
As you’ve correctly noticed, the relative fluctuations of intensive thermodynamic functions vanish in the thermodynamic limit, but for extensive ones the fluctuations don’t vanish, and that is important for some applications, like the Green-Kubo relations for transport properties.
So you should always take averages when comparing to classical thermodynamics. The value at the end of the run is supposed to be as good as any other value along the trajectory.


Since the ensemble average may in general differ from the time average, I take it this means I should assume the MD system is ergodic, so that the two are equal. For ergodic systems, I guess the time average should in principle be over infinite time, so there should be no trend in the time evolution?


Exactly. Moreover, if you see a drift of parameters over time without applying external forces, then you should be suspicious. It may be an unphysical effect due to a wrong choice of integrator, time step, potential model, etc.


Ah… According to MS Copilot, the Nosé-Hoover thermostat works as follows:

"Nosé-Hoover introduces an extra variable \xi that acts like a friction coefficient, dynamically adjusting particle velocities to maintain the target temperature.

The modified equations of motion are:

\frac{\mathrm{d} r_j}{\mathrm{d}t} = v_j \\ \frac{\mathrm{d} v_j}{\mathrm{d}t} = - \frac{1}{m_j}\frac{\partial U}{\partial r_j} - \xi \cdot v_j \\ \frac{\mathrm{d} \xi}{\mathrm{d}t} = \frac{1}{Q}\left( \sum_{j=1}^N m_j v_j^2 - 3Nk_\mathrm{B}T \right) "

But this is precisely an integral-action controller with control signal \xi and integral gain \kappa_\mathrm{I} = 1/Q (Q is the thermal “mass” of the thermostat).
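Those equations can be sketched for a one-dimensional toy system (explicit Euler for clarity only; the 3N in the \xi equation becomes N in one dimension, and reduced units with k_\mathrm{B} = 1 are an assumption):

```python
def nose_hoover_step(r, v, m, forces, xi, Q, T_ref, dt, kB=1.0):
    """One explicit-Euler step of the Nose-Hoover equations (1D toy sketch).

    forces(r) must return -dU/dr for each particle; xi is the friction
    variable and Q its thermal 'mass'. Production codes use time-reversible
    integrators instead of explicit Euler.
    """
    N = len(r)
    f = forces(r)
    # dv/dt = f/m - xi*v
    v_new = [vi + dt * (fi / mi - xi * vi) for vi, fi, mi in zip(v, f, m)]
    # d(xi)/dt = (sum_j m_j v_j^2 - N*kB*T_ref) / Q   (N dof in 1D)
    two_kinetic = sum(mi * vi * vi for mi, vi in zip(m, v))
    xi_new = xi + dt * (two_kinetic - N * kB * T_ref) / Q
    # dr/dt = v
    r_new = [ri + dt * vi for ri, vi in zip(r, v)]
    return r_new, v_new, xi_new
```

When the instantaneous kinetic energy exceeds N k_\mathrm{B} T_\mathrm{ref}, \xi grows and acts as friction; when it falls below, \xi turns negative and pumps energy in. That is the integral action in miniature.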

According to MS Copilot, the Nosé-Hoover thermostat preserves the Maxwell-Boltzmann distribution of velocities. The “downside” mentioned is that the result may be non-ergodic for small systems, whatever “small” means.