I have been working on fitting some non-linear models using Optim. For models where the parameter space is bounded, one can obviously use box constraints. Before I read how to do that, however, I had been trying to achieve a similar effect by having the objective function return `NaN` any time any of the parameters were out of bounds. This also allowed for more complex restrictions (e.g., x_1 + x_2 <= 1).
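Roughly, the pattern looks like this (a toy Python/SciPy analog of what I'm doing in Optim; the quadratic objective and the specific bounds are made up for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    x1, x2 = p
    # Reject out-of-bounds points, including the joint restriction
    # x1 + x2 <= 1, by returning NaN instead of a function value.
    if x1 < 0 or x2 < 0 or x1 + x2 > 1:
        return np.nan
    # Toy loss standing in for the real model fit (hypothetical).
    return (x1 - 0.3) ** 2 + (x2 - 0.4) ** 2

# Nelder-Mead only compares function values; in SciPy's implementation the
# sort pushes NaN to the worst position, so the simplex tends to move away
# from infeasible points rather than crashing.
res = minimize(objective, x0=[0.5, 0.2], method="Nelder-Mead")
print(res.x)  # ends up near the interior optimum (0.3, 0.4)
```

This seems to work with a derivative-free method like Nelder-Mead, since it only ranks candidate points; I'd expect a gradient-based method to just propagate the NaN and fail.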

In some cases, this NaN method seemed to converge faster, which surprised me and led me to wonder whether the approach is sound. My concern is that it causes the optimizer to count iterations that don't return a finite value. However, I don't know enough about the internals of how the optimizers are constructed to tell whether this approach is a hack, or fairly robust with respect to the stability of the estimated model parameters.