I am 100% with you. Replication is essential.
But the other thing I have found is that more robust optimizers often make it easier to get a sense of the domains of attraction from different initial conditions (and, frequently, they surface problems in my models that need to be fixed, or corners I never anticipated would bind).
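One cheap way to get that sense of the domains of attraction, even without a commercial solver, is a multistart: solve from many initial conditions and bucket the converged points. This is a minimal sketch with a made-up two-minimum toy objective standing in for the real model; the objective, bounds, and solver choice here are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy objective with two local minima at x = (+-1, 0),
# standing in for the real model's problem.
def objective(x):
    return (x[0]**2 - 1.0)**2 + 0.1 * x[1]**2

rng = np.random.default_rng(0)
starts = rng.uniform(-2.0, 2.0, size=(20, 2))

# Solve from each start and bucket the converged points: starts that
# land in the same bucket share a domain of attraction.
basins = {}
for x0 in starts:
    res = minimize(objective, x0, method="BFGS")
    key = tuple(np.round(res.x, 3))
    basins.setdefault(key, []).append(tuple(np.round(x0, 2)))

for sol, inits in basins.items():
    print(sol, "<-", len(inits), "starts")
```

The map from buckets back to their starting points is exactly the rough picture of the basins; with the robust solver you would do the same thing but on the constrained problem.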
In particular, you can tell these optimizers to emphasize feasibility over optimality, so that they carefully walk along the F(a, p) = 0 manifold, which might be essential for the stability of your other code. Another thing you can do is fix an initial a and tell the solver to find a feasible point without worrying about the objective at all. Once you know "the solution," you can reverse-engineer conditions that guide the less robust optimizers to it and drop the commercial one.
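The fix-a, feasibility-only idea can be sketched with open-source tools too: solve a constant objective subject to F(a, p) = 0 to get a feasible point, then warm-start the real problem from it. Everything concrete below (the toy F, the objective, the choice of SLSQP) is an assumption for illustration, not the model from the discussion.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Stand-in constraint system F(a, p) = 0 for a fixed a; the real
# model's equilibrium conditions would go here.
def F(a, p):
    return np.array([p[0]**2 + p[1]**2 - a,  # circle of radius sqrt(a)
                     p[0] - p[1]**3])        # a nonlinear coupling

a_fixed = 2.0
cons = NonlinearConstraint(lambda p: F(a_fixed, p), 0.0, 0.0)

# Phase 1: ignore the objective entirely; minimize a constant subject
# to F(a_fixed, p) = 0, which amounts to a pure feasibility solve.
feas = minimize(lambda p: 0.0, x0=np.array([0.5, 0.5]),
                constraints=[cons], method="SLSQP")

# Phase 2: warm-start the real problem from the feasible point, so a
# less robust optimizer begins on (or very near) the F = 0 manifold.
def objective(p):
    return (p[0] - 1.0)**2 + p[1]**2  # hypothetical objective

res = minimize(objective, x0=feas.x, constraints=[cons], method="SLSQP")
print("feasible start:", feas.x, "solution:", res.x)
```

Commercial solvers automate this two-phase pattern (and do it far more robustly), but even this crude version is often enough to reverse-engineer a good starting point for the fragile code.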
But with all that, I completely understand your aversion to experimenting with commercial optimizers. And I suspect that black-box derivative-free ones are never going to work on a problem of this type, because feasibility on the constraint is too important.