Opening a fresh thread based on the recommendation from @pierre-haessig.
My original question was:
How does the Genetic Algorithm implemented in Metaheuristics.jl “penalize” the objective function when the constraints are not satisfied? From what I gather, it doesn’t use a penalty method. I believe it uses some sort of penalty-free method that simply compares individuals based on (1) their average constraint violation, i.e., \frac{1}{N_{g}} \sum_{i = 1}^{N_{g}} \max(0, g_{i}(x)) + \frac{1}{N_{h}} \sum_{j = 1}^{N_{h}} |h_{j}(x)|, where the g_{i}(x) are inequality constraints and the h_{j}(x) are equality constraints, and (2) their objective function values. Is that true? If not, how does it handle the constraints?
I would really love to know all the details of the Genetic Algorithm implemented in Metaheuristics.jl, as I’m writing a journal article that uses the package.
Thank you for tagging me. However, I only collaborated on the implementation of our method, MCCGA (machine-coded compact genetic algorithm), and I don’t know how the classical genetic algorithm is implemented in the package; @jmejia8 might be able to provide more useful information.
GA and the other metaheuristics designed for unconstrained problems first compare solutions by their average constraint violation. If both solutions are feasible, or they have the same constraint violation, the objective value is used to compare them.
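In pseudocode, the idea is roughly the following (a simplified sketch, not the actual package code; in particular, whether the violation is averaged over all constraints together or per constraint type is an assumption here):

```julia
# Rough sketch of the comparison rule (simplified; not the actual implementation).
# Convention: gx .<= 0 are inequality constraints, hx .== 0 are equality constraints.

# Average constraint violation of a candidate
# (here averaged over all constraints together; the exact normalization may differ).
violation(gx, hx) = (sum(max.(0.0, gx)) + sum(abs.(hx))) / (length(gx) + length(hx))

# Returns true if candidate `a` (objective fa, violation va) is preferred over `b`.
function is_better(fa, va, fb, vb)
    va != vb && return va < vb  # different violations: the less-violating one wins
    return fa < fb              # both feasible, or equally infeasible: compare objectives
end
```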
Thank you very much for your response, @jmejia8! Now it makes more sense.
Do you know what this type of constraint-handling method is called in the literature? If you followed particular references during the implementation, could you please provide them so that I can cite them in my article?
To solve my integer program with nonlinear constraints, I used the SBX() method for crossover and the PolynomialMutation() method for mutation (my setup looks roughly like the sketch below). If you have any preferred references that you used while implementing these operators, could you please provide them as well? Thank you in advance.
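For context, this is roughly how I am calling the package (a simplified toy objective and bounds just for illustration, and assuming I remember the GA keyword names correctly; my actual problem is an integer program with nonlinear constraints):

```julia
using Metaheuristics

# Toy constrained problem for illustration only (not my actual model):
# the objective returns (fx, gx, hx), with gx .<= 0 and hx .== 0.
f(x) = begin
    fx = sum(abs2, x)        # objective to minimize
    gx = [sum(x) - 10.0]     # inequality constraint: sum(x) <= 10
    hx = [0.0]               # no equality constraints
    fx, gx, hx
end

# 2×5 bounds matrix: first row lower bounds, second row upper bounds
bounds = [0 0 0 0 0; 5.0 5 5 5 5]

ga = GA(;
    crossover = SBX(),               # simulated binary crossover
    mutation  = PolynomialMutation() # polynomial mutation
)

result = optimize(f, bounds, ga)
```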