That sounds similar to a “multistart” algorithm (my favorite of these is MLSL). However, if you only run a local optimizer from one point at the end, and you filter the initial points heuristically, you are not guaranteed to find a global optimum in general.
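For reference, here is a minimal sketch of an MLSL setup using the NLopt Python bindings, with BOBYQA as the local optimizer; the objective, bounds, and evaluation budget are just placeholders for illustration, not a recommendation for any particular problem:

```python
import numpy as np
import nlopt

def objective(x, grad):
    # Placeholder objective; grad is unused by derivative-free algorithms.
    return np.sum((x - 0.3) ** 2) + np.sin(5.0 * x[0])

n = 2
opt = nlopt.opt(nlopt.G_MLSL_LDS, n)       # MLSL with low-discrepancy start points
local_opt = nlopt.opt(nlopt.LN_BOBYQA, n)  # derivative-free local optimizer
local_opt.set_xtol_rel(1e-6)
opt.set_local_optimizer(local_opt)

opt.set_min_objective(objective)
opt.set_lower_bounds([-2.0] * n)           # MLSL requires finite bound constraints
opt.set_upper_bounds([2.0] * n)
opt.set_maxeval(2000)                      # overall evaluation budget

x = opt.optimize(np.zeros(n))
print("minimum", opt.last_optimum_value(), "at", x)
```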
As for Nelder–Mead (which has a big theoretical problem: it is not even guaranteed to converge to a local optimum), I’ve personally never found a case where it beat the other derivative-free local-search algorithms in NLopt. YMMV, of course.
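It’s easy to run this kind of comparison on your own problem with the Python bindings; here is a rough sketch pitting Nelder–Mead against a few of the other derivative-free local algorithms in NLopt (the Rosenbrock test function and stopping tolerances are just illustrative choices):

```python
import numpy as np
import nlopt

def rosenbrock(x, grad):
    # Classic test problem; grad is unused by these derivative-free algorithms.
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

for algo in (nlopt.LN_NELDERMEAD, nlopt.LN_SBPLX, nlopt.LN_BOBYQA, nlopt.LN_COBYLA):
    opt = nlopt.opt(algo, 2)
    opt.set_min_objective(rosenbrock)
    opt.set_xtol_rel(1e-8)
    opt.set_maxeval(5000)
    x = opt.optimize(np.array([-1.2, 1.0]))
    print(opt.get_algorithm_name(), opt.last_optimum_value(), x)
```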
(The paper you cite found N–M to be the best, but they only compared it to two other local-optimization algorithms that I wouldn’t typically reach for as my first choices. Indeed, one was the derivative-based algorithm DFPMIN from Numerical Recipes, which they presumably used with finite-difference gradients(?), although they don’t explain how they used it. They are also comparing local-search algorithms on global optimization problems, which is kind of weird, and it’s not clear to me how useful such a comparison is.)