I should say in advance that I don’t have much expertise in optimization, so it was difficult for me to phrase the post title properly; please bear with me. Here is the problem we are trying to solve:

As you can see, we have a special function f(u) that, given a state-space point (a D-dimensional real-valued vector), returns an integer. For a given point we can compute this integer, but we cannot compute a priori the color coding you see in the picture, because it is computationally too expensive. We want to find the perturbation vector v with the smallest possible norm such that, when added to a starting point u0, it leads to a different color for f(u0 + v). The figure shows three inputs v1, v2, v3, one of which is the solution to the problem: the vector with minimum norm that still leads to a different color.

Are there any algorithms anyone can recommend that could help us achieve such a “weird” optimization? The way we currently solve this problem is by initializing random vectors in a hypersphere of radius r and scanning until we find a perturbation vector v that leads to a different color. Then we reduce the radius of the sphere and repeat until we find a radius at which all perturbations of that radius lead to the same color.
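For concreteness, the shrinking-radius scan described above could be sketched like this (Python for illustration; the function names, parameters, and defaults here are my own choices, not an established API):

```python
import math
import random

def random_unit_vector(d):
    # Sample a direction uniformly on the unit sphere by normalizing a Gaussian vector.
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in g))
    return [x / n for x in g]

def min_perturbation(f, u0, r0, n_samples=200, shrink=0.5, tol=1e-3):
    """Shrinking-radius random search (hypothetical sketch).

    Scan random perturbations of norm r; if one changes the label of f,
    record it and shrink r. Stop when no sampled perturbation of the
    current radius changes the label (or r falls below tol)."""
    base = f(u0)
    r, best = r0, None
    while r > tol:
        found = False
        for _ in range(n_samples):
            v = [r * x for x in random_unit_vector(len(u0))]
            if f([a + b for a, b in zip(u0, v)]) != base:
                best, found = v, True
                break
        if not found:
            break  # no color change found at this radius
        r *= shrink
    return best
```

This returns the smallest sampled perturbation found, which only upper-bounds the true minimum; the answer depends on the sample budget and the shrink schedule.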

The function f does not have an analytic expression. It is something we can estimate numerically via some nonlinear dynamics algorithms. Additionally, f is different if we apply this methodology to a different system. We want to write a generic optimization algorithm that works for any f that returns integers.

An example f function is the color map of the image I have pasted.

Your current solution strategy resembles a feasibility problem in optimization, where you search for all v with f(u_0) \neq f(u_0 + v), no matter the starting point u_0, and then pick the one with the smallest norm in a post-processing step. Depending on the shape of f we could recommend different solution strategies, but if it is a black-box function, I think you will be better off with randomized algorithms.

No, f is not differentiable. I don’t consider f a function in the mathematical sense, only in the computer-science sense: f itself is a very complicated algorithm that tells you which attractor (in the context of dynamical systems) an initial condition u ends up in. This is encoded by the integers. So, from the perspective of dynamical systems, we are searching for the minimal perturbation that brings the dynamical system to an attractor (color/integer) different from the one we started in.

I am totally happy with randomized algorithms; I was just hoping there are more efficient ways to do this randomized search… I know that the search can be made more efficient by enforcing system knowledge, i.e., assuming that f has a specific structure. But I was wondering what can be done without prescribing knowledge about f…

Is it “differentiable” in the broad sense: can we compute gradients of f with automatic differentiation? Julia can sometimes compute gradients of very complicated algorithms.

I do not believe it is differentiable in the broader sense either, unfortunately… I can tell you for sure that it is not a smooth function; it is a fractal one. I have never tried to differentiate f, to be honest, but I don’t see how it would work.

I don’t think there’s really much you can do without more knowledge. E.g. if the scenario is just that you have f: \, \mathbb{R}^D \to \mathbb{Z}, and only have “oracle-access” to f (i.e. you can evaluate it for a given point but have nothing else), then it could be basically impossible. E.g. if f(x) = x == [0.123,0.123] ? -1 : 1 and u_0 = [0, 0] say, the only way to find that special point [0.123, 0.123] would be to guess it exactly.
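To make that worst case concrete, here is such a pathological oracle (a hypothetical f, written in Python for illustration):

```python
def f(x):
    # Pathological oracle: a single special point carries a different label.
    # With only black-box evaluations, no search strategy can do better than
    # guessing this point exactly, so the minimal perturbation from u0 = (0, 0)
    # is undiscoverable in practice.
    return -1 if x == (0.123, 0.123) else 1
```

Any randomized search from u0 = (0, 0) would almost surely never sample the special point, even though the minimal color-changing perturbation has a small norm.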

So I think to have any chance, you need some more properties of f; maybe not the exact structure but at least more mathematical properties to let you limit the search space in some way.

You could try using a good derivative-free solver that handles constraints. Your objective would be the norm of v, and the constraint function would be something like c(v) = Float64(f(u0) != f(u0 + v)). GitHub - bbopt/NOMAD.jl: Julia interface to the NOMAD blackbox optimization software comes to mind. The method it implements is more robust than Nelder–Mead, so there is a better chance of it working, and it supports “extreme constraints”, i.e., a point is either feasible or not, which is your situation.
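The “extreme barrier” idea can be sketched generically (this is not NOMAD’s actual API; the wrapper name and Python rendering are mine): infeasible points, where the color does not change, are assigned +inf, so any solver that tolerates extreme constraints can simply minimize the wrapped objective.

```python
import math

def extreme_barrier_objective(f, u0):
    """Hypothetical sketch: wrap 'minimize ||v|| subject to a color change'
    as a single objective with an extreme barrier."""
    base = f(u0)
    def obj(v):
        u = [a + b for a, b in zip(u0, v)]
        if f(u) == base:
            return math.inf  # extreme barrier: reject infeasible v outright
        return math.sqrt(sum(x * x for x in v))  # norm of the perturbation
    return obj
```

A feasible v is scored by its norm, an infeasible one by +inf, which matches the either-feasible-or-not situation described above.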