Bounded optimization without prior knowledge of bounds

I need to optimize a function f(x) over some region A. The function returns an error if evaluated outside the region. However, I do not know the region A beforehand; I can only tell whether x' \in A by evaluating f(x') and checking whether it returns an error.

Given this, would the following procedure give me the argmin over A? Here is a pseudo-algorithm.

  1. Begin by evaluating f at an initial guess x_0.
  2. Propose a new point x_1. If f(x_1) < f(x_0), update the current point to x_1. If f(x_1) returns an error, define f(x_1) = 10^5 (or some other arbitrarily large number) and continue searching by proposing another value of x (a minimal sketch of this loop is given below).

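In generic form, this is roughly what I have in mind. Here f is assumed to throw an error outside A, x_0 is a feasible starting vector, and the step size, penalty, and iteration count are arbitrary placeholders.

function penalized_search(f, x0; penalty = 1e5, iters = 1_000)
   best_x, best_f = x0, f(x0)                  # assume the initial guess lies in A
   for _ in 1:iters
      x = best_x .+ 0.1 .* randn(length(x0))   # propose a nearby candidate
      fx = try
         f(x)
      catch
         penalty        # evaluation failed => treat as arbitrarily large value
      end
      if fx < best_f    # keep the candidate only if it improves
         best_x, best_f = x, fx
      end
   end
   return best_x, best_f
end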
Here is an example of some Julia code to illustrate the issue. In words, I am trying to minimize objective_function(y1, y2) over y1 and y2. For every guess of y2, I obtain y1 through a contraction mapping. However, the contraction mapping may fail to find a fixed point for certain values of y2. When this happens, I assign an arbitrarily large value to the objective function and continue with the optimization.

## Define functions
using Optim                     # optimization routines
using FixedPointAcceleration    # provides fixed_point

function function_to_be_contraction_mapped(x1, x2)
   # return something
end

function objective_function(y1, y2)
   # return something
end

function contraction_mapping(a)
   # Solve for y1 as the fixed point given the current guess of y2 = a
   fp_anderson = fixed_point(x1 -> function_to_be_contraction_mapped(x1, a),
                             zeros(Float64, 10); Algorithm = :Anderson)

   if fp_anderson.FixedPoint_ !== missing
      return objective_function(fp_anderson.FixedPoint_, a)
   else
      # No fixed point found: assign an arbitrarily large value and move on
      return 1E6
   end
end

## Optimize
res = Optim.optimize(contraction_mapping, zeros(Float64, 10))
Optim.minimizer(res)

A sophisticated strategy roughly similar to the one you mention (but with mechanisms that provide certain convergence guarantees) is implemented in NOMAD.jl.

At each iteration, a model of the objective can be constructed that may suggest promising new iterates faster than the standard search strategy.
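For instance, here is a rough sketch of how the failing fixed-point evaluations from your code could be reported through NOMAD.jl's blackbox interface rather than replaced with a large constant. The bounds and starting point are placeholders, and the wrapper reuses function_to_be_contraction_mapped and objective_function from the question.

using NOMAD
using FixedPointAcceleration

# Blackbox for NOMAD: flag failed evaluations via success = false
# instead of substituting an arbitrary large value.
function bb(y2)
   fp = fixed_point(x1 -> function_to_be_contraction_mapped(x1, y2),
                    zeros(Float64, 10); Algorithm = :Anderson)
   if fp.FixedPoint_ === missing
      return (false, true, [0.0])   # placeholder output; the point is flagged as a failed evaluation
   end
   return (true, true, [objective_function(fp.FixedPoint_, y2)])
end

# 10 inputs, 1 output of type "OBJ"; the bounds below are placeholders
pb = NomadProblem(10, 1, ["OBJ"], bb;
                  lower_bound = fill(-10.0, 10),
                  upper_bound = fill(10.0, 10))

result = solve(pb, zeros(10))   # start from a point where the evaluation succeeds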