# Optimizing a noisy objective

An update: I have programmed a version of SPSA and indeed it is working fine (after some tuning). Thanks, @dlakelan! Of course I will make it public soon, after including some more tests.

I am wondering about the following: suppose that for a given parameter \theta, I am simulating N individuals to obtain some average statistic

\bar{h}(\theta) = \frac{1}{N} \sum_{i = 1}^{N} g_i(\theta)

and my objective is \bar{h}(\theta) - h_0 for some target value h_0.
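For concreteness, here is a toy Julia version of this setup (the `g` below is just a Gaussian stand-in for the actual per-individual simulation):

```julia
using Statistics, Random

# Stand-in for the per-individual statistic g_i(θ); the real g is an expensive simulator.
g(θ, rng) = θ[1] + θ[2]^2 + randn(rng)

# Monte Carlo average over N simulated individuals.
hbar(θ, N; rng = Random.default_rng()) = mean(g(θ, rng) for _ in 1:N)

# One way to turn h̄(θ) - h₀ into something to minimize.
objective(θ, N, h₀) = abs(hbar(θ, N) - h₀)
```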

Intuitively, it would make sense to use a low N in the initial phase, when finding the right region, and a large N in the later phase, because in the beginning I care more about convergence than accuracy and can refine later. E.g. 10k vs 100k seems to work well. Is there any systematic, theoretically supported way of doing this, e.g. an N(k) schedule?
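The kind of thing I have in mind is a purely ad hoc schedule like the one below (just a heuristic mirroring the hand-tuned 10k → 100k choice, no theory behind it):

```julia
# Ad-hoc sample-size schedule: start cheap and grow geometrically with the
# iteration count k, capped at the accurate budget.
N_schedule(k; N_min = 10_000, N_max = 100_000, rate = 1.05) =
    min(N_max, round(Int, N_min * rate^(k - 1)))
```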


Glad you got it working, it's a very clever algorithm. I don't know about the idea of N(k); it seems this would be very problem specific. For example, suppose your simulation is just this side of having infinite variance… N will need to be very large. On the other hand, if your simulation is well behaved, it could be much smaller.

The Central Limit Theorem should give some idea of how large N should be in order to bound the error in \bar{h}(\theta) at a given level, especially if the variance of g_i(\theta) does not change much with \theta.
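For example, a rough pilot-sample rule along those lines (a heuristic sketch, not a guarantee; `g(θ, rng)` stands for the per-individual simulation):

```julia
using Statistics, Random

# CLT-style heuristic: estimate the standard deviation of the per-individual
# statistic g at θ from a small pilot run, then choose N so that the standard
# error of the mean, s/√N, is at most ε.
function choose_N(g, θ, ε; pilot = 1_000, rng = Random.default_rng())
    s = std(g(θ, rng) for _ in 1:pilot)
    ceil(Int, (s / ε)^2)
end
```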

Are you getting better results with your SPSA than nlopt's CRS2?

Iāve been working on some higher dimensional problems where CRS2 currently has the best convergence, so Iām curious to try your SPSA when itās ready.

It appears that https://github.com/robertfeldt/BlackBoxOptim.jl has an SPSA implementation.
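If it helps, a minimal call would look roughly like this; I haven't double-checked the exact method symbol, so treat it as a guess and compare against BlackBoxOptim's list of valid `Method` values:

```julia
using BlackBoxOptim

# Hypothetical noisy objective; replace with the simulation-based one.
noisy_objective(x) = sum(abs2, x) + 0.1 * randn()

res = bboptimize(noisy_objective;
                 SearchRange = (-5.0, 5.0),
                 NumDimensions = 4,
                 Method = :simultaneous_perturbation_stochastic_approximation,
                 MaxFuncEvals = 50_000)

best_candidate(res), best_fitness(res)
```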


Are there any new developments on this type of problem?

I am currently using @Tamas_Papp's MultistartOptimization.jl, which works like a charm, but I am always interested in trying new things.
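For reference, the basic usage pattern (roughly what the package README shows, with NLopt supplying the local optimizer; the objective and bounds below are just placeholders):

```julia
using MultistartOptimization, NLopt

# Hypothetical smooth test objective and box constraints.
f(x) = sum(abs2, x .- 1)
lower_bounds = fill(-2.0, 4)
upper_bounds = fill(2.0, 4)

P = MinimizationProblem(f, lower_bounds, upper_bounds)
local_method = NLoptLocalMethod(NLopt.LN_BOBYQA)
multistart_method = TikTak(100)
result = multistart_minimization(multistart_method, local_method, P)
```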