I’ve got an objective function to maximize. The domain is \mathbb{R}^n, where n is about thirty. The codomain is a bounded subset of the nonnegative reals. The objective is zero over most of the domain and can only be positive within a known small subset of it.
I don’t strictly need the global maximum, but I want to find as good a value as I can manage.
The objective is not a black box; it’s Julia code I’ll write. But I don’t think it would be very amenable to autodiff, so I expect I’ll have to use finite differences. I can’t share the objective.
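For the finite-difference part, FiniteDiff.jl looks like it handles this directly. Here’s a minimal sketch with a stand-in objective (the real one stays private):

```julia
using FiniteDiff

# Stand-in objective, NOT the real one: zero outside a ball, smooth inside.
f(x) = max(0.0, 1.0 - sum(abs2, x) / 30)

x = 0.1 .* randn(30)
g = FiniteDiff.finite_difference_gradient(f, x)  # gradient estimate via finite differences
```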
The objective has discontinuities where the value jumps from zero to a positive value. There may also be some nondifferentiable regions inside the positive region; I’m not sure yet.
I suppose there are regions of the domain where the function is differentiable, at least in some of the thirty-ish variables, so I’m hoping something like gradient ascent could improve on whatever objective value I find by uniform random sampling. Does this make sense?
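Concretely, here’s the kind of multistart loop I’m imagining: uniform samples over a box approximating the known subset (hypothetical bounds here), keep the best positive hit, then refine it with fixed-step gradient ascent using finite differences. Just a sketch, with the same stand-in objective:

```julia
using FiniteDiff

# Stand-in objective, NOT the real one.
f(x) = max(0.0, 1.0 - sum(abs2, x) / 30)

function multistart(f, lo, hi; nsamples=10_000, nsteps=100, step=1e-2)
    n = length(lo)
    best_x, best_val = nothing, 0.0
    for _ in 1:nsamples
        x = lo .+ (hi .- lo) .* rand(n)        # uniform sample in the box
        v = f(x)
        v > best_val && ((best_x, best_val) = (x, v))
    end
    best_x === nothing && return nothing, 0.0  # every sample landed on the zero plateau
    x = best_x
    for _ in 1:nsteps                          # crude fixed-step gradient ascent
        g = FiniteDiff.finite_difference_gradient(f, x)
        x_new = x .+ step .* g
        f(x_new) > f(x) || break               # stop once a step no longer helps
        x = x_new
    end
    return x, f(x)
end

x_best, v_best = multistart(f, fill(-1.0, 30), fill(1.0, 30))
```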
Questions:
- I can’t plot the objective, given that there are about thirty dimensions. Is there some numerical or other way to assess how helpful gradient ascent could be here? (The first sketch after this list is my own crude attempt.)
- What Julia packages could be helpful? There are quite a lot of registered packages doing gradient-based optimization. (The second sketch after this list is what I’d currently try.)
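On the first question, the best numerical check I could come up with myself (no idea if there’s something more principled): at points where sampling already gives a positive value, estimate the gradient and test whether a tiny step along it actually increases the objective. If that rarely succeeds, the function is presumably too rough locally for gradient ascent to add much. A sketch, with a hypothetical sampling box and the stand-in objective again:

```julia
using FiniteDiff

# Stand-in objective, NOT the real one.
f(x) = max(0.0, 1.0 - sum(abs2, x) / 30)

# Fraction of positive-valued points where a small gradient step improves f.
function ascent_success_rate(f, xs; step=1e-3)
    hits, total = 0, 0
    for x in xs
        f(x) > 0 || continue                   # probe only the positive region
        g = FiniteDiff.finite_difference_gradient(f, x)
        total += 1
        hits += f(x .+ step .* g) > f(x)       # Bool promotes to Int
    end
    return total == 0 ? NaN : hits / total
end

xs = [2 .* rand(30) .- 1 for _ in 1:200]       # hypothetical box [-1, 1]^30
@show ascent_success_rate(f, xs)
```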
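On the second question, Optim.jl seems to be the usual general-purpose choice (NLopt.jl and BlackBoxOptim.jl look like registered alternatives if derivative-free or global methods fit better). Optim minimizes, so I’d negate the objective; if I’m reading the docs right, NelderMead needs no gradient at all, and LBFGS without a user-supplied gradient falls back to finite differencing. Roughly what I’d try:

```julia
using Optim

# Stand-in objective, NOT the real one.
f(x) = max(0.0, 1.0 - sum(abs2, x) / 30)

x0 = 0.1 .* randn(30)   # hypothetical start inside the positive region

# Derivative-free local refinement; Optim minimizes, hence the negation.
res = optimize(x -> -f(x), x0, NelderMead())
x_best = Optim.minimizer(res)
@show f(x_best)
```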