Yes; it looks like the problem with your code is that `Optim.optimize` expects a vector of initial guesses, not a scalar. Try passing `[b]` instead of `b`.
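For example, a minimal sketch (with a hypothetical scalar objective `h` standing in for your function):

```julia
using Optim

# hypothetical objective; replace with your own function of b
h(b) = (b - 2)^2

# Optim.optimize wants a vector of unknowns, so wrap the scalar
# starting guess in a one-element vector and index into x:
result = Optim.optimize(x -> h(x[1]), [1.0], BFGS())

Optim.minimizer(result)[1]  # ≈ 2.0
```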
However, this is a root-finding problem, g(b) = 0. It is almost always a mistake to try to solve a root-finding problem by using an optimization algorithm to minimize |g|² — among other problems, a smooth function is flat to first order at its minimum, so minimizing |g|² roughly halves the number of accurate digits you can obtain. You should use a root-finding algorithm like those in NLsolve.jl.
Moreover, in this case you know the derivative analytically (as I mentioned above), so you should certainly provide it to the root-finding algorithm.
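A sketch of that with NLsolve.jl, using a hypothetical g(b) = b³ − 2b − 5 and its analytic derivative (NLsolve works on vectors, so the scalar problem is written with 1-element arrays):

```julia
using NLsolve

# hypothetical function and derivative; replace with your own g and g′
g(b)  = b^3 - 2b - 5
g′(b) = 3b^2 - 2

# in-place residual and Jacobian, as NLsolve expects
f!(F, x) = (F[1]   = g(x[1]))
j!(J, x) = (J[1,1] = g′(x[1]))

sol = nlsolve(f!, j!, [2.0])   # converges to the root near b ≈ 2.0946
```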
However, for a 1d root-finding problem (`b` is a scalar) with an analytically known derivative, all of the generic root-finding packages will basically boil down to a Newton iteration like the one I wrote. The clever algorithms are mainly for the case where you don’t know the derivative, or have lots of derivatives (a big Jacobian) and can’t afford to compute them all or to invert the Jacobian.
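Such a Newton iteration is only a few lines. A minimal sketch (the stopping tolerance and the example g(b) = b² − 2 are illustrative choices):

```julia
# Newton iteration for a scalar root g(b) = 0, given an analytic derivative g′
function newton(g, g′, b; tol=1e-12, maxiter=100)
    for _ in 1:maxiter
        δ = g(b) / g′(b)   # Newton step
        b -= δ
        abs(δ) ≤ tol * abs(b) && return b
    end
    error("Newton iteration did not converge")
end

newton(b -> b^2 - 2, b -> 2b, 1.0)  # ≈ √2
```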
Alternatively, in 1d, especially for smooth functions, there are often more clever algorithms available. For example, you could use ApproxFun.jl to construct a high-accuracy polynomial approximation of your integrand f, call `cumsum` to compute its indefinite integral, and use the ApproxFun `roots` function to find all of the places where the integral equals k.
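A sketch of that approach, with a hypothetical integrand f(x) = e^(−x²) on [0, 3] and k = 0.5 standing in for your problem:

```julia
using ApproxFun

# hypothetical integrand and target value; replace with your own f and k
f = Fun(x -> exp(-x^2), 0..3)
F = cumsum(f)        # indefinite integral of f, with F(0) == 0
k = 0.5

b = roots(F - k)     # all b in [0, 3] where ∫₀ᵇ f(x) dx == k
```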