Function parameter estimation similar to Octave fminsearch

I’m trying to solve a simple parameter estimation problem in Julia similar to how fminsearch works in Octave/Matlab. The RAFF.jl package seemed like a nice approach, but perhaps there’s a better way?

For context, I have a photo of several lampposts in a public square. I know they are all the same height (H), but not what that height is. I also know nothing about the camera, but I’m assuming that each pixel subtends a small angle, so that there are r pixels per radian. I know the distance (Dst) from the camera to each lamppost, so the problem is to find the r and H that minimize the difference between the tangent of the measured angle, tan(n_i/r), and H/Dst_i:

\underset{r,H}{\operatorname{argmin}} \sum_{i=1}^{k} \left( \tan\left( \frac{n_i}{r}\right) - \frac{H}{Dst_i} \right)^2.
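Equivalently, at the optimum each data point should satisfy tan(n_i/r) ≈ H/Dst_i, which rearranges into the explicit model form I use for f1 further below:

\tan\left(\frac{n_i}{r}\right) = \frac{H}{Dst_i} \quad\Longleftrightarrow\quad n_i = r \arctan\left(\frac{H}{Dst_i}\right).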

In Octave (n,Dst defined below):
f = @(x) sum((tan(n/x(1)) - x(2)./Dst).^2);
C = fminsearch(f,[1500,12])   % C = [1503.360  12.900]

The solutions generated by RAFF don’t match the Octave solution, so I’m guessing that either I’m using the wrong package or I’m not setting RAFF up properly. Here are the results:

# Data inputs to the problem
n = [765.0, 332.0, 217.0, 165.0, 283.0, 297.0, 814.0, 434.0, 289.0];
Dst = [22.71666667, 45.64, 71.58333333, 90.22666667, 59.63, 59.33666667, 23.82333333, 46.88333333, 72.88];

# Write data in format useful for RAFF.jl
x = [n Dst zeros(9,1)];

# Function to be minimized over parameters p
f(x,p) = (tan(x[1]/p[1]) - p[2]/x[2])^2
# Jacobian
gmodel!(g,x,p) = begin
    g[1] = 2*x[1]/p[1]^2*sec(x[1]/p[1])^2*(tan(x[1]/p[1]) - p[2]/x[2])
    g[2] = 2/p[2]*(tan(x[1]/p[1]) - p[2]/x[2])
end

# Fit using RAFF
fit = raff(f,gmodel!,x,2; initguess = [1000.0, 10.0], ε=1.0e-10)

# Output (solution only)
Solution (.solution) = [1000.0, 10.0]

# Without Jacobian
fit = raff(f,x,2; initguess = [1000.0, 10.0], ε=1.0e-10)
Solution (.solution) = [56821.055712178044, 0.31544255532138893] 

# No initial guess
fit = raff(f,x,2)
Solution (.solution) = [-0.9778814405815958, -3.5093908435870933]

# Function re-written to use Dst as first column in x, n as second column (no zeros)
f1(x,p) = p[1]*atan(p[2]/x[1])
Solution (.solution) = [477.50199698046083, 39.851464869409654]

# Function re-written to use n as first column in x, Dst as second column
f2(x,p) = p[2]/(tan(x[1]/p[1]))
Solution (.solution) = [-0.8529985482107982, 5.582472558856189]

From the Octave fminsearch docs:

: x = fminsearch (fun, x0)

Find a value of x which minimizes the function fun.

The search begins at the point x0 and iterates using the Nelder & Mead Simplex algorithm (a derivative-free method). This algorithm is better-suited to functions which have discontinuities or for which a gradient-based search such as fminunc fails.

Options for the search are provided in the parameter options using the function optimset. Currently, fminsearch accepts the options: "TolX", "MaxFunEvals", "MaxIter", "Display". For a description of these options, see optimset.

So, Octave’s fminsearch uses Nelder–Mead to find the optimum. RAFF.jl, on the other hand:

This package implements a robust method [1] to fit a given function (described by its parameters) to input data. The method is based on the LOVO algorithm [2] and on a suitable voting strategy in order to automatically eliminate outliers.
[1] Castelani, E. V., Lopes, R., Shirabayashi, W., & Sobral, F. N. C. (2019). RAFF.jl: Robust Algebraic Fitting Function in Julia. Journal of Open Source Software, 4(39), 1385.
[2] Andreani, R., Martínez, J. M., Martínez, L., & Yano, F. S. (2009). Low order-value optimization and applications. Journal of Global Optimization, 43(1), 1-22.

If you want a result similar to fminsearch, you need a package that provides Nelder–Mead. Some options:

This is a least-squares fitting problem, for which there are a couple of Julia packages: GitHub - JuliaNLSolvers/LsqFit.jl: Simple curve fitting in Julia, GitHub - matthieugomez/LeastSquaresOptim.jl: Dense and Sparse Least Squares Optimization … e.g. try the Levenberg–Marquardt algorithm in LsqFit.jl.
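For example, here is a minimal LsqFit.jl sketch (not run here), using the explicit model form n ≈ r·atan(H/Dst) from the question’s f1. Note that it minimizes residuals in n (pixels) rather than the tan-space residuals of the original objective, so the fitted parameters can differ slightly from the fminsearch result:

```julia
using LsqFit

n   = [765.0, 332.0, 217.0, 165.0, 283.0, 297.0, 814.0, 434.0, 289.0]
Dst = [22.71666667, 45.64, 71.58333333, 90.22666667, 59.63, 59.33666667, 23.82333333, 46.88333333, 72.88]

# Explicit model n_i ≈ p[1] * atan(p[2] / Dst_i), with p = [r, H]
model(d, p) = p[1] .* atan.(p[2] ./ d)

fit = curve_fit(model, Dst, n, [1500.0, 12.0])
fit.param   # fitted [r, H]
```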

fminsearch in Matlab or Octave is a generic derivative-free optimization function using the Nelder–Mead simplex algorithm. There are many, many general optimization packages in Julia: GitHub - JuliaNLSolvers/Optim.jl: Optimization functions for Julia, GitHub - JuliaNonconvex/Nonconvex.jl: Toolbox for non-convex constrained optimization., GitHub - JuliaOpt/NLopt.jl: Package to call the NLopt nonlinear-optimization library from the Julia language, GitHub - robertfeldt/BlackBoxOptim.jl: Black-box optimization for Julia, GitHub - emmt/OptimPack.jl: Julia bindings for OptimPack, a library for solving large scale optimization problems, not to mention higher-level “modeling” packages like GitHub - jump-dev/JuMP.jl: Modeling language for Mathematical Optimization (linear, mixed-integer, conic, semidefinite, nonlinear). There are several implementations of Nelder–Mead among those packages, but in general I would tend not to use Nelder–Mead as my first (or second, or third) choice of algorithms these days.
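That said, if you just want to replicate the fminsearch call, Optim.jl’s NelderMead minimizes the same objective with the same family of algorithm. A minimal sketch (not run here; data taken from the question):

```julia
using Optim

n   = [765.0, 332.0, 217.0, 165.0, 283.0, 297.0, 814.0, 434.0, 289.0]
Dst = [22.71666667, 45.64, 71.58333333, 90.22666667, 59.63, 59.33666667, 23.82333333, 46.88333333, 72.88]

# Same objective as the Octave anonymous function
f(x) = sum((tan.(n ./ x[1]) .- x[2] ./ Dst).^2)

res = optimize(f, [1500.0, 12.0], NelderMead())
Optim.minimizer(res)   # should land near Octave's [1503.360, 12.900]
```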


No, you don’t. You just need a package that solves the same (nonlinear least-squares) optimization problem [*]. RAFF solves a different “LOVO” optimization problem that generalizes least-squares by eliminating “outlier” data points, and hence will yield a different result.

[*] With the caveat that if your problem is nonconvex with multiple local minima, then different algorithms, different implementations of the “same” algorithm (e.g. different initialization strategies for Nelder–Mead), and different starting points may yield different minima. But with a suitable starting point any convergent local minimization algorithm should be able to locate the fminsearch minimum.


Thanks for the links. I only chose fminsearch in Octave because it was simple to implement and I was hoping to find something similar in Julia. After trying several of your suggested packages, I decided that the problem itself is ill-posed, especially with any measurement errors, so I’m rewriting the problem rather than the method of solution.