Here is code for Julia and Python that uses the basic approach I’m trying. I’m attempting to use equivalent algorithms in both: Levenberg-Marquardt for the curve fitting and linear interpolation for the input fits. The code and results are below. As you can see, Python executes the curve fit in about 1.4 ms and Julia in about 30.8 ms. I’m quite new to Julia, so please point out bad practices even if they don’t impact performance. Everything was run in JupyterLab with Python 3.7 and Julia 1.1.
Julia
using LsqFit, Interpolations, BenchmarkTools
# Create sample data
x1 = range(0.0, step=0.0099, stop=6.);
y1 = 0.7*sin.(x1) .+ 0.1*randn(length(x1));    # randn to match np.random.randn in the Python version
x2 = range(0.0, step=0.001, stop=6.);
y2 = 0.7*cos.(x2) .+ 0.07*randn(length(x2));
x3 = range(0.1, step=0.0015, stop=5.9);
y3 = 0.5*sin.(x3) .+ 0.3*cos.(x3) .+ 0.1*randn(length(x3));
# Create fits
fit1 = interpolate((x1,), y1, Gridded(Linear()));
fit2 = interpolate((x2,), y2, Gridded(Linear()));
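Side note: as I understand it, a Gridded interpolant throws a BoundsError when queried outside its knot range. That is safe here because x3 spans [0.1, 5.9], inside both grids, but for messier data I believe the fits can be wrapped in an extrapolation rule; a minimal sketch (Flat() is just my pick of boundary behavior, and efit1/efit2 are hypothetical names):
efit1 = extrapolate(fit1, Flat());   # clamp out-of-range queries to the edge values
efit2 = extrapolate(fit2, Flat());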
# Create Model
model(x, p) = p[1] .* fit1.(x) .+ p[2] .* fit2.(x)   # broadcast the interpolants over x
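As a quick sanity check (my addition, outside the timing), the model should evaluate cleanly at the initial guess:
model(x3, [1.0, 1.0]);   # Vector{Float64} of length(x3)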
# Do curve fit
rst = curve_fit(model, x3, y3, [1.0, 1.0]);
@btime curve_fit(model, $x3, $y3, [1.0, 1.0]);   # interpolate globals with $ so only the fit is timed
rst.param
30.766 ms (2128 allocations: 18.46 MiB)
2-element Array{Float64,1}:
0.726988264770365
0.43123727421300484
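For what it’s worth, the model is linear in p, so its Jacobian does not depend on p at all and the finite-difference Jacobian that LsqFit builds by default looks like wasted work. A minimal sketch of supplying an analytic Jacobian, assuming curve_fit accepts a jacobian function as its second argument the way the LsqFit README shows (I have not verified how much this helps):
# Columns of the Jacobian are just the two interpolated basis signals.
jac(x, p) = hcat(fit1.(x), fit2.(x))
@btime curve_fit(model, jac, $x3, $y3, [1.0, 1.0]);
(This particular toy problem could even be solved as ordinary linear least squares with hcat(fit1.(x3), fit2.(x3)) \ y3, but that would sidestep the Levenberg-Marquardt comparison this post is about.)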
Python
import numpy as np
from scipy.optimize import leastsq # Uses Levenberg-Marquardt
# Create sample data
x1 = np.arange(0.0, 6.0, 0.0099)
y1 = 0.7*np.sin(x1) + 0.1*np.random.randn(len(x1))
x2 = np.arange(0.0, 6.0, 0.001)
y2 = 0.7*np.cos(x2) + 0.07*np.random.randn(len(x2))
x3 = np.arange(0.1, 5.9, 0.0015)
y3 = 0.5*np.sin(x3) + 0.3*np.cos(x3) + 0.1*np.random.randn(len(x3))
# Create fits
def fit1(x):
    return np.interp(x, x1, y1)
def fit2(x):
    return np.interp(x, x2, y2)
# Create Model
def model(x, p):
    return p[0]*fit1(x) + p[1]*fit2(x)
def residuals(p, y, x):
    return y - model(x, p)
# Do curve fit
rst = leastsq(residuals, [1.0, 1.0], args=(y3, x3), maxfev=2000)
print(rst)
%timeit leastsq(residuals, [1.0, 1.0], args=(y3, x3), maxfev=2000)
(array([0.70528424, 0.41814 ]), 3)
1.38 ms ± 277 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)