How to use `NonlinearSolve` to perform least squares fitting?

The problem in the provided snippet (which is different from the stack trace you show; I guess that comes from a different iteration of the script) is that you're preallocating `ys` as an `Array{Float64}`. That is correct when you call the function with `Float64` inputs, but when you pass the function to a NonlinearSolve solver, the solver attempts to differentiate it, and the default method is based on automatic differentiation, provided by the ForwardDiff.jl library. That library calls your function with arrays of a different element type, `Dual`, a number type that carries derivative information alongside the value (if you're not familiar with this topic, look up "forward-mode automatic differentiation"). The code then fails at the point where it tries to store a `Dual` number into the `Float64` array. If you're OK with using ForwardDiff, you can sidestep the issue with a more "functional" approach in which you don't preallocate anything, but instead let the compiler work out which output array to create from the input types, e.g.

```julia
function model(pts2d, p)
    γ0, V1 = p
    # `map` infers the output element type from the inputs, so Dual
    # numbers propagate through with no preallocated Float64 storage.
    map(eachrow(pts2d)) do row
        f = f_gen(row)
        real.(sin.([V1 - γ0 * conj(f), V1 - γ0 * f]))
    end |> Base.Iterators.flatten |> collect
end
```

Once you get the script working, there are also a few things you can do to improve performance, if that matters for your project.
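For instance, the `map`/`flatten`/`collect` chain allocates a small array per row. A common pattern is to go back to a preallocated, in-place version, but derive the buffer's element type from the parameters instead of hard-coding `Float64`, so Dual numbers still fit. A sketch, with a made-up `f_gen` stub standing in for your real one:

```julia
# Hypothetical stand-in for the real f_gen; returns one complex number per row.
f_gen(row) = sum(row) + im * row[1]

# In-place kernel: writes the two residual components per row into `ys`.
function model!(ys, pts2d, p)
    γ0, V1 = p
    for (i, row) in enumerate(eachrow(pts2d))
        f = f_gen(row)
        ys[2i - 1] = real(sin(V1 - γ0 * conj(f)))
        ys[2i]     = real(sin(V1 - γ0 * f))
    end
    return ys
end

# Allocate once per call, with an eltype that follows the inputs,
# so calling with Dual-valued parameters still works.
function model(pts2d, p)
    T = promote_type(eltype(p), eltype(pts2d))
    model!(zeros(T, 2 * size(pts2d, 1)), pts2d, p)
end

ys = model([1.0 2.0; 3.0 4.0], [0.5, 1.0])
```

If allocations in the hot loop still matter, PreallocationTools.jl offers caches that are reused across both `Float64` and `Dual` calls, but the `promote_type` trick above is usually enough.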