Background: I’m trying to fit a 2D symmetric Gaussian to an image of a point source. So I have an image of a light source, it’s Gaussian in shape, and I want to find its exact center and get a measure of how good the fit is. I plan to log the intensities and fit a symmetric paraboloid to the data. To compensate for the over-represented tails of the Gaussian, I want to weight the data with itself (so low intensities will have low weights and vice versa). Here’s what it looks like (artificially enlarged):
Question: In order to fit a linear model to weighted data, I assume I need to use MultivariateStats.llsq, but I can’t really see how. What is the way to fit a multiple linear regression to weighted data in Julia?
I found tons of functions in StatsBase that act on the subtypes of RegressionModel, but not one function that produces any instances of its subtypes…
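For reference, weighted least squares reduces to ordinary least squares by scaling each observation by the square root of its weight, so any plain solver (including llsq on the rescaled inputs) can be used. A minimal sketch with made-up X, y, and w, using only Base's backslash solver:
using Random
Random.seed!(1)               # reproducible toy data

X = randn(100, 2)             # made-up design matrix, observations in rows
y = X * [2.0, -1.0] .+ 0.1 .* randn(100)
w = rand(100)                 # made-up positive weights

sw = sqrt.(w)
β̂ = (sw .* X) \ (sw .* y)    # weighted least squares via the sqrt-weight trick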
How is this a regression? My understanding is that you are just fitting a multivariate normal.
If you want to deal with non-normal tails, the right approach would be something that allows for that, e.g. a multivariate t. Distributions.jl has it implemented, so you have the log likelihood, but AFAICT it has no fit_mle method; you can do the MLE yourself with a generic optimization package (e.g. Optim.jl). You may get away with fixing \nu at some value, e.g. 4 or 5.
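As a rough illustration of that route, here is a minimal sketch of direct MLE with Optim.jl, with \nu fixed at 4 and the scale restricted to an isotropic σ²·I (both restrictions are assumptions for illustration, chosen to match the symmetric case):
using Distributions, Optim, LinearAlgebra

ν = 4.0
data = rand(MvTDist(ν, [1.0, 2.0], Matrix(2.0I, 2, 2)), 1000)  # toy data, 2×1000

function negloglik(θ)
    μ = θ[1:2]
    σ² = exp(θ[3])  # log-parameterized so the scale stays positive
    -loglikelihood(MvTDist(ν, μ, σ² * Matrix{Float64}(I, 2, 2)), data)
end

res = optimize(negloglik, zeros(3), NelderMead())
μ̂ = Optim.minimizer(res)[1:2]
σ̂² = exp(Optim.minimizer(res)[3])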
I found my copy of the Kotz–Nadarajah book on the multivariate t at home, and apparently, for known \nu, there is a closed form for the bivariate case. See
@article{tiku1992estimation,
  title     = {Estimation and hypothesis testing for a new family of bivariate nonnormal distributions},
  author    = {Tiku, M. L. and Kambo, N. S.},
  journal   = {Communications in Statistics - Theory and Methods},
  volume    = {21},
  number    = {6},
  pages     = {1683--1705},
  year      = {1992},
  publisher = {Taylor \& Francis}
}
Apparently, if I am reading it correctly, the sufficient statistics are independent of \nu. So you can very quickly try various degrees of freedom and just pick the best fit.
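Without the closed form from the paper, a brute-force version of that scan could look like this (fit_fixed_ν is a hypothetical helper, again assuming an isotropic scale):
using Distributions, Optim, LinearAlgebra

function fit_fixed_ν(ν, data)
    nll(θ) = -loglikelihood(MvTDist(ν, θ[1:2], exp(θ[3]) * Matrix{Float64}(I, 2, 2)), data)
    res = optimize(nll, zeros(3), NelderMead())
    Optim.minimizer(res), -Optim.minimum(res)  # estimates and maximized log likelihood
end

data = rand(MvTDist(5.0, zeros(2), Matrix(1.0I, 2, 2)), 500)
fits = [(ν, fit_fixed_ν(ν, data)...) for ν in (3.0, 4.0, 5.0, 8.0)]
best = fits[argmax(last.(fits))]  # keep the ν with the highest log likelihood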
I’m sorry to admit that your help was a wee bit over my head and led me down a different path:
I can fit a normal distribution to the coordinates of the pixels, using the intensities as the weights. This works perfectly when there is no noise, but it breaks down completely if there is noise and/or if the image is much larger than the scale (say, the FWHM or sigma) of the distribution.
Here’s some quick and (really) dirty example code:
using Plots, Distributions
using LinearAlgebra: norm

# unnormalized symmetric (isotropic) 2D Gaussian
gauss(x, y, mux, muy, sig) = exp(-((x - mux)^2 + (y - muy)^2) / (2 * sig^2))

sig = sqrt(3)
mux = 4.5
muy = 6.0
f(x, y) = gauss(x, y, mux, muy, sig) + rand() / 10  # uniform "noise" added everywhere

x = Float64.(1:10)
y = Float64.(1:10)
X = x .+ 0 .* y'  # matrix of x coordinates
Y = 0 .* x .+ y'  # matrix of y coordinates
z = f.(X, Y)
h1 = heatmap(x, y, z)

# fit an isotropic normal to the pixel coordinates (2×N matrix), weighted by intensity
d = fit_mle(IsoNormal, Matrix(hcat(vec(X), vec(Y))'), vec(z))
z2 = [pdf(d, [i, j]) for (i, j) in zip(X, Y)]
h2 = heatmap(x, y, z2)

z *= sum(z2) / sum(z)  # rescale to the same total mass before comparing
plot(h1, h2)
norm(z - z2)
First, gauss is \propto a pdf; why are you adding random numbers to it? This does not make much sense. To generate random variates, you would draw from the distribution and then bin the draws like the image. What you are doing is closer to some non-standard overdispersed normal.
For binned data, your log likelihood function can be approximated as
\sum_i n_i \ell(x_i \mid \theta)
where \ell(x \mid \theta) is the log likelihood for the distribution, x_i is the center of each bin, n_i the count, and eg \theta = (\mu, \Sigma) are the parameters.
For the normal, there is a closed-form \theta that maximizes this (deriving it is a good exercise; you just get weighted means and variances, which you can calculate with mean and cov from StatsBase, with the n_i as the weights; see Weights, formerly WeightVec).
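For concreteness, a minimal sketch of that closed form with StatsBase, using made-up bin centers and counts:
using StatsBase

# hypothetical bin centers (one column per bin) and counts n_i
xs = [0.0 1.0 2.0 1.0;
      0.0 1.0 2.0 3.0]
ns = Weights([3.0, 5.0, 2.0, 1.0])

μ̂ = vec(mean(xs, ns, dims = 2))        # weighted mean of the bin centers
Σ̂ = cov(xs, ns, 2; corrected = false)  # weighted covariance, columns = observations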
For the multivariate t, you need the method I mentioned. But now I am not sure you actually need it; you just get wrong estimates because you are simulating something other than a multivariate normal.
Let me be clear and try a more correct way of generating some test data:
using Images, ImageView, Distributions, LinearAlgebra

d = MvNormal([1.0, 2.0], 2.0^2 * I)  # isotropic with standard deviation 2
img = centered(zeros(13, 13))        # 13×13 image indexed -6:6 along each axis
for rep in 1:1000
    i, j = round.(Int, rand(d))      # draw a point and bin it to the nearest pixel
    if checkbounds(Bool, img, i, j)
        img[i, j] += 1
    end
end
img = imadjustintensity(img)         # rescale intensities into [0, 1]
imshow(img)
Given an image similar to the one I generated above, is there a way to estimate the mean and variance of the original IsoNormal (what I called a symmetric Gaussian) distribution?
using Images, Distributions, StatsBase, LinearAlgebra

d = MvNormal([1.0, 2.0], 2.0^2 * I)
img = centered(zeros(13, 13))
for rep in 1:1000
    i, j = round.(Int, rand(d))
    if checkbounds(Bool, img, i, j)
        img[i, j] += 1
    end
end
# your code up to here, except I am not loading ImageView because I don't
# have it installed, and I am not adjusting the image; I don't know what that does
mid_x = collect(axes(img, 1))  # pixel coordinates along each axis (-6:6)
mid_y = collect(axes(img, 2))
# one (x, y) pair per pixel, ordered to match vec(img) (column-major)
x = repeat(mid_x; outer = length(mid_y))
y = repeat(mid_y; inner = length(mid_x))
μ1 = mean(x, Weights(vec(img)))
μ2 = mean(y, Weights(vec(img)))
Σ = var(hcat(x, y), Weights(vec(img)), 1; corrected = false)  # per-axis variances (1×2)
You can probably make the construction of x and y more sophisticated, but this is the gist.
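For example, one arguably tidier construction (my suggestion, not from the post above) uses Iterators.product, whose first-axis-fastest ordering matches vec(img):
coords = vec(collect(Iterators.product(axes(img, 1), axes(img, 2))))
x = first.(coords)  # first-axis coordinate of each pixel
y = last.(coords)   # second-axis coordinate of each pixel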