I’m on a quest to approach the hobby of astrophotography from a data science perspective. I’ve put several restrictions on myself (all processing is done with free software, and no machine-learned tools are allowed) because necessity is the mother of invention. And, honestly, I prefer my image processing tools to not be black boxes or subject to biases from training data.
I’ve found myself a bit dissatisfied with the denoisers readily available in the software I use, and I thought it would be an interesting idea to not only implement denoising methods from the literature in Julia, but to try to build a denoising tool specifically designed for astrophotographers and their unique imaging techniques and processing needs.
The method I want to develop should:
- assume the image is linear. This would be something run before stretching the image.
- deal with shot (Poisson) noise as the dominant noise contribution. Most denoising algorithms are designed with additive white Gaussian noise in mind, which is not a good model for the noise in an astrophotograph. However, sky backgrounds (and thus the magnitude of shot noise) can vary due to differences in skyglow (the moon, light pollution, etc.).
- be maximally informed by the collected data. We have access to not just an image, but many images (including calibration frames), so we can work with more statistical descriptors (such as pixel variance/standard deviation, skewness, etc.).
- account for properties of the optical system, if known. It’s very easy to construct a point spread function from images of stars, or even from just knowing what optical system was used (see the sketch after this list). Some denoisers (like TV) are good at preserving edges, but that may not make a lot of sense in the context of an oversampled image, where there is a limit to how sharp an edge can be.
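For concreteness, here’s a minimal sketch of constructing an analytic PSF from a known FWHM (the FWHM value and profile parameters below are illustrative assumptions; a Moffat profile is a common model for ground-based seeing):

```julia
fwhm = 3.2                                 # assumed FWHM in pixels (illustrative)

# Gaussian PSF: FWHM = 2√(2 ln 2) σ
σ = fwhm / (2 * sqrt(2 * log(2)))
gauss(r) = exp(-r^2 / (2 * σ^2))

# Moffat PSF: FWHM = 2α√(2^(1/β) − 1); β ≈ 2.5–4.5 is typical for seeing
β = 3.0
α = fwhm / (2 * sqrt(2^(1 / β) - 1))
moffat(r) = (1 + (r / α)^2)^(-β)

# Sample onto a pixel grid and normalize to unit sum
R = 7
psf = [moffat(hypot(x, y)) for y in -R:R, x in -R:R]
psf ./= sum(psf)
```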
I’ve been working on an implementation of total generalized variation denoising since it’s a pretty reliable method, and so far it kinda works? If I allow too many iterations, my stars turn into literal button cells though (probably because the stars are being treated as discontinuities in the image rather than smoothly varying, or more likely because I’ve done something very wrong in my implementation - any insights are appreciated):
I do find it interesting that it seems to blow all stars up to roughly the same size, regardless of their magnitude. Either way, I feel like TGV denoising is a good standard to hold any new method to.
And if you check this simple example, you can do all sorts of custom deconvolutions (many images as measurements → one reconstruction).
In general, any inverse problem where you describe the forward pass physically (such as convolution with a PSF) in an auto-differentiable framework can be solved under different noise scenarios by choosing appropriate loss functions.
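As a minimal sketch of the pattern, assuming Zygote.jl for autodiff and an FFT-based circular convolution as the physical forward model (all data and names here are placeholders):

```julia
using FFTW, Zygote

psf = rand(64, 64); psf ./= sum(psf)       # placeholder PSF, normalized
otf = fft(psf)                             # optical transfer function

forward(x) = real.(ifft(fft(x) .* otf))    # forward pass: circular blur by the PSF

y = rand(64, 64)                           # placeholder measurement
loss(x) = sum(abs2, forward(x) .- y)       # least squares; swap in a Poisson NLL for shot noise

x0 = copy(y)
g = first(Zygote.gradient(loss, x0))       # gradient to feed any first-order solver
```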
Often, low-light imaging is better fit by Poisson statistics. One big difference from a Normal distribution is that the noise is intensity-dependent (the variance equals the mean). One way to handle this is to use a Poisson maximum log-likelihood objective function rather than least squares (which is maximum log-likelihood Normal mean estimation). It’s still a convex objective, but one that requires an iterative solution (although TV already requires one). An easier approximation is to apply a variance-stabilizing transformation, in this case taking the per-pixel square root of the image, and then use least squares.
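A hedged sketch of those two options (the function names are mine, not a library’s; λ is the model’s predicted mean count per pixel, y the observed image):

```julia
# Poisson negative log-likelihood, dropping the log(y!) term, which is
# constant in λ and doesn't affect the optimum. The eps() guard against
# log(0) is an illustration choice.
poisson_nll(λ, y) = sum(λ .- y .* log.(λ .+ eps()))

# Gaussian NLL with constant variance reduces to least squares:
gaussian_nll(λ, y) = sum(abs2, λ .- y)

# Variance-stabilizing alternative: the per-pixel square root (or the
# closely related Anscombe transform) makes Poisson data approximately
# constant-variance, after which least squares is a reasonable fit.
vst(y) = sqrt.(y)
anscombe(y) = 2 .* sqrt.(y .+ 3 / 8)
```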
I don’t know anything about total generalized variation denoising, but vanilla TV is convex, so it converges to a solution and doesn’t have issues with too many iterations. Skimming the first page of a web search, it isn’t obvious to me that TGV is nonconvex either. Assuming it is convex, the fact that your TGV has issues with too many iterations suggests that you might have an implementation issue (possibly with step size?). I couldn’t grok the source code very quickly, so I couldn’t tell from looking.
There is a whole JuliaAstro organization that maintains many packages relevant to astronomy. These may be geared more toward “professional” uses (accurate photometry, astrometry, etc.) and less toward making pretty images, but I would check them out. This package comparison page has a huge list of astronomy-related packages in Julia (and Python, but ignore those). Maybe you will find something that will help your case.
As a first look, you might try CCDReduction.jl, AstroImages.jl, and ImageFiltering.jl. (Note that ImageFiltering is part of the JuliaImages group, not JuliaAstro.)
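Something like this, roughly (APIs from memory, so check each package’s docs; the file name is a placeholder):

```julia
using AstroImages, ImageFiltering

img = AstroImage("light_frame.fits")         # load a linear FITS frame
sm  = imfilter(img, Kernel.gaussian(1.5))    # quick Gaussian smooth, just for comparison
```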
For reference, I have no formal background in statistics (I’m a chemist - it turns out we can get away with not taking statistics or even linear algebra), so I’m drinking from a fire hose here, but if I can reduce noise with a deconvolution method (perhaps also using TGV regularization?) I’m definitely interested in that. An understanding of the math is still outside my grasp at the moment, though.
After taking a quick glance at your paper though, it’s pretty refreshing to read something that isn’t a fire hose…
There are other deconvolution methods used in astrophotography (split Bregman, for instance) but they do not handle stars in a linear image very well and generate awful ringing artifacts. They’re great for planetary imagers though.
I reimplemented TGV denoising from the aws R package, but I’m bound to have made some mistakes. Additionally, I noticed that it would generate artifacts at the image edges, but I’ve solved that by using central differences everywhere.
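Roughly, the stencil change looks like this (a toy sketch of the idea, not the actual implementation):

```julia
# Horizontal forward difference with the boundary column left at zero: the
# one-sided stencil at the edge is the kind of thing that can produce edge
# artifacts in the solver.
function dx_forward(u)
    d = zero(u)
    d[:, 1:end-1] .= u[:, 2:end] .- u[:, 1:end-1]   # last column stays 0
    return d
end

# Horizontal central difference: symmetric stencil, gentler at the edges.
function dx_central(u)
    d = zero(u)
    d[:, 2:end-1] .= (u[:, 3:end] .- u[:, 1:end-2]) ./ 2
    return d
end
```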
There is also the strong possibility that I’m not choosing reasonable regularization parameters for the data, because the image in its linear state looks like this (it’s a 32-bit floating point image with a mean pixel value of about 0.0005) and what I showed initially was a stretched preview. The original image really looks like this:
I haven’t had the chance to really experiment with the parameters, since performing 100+ iterations takes a while even on my beefy hardware and with some performance improvements I’ve made (mostly centered around limiting allocations). I should try an Anscombe VST on the image though.
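In Julia, limiting allocations in a loop like this usually means preallocating buffers and fusing in-place updates; a toy sketch of the pattern (an illustrative gradient-descent step, not the actual TGV solver):

```julia
f = rand(Float32, 512, 512)      # noisy input (placeholder data)
u = copy(f)                      # current iterate
g = similar(u)                   # preallocated gradient buffer
τ = 0.1f0                        # step size

for _ in 1:100
    @. g = u - f                 # gradient of ½‖u − f‖²; stands in for the TGV gradient
    @. u = u - τ * g             # fused in-place update: no per-iteration allocations
end
```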
That’s not unexpected. Although regularization can counter it to some degree, deconvolution fundamentally improves contrast at the expense of amplifying noise.
Yeah, that’s why I treated PSF-informed denoising as something different from deconvolution.
Total variation denoising tries to preserve edges at the expense of staircasing where there are gradients. This is fine for certain kinds of images, but when the image is blurred by a PSF (especially if it’s oversampled), there is a limit to the magnitude of the gradient defining an edge, assuming that the image has a bound on maximum brightness per pixel. So having a denoising method informed by a PSF estimate (whether or not it’s constructed like TV) would be interesting.
Of course, this might already exist and I just haven’t scoured enough literature to find it.
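To make the edge-gradient limit above concrete, here’s a quick numerical sketch (using ImageFiltering’s Gaussian kernel as a stand-in PSF): blurring a unit step edge bounds the maximum per-pixel gradient, and the bound tightens as the PSF widens.

```julia
using ImageFiltering

edge = [x ≥ 50 ? 1.0 : 0.0 for y in 1:100, x in 1:100]   # unit step edge
for σ in (0.8, 1.5, 3.0)                                  # PSF widths in pixels
    blurred = imfilter(edge, Kernel.gaussian(σ))
    println("σ = $σ: max gradient ≈ ", maximum(abs, diff(blurred; dims=2)))
end
```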