Compressed Sensing using StructuredOptimization.jl

Here is something cool I did: gist

using LinearAlgebra
using Random
using FFTW
using PyPlot
using StructuredOptimization

N = 256
P = 5
K = 32

# Pick out P random frequencies from 0:N÷2-1
freq = randperm(N ÷ 2) .- 1
freq = freq[1:P]

# Construct Signal
n = 0:N-1
x = zeros(N)
for f in freq
    x .+= sin.((2pi*f/N)*n)
end

# Construct the DFT matrix (FFT of each column of the identity)
Psi = fft(Matrix{Float64}(I, N, N), 1)
Psi_inv = conj(Psi)/N  # inverse DFT matrix
X = Psi*x # Fourier Components

# Keep K random samples of the signal
x_m = zeros(N)
q = randperm(N)[1:K]
x_m[q] .= x[q]

# Construct the system of linear equations
A = Psi_inv[q, :]
y = A*X  # y ≈ x_m[q] up to floating-point error

# Use StructuredOptimization.jl to solve the system with an L1-norm penalty
xx = Variable(Complex{Float64}, N)
lambda = 1e-3*norm(A'*y,Inf)
@minimize ls(A*xx - y) + lambda*norm(xx,1)

# Plot it
fig, ax = plt.subplots(ncols=2)
fig.set_size_inches(12, 4)
ax[1].plot(1:N, x, label="True Signal")
ax[1].set_xlabel("Time")
ax[1].set_ylabel("Amplitude")
ax[1].plot((1:N)[q], x_m[q], "x", label="Data Points")
ax[1].plot(1:N, real.(Psi_inv*(~xx)), ls="--", label="Reconstructed Signal")  # real part for plotting
ax[1].legend()

ax[2].plot(abs.(X), label="True Frequency")
ax[2].plot(abs.(~xx), ls="--", color="green", label="Reconstructed Frequency")
ax[2].set_xlim(1,N/2)
ax[2].set_xlabel("Frequency")
ax[2].set_ylabel("Amplitude")
ax[2].legend(loc=2)


I would love to have a blog of compressed sensing examples! I could probably help contribute one or two. Have you seen JuliaFirstOrder/ProximalOperators.jl (proximal operators for nonsmooth optimization in Julia)?
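
For reference, a minimal sketch of what ProximalOperators.jl provides; the λ and the test vector below are illustrative values of my own, not taken from any demo:

using ProximalOperators

# Soft-thresholding: the proximal operator of the L1 norm,
# the same building block behind the norm(xx, 1) term above.
f = NormL1(0.5)               # illustrative λ = 0.5
x = [1.0, -0.2, 0.7, -3.0]
y, fy = prox(f, x, 1.0)       # y is the soft-thresholded x, fy = f(y)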

I have implemented some sparse spectral estimation methods here


We are working on compressed sensing in the field of image reconstruction. The basic solvers live in tknopp/RegularizedLeastSquares.jl, where one can plug in different imaging operators and sparsity constraints. This is used in our MRI package, MagneticResonanceImaging/MRIReco.jl (a Julia package for MRI reconstruction).


That looks like an awesome package!

There are a few demos of StructuredOptimization doing similar things! Most of them deal with compressed sensing.


I’ve actually played around with StructuredOptimization today. I’m solving a robust PCA problem, for which I have a custom solver that works quite well. I would like to experiment with variations of this problem, but I am not getting StructuredOptimization to converge unless the problem size is really small. The size below is borderline. Maybe you have some idea of how to choose the solver and parameters for this kind of problem?

using TotalLeastSquares, Random, DSP

## Generate chirp signal
T = 2^14
e = randn(T)
fs = 8_000
t = range(0,step=1/fs, length=T)
chirp_f = LinRange(2000, 2200, T)
y = sin.(2pi .* chirp_f .* t )
yn = y + e
H = hankel(yn, 100)
@time TotalLeastSquares.rpca(H, tol=1e-3)
# about 6 seconds


using StructuredOptimization
λ = 1/sqrt(maximum(size(H)))  # standard robust-PCA weight 1/√max(n, m)
L = Variable(size(H)...)
S = Variable(size(H)...)
@time @minimize ls(L+S-H) + λ*norm(S,1) st rank(L)<=4 with PANOC(tol=1e-2)
# > 1 minute with 60GiB allocations

Concerning speed, rank takes a truncated SVD, which is not done in place at the moment. I’m not sure whether this is the bottleneck compared to TotalLeastSquares, which also seems to exploit the Hankel structure of the matrix. But surely the SVD is expensive, since it is taken at each iteration.
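
To get a rough feel for that cost (a back-of-the-envelope check of my own; the Hankel matrix in the example above is roughly 16000 × 100, up to hankel's orientation convention):

using LinearAlgebra

A = randn(16285, 100)  # approximately the size of H above
@time svd(A)           # this price is paid once per iteration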

One thing you could try to avoid svd is to solve this instead:

using StructuredOptimization
n = 10
H = randn(n,n)
r = 4
U = Variable(randn(n,r))
Q = Variable(randn(r,n))
S = Variable(n,n)
lambda = 1/sqrt(maximum(size(H)))  # standard robust-PCA weight
@minimize ls(U*Q+S-H) + lambda*norm(S,1)

Of course you can add some constraints/regularizations on U and Q.
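
For instance, a hedged sketch of what that could look like, assuming scaled smooth terms compose like the penalized terms above; mu is an illustrative weight of my own, not from the discussion:

# Hypothetical Tikhonov-style terms on the factors; mu would need tuning.
mu = 1e-2
@minimize ls(U*Q + S - H) + lambda*norm(S, 1) + mu*ls(U) + mu*ls(Q)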

About parameter tuning, I’m not that experienced with this problem; I’ve only worked on it in the demo, with hand-tuned parameters. For sure, if you choose a rank that is too low or a lambda that is too large, you won’t be able to properly minimize the least-squares term.
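
One illustrative (untested) way to probe this on the original problem is to sweep lambda and watch the data fit; the values below are arbitrary, not recommendations:

for lam in (1e-1, 1e-2, 1e-3)
    L = Variable(size(H)...)
    S = Variable(size(H)...)
    @minimize ls(L + S - H) + lam*norm(S, 1) st rank(L) <= 4
    @show lam, norm(~L + ~S - H)  # residual of the least-squares term
end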


rpca used in the example uses the NuclearNorm proximal operator, so it also performs an SVD in each step. Is it possible to use rank(L) as a regularization term, or can it only be used as a constraint in StructuredOptimization?

@time @minimize ls(L+S-H) + λ*norm(S,1) + rank(L)

throws an error about no prox available for Rank.

It is possible; the notation is the following:

using StructuredOptimization
n = 10
H = randn(n,n)
lambda1, lambdan = 1/sqrt(maximum(size(H))), 1e-3  # L1 and nuclear-norm weights
L, S = Variable(size(H)...),  Variable(size(H)...)
@minimize ls(L+S-H) + lambda1*norm(S,1) + lambdan*norm(L,*)

@baggepinnen for clarity, note that norm(L, *) will result in a nuclear-norm penalty, not a rank penalty (which may be what you originally wanted).
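
To make the distinction concrete (a quick check of my own): the rank counts the nonzero singular values, while the nuclear norm sums them.

using LinearAlgebra

A = randn(10, 3) * randn(3, 10)  # a rank-3 matrix
rank(A)                          # 3
sum(svdvals(A))                  # nuclear norm ‖A‖_* = σ₁ + σ₂ + σ₃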


Sorry, I was not being entirely clear; I was thinking about the nuclear norm, of course :smiley:


Hi,

Sometimes (when re-evaluating q = randperm(N)) the reconstructed signal matches the true signal very well, and sometimes it looks like the attached screenshot. Is this expected?

[Attached screenshot: a run where the reconstructed signal visibly deviates from the true signal]