Nullspace property

In compressed sensing, the nullspace property gives necessary and sufficient conditions on the reconstruction of sparse signals using the techniques of $\ell_1$-relaxation. The term "nullspace property" originates from Cohen, Dahmen, and DeVore.[1] The nullspace property is often difficult to check in practice, and the restricted isometry property is a more modern condition in the field of compressed sensing.

The technique of $\ell_1$-relaxation

The non-convex $\ell_0$-minimization problem,

$$\min_{x} \|x\|_0 \quad \text{subject to} \quad Ax = b,$$

is a standard problem in compressed sensing. However, $\ell_0$-minimization is known to be NP-hard in general.[2] As such, the technique of $\ell_1$-relaxation is sometimes employed to circumvent the difficulties of signal reconstruction using the $\ell_0$-norm. In $\ell_1$-relaxation, the $\ell_1$ problem,

$$\min_{x} \|x\|_1 \quad \text{subject to} \quad Ax = b,$$

is solved in place of the $\ell_0$ problem. Note that this relaxation is convex and hence amenable to the standard techniques of linear programming, a computationally desirable feature. Naturally, we wish to know when $\ell_1$-relaxation will give the same answer as the $\ell_0$ problem. The nullspace property is one way to guarantee agreement.
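As a small illustration (not from the original article), consider a system $Ax = b$ whose solution set is a line $\{x_0 + t\eta\}$, where $\eta$ spans the one-dimensional nullspace of $A$. Along that line, the $\ell_1$ objective is a convex piecewise-linear function of $t$, so its minimum is attained at a breakpoint where some coordinate vanishes, and the minimizer can be found by direct inspection rather than a full LP solver. The matrix and vectors below are hypothetical examples chosen for illustration; in practice one would solve the linear program directly.

```python
def l1_min_on_line(x0, eta):
    """Minimize ||x0 + t*eta||_1 over t, for a system Ax = b whose
    solution set is the line {x0 + t*eta}.  The objective is convex
    and piecewise linear in t, so its minimum is attained at a
    breakpoint t = -x0[i]/eta[i] where some coordinate vanishes."""
    norm = lambda t: sum(abs(x0[i] + t * eta[i]) for i in range(len(x0)))
    breakpoints = [-x0[i] / eta[i] for i in range(len(eta)) if eta[i] != 0]
    t_best = min(breakpoints, key=norm)
    return [x0[i] + t_best * eta[i] for i in range(len(x0))]

# Hypothetical example: A = [[1, 0, 1], [0, 1, 1]] has nullspace
# spanned by eta = (1, 1, -1).  For the 1-sparse signal
# x = (0, 0, 2) we have b = Ax = (2, 2), and every solution of
# Az = b has the form x + t*eta with l1 norm 2|t| + |2 - t|.
x_hat = l1_min_on_line([0.0, 0.0, 2.0], [1.0, 1.0, -1.0])
print(x_hat)  # the l1 minimizer recovers the sparse signal [0.0, 0.0, 2.0]
```

Here the $\ell_1$ minimizer coincides with the sparse signal, as the nullspace property (defined below) predicts for this matrix.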

Definition

An $m \times n$ complex matrix $A$ has the nullspace property of order $s$ if, for all index sets $S \subseteq \{1, \dots, n\}$ with $|S| \leq s$, we have $\|\eta_S\|_1 < \|\eta_{S^C}\|_1$ for all $\eta \in \ker A \setminus \{0\}$. Here $\eta_S$ denotes the vector that agrees with $\eta$ on the indices in $S$ and is zero elsewhere, and $S^C$ is the complement of $S$.
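When the nullspace of $A$ is one-dimensional, the definition can be checked directly: the inequality is invariant under scaling $\eta$, so testing a single spanning vector suffices, and it is enough to test index sets of size exactly $s$, since shrinking $S$ only decreases the left side and increases the right side. A minimal sketch under those assumptions (the spanning vector below corresponds to the illustrative matrix $A = [[1, 0, 1], [0, 1, 1]]$):

```python
from itertools import combinations

def has_nsp_order_s(eta, s):
    """Check the order-s nullspace property for a matrix whose
    nullspace is one-dimensional, spanned by eta.  The NSP
    inequality is scale-invariant, so checking the single basis
    vector suffices, and checking |S| = s covers all |S| <= s."""
    n = len(eta)
    for S in combinations(range(n), s):
        lhs = sum(abs(eta[i]) for i in S)                        # ||eta_S||_1
        rhs = sum(abs(eta[i]) for i in range(n) if i not in S)   # ||eta_{S^C}||_1
        if not lhs < rhs:
            return False
    return True

# eta = (1, 1, -1) spans the nullspace of A = [[1, 0, 1], [0, 1, 1]].
eta = (1.0, 1.0, -1.0)
print(has_nsp_order_s(eta, 1))  # True: |eta_S| = 1 < 2 = |eta_{S^C}| for every singleton S
print(has_nsp_order_s(eta, 2))  # False: S = {0, 1} gives 2 on the left but only 1 on the right
```

For general matrices the nullspace is higher-dimensional and no such finite check is available, which is why the property is difficult to verify in practice.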

Recovery Condition

The following theorem gives a necessary and sufficient condition for the recovery of every $s$-sparse vector in $\mathbb{C}^n$. The proof of the theorem is a standard one, and the proof supplied here is summarized from Holger Rauhut.[3]

$\textbf{Theorem:}$ Let $A$ be an $m \times n$ complex matrix. Then every $s$-sparse signal $x \in \mathbb{C}^n$ is the unique solution to the $\ell_1$-relaxation problem with $b = Ax$ if and only if $A$ satisfies the nullspace property of order $s$.
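The theorem can be sanity-checked numerically on the earlier toy example. For real data, when the nullspace is spanned by a single vector $\eta$, the feasible set is the line $t \mapsto x + t\eta$, and $x$ is the unique $\ell_1$ minimizer exactly when the one-sided derivatives of $f(t) = \|x + t\eta\|_1$ at $t = 0$ satisfy $f'(0^+) > 0$ and $f'(0^-) < 0$; by convexity, a strict local minimum on the line is the unique global one. A hedged sketch under those assumptions (the matrix and signals are illustrative, not from the article):

```python
def unique_l1_minimizer(x, eta):
    """For real x, check that x is the strict l1 minimizer along the
    line t -> x + t*eta via the one-sided derivatives of
    f(t) = ||x + t*eta||_1 at t = 0: each coordinate with x[i] != 0
    contributes sign(x[i])*eta[i] to both derivatives, while a zero
    coordinate contributes +|eta[i]| on the right and -|eta[i]| on
    the left."""
    right = sum((1 if x[i] > 0 else -1) * eta[i] if x[i] != 0 else abs(eta[i])
                for i in range(len(x)))
    left = sum((1 if x[i] > 0 else -1) * eta[i] if x[i] != 0 else -abs(eta[i])
               for i in range(len(x)))
    return right > 0 and left < 0

# eta = (1, 1, -1) spans the nullspace of the illustrative matrix
# A = [[1, 0, 1], [0, 1, 1]], which satisfies the NSP of order 1
# but not of order 2.
eta = (1.0, 1.0, -1.0)
print(unique_l1_minimizer((2.0, 0.0, 0.0), eta))  # True: 1-sparse signals are uniquely recovered
print(unique_l1_minimizer((1.0, 1.0, 0.0), eta))  # False: recovery fails for this 2-sparse signal
```

This matches the theorem: recovery succeeds for every $1$-sparse signal because the matrix has the nullspace property of order $1$, and fails for a $2$-sparse signal because the order-$2$ property does not hold (indeed, $(1, 1, 0) - \tfrac{1}{2}\eta = (\tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2})$ has smaller $\ell_1$ norm).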

$\textit{Proof:}$ For the forwards direction, let $\eta \in \ker A \setminus \{0\}$ and let $S$ be any index set with $|S| \leq s$. Notice that $\eta_S$ and $-\eta_{S^C}$ are distinct vectors (otherwise $\eta$ would be zero) with $A(-\eta_{S^C}) = A(\eta_S)$ by the linearity of $A$ and the fact that $A\eta = 0$. Since $\eta_S$ is $s$-sparse, it is by hypothesis the unique $\ell_1$ minimizer among vectors with the same image, and hence we must have $\|\eta_S\|_1 < \|-\eta_{S^C}\|_1 = \|\eta_{S^C}\|_1$ as desired. For the backwards direction, let $x$ be $s$-sparse and $z$ another (not necessarily $s$-sparse) vector such that $z \neq x$ and $Az = Ax$. Define the (non-zero) vector $\eta = x - z$ and notice that it lies in the nullspace of $A$. Call $S$ the support of $x$; since $x$ vanishes off $S$, we have $x - z_S = \eta_S$ and $\eta_{S^C} = -z_{S^C}$. The result then follows from an elementary application of the triangle inequality and the nullspace property:

$$\|x\|_1 \leq \|x - z_S\|_1 + \|z_S\|_1 = \|\eta_S\|_1 + \|z_S\|_1 < \|\eta_{S^C}\|_1 + \|z_S\|_1 = \|-z_{S^C}\|_1 + \|z_S\|_1 = \|z\|_1,$$

establishing the minimality of $x$. $\square$

References

  1. ^ Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald (2009-01-01). "Compressed sensing and best k-term approximation". Journal of the American Mathematical Society. 22 (1): 211–231. doi:10.1090/S0894-0347-08-00610-3. ISSN 0894-0347.
  2. ^ Natarajan, B. K. (1995-04-01). "Sparse Approximate Solutions to Linear Systems". SIAM J. Comput. 24 (2): 227–234. doi:10.1137/S0097539792240406. ISSN 0097-5397. S2CID 2072045.
  3. ^ Rauhut, Holger (2011). Compressive Sensing and Structured Random Matrices. CiteSeerX 10.1.1.185.3754.