
Compressed Sensing of Multi-Channel EEG Signals: The Simultaneous Cosparsity and Low Rank Optimization

Posted by: Dr. Yipeng Liu
Publication date: 2015
Research field: Informatics Engineering
Paper language: English





Goal: This paper addresses two problems in compressed sensing of multi-channel EEG signals: some EEG signals have no good sparse representation, and single-channel processing is not computationally efficient. Methods: An optimization model with the L0 norm and the Schatten-0 norm is proposed to enforce cosparsity and low-rank structure in the reconstructed multi-channel EEG signals. Both convex relaxation and global consensus optimization with the alternating direction method of multipliers (ADMM) are used to solve the optimization model. Results: The performance of multi-channel EEG signal reconstruction is improved in terms of both accuracy and computational complexity. Conclusion: The proposed method is a better candidate than previous sparse signal recovery methods for compressed sensing of EEG signals. Significance: The proposed method enables successful compressed sensing of EEG signals even when the signals have no good sparse representation. Using compressed sensing would greatly reduce the power consumption of wireless EEG systems.
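For illustration, the following is a minimal sketch of the convex relaxation described above: an l1 penalty on the analysis (cosparsity) coefficients plus a nuclear-norm penalty on the multi-channel signal matrix, solved here with the general-purpose CVXPY solver rather than the authors' ADMM-based global consensus implementation. The second-difference analysis operator, the Gaussian sensing matrix, the problem sizes, and the weight lam are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: convex surrogate of the cosparse + low-rank model
# (l1-analysis + nuclear norm), not the authors' ADMM consensus solver.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, c, m, r = 128, 8, 64, 2          # samples, channels, measurements, rank (assumed)

# Rank-r multi-channel signal whose channels mix piecewise-linear sources,
# so the second-difference analysis coefficients are sparse (cosparse model).
spikes = np.zeros((n, r))
spikes[rng.choice(n, 10, replace=False), :] = rng.standard_normal((10, r))
sources = np.cumsum(np.cumsum(spikes, axis=0), axis=0)
X_true = sources @ rng.standard_normal((r, c))

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # per-channel Gaussian sensing matrix
Omega = np.diff(np.eye(n), 2, axis=0)            # second-order difference analysis operator
Y = Phi @ X_true                                 # noiseless compressed measurements

lam = 0.5                                        # cosparsity vs. low-rank trade-off (assumed)
X = cp.Variable((n, c))
objective = cp.Minimize(cp.sum(cp.abs(Omega @ X)) + lam * cp.normNuc(X))
prob = cp.Problem(objective, [Phi @ X == Y])
prob.solve()
print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```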




Read also

This paper addresses compressive sensing for multi-channel ECG. Compared to the traditional sparse signal recovery approach, which decomposes the signal into the product of a dictionary and a sparse vector, the recently developed cosparse approach exploits sparsity of the product of an analysis matrix and the original signal. We apply the cosparse Greedy Analysis Pursuit (GAP) algorithm for compressive sensing of ECG signals. Moreover, to reduce processing time, the classical single-channel GAP is generalized to the multi-channel GAP algorithm, which simultaneously reconstructs multiple signals with similar support. Numerical experiments show that the proposed method outperforms the classical sparse multi-channel greedy algorithms in terms of accuracy and the single-channel cosparse approach in terms of processing speed.
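As a companion illustration, below is a simplified sketch of the multi-channel GAP idea: all channels share one cosupport, which is pruned greedily while a regularized analysis least-squares step produces the current signal estimate. The regularized substep, the weight lam, and the fixed number of row removals are assumptions and do not reproduce the exact GAP update from the paper.

```python
# Hedged sketch of multi-channel Greedy Analysis Pursuit (shared cosupport).
import numpy as np

def multichannel_gap(Y, M, Omega, n_remove, lam=1e2):
    """Y: (m, channels) measurements, M: (m, n) sensing matrix,
    Omega: (p, n) analysis operator. Greedily removes the n_remove analysis
    rows that are most active across all channels (assumed simplification)."""
    p, n = Omega.shape
    cosupport = np.arange(p)                     # start from the full analysis operator
    X = np.zeros((n, Y.shape[1]))
    for _ in range(n_remove):
        Om = Omega[cosupport]
        # Regularized least squares: argmin_X ||Y - M X||_F^2 + lam * ||Om X||_F^2
        X = np.linalg.solve(M.T @ M + lam * (Om.T @ Om), M.T @ Y)
        # Drop the cosupport row with the largest joint (across-channel) activity.
        activity = np.linalg.norm(Om @ X, axis=1)
        cosupport = np.delete(cosupport, np.argmax(activity))
    return X, cosupport

# Example call with the Phi / Omega / Y constructed in the sketch above:
# X_hat, cosupport = multichannel_gap(Y, Phi, Omega, n_remove=12)
```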
Xiaochen Zhao, Wei Dai (2014)
This paper studies the problem of power allocation in compressed sensing when different components in the unknown sparse signal have different probabilities of being non-zero. Given the prior information on the non-uniform sparsity and the total power budget, we are interested in how to optimally allocate the power across the columns of a Gaussian random measurement matrix so that the mean squared reconstruction error is minimized. Based on the state evolution technique originating from the work of Donoho, Maleki, and Montanari, we revise the so-called approximate message passing (AMP) algorithm for the reconstruction and quantify the MSE performance in the asymptotic regime. Then the closed form of the optimal power allocation is obtained. The results show that in the presence of measurement noise, uniform power allocation, which results in the commonly used Gaussian random matrix with i.i.d. entries, is not optimal for non-uniformly sparse signals. Empirical results are presented to demonstrate the performance gain.
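For context, here is a minimal sketch of the AMP reconstruction iteration the abstract refers to: soft thresholding of the matched-filter estimate plus the Onsager correction of the residual. The threshold rule (a multiple of a residual-based noise estimate) is an assumed heuristic, and the power-allocation design itself is not shown.

```python
# Hedged sketch of approximate message passing (AMP) with soft thresholding.
import numpy as np

def soft(u, t):
    """Entrywise soft-thresholding operator."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, n_iter=30, alpha=1.5):
    """Recover a sparse x from y = A x + noise. The threshold alpha * sigma_t,
    with sigma_t estimated from the residual, is an assumed heuristic."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        sigma_t = np.linalg.norm(z) / np.sqrt(m)        # per-iteration noise estimate
        x_new = soft(x + A.T @ z, alpha * sigma_t)
        onsager = (z / m) * np.count_nonzero(x_new)     # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x
```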
Motivated by applications in unsourced random access, this paper develops a novel scheme for the problem of compressed sensing of binary signals. In this problem, the goal is to design a sensing matrix $A$ and a recovery algorithm such that the sparse binary vector $\mathbf{x}$ can be recovered reliably from the measurements $\mathbf{y}=A\mathbf{x}+\sigma\mathbf{z}$, where $\mathbf{z}$ is additive white Gaussian noise. We propose to design $A$ as the parity-check matrix of a low-density parity-check (LDPC) code, and to recover $\mathbf{x}$ from the measurements $\mathbf{y}$ using a Markov chain Monte Carlo algorithm, which runs relatively fast due to the sparse structure of $A$. The performance of our scheme is comparable to state-of-the-art schemes, which use dense sensing matrices, while enjoying the advantages of using a sparse sensing matrix.
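A hedged sketch of the recovery idea follows: a single-bit-flip Metropolis sampler targeting the posterior of a sparse binary vector under Gaussian noise, with a sensing matrix that may be stored in sparse form. The Bernoulli prior weight p_on, the sweep count, and the bit-flip proposal are generic assumptions, not the authors' specific Markov chain.

```python
# Hedged sketch: single-bit-flip Metropolis sampler for binary compressed sensing.
import numpy as np
from scipy import sparse

def mcmc_binary_cs(y, A, sigma, p_on=0.05, n_sweeps=200, seed=0):
    """Sample a sparse binary x given y = A x + sigma * z (z white Gaussian).
    A may be dense or scipy.sparse; p_on is an assumed Bernoulli prior weight."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    r = y - A @ x                                   # current residual
    log_odds_on = np.log(p_on) - np.log(1.0 - p_on)
    for _ in range(n_sweeps):
        for j in rng.permutation(n):
            a_j = A[:, j].toarray().ravel() if sparse.issparse(A) else A[:, j]
            delta = 1.0 - 2.0 * x[j]                # proposed flip: 0 -> 1 or 1 -> 0
            r_new = r - delta * a_j
            d_logpost = (r @ r - r_new @ r_new) / (2.0 * sigma ** 2) + delta * log_odds_on
            if np.log(rng.random()) < d_logpost:    # Metropolis accept/reject
                x[j] += delta
                r = r_new
    return x
```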
Xampling generalizes compressed sensing (CS) to reduced-rate sampling of analog signals. A unified framework is introduced for low-rate sampling and processing of signals lying in a union of subspaces. Xampling consists of two main blocks: analog compression that narrows down the input bandwidth prior to sampling with commercial devices, followed by a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A variety of analog CS applications are reviewed within the unified Xampling framework, including a general filter-bank scheme for sparse shift-invariant spaces, periodic nonuniform sampling and modulated wideband conversion for multiband communications with unknown carrier frequencies, acquisition techniques for finite rate of innovation signals with applications to medical and radar imaging, and random demodulation of sparse harmonic tones. A hardware-oriented viewpoint is advocated throughout, addressing practical constraints and exemplifying hardware realizations where relevant. It will appear as a chapter in a book on Compressed Sensing: Theory and Applications edited by Yonina Eldar and Gitta Kutyniok.
Jared Tanner, Simon Vary (2020)
Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data. This model is the foundation of robust principal component analysis (Candes et al., 2011) (Chandrasekaran et al., 2009), and was popularized by dynamic-foreground/static-background separation (Bouwmans et al., 2016) amongst other applications. Compressed sensing, matrix completion, and their variants (Eldar and Kutyniok, 2012) (Foucart and Rauhut, 2013) have established that data satisfying low complexity models can be efficiently measured and recovered from a number of measurements proportional to the model complexity rather than the ambient dimension. This manuscript develops similar guarantees showing that $m\times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods from $\mathcal{O}((r(m+n-r)+s)\log(mn/s))$ linear measurements. More specifically, we establish that the restricted isometry constants for the aforementioned matrices remain bounded independent of problem size provided $p/mn$, $s/p$, and $r(m+n-r)/p$ remain fixed. Additionally, we show that semidefinite programming and two hard thresholding gradient descent algorithms, NIHT and NAHT, converge to the measured matrix provided the measurement operator's RICs are sufficiently small. Numerical experiments illustrating these results are shown for synthetic problems, dynamic-foreground/static-background separation, and multispectral imaging.
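Below is a simplified sketch of the alternating hard-thresholding idea behind NIHT/NAHT-style recovery of a rank-r plus s-sparse matrix from linear measurements. The plain alternating projected-gradient loop and the fixed step size are assumptions; they do not reproduce the normalized step-size rules analyzed in the paper.

```python
# Hedged sketch: alternating hard thresholding for a low-rank plus sparse model.
import numpy as np

def proj_rank(X, r):
    """Project onto matrices of rank at most r via a truncated SVD."""
    U, sv, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r]

def proj_sparse(X, s):
    """Keep the s largest-magnitude entries of X, zero the rest."""
    out = np.zeros_like(X)
    idx = np.unravel_index(np.argsort(np.abs(X), axis=None)[-s:], X.shape)
    out[idx] = X[idx]
    return out

def lowrank_plus_sparse_iht(b, A, shape, r, s, mu=1.0, n_iter=200):
    """Recover L (rank <= r) and S (s-sparse) from b = A @ vec(L + S).
    The fixed step size mu is an assumed simplification of NIHT/NAHT."""
    m, n = shape
    L, S = np.zeros((m, n)), np.zeros((m, n))
    for _ in range(n_iter):
        G = (A.T @ (b - A @ (L + S).ravel())).reshape(m, n)   # data-fit gradient
        L = proj_rank(L + mu * G, r)
        S = proj_sparse(S + mu * G, s)
    return L, S
```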