
Magnetic resonance $T_2^*$ mapping and quantitative susceptibility mapping (QSM) provide direct and precise mappings of tissue contrasts. They are widely used to study iron deposition, hemorrhage and calcification in various clinical applications. In practice, the measurements can be undersampled in $k$-space to reduce the scan time needed for high-resolution 3D maps, and a sparse prior on the wavelet coefficients of the images can be used to fill in the missing information via compressive sensing. To avoid the extensive parameter tuning process of conventional regularization methods, we adopt a Bayesian approach to perform $T_2^*$ mapping and QSM using approximate message passing (AMP): the sparse prior is enforced through probability distributions, and the distribution parameters can be estimated automatically and adaptively. In this paper we propose a new nonlinear AMP framework that incorporates the mono-exponential decay model, and use it to recover the proton density, the $T_2^*$ map and the complex multi-echo images. The QSM can then be computed from the multi-echo images. Experimental results show that the proposed approach successfully recovers the $T_2^*$ map and QSM across various sampling rates, and performs much better than the state-of-the-art $l_1$-norm regularization approach.
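To make the decay model concrete, here is a minimal numpy sketch of the mono-exponential model underlying $T_2^*$ mapping, with a simple log-linear least-squares fit from fully sampled multi-echo magnitudes. The echo times and function names are illustrative assumptions; this is not the AMP-based recovery from undersampled k-space proposed in the paper.

```python
import numpy as np

# Mono-exponential decay: |x_j| = |x_0| * exp(-TE_j * R2star), with T2* = 1 / R2star.
echo_times = np.array([5e-3, 10e-3, 15e-3, 20e-3, 25e-3])  # TE_j in seconds (illustrative)

def fit_t2star(magnitudes, echo_times):
    """Voxel-wise log-linear least-squares fit of proton density and T2*.

    magnitudes: (n_echoes, n_voxels) array of multi-echo magnitude images.
    Returns (proton_density, t2star) per voxel.
    """
    log_mag = np.log(np.maximum(magnitudes, 1e-12))
    # Solve log|x_j| = log|x_0| - TE_j * R2star in the least-squares sense.
    design = np.stack([np.ones_like(echo_times), -echo_times], axis=1)  # (n_echoes, 2)
    coeffs, *_ = np.linalg.lstsq(design, log_mag, rcond=None)
    proton_density = np.exp(coeffs[0])
    r2star = np.maximum(coeffs[1], 1e-6)  # guard against non-positive fits
    return proton_density, 1.0 / r2star

# Example: noiseless single voxel with T2* = 30 ms and unit proton density.
true_t2star = 30e-3
mags = np.exp(-echo_times / true_t2star)[:, None]
pd, t2s = fit_t2star(mags, echo_times)
print(pd[0], t2s[0])  # ~1.0, ~0.03
```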
Magnetic resonance (MR) $T_2^*$ mapping is widely used to study hemorrhage, calcification and iron deposition in various clinical applications, as it provides a direct and precise mapping of the desired tissue contrast. However, the long acquisition time required by conventional 3D high-resolution $T_2^*$ mapping causes discomfort to patients and introduces motion artifacts into the reconstructed images, which limits its wider applicability. In this paper we address this issue by performing $T_2^*$ mapping from undersampled data using compressive sensing (CS). We formulate the reconstruction as a nonconvex problem that can be decomposed into two subproblems. They can be solved either separately via the standard approach or jointly via the alternating direction method of multipliers (ADMM). Compared to previous CS-based approaches that only apply sparse regularization on the spin density $\boldsymbol{X}_0$ and the relaxation rate $\boldsymbol{R}_2^*$, our formulation enforces additional sparse priors on the $T_2^*$-weighted images at multiple echoes to improve the reconstruction performance. We performed convergence analysis of the proposed algorithm, evaluated its performance on in vivo data, and studied the effects of different sampling schemes. Experimental results showed that the proposed joint-recovery approach generally outperforms the state-of-the-art method, especially in the low-sampling-rate regime, making it a preferred choice for fast 3D $T_2^*$ mapping in practice. The framework adopted in this work can be easily extended to other problems arising from MR or other imaging modalities with non-linearly coupled variables.
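Below is a small sketch of the undersampled k-space acquisition that the CS formulation starts from, assuming a uniformly random sampling mask purely for illustration (the paper studies the effect of different sampling schemes); the echo images are coupled to $\boldsymbol{X}_0$ and $\boldsymbol{R}_2^*$ through the same mono-exponential decay as above. All array sizes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def undersampled_kspace(image, sampling_rate):
    """Return a random k-space mask and the retained measurements of a 2D image."""
    kspace = np.fft.fft2(image)
    mask = rng.random(image.shape) < sampling_rate   # illustrative uniform random mask
    return mask, mask * kspace

# Multi-echo images follow x_j = X0 * exp(-TE_j * R2star) voxel-wise;
# each echo image is observed through its own undersampled Fourier operator.
X0 = rng.random((64, 64))
R2star = 20.0 + 30.0 * rng.random((64, 64))          # in 1/s, illustrative range
echo_times = np.array([5e-3, 10e-3, 15e-3])
echoes = [X0 * np.exp(-te * R2star) for te in echo_times]
measurements = [undersampled_kspace(x, sampling_rate=0.2) for x in echoes]
```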
In order to reduce hardware complexity and power consumption, massive multiple-input multiple-output (MIMO) systems employ low-resolution analog-to-digital converters (ADCs) to acquire quantized measurements $\boldsymbol{y}$. This poses new challenges to the channel estimation problem, and the sparse prior on the channel coefficient vector $\boldsymbol{x}$ in the angle domain is often used to compensate for the information lost during quantization. By interpreting the sparse prior from a probabilistic perspective, we can assume $\boldsymbol{x}$ follows a sparse prior distribution and recover it using approximate message passing (AMP). However, the distribution parameters are unknown in practice and need to be estimated. Due to the increased computational complexity of the quantization noise model, previous works either use an approximated noise model or manually tune the noise distribution parameters. In this paper, we treat both signals and parameters as random variables and recover them jointly within the AMP framework. The proposed approach leads to a much simpler parameter estimation method, allowing us to work with the quantization noise model directly. Experimental results show that the proposed approach achieves state-of-the-art performance under various noise levels and does not require parameter tuning, making it a practical and maintenance-free approach for channel estimation.
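The following is a hedged sketch of the low-resolution-ADC measurement model described above: a sparse angle-domain channel, a DFT dictionary, Gaussian noise, and a uniform mid-rise quantizer standing in for the ADC. The pilot matrix, quantizer range and all dimensions are illustrative assumptions, and the AMP recovery itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_quantizer(v, bits, vmax):
    """Uniform mid-rise quantizer applied separately to real and imaginary parts."""
    levels = 2 ** bits
    step = 2 * vmax / levels
    q = lambda u: np.clip(np.floor(u / step), -levels // 2, levels // 2 - 1) * step + step / 2
    return q(v.real) + 1j * q(v.imag)

n, m, k, bits = 64, 128, 4, 2                     # antennas, pilots, sparsity, ADC bits
F = np.fft.fft(np.eye(n)) / np.sqrt(n)            # angle-domain (DFT) dictionary
x = np.zeros(n, dtype=complex)                    # sparse angle-domain channel vector
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
z = A @ (F @ x)                                   # unquantized pilot observations
noise = 0.05 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
y = uniform_quantizer(z + noise, bits, vmax=3.0 * np.std(z.real))  # quantized measurements
```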
207 - Shuai Huang, Trac D. Tran, 2020
1-bit compressive sensing aims to recover sparse signals from quantized 1-bit measurements. Designing efficient approaches that can handle noisy 1-bit measurements is important in a variety of applications. In this paper we use approximate message passing (AMP) to achieve this goal due to its high computational efficiency and state-of-the-art performance. In AMP the signal of interest is assumed to follow some prior distribution, and its posterior distribution can be computed and used to recover the signal. In practice, the parameters of the prior distributions are often unknown and need to be estimated. Previous works tried to find the parameters that maximize either the measurement likelihood or the Bethe free entropy, which becomes increasingly difficult to solve in the case of complicated probability models. Here we propose to treat the parameters as unknown variables and compute their posteriors via AMP as well, so that the parameters and the signal can be recovered jointly. This leads to a much simpler way to perform parameter estimation than previous methods and enables us to work with noisy 1-bit measurements. We further extend the proposed approach to the general quantization noise model that outputs multi-bit measurements. Experimental results show that the proposed approach generally performs much better than other state-of-the-art methods in the zero-noise and moderate-noise regimes, and outperforms them in most cases in the high-noise regime.
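As a point of reference, the sketch below generates noisy 1-bit measurements $y = \mathrm{sign}(\boldsymbol{A}\boldsymbol{x} + \boldsymbol{w})$ and applies a simple hard-thresholded back-projection baseline; it only illustrates the measurement model and is not the AMP-based recovery proposed in the paper. Dimensions and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, k = 256, 512, 10
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)                       # 1-bit measurements lose the scale anyway

A = rng.standard_normal((m, n)) / np.sqrt(m)
noise = 0.1 * rng.standard_normal(m)
y = np.sign(A @ x + noise)                   # noisy 1-bit measurements

# Simple baseline: back-project the signs and keep the k largest entries.
x_hat = A.T @ y
support = np.argsort(np.abs(x_hat))[-k:]
x_hat[np.setdiff1d(np.arange(n), support)] = 0.0
x_hat /= np.linalg.norm(x_hat)
print(abs(x_hat @ x))                        # correlation with the true direction
```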
We tackle the problem of recovering a complex signal $\boldsymbol{x}\in\mathbb{C}^n$ from quadratic measurements of the form $y_i=\boldsymbol{x}^*\boldsymbol{A}_i\boldsymbol{x}$, where $\boldsymbol{A}_i$ is a full-rank, complex random measurement matrix whose entries are generated from a rotation-invariant sub-Gaussian distribution. We formulate it as the minimization of a nonconvex loss. This problem is related to the well-understood phase retrieval problem where the measurement matrix is a rank-1 positive semidefinite matrix. Here we study the general full-rank case which models a number of key applications such as molecular geometry recovery from distance distributions and compound measurements in phaseless diffractive imaging. Most prior works either address the rank-1 case or focus on real measurements. The several papers that address the full-rank complex case adopt the computationally demanding semidefinite relaxation approach. In this paper we prove that the general class of problems with rotation-invariant sub-Gaussian measurement models can be efficiently solved with high probability via the standard framework comprising a spectral initialization followed by iterative Wirtinger flow updates on a nonconvex loss. Numerical experiments on simulated data corroborate our theoretical analysis.
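Here is a minimal sketch of a Wirtinger flow gradient step on the nonconvex loss $f(\boldsymbol{z})=\frac{1}{m}\sum_i|\boldsymbol{z}^*\boldsymbol{A}_i\boldsymbol{z}-y_i|^2$. The step size and the near-truth initialization are illustrative assumptions; the paper's analysis relies on a spectral initializer and a specific measurement model.

```python
import numpy as np

rng = np.random.default_rng(3)

n, m = 16, 200
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # ground-truth signal
A = (rng.standard_normal((m, n, n)) + 1j * rng.standard_normal((m, n, n))) / np.sqrt(2)
y = np.array([np.conj(x) @ Ai @ x for Ai in A])                      # full-rank quadratic measurements

def wirtinger_gradient(z):
    """Wirtinger gradient (w.r.t. conj(z)) of f(z) = (1/m) * sum_i |z^* A_i z - y_i|^2."""
    grad = np.zeros_like(z)
    for Ai, yi in zip(A, y):
        g = np.conj(z) @ Ai @ z - yi
        grad += np.conj(g) * (Ai @ z) + g * (np.conj(Ai).T @ z)
    return grad / m

# Initialize near the truth purely for illustration; the paper uses a spectral initializer.
z = x + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
mu = 1e-4                                                            # small illustrative step size
for _ in range(200):
    z = z - mu * wirtinger_gradient(z)
```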
161 - Shuai Huang, Ivan Dokmanic, 2018
We address the problem of reconstructing a set of points on a line or a loop from their unassigned noisy pairwise distances. When the points lie on a line, the problem is known as the turnpike; when they are on a loop, it is known as the beltway. We approximate the problem by discretizing the domain and representing the $N$ points via an $N$-hot encoding, which is a density supported on the discretized domain. We show how the distance distribution is then simply a collection of quadratic functionals of this density and propose to recover the point locations so that the estimated distance distribution matches the measured distance distribution. This can be cast as a constrained nonconvex optimization problem which we solve using projected gradient descent with a suitable spectral initializer. We derive conditions under which the proposed distance distribution matching approach locally converges to a global optimizer at a linear rate. Compared to the conventional backtracking approach, our method jointly reconstructs all the point locations and is robust to noise in the measurements. We substantiate these claims with state-of-the-art performance across a number of numerical experiments. Our method is the first practical approach to solve the large-scale noisy beltway problem where the points lie on a loop.
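For intuition, on a discretized line the distance distribution of an $N$-hot indicator vector is simply its positive-lag autocorrelation, and on a loop it is the circular autocorrelation. The sketch below implements only this forward map (the quadratic functionals being matched), not the projected gradient descent solver or its spectral initializer; grid size and point locations are illustrative.

```python
import numpy as np

def turnpike_distance_distribution(u):
    """Positive-lag autocorrelation of an N-hot vector u on a discretized line."""
    full = np.correlate(u, u, mode="full")       # lags -(L-1) .. (L-1)
    return full[len(u):]                         # keep lags 1 .. L-1

def beltway_distance_distribution(u):
    """Circular autocorrelation of an N-hot vector u on a discretized loop."""
    spectrum = np.abs(np.fft.fft(u)) ** 2
    return np.real(np.fft.ifft(spectrum))[1:]    # drop the zero lag

# 3 points at grid positions {0, 2, 5} on a line of length 8:
u = np.zeros(8)
u[[0, 2, 5]] = 1.0
print(turnpike_distance_distribution(u))         # nonzero counts at lags 2, 3, 5 (the pairwise differences)
```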
Compressive sensing relies on the sparse prior imposed on the signal of interest to solve the ill-posed recovery problem in an under-determined linear system. The objective function used to enforce the sparse prior information should be both effective and easily optimizable. Motivated by the entropy concept from information theory, in this paper we propose the generalized Shannon entropy function and Rényi entropy function of the signal as sparsity-promoting regularizers. Both entropy functions are nonconvex and non-separable. Their local minima only occur on the boundaries of the orthants in the Euclidean space. Compared to other popular objective functions, minimizing the generalized entropy functions adaptively promotes multiple high-energy coefficients while suppressing the remaining low-energy coefficients. The corresponding optimization problems can be recast into a series of reweighted $l_1$-norm minimization problems and then solved efficiently by adapting FISTA. Sparse signal recovery experiments on both simulated and real data show that the proposed entropy-function minimization approaches perform better than other popular approaches and achieve state-of-the-art performance.
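A small numpy sketch of the two regularizers, assuming the generalized Shannon and Rényi entropy functions are evaluated on the normalized magnitudes $|x_i|^p/\|x\|_p^p$ as described above; the exact definitions, the choice of $p$ and $\alpha$, and the numerical guards for zero entries are illustrative, and the reweighted-$l_1$/FISTA solver is not reproduced.

```python
import numpy as np

def shannon_entropy_fn(x, p=1.0, eps=1e-16):
    """Generalized Shannon entropy of the normalized magnitudes |x_i|^p / ||x||_p^p."""
    w = np.abs(x) ** p
    w = w / max(w.sum(), eps)
    w = np.clip(w, eps, 1.0)                       # 0 * log(0) treated as ~0
    return -np.sum(w * np.log(w))

def renyi_entropy_fn(x, p=1.0, alpha=2.0, eps=1e-16):
    """Generalized Renyi entropy (alpha != 1) of the same normalized magnitudes."""
    w = np.abs(x) ** p
    w = w / max(w.sum(), eps)
    return np.log(max(np.sum(w ** alpha), eps)) / (1.0 - alpha)

# Sparser vectors have lower entropy: the mass is concentrated on few coefficients.
dense = np.ones(8) / np.sqrt(8)
sparse = np.zeros(8)
sparse[0] = 1.0
print(shannon_entropy_fn(dense), shannon_entropy_fn(sparse))   # ~2.08 vs ~0
```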
The generalized approximate message passing (GAMP) algorithm under the Bayesian setting shows an advantage in recovering under-sampled sparse signals from corrupted observations. Compared to conventional convex optimization methods, it has much lower complexity and is computationally tractable. In the GAMP framework, the sparse signal and the observation are modeled as being generated according to pre-specified probability distributions in the input and output channels. However, the parameters of these distributions are usually unknown in practice. In this paper, we propose an extended GAMP algorithm with built-in parameter estimation (PE-GAMP) and present its empirical convergence analysis. PE-GAMP treats the parameters as unknown random variables with simple priors and jointly estimates them with the sparse signals. Compared with Expectation Maximization (EM) based parameter estimation methods, the proposed PE-GAMP can draw information from the prior distributions of the parameters to perform parameter estimation. It is also more robust and much simpler, which enables us to consider more complex signal distributions beyond the usual Bernoulli-Gaussian mixture (BGm) distribution. Specifically, the formulations of the Bernoulli-Exponential mixture (BEm) distribution and the Laplace distribution are given in this paper. Simulated noiseless sparse signal recovery experiments demonstrate that the performance of the proposed PE-GAMP matches that of the oracle GAMP algorithm. When noise is present, both the simulated experiments and the real image recovery experiments show that PE-GAMP maintains its robustness and outperforms the EM-based parameter estimation method when the sampling ratio is small. Additionally, using the BEm formulation of PE-GAMP, we can successfully perform non-negative sparse coding of local image patches and provide useful features for the image classification task.
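For context, the sketch below evaluates the scalar Bernoulli-Gaussian input channel that GAMP-type algorithms rely on: given a pseudo-measurement $r = x + \mathcal{N}(0,\tau)$, the posterior activity probability and posterior mean have closed forms. This is the standard textbook denoiser, not the PE-GAMP updates that additionally estimate the sparsity level and variance; all numbers are illustrative.

```python
import numpy as np

def bg_posterior_mean(r, tau, lam, sigma2):
    """Posterior mean of x given r = x + N(0, tau) under the Bernoulli-Gaussian prior
    x ~ (1 - lam) * delta_0 + lam * N(0, sigma2)."""
    def gauss(v, var):
        return np.exp(-v ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    # Posterior probability that x is active (nonzero).
    num = lam * gauss(r, sigma2 + tau)
    pi = num / (num + (1 - lam) * gauss(r, tau))
    # Conditional mean of the Gaussian component given r, weighted by the activity probability.
    return pi * (sigma2 / (sigma2 + tau)) * r

# The small input is shrunk nearly to zero; the large one is kept up to the factor sigma2/(sigma2+tau).
print(bg_posterior_mean(np.array([0.1, 3.0]), tau=0.5, lam=0.1, sigma2=1.0))
```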
