
Solving Complex Quadratic Systems with Full-Rank Random Matrices

Added by Shuai Huang
Publication date: 2019
Language: English





We tackle the problem of recovering a complex signal $\boldsymbol{x} \in \mathbb{C}^n$ from quadratic measurements of the form $y_i = \boldsymbol{x}^* \boldsymbol{A}_i \boldsymbol{x}$, where $\boldsymbol{A}_i$ is a full-rank, complex random measurement matrix whose entries are generated from a rotation-invariant sub-Gaussian distribution. We formulate the recovery as the minimization of a nonconvex loss. This problem is related to the well-understood phase retrieval problem, where the measurement matrix is a rank-1 positive semidefinite matrix. Here we study the general full-rank case, which models a number of key applications such as molecular geometry recovery from distance distributions and compound measurements in phaseless diffractive imaging. Most prior works either address the rank-1 case or focus on real measurements. The several papers that address the full-rank complex case adopt the computationally demanding semidefinite relaxation approach. In this paper we prove that the general class of problems with rotation-invariant sub-Gaussian measurement models can be efficiently solved with high probability via the standard framework comprising a spectral initialization followed by iterative Wirtinger flow updates on a nonconvex loss. Numerical experiments on simulated data corroborate our theoretical analysis.
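
The pipeline in the abstract can be illustrated with a short simulation. The following is a minimal sketch, assuming i.i.d. complex Gaussian measurement matrices (one rotation-invariant sub-Gaussian instance); the particular spectral matrix, step size, and iteration count are illustrative choices rather than the paper's exact algorithm.

# Minimal sketch (illustrative constants, not the paper's exact algorithm):
# recover x from y_i = x^* A_i x with full-rank i.i.d. complex Gaussian A_i
# via spectral initialization followed by Wirtinger-flow updates.
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 20 * 32                                 # signal dimension, number of measurements
x_true = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
A = (rng.standard_normal((m, n, n)) + 1j * rng.standard_normal((m, n, n))) / np.sqrt(2)
y = np.einsum('i,mij,j->m', x_true.conj(), A, x_true)      # y_i = x^* A_i x (complex-valued)

# Spectral initialization: for i.i.d. CN(0,1) entries, E[conj(y_i) A_i] = x x^*,
# so the leading eigenvector of the Hermitized average aligns with x up to phase.
Y = np.einsum('m,mij->ij', y.conj(), A) / m
Y = (Y + Y.conj().T) / 2
_, V = np.linalg.eigh(Y)
scale = np.mean(np.abs(y) ** 2) ** 0.25            # ||x|| estimate, since E|y_i|^2 = ||x||^4
x = scale * V[:, -1]

# Wirtinger flow on f(x) = (1/m) sum_i |x^* A_i x - y_i|^2; the Wirtinger gradient is
# (1/m) sum_i [conj(e_i) A_i x + e_i A_i^H x] with residuals e_i = x^* A_i x - y_i.
mu = 0.1 / scale ** 2                              # illustrative step size
for _ in range(500):
    Ax = np.einsum('mij,j->mi', A, x)              # A_i x
    AHx = np.einsum('mji,j->mi', A.conj(), x)      # A_i^H x
    e = np.einsum('i,mi->m', x.conj(), Ax) - y
    grad = (e.conj()[:, None] * Ax + e[:, None] * AHx).mean(axis=0)
    x = x - mu * grad

# x is determined only up to a global phase; align before measuring the error.
phase = np.vdot(x, x_true) / abs(np.vdot(x, x_true))
print(np.linalg.norm(phase * x - x_true) / np.linalg.norm(x_true))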



Related research

In this paper, we propose a new algorithm for recovery of low-rank matrices from compressed linear measurements. The underlying idea of this algorithm is to closely approximate the rank function with a smooth function of singular values and then minimize the resulting approximation subject to the linear constraints. The accuracy of the approximation is controlled via a scaling parameter $\delta$, where a smaller $\delta$ corresponds to a more accurate fitting. The resulting optimization problem for any finite $\delta$ is nonconvex. Therefore, in order to decrease the risk of ending up in local minima, a series of optimizations is performed, starting with a rough approximation (a large $\delta$) and followed by successively optimizing finer approximations of the rank with smaller values of $\delta$. To solve the optimization problem for any $\delta > 0$, it is converted to a new program in which the cost is a function of two auxiliary positive semidefinite variables. The paper shows that this new program is concave and applies a majorize-minimize technique to solve it, which, in turn, leads to a few convex optimization iterations. This optimization scheme is also equivalent to a reweighted Nuclear Norm Minimization (NNM), where the weight update depends on the approximating function used. For any $\delta > 0$, we derive a necessary and sufficient condition for exact recovery which is weaker than the one corresponding to NNM. On the numerical side, the proposed algorithm is compared to NNM and a reweighted NNM in solving affine rank minimization and matrix completion problems, showing considerable and consistent superiority in terms of success rate, especially when the number of measurements decreases toward the lower bound for unique representation.
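
As a small worked illustration of the $\delta$-controlled approximation, the sketch below evaluates a Gaussian-type smooth surrogate of the rank on a fixed low-rank matrix; the surrogate form is an assumption made here for illustration, not necessarily the paper's exact approximating function.

# Delta-controlled smooth rank surrogate (Gaussian form assumed for illustration):
# sum_i (1 - exp(-sigma_i^2 / (2 delta^2))) approaches rank(X) as delta shrinks.
import numpy as np

def smoothed_rank(X, delta):
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(1.0 - np.exp(-s ** 2 / (2.0 * delta ** 2)))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))   # a rank-3, 50 x 50 matrix
for delta in [100.0, 10.0, 1.0, 0.1]:
    print(delta, smoothed_rank(X, delta))          # tends to 3, the true rank, as delta decreases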
In this paper, the problem of matrix rank minimization under affine constraints is addressed. The state-of-the-art algorithms can recover matrices with a rank much less than what is sufficient for the uniqueness of the solution of this optimization problem. We propose an algorithm based on a smooth approximation of the rank function, which practically improves recovery limits on the rank of the solution. This approximation leads to a nonconvex program; thus, to avoid getting trapped in local solutions, we use the following scheme. Initially, a rough approximation of the rank function subject to the affine constraints is optimized. As the algorithm proceeds, finer approximations of the rank are optimized and the solver is initialized with the solution of the previous approximation, until the desired accuracy is reached. On the theoretical side, benefiting from the spherical section property, we show that the sequence of solutions of the approximating functions converges to the minimum-rank solution. On the experimental side, it is shown that the proposed algorithm, termed SRF (Smoothed Rank Function), can recover matrices that are unique solutions of the rank minimization problem yet not recoverable by nuclear norm minimization. Furthermore, in completing partially observed matrices, the accuracy of SRF is considerably and consistently better than that of several well-known algorithms when the number of revealed entries is close to the minimum number of parameters that uniquely represent a low-rank matrix.
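
To make the coarse-to-fine continuation concrete, the following toy matrix-completion sketch shrinks singular values according to a Gaussian surrogate, re-imposes the observed entries, and warm-starts each smaller $\delta$ from the previous solution; the shrinkage rule, $\delta$ schedule, and iteration counts are illustrative assumptions, not the exact SRF algorithm.

# Toy coarse-to-fine (decreasing-delta) sketch for matrix completion; the shrinkage
# rule, delta schedule, and iteration counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, r = 40, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))     # rank-2 ground truth
mask = rng.random((n, n)) < 0.5                                    # roughly half the entries observed

X = np.where(mask, M, 0.0)                                         # start from the observed data
delta = np.linalg.svd(X, compute_uv=False)[0]                      # coarse initial delta
for _ in range(40):                                                # outer loop: shrink delta
    for _ in range(10):                                            # inner loop, warm-started
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Gaussian-surrogate shrinkage: singular values small relative to delta are
        # suppressed, large ones are (almost) kept.
        X = (U * (s * (1.0 - np.exp(-s ** 2 / (2.0 * delta ** 2))))) @ Vt
        X[mask] = M[mask]                                          # re-impose the observed entries
    delta *= 0.8

print(np.linalg.norm(X - M) / np.linalg.norm(M))                   # relative error on the toy problem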
This paper focuses on the computation of the positive moments of one-sided correlated random Gram matrices. Closed-form expressions for the moments can be obtained easily, but their numerical evaluation is prone to numerical instability, especially in high-dimensional settings. This letter provides a numerically stable method that efficiently computes the positive moments in closed form. The developed expressions are more accurate and can lead to higher accuracy when used in moment-based approaches. As an application, we show how the obtained moments can be used to approximate the marginal distribution of the eigenvalues of random Gram matrices.
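
For readers unfamiliar with the quantities involved, the sketch below estimates the first few positive moments $E[(1/N)\,\mathrm{tr}(W^k)]$ of a one-sided correlated Gram matrix by Monte Carlo; the exponential correlation profile and the dimensions are illustrative assumptions, and the paper's closed-form expressions are what such empirical averages would be checked against.

# Monte Carlo estimate of the positive moments E[(1/N) tr(W^k)] of a one-sided
# correlated Gram matrix W = L H H^H L^H / n with L L^H = C; the exponential
# correlation profile and the dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
N, n, trials = 16, 32, 2000
C = 0.5 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))   # one-sided correlation matrix
L = np.linalg.cholesky(C)                                          # L L^H = C

moments = np.zeros(4)
for _ in range(trials):
    H = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
    W = L @ H @ H.conj().T @ L.conj().T / n
    for k in range(1, 5):
        moments[k - 1] += np.trace(np.linalg.matrix_power(W, k)).real / N
moments /= trials
print(moments)                                                     # m_1 should be close to tr(C)/N = 1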
In this paper we study the spectrum of certain large random Hermitian Jacobi matrices. These matrices are known to describe certain communication setups. In particular, we are interested in an uplink cellular channel which models mobile users experiencing a soft-handoff situation under joint multicell decoding. Considering rather general fading statistics, we provide a closed-form expression for the per-cell sum-rate of this channel at high SNR when an intra-cell TDMA protocol is employed. Since the matrices of interest are tridiagonal, their eigenvectors can be considered as sequences with a second-order linear recurrence. Therefore, the problem is reduced to the study of the exponential growth of products of two-by-two matrices. For the case where $K$ users are simultaneously active in each cell, we obtain a series of lower and upper bounds on the high-SNR power offset of the per-cell sum-rate, which are considerably tighter than previously known bounds.
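
The reduction to products of two-by-two matrices mentioned above rests on the second-order recurrence satisfied by eigenvectors of tridiagonal matrices; the snippet below verifies that recurrence numerically on a small Hermitian Jacobi matrix with illustrative (not channel-derived) entries.

# Numerical check (illustrative entries, not the paper's channel statistics): for a
# Hermitian Jacobi (tridiagonal) matrix, eigenvector entries obey a second-order
# recurrence, so consecutive pairs are propagated by 2x2 transfer matrices M_k(lambda).
import numpy as np

rng = np.random.default_rng(4)
n = 8
a = rng.standard_normal(n)                          # diagonal entries
b = np.abs(rng.standard_normal(n - 1)) + 0.1        # positive off-diagonal entries
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

lam_all, V = np.linalg.eigh(T)
lam, v = lam_all[0], V[:, 0]                        # one eigenpair of T

# From b_{k-1} v_{k-1} + a_k v_k + b_k v_{k+1} = lambda v_k it follows that
# [v_{k+1}, v_k]^T = M_k(lambda) [v_k, v_{k-1}]^T.
for k in range(1, n - 1):
    Mk = np.array([[(lam - a[k]) / b[k], -b[k - 1] / b[k]],
                   [1.0, 0.0]])
    assert np.allclose(Mk @ np.array([v[k], v[k - 1]]), np.array([v[k + 1], v[k]]))
print("2x2 transfer-matrix recurrence verified")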
This paper considers a multipair amplify-and-forward massive MIMO relaying system with low-resolution ADCs at both the relay and the destinations. The channel state information (CSI) at the relay is obtained via pilot training and is then used to perform simple maximum-ratio combining/maximum-ratio transmission processing at the relay. It is also assumed that the destinations use statistical CSI to decode the transmitted signals. Exact and approximate closed-form expressions for the achievable sum rate are presented, which enable efficient evaluation of the impact of key system parameters on system performance. In addition, an optimal relay power allocation scheme is studied, and the power scaling law is characterized. It is found that, with only low-resolution ADCs at the relay, increasing the number of relay antennas is an effective method to compensate for the rate loss caused by coarse quantization; however, it is ineffective against the detrimental effect of low-resolution ADCs at the destinations. Moreover, it is shown that deploying massive relay antenna arrays can still bring significant power savings, i.e., the transmit power of each source can be cut down proportionally to $1/M$ to maintain a constant rate, where $M$ is the number of relay antennas.
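
As a rough illustration of the $1/M$ power-scaling intuition only, the toy sketch below simulates a single-hop maximum-ratio-combining link with transmit power scaled as $1/M$; it does not model the paper's two-hop relaying, channel estimation, or low-resolution quantization.

# Toy single-hop illustration of the 1/M power-scaling intuition (no relaying,
# estimation, or quantization modeled): with M receive antennas, maximum-ratio
# combining, and transmit power E/M, the achievable rate stays roughly constant.
import numpy as np

rng = np.random.default_rng(5)
E, trials = 10.0, 500                               # total power budget, Monte Carlo runs
for M in [16, 64, 256, 1024]:
    rate = 0.0
    for _ in range(trials):
        h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        snr = (E / M) * np.linalg.norm(h) ** 2      # post-combining SNR with unit noise power
        rate += np.log2(1 + snr)
    print(M, rate / trials)                         # stays near log2(1 + E) as M grows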