
Fundamental Limits of Weak Recovery with Applications to Phase Retrieval

Published by: Marco Mondelli
Publication date: 2017
Research language: English





In phase retrieval we want to recover an unknown signal $\boldsymbol{x} \in \mathbb{C}^d$ from $n$ quadratic measurements of the form $y_i = |\langle \boldsymbol{a}_i, \boldsymbol{x} \rangle|^2 + w_i$, where $\boldsymbol{a}_i \in \mathbb{C}^d$ are known sensing vectors and $w_i$ is measurement noise. We ask the following weak recovery question: what is the minimum number of measurements $n$ needed to produce an estimator $\hat{\boldsymbol{x}}(\boldsymbol{y})$ that is positively correlated with the signal $\boldsymbol{x}$? We consider the case of Gaussian vectors $\boldsymbol{a}_i$. We prove that, in the high-dimensional limit, a sharp phase transition takes place, and we locate the threshold in the regime of vanishingly small noise. For $n \le d - o(d)$, no estimator can do significantly better than random and achieve a strictly positive correlation. For $n \ge d + o(d)$, a simple spectral estimator achieves a positive correlation. Surprisingly, numerical simulations with the same spectral estimator demonstrate promising performance with realistic sensing matrices. Spectral methods are used to initialize non-convex optimization algorithms in phase retrieval, and our approach can boost the performance in this setting as well. Our impossibility result is based on classical information-theory arguments. The spectral algorithm computes the leading eigenvector of a weighted empirical covariance matrix. We obtain a sharp characterization of the spectral properties of this random matrix using tools from free probability and generalizing a recent result by Lu and Li. Both the upper and lower bound generalize beyond phase retrieval to measurements $y_i$ produced according to a generalized linear model. As a byproduct of our analysis, we compare the threshold of the proposed spectral method with that of a message passing algorithm.
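As a concrete illustration of the spectral estimator described above, here is a minimal NumPy sketch (not the paper's code): it forms the weighted empirical covariance matrix $D = \frac{1}{n} \sum_{i=1}^{n} \mathcal{T}(y_i)\, \boldsymbol{a}_i \boldsymbol{a}_i^{*}$ for a preprocessing function $\mathcal{T}$ and returns its leading eigenvector. The identity preprocessing used as the default, and the data conventions in the comments, are illustrative assumptions; the paper analyzes a family of such preprocessing functions.

```python
import numpy as np

def spectral_estimate(A, y, T=lambda t: t):
    """Leading eigenvector of D = (1/n) * sum_i T(y_i) a_i a_i^*.

    A : (n, d) complex array of sensing vectors, one per row (an
        illustrative convention, with y ~ |A @ x|^2).
    T : preprocessing applied to the measurements; the identity
        default is a placeholder, not the paper's specific choice.
    """
    n = A.shape[0]
    D = (A.conj().T * T(y)) @ A / n        # d x d Hermitian matrix
    eigvals, eigvecs = np.linalg.eigh(D)   # eigenvalues in ascending order
    return eigvecs[:, -1]                  # leading (unit-norm) eigenvector

# Toy usage: complex Gaussian sensing vectors, noiseless measurements.
rng = np.random.default_rng(0)
n, d = 4000, 400
A = (rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))) / np.sqrt(2)
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)
x /= np.linalg.norm(x)
y = np.abs(A @ x) ** 2
x_hat = spectral_estimate(A, y)
print(np.abs(np.vdot(x_hat, x)))  # |correlation| with the true signal
```

The toy sampling ratio $n/d = 10$ is comfortably above the $n \approx d$ threshold located by the paper, so the printed correlation should be clearly positive; correlation is measured up to a global phase, hence the modulus.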


Read also

Graph convolutional networks (GCNs) are a widely used method for graph representation learning. To elucidate the capabilities and limitations of GCNs, we investigate their power, as a function of their number of layers, to distinguish between different random graph models (corresponding to different class-conditional distributions in a classification problem) on the basis of the embeddings of their sample graphs. In particular, the graph models that we consider arise from graphons, which are the most general possible parameterizations of infinite exchangeable graph models and which are the central objects of study in the theory of dense graph limits. We give a precise characterization of the set of pairs of graphons that are indistinguishable by a GCN with nonlinear activation functions coming from a certain broad class if its depth is at least logarithmic in the size of the sample graph. This characterization is in terms of a degree profile closeness property. Outside this class, a very simple GCN architecture suffices for distinguishability. We then exhibit a concrete, infinite class of graphons arising from stochastic block models that are well-separated in terms of cut distance and are indistinguishable by a GCN. These results theoretically match empirical observations of several prior works. To prove our results, we exploit a connection to random walks on graphs. Finally, we give empirical results on synthetic and real graph classification datasets, indicating that indistinguishable graph distributions arise in practice.
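As background for the architecture analyzed above, a single graph-convolution layer can be sketched in a few lines of NumPy; the symmetric normalization and tanh activation below are common choices used purely for illustration, not the specific model class studied in the paper.

```python
import numpy as np

def gcn_layer(adj, H, W, act=np.tanh):
    """One GCN layer: H' = act(A_norm @ H @ W), where A_norm is the
    symmetrically normalized adjacency matrix with self-loops."""
    A_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                       # node degrees
    A_norm = A_hat / np.sqrt(np.outer(deg, deg))  # D^{-1/2} A_hat D^{-1/2}
    return act(A_norm @ H @ W)                    # propagate and transform
```

Stacking $k$ such layers lets each node embedding aggregate information from its $k$-hop neighborhood, which is why depth (here, logarithmic in the sample graph size) governs which pairs of graphons the resulting embeddings can tell apart.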
Phase retrieval (PR) is an important component in modern computational imaging systems. Many algorithms have been developed over the past half century. Recent advances in deep learning have opened up a new possibility for robust and fast PR. An emerging technique, called deep unfolding, provides a systematic connection between conventional model-based iterative algorithms and modern data-based deep learning. Unfolded algorithms, powered by data learning, have shown remarkable performance and convergence speed improvement over the original algorithms. Despite their potential, most existing unfolded algorithms are strictly confined to a fixed number of iterations when employing layer-dependent parameters. In this study, we develop a novel framework for deep unfolding to overcome the existing limitations. Although our framework can be widely applied to general inverse problems, we take PR as an example in the paper. Our development is based on an unfolded generalized expectation consistent signal recovery (GEC-SR) algorithm, wherein damping factors are left for data-driven learning. In particular, we introduce a hypernetwork to generate the damping factors for GEC-SR. Instead of directly learning a set of optimal damping factors, the hypernetwork learns how to generate the optimal damping factors according to the clinical settings, thus ensuring its adaptivity to different scenarios. To make the hypernetwork adapt to varying layer numbers, we use a recurrent architecture to develop a dynamic hypernetwork, which generates a damping factor that can vary online across layers. We also exploit a self-attention mechanism to enhance the robustness of the hypernetwork. Extensive experiments show that the proposed algorithm outperforms existing ones in convergence speed and accuracy, and still works well under very harsh settings in which many classical PR algorithms become unstable or even fail.
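As a rough illustration of the recurrent hypernetwork idea described above, the following PyTorch sketch emits one damping factor per unfolded layer from a running hidden state. All names, dimensions, and the choice of context input are assumptions for illustration, not the authors' architecture (which additionally uses self-attention).

```python
import torch
import torch.nn as nn

class DampingHyperNet(nn.Module):
    """Recurrent hypernetwork sketch: at each unfolded layer it consumes
    a context vector (e.g. statistics of the current iterate) and emits
    a damping factor in (0, 1). Dimensions are illustrative."""
    def __init__(self, ctx_dim=8, hidden_dim=16):
        super().__init__()
        self.cell = nn.GRUCell(ctx_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, contexts):
        # contexts: (num_layers, ctx_dim); returns (num_layers,) factors
        h = torch.zeros(1, self.cell.hidden_size)
        factors = []
        for c in contexts:
            h = self.cell(c.unsqueeze(0), h)            # update hidden state
            factors.append(torch.sigmoid(self.head(h))) # factor in (0, 1)
        return torch.cat(factors).squeeze(-1)
```

Because the factors are produced sequentially by a recurrent cell, the same hypernetwork can serve an unfolded algorithm with any number of layers, which is the adaptivity the abstract describes.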
We present the optimal design of a spectral method widely used to initialize nonconvex optimization algorithms for solving phase retrieval and other signal recovery problems. Our work leverages recent results that provide an exact characterization of the performance of the spectral method in the high-dimensional limit. This characterization allows us to map the task of optimal design to a constrained optimization problem in a weighted $L^2$ function space. The latter has a closed-form solution. Interestingly, under a mild technical condition, our results show that there exists a fixed design that is uniformly optimal over all sampling ratios. Numerical simulations demonstrate the performance improvement brought by the proposed optimal design over existing constructions in the literature. In a recent work, Mondelli and Montanari have shown the existence of a weak reconstruction threshold below which the spectral method cannot provide useful estimates. Our results serve to complement that work by deriving the fundamental limit of the spectral method beyond the aforementioned threshold.
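Since this work concerns the choice of preprocessing function for the same family of spectral methods sketched after the main abstract, candidate designs can be compared empirically. The trimming rule below is an arbitrary illustrative candidate, not the closed-form optimal design derived in the paper.

```python
# Reuses spectral_estimate, A, y, x from the earlier sketch.
def trim(y, tau=3.0):
    # Illustrative preprocessing: suppress unusually large measurements.
    return y * (y <= tau * y.mean())

for name, T in [("identity", lambda t: t), ("trimmed", trim)]:
    x_hat = spectral_estimate(A, y, T=T)
    print(name, np.abs(np.vdot(x_hat, x)))
```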
Phase retrieval deals with the estimation of complex-valued signals solely from the magnitudes of linear measurements. While there has been a recent explosion in the development of phase retrieval algorithms, the lack of a common interface has made i t difficult to compare new methods against the state-of-the-art. The purpose of PhasePack is to create a common software interface for a wide range of phase retrieval algorithms and to provide a common testbed using both synthetic data and empirical imaging datasets. PhasePack is able to benchmark a large number of recent phase retrieval methods against one another to generate comparisons using a range of different performance metrics. The software package handles single method testing as well as multiple method comparisons. The algorithm implementations in PhasePack differ slightly from their original descriptions in the literature in order to achieve faster speed and improved robustness. In particular, PhasePack uses adaptive stepsizes, line-search methods, and fast eigensolvers to speed up and automate convergence.
We study the problem of recovering a hidden community of cardinality $K$ from an $n \times n$ symmetric data matrix $A$, where for distinct indices $i,j$, $A_{ij} \sim P$ if $i, j$ both belong to the community and $A_{ij} \sim Q$ otherwise, for two known probability distributions $P$ and $Q$ depending on $n$. If $P = \mathrm{Bern}(p)$ and $Q = \mathrm{Bern}(q)$ with $p>q$, it reduces to the problem of finding a densely-connected $K$-subgraph planted in a large Erdos-Renyi graph; if $P = \mathcal{N}(\mu,1)$ and $Q = \mathcal{N}(0,1)$ with $\mu>0$, it corresponds to the problem of locating a $K \times K$ principal submatrix of elevated means in a large Gaussian random matrix. We focus on two types of asymptotic recovery guarantees as $n \to \infty$: (1) weak recovery: expected number of classification errors is $o(K)$; (2) exact recovery: probability of classifying all indices correctly converges to one. Under mild assumptions on $P$ and $Q$, and allowing the community size to scale sublinearly with $n$, we derive a set of sufficient conditions and a set of necessary conditions for recovery, which are asymptotically tight with sharp constants. The results hold in particular for the Gaussian case, and for the case of bounded log likelihood ratio, including the Bernoulli case whenever $\frac{p}{q}$ and $\frac{1-p}{1-q}$ are bounded away from zero and infinity. An important algorithmic implication is that, whenever exact recovery is information theoretically possible, any algorithm that provides weak recovery when the community size is concentrated near $K$ can be upgraded to achieve exact recovery in linear additional time by a simple voting procedure.
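The "simple voting procedure" mentioned at the end of this abstract can be rendered schematically as follows (NumPy). This assumes larger entries of $A$ indicate community membership, as in the Bernoulli case with $p > q$, and is an illustration of the idea rather than the paper's exact procedure.

```python
import numpy as np

def voting_upgrade(A, weak_estimate, K):
    """One voting round: score each index by its total affinity to the
    currently estimated community, then keep the K highest scorers.
    Schematic; assumes in-community entries of A are stochastically
    larger than out-of-community entries."""
    idx = np.asarray(sorted(weak_estimate))   # weak-recovery estimate
    scores = A[:, idx].sum(axis=1)            # affinity votes per index
    return set(np.argsort(scores)[-K:].tolist())
```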
