
The landscape of the spiked tensor model

Posted by: Andrea Montanari
Publication date: 2017
Paper language: English





We consider the problem of estimating a large rank-one tensor ${\boldsymbol u}^{\otimes k}\in({\mathbb R}^{n})^{\otimes k}$, $k\ge 3$, in Gaussian noise. Earlier work characterized a critical signal-to-noise ratio $\lambda_{Bayes}= O(1)$ above which an ideal estimator achieves strictly positive correlation with the unknown vector of interest. Remarkably, no polynomial-time algorithm is known that achieves this goal unless $\lambda\ge C\, n^{(k-2)/4}$, and even powerful semidefinite programming relaxations appear to fail for $1\ll \lambda\ll n^{(k-2)/4}$. In order to elucidate this behavior, we consider the maximum likelihood estimator, which requires maximizing a degree-$k$ homogeneous polynomial over the unit sphere in $n$ dimensions. We compute the expected number of critical points and local maxima of this objective function, show that it is exponential in the dimension $n$, and give exact formulas for the exponential growth rate. We show that (for $\lambda$ larger than a constant) critical points are either very close to the unknown vector ${\boldsymbol u}$, or are confined in a band of width $\Theta(\lambda^{-1/(k-1)})$ around the maximum circle that is orthogonal to ${\boldsymbol u}$. For local maxima, this band shrinks to be of size $\Theta(\lambda^{-1/(k-2)})$. These `uninformative' local maxima are likely to cause the failure of optimization algorithms.
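To make the setup concrete, here is a minimal numerical sketch (illustrative, not taken from the paper) of the spiked tensor model for k = 3 in Python/NumPy: it generates an observation Y = λ u⊗u⊗u + W with Gaussian noise W and runs projected gradient ascent of the maximum-likelihood objective ⟨Y, x⊗x⊗x⟩ over the unit sphere. The noise scaling, step size, and value of λ are assumptions of this sketch.

# Minimal sketch (illustrative, not from the paper) of the spiked tensor model, k = 3:
# observe Y = lam * (u ⊗ u ⊗ u) + W, then run projected gradient ascent of the
# ML objective <Y, x⊗x⊗x> over the unit sphere.
import numpy as np

rng = np.random.default_rng(0)
n, lam = 50, 10.0

u = rng.standard_normal(n)
u /= np.linalg.norm(u)                                 # unknown unit-norm signal

W = rng.standard_normal((n, n, n)) / np.sqrt(n)        # Gaussian noise tensor (illustrative scaling)
Y = lam * np.einsum('i,j,k->ijk', u, u, u) + W         # observed spiked tensor

def objective(x):
    """Degree-3 homogeneous polynomial <Y, x⊗x⊗x>, the ML objective on the sphere."""
    return np.einsum('ijk,i,j,k->', Y, x, x, x)

def gradient(x):
    """Exact Euclidean gradient of the objective (Y is not symmetrized here)."""
    return (np.einsum('ajk,j,k->a', Y, x, x)
            + np.einsum('iak,i,k->a', Y, x, x)
            + np.einsum('ija,i,j->a', Y, x, x))

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                                 # random start on the sphere
for _ in range(200):
    x = x + 0.05 * gradient(x)                         # ascent step
    x /= np.linalg.norm(x)                             # project back onto the sphere

# For large lam the iterate typically ends up close to u; for small lam it tends to
# stall near the band orthogonal to u, consistent with the landscape described above.
print("objective:", objective(x), "| overlap |<x,u>|:", abs(x @ u))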




Read also

In this paper, we study the asymptotic behavior of the extreme eigenvalues and eigenvectors of the spiked covariance matrices, in the supercritical regime. Specifically, we derive the joint distribution of the extreme eigenvalues and the generalized components of their associated eigenvectors in this regime.
Using a low-dimensional parametrization of signals is a generic and powerful way to enhance performance in signal processing and statistical inference. A very popular and widely explored type of dimensionality reduction is sparsity; another type is generative modelling of signal distributions. Generative models based on neural networks, such as GANs or variational auto-encoders, are particularly performant and are gaining on applicability. In this paper we study spiked matrix models, where a low-rank matrix is observed through a noisy channel. This problem with sparse structure of the spikes has attracted broad attention in the past literature. Here, we replace the sparsity assumption by generative modelling, and investigate the consequences on statistical and algorithmic properties. We analyze the Bayes-optimal performance under specific generative models for the spike. In contrast with the sparsity assumption, we do not observe regions of parameters where statistical performance is superior to the best known algorithmic performance. We show that in the analyzed cases the approximate message passing algorithm is able to reach optimal performance. We also design enhanced spectral algorithms and analyze their performance and thresholds using random matrix theory, showing their superiority to the classical principal component analysis. We complement our theoretical results by illustrating the performance of the spectral algorithms when the spikes come from real datasets.
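As a point of reference for the spectral methods mentioned above, the following sketch applies classical PCA (the baseline that the paper's enhanced spectral algorithms improve upon, not those algorithms themselves) to a rank-one spiked symmetric matrix observed through Gaussian noise; the normalizations and the threshold comment follow a standard BBP-type setup and are assumptions of this example.

# Illustrative PCA baseline on a spiked symmetric matrix: Y = (lam / n) * v v^T + W.
import numpy as np

rng = np.random.default_rng(1)
n, lam = 500, 3.0

v = rng.standard_normal(n)
v *= np.sqrt(n) / np.linalg.norm(v)                    # spike scaled so ||v||^2 = n

G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)                         # symmetric (Wigner-type) noise
Y = (lam / n) * np.outer(v, v) + W

evals, evecs = np.linalg.eigh(Y)                       # eigenvalues in ascending order
v_hat = evecs[:, -1] * np.sqrt(n)                      # leading eigenvector, rescaled like v

# Above the spectral threshold (lam > 1 for this normalization) the squared overlap
# stays bounded away from zero; below it, PCA loses track of the spike.
print("squared overlap:", (v @ v_hat / n) ** 2)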
Chunlin Wang, 2008
In this paper, we study the asymptotic normality of the conditional maximum likelihood (ML) estimators for the truncated regression model and the Tobit model. We show that under the general setting assumed in his book, the conjectures made by Hayashi (2000) (see pages 516 and 520 of Hayashi (2000)) about the asymptotic normality of the conditional ML estimators for both models are true, namely, a sufficient condition is the nonsingularity of $\mathbf{x_t x_t}$.
We study the fundamental limits of detecting the presence of an additive rank-one perturbation, or spike, to a Wigner matrix. When the spike comes from a prior that is i.i.d. across coordinates, we prove that the log-likelihood ratio of the spiked model against the non-spiked one is asymptotically normal below a certain reconstruction threshold which is not necessarily of a spectral nature, and that it is degenerate above. This establishes the maximal region of contiguity between the planted and null models. It is known that this threshold also marks a phase transition for estimating the spike: the latter task is possible above the threshold and impossible below. Therefore, both estimation and detection undergo the same transition in this random matrix model. We also provide further information about the performance of the optimal test. Our proofs are based on Gaussian interpolation methods and a rigorous incarnation of the cavity method, as devised by Guerra and Talagrand in their study of the Sherrington--Kirkpatrick spin-glass model.
In this paper, we study the power iteration algorithm for the spiked tensor model, as introduced in [44]. We give necessary and sufficient conditions for the convergence of the power iteration algorithm. When the power iteration algorithm converges, for the rank one spiked tensor model, we show the estimators for the spike strength and linear functionals of the signal are asymptotically Gaussian; for the multi-rank spiked tensor model, we show the estimators are asymptotically mixtures of Gaussians. This new phenomenon is different from the spiked matrix model. Using these asymptotic results of our estimators, we construct valid and efficient confidence intervals for spike strengths and linear functionals of the signals.
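A minimal sketch of tensor power iteration for a rank-one spiked 3-tensor is given below; the data generation, random initialization, and iteration count are illustrative assumptions, not the convergence conditions established in the paper. The iterate is contracted against the tensor along two modes and renormalized, and the spike strength is then estimated by plugging the iterate back into the objective.

# Minimal sketch of tensor power iteration for a rank-one spiked 3-tensor.
import numpy as np

rng = np.random.default_rng(2)
n, lam = 50, 10.0

u = rng.standard_normal(n)
u /= np.linalg.norm(u)
Y = lam * np.einsum('i,j,k->ijk', u, u, u) + rng.standard_normal((n, n, n)) / np.sqrt(n)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                                 # random initialization on the sphere
for _ in range(100):
    x = np.einsum('ijk,j,k->i', Y, x, x)               # contract Y along two modes: Y(., x, x)
    x /= np.linalg.norm(x)                             # renormalize onto the unit sphere

lam_hat = np.einsum('ijk,i,j,k->', Y, x, x, x)         # plug-in estimate of the spike strength
print("estimated spike strength:", lam_hat, "| overlap |<x,u>|:", abs(x @ u))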