
Tight Query Complexity Lower Bounds for PCA via Finite Sample Deformed Wigner Law

Published by Max Simchowitz
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





We prove a \emph{query complexity} lower bound for approximating the top $r$-dimensional eigenspace of a matrix. We consider an oracle model where, given a symmetric matrix $\mathbf{M} \in \mathbb{R}^{d \times d}$, an algorithm $\mathsf{Alg}$ is allowed to make $\mathsf{T}$ exact queries of the form $\mathsf{w}^{(i)} = \mathbf{M} \mathsf{v}^{(i)}$ for $i$ in $\{1, \dots, \mathsf{T}\}$, where $\mathsf{v}^{(i)}$ is drawn from a distribution which depends arbitrarily on the past queries and measurements $\{\mathsf{v}^{(j)}, \mathsf{w}^{(j)}\}_{1 \le j \le i-1}$. We show that for every $\mathtt{gap} \in (0, 1/2]$, there exists a distribution over matrices $\mathbf{M}$ for which 1) $\mathrm{gap}_r(\mathbf{M}) = \Omega(\mathtt{gap})$ (where $\mathrm{gap}_r(\mathbf{M})$ is the normalized gap between the $r$-th and $(r+1)$-st largest-magnitude eigenvalues of $\mathbf{M}$), and 2) any algorithm $\mathsf{Alg}$ which takes fewer than $\mathrm{const} \times \frac{r \log d}{\sqrt{\mathtt{gap}}}$ queries fails (with overwhelming probability) to identify a matrix $\widehat{\mathsf{V}} \in \mathbb{R}^{d \times r}$ with orthonormal columns for which $\langle \widehat{\mathsf{V}}, \mathbf{M} \widehat{\mathsf{V}} \rangle \ge (1 - \mathrm{const} \times \mathtt{gap}) \sum_{i=1}^{r} \lambda_i(\mathbf{M})$. Our bound requires only that $d$ is a small polynomial in $1/\mathtt{gap}$ and $r$, and matches the upper bounds of Musco and Musco '15. Moreover, it establishes a strict separation between convex optimization and \emph{randomized}, strict-saddle non-convex optimization, of which PCA is a canonical example: in the former, first-order methods can have dimension-free iteration complexity, whereas in PCA, the iteration complexity of gradient-based methods must necessarily grow with the dimension.
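To make the query model concrete, here is a minimal sketch (assuming NumPy; `matvec_oracle` and `block_power_topr` are illustrative names, not from the paper). It routes block power iteration, a simple stand-in for the query-optimal block Krylov methods of Musco and Musco, through a counted matrix-vector oracle and then evaluates the abstract's success criterion on a synthetic matrix with a planted eigengap.

```python
import numpy as np

def matvec_oracle(M):
    """Wrap M behind the query model: the algorithm only sees w = M v,
    and we count how many queries T it makes."""
    count = {"T": 0}
    def query(v):
        count["T"] += 1
        return M @ v
    return query, count

def block_power_topr(query, d, r, num_iters, rng):
    """Approximate the top-r eigenspace using only matvec queries
    (block power iteration; block Lanczos would be the query-optimal choice)."""
    V, _ = np.linalg.qr(rng.standard_normal((d, r)))
    for _ in range(num_iters):
        W = np.column_stack([query(V[:, j]) for j in range(r)])  # r queries per iteration
        V, _ = np.linalg.qr(W)
    return V

# Synthetic instance with a planted eigengap of size `gap` after the top r eigenvalues.
rng = np.random.default_rng(0)
d, r, gap = 200, 3, 0.1
eigvals = np.concatenate([1.0 + gap * np.arange(r, 0, -1), rng.uniform(0.0, 1.0, d - r)])
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
M = (Q * eigvals) @ Q.T                                   # symmetric, with a known spectrum

query, count = matvec_oracle(M)
V_hat = block_power_topr(query, d, r, num_iters=50, rng=rng)
# Success criterion from the abstract: <V, M V> vs. (1 - const*gap) * sum_{i<=r} lambda_i(M).
quality = np.trace(V_hat.T @ M @ V_hat) / np.sort(eigvals)[::-1][:r].sum()
print(f"queries T = {count['T']}, normalized <V, M V> = {quality:.4f}")
```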




Read also

Peter Hoyer, 2005
Shor's and Grover's famous quantum algorithms for factoring and searching show that quantum computers can solve certain computational problems significantly faster than any classical computer. We discuss here what quantum computers _cannot_ do, and specifically how to prove limits on their computational power. We cover the main known techniques for proving lower bounds, and exemplify and compare the methods.
We consider an online binary prediction setting where a forecaster observes a sequence of $T$ bits one by one. Before each bit is revealed, the forecaster predicts the probability that the bit is $1$. The forecaster is called well-calibrated if for each $p \in [0, 1]$, among the $n_p$ bits for which the forecaster predicts probability $p$, the actual number of ones, $m_p$, is indeed equal to $p \cdot n_p$. The calibration error, defined as $\sum_p |m_p - p n_p|$, quantifies the extent to which the forecaster deviates from being well-calibrated. It has long been known that an $O(T^{2/3})$ calibration error is achievable even when the bits are chosen adversarially, and possibly based on the previous predictions. However, little is known on the lower bound side, except an $\Omega(\sqrt{T})$ bound that follows from the trivial example of independent fair coin flips. In this paper, we prove an $\Omega(T^{0.528})$ bound on the calibration error, which is the first super-$\sqrt{T}$ lower bound for this setting to the best of our knowledge. The technical contributions of our work include two lower bound techniques, early stopping and sidestepping, which circumvent the obstacles that have previously hindered strong calibration lower bounds. We also propose an abstraction of the prediction setting, termed the Sign-Preservation game, which may be of independent interest. This game has a much smaller state space than the full prediction setting and allows simpler analyses. The $\Omega(T^{0.528})$ lower bound follows from a general reduction theorem that translates lower bounds on the game value of Sign-Preservation into lower bounds on the calibration error.
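The calibration error is simple to compute once the predictions and outcomes are recorded. Below is a minimal sketch (plain Python; `calibration_error` is an illustrative helper, not from the paper) that tallies $n_p$ and $m_p$ for each predicted probability and evaluates $\sum_p |m_p - p n_p|$, together with the fair-coin example behind the trivial $\Omega(\sqrt{T})$ bound.

```python
import random
from collections import defaultdict

def calibration_error(predictions, outcomes):
    """sum_p |m_p - p * n_p| over the distinct predicted probabilities p,
    where n_p counts rounds with prediction p and m_p the ones among them."""
    n_p, m_p = defaultdict(int), defaultdict(int)
    for p, b in zip(predictions, outcomes):
        n_p[p] += 1
        m_p[p] += b
    return sum(abs(m_p[p] - p * n_p[p]) for p in n_p)

# The trivial Omega(sqrt(T)) example: i.i.d. fair coin flips against a forecaster
# that always predicts 1/2; the error is |#ones - T/2|, typically of order sqrt(T).
T = 10_000
outcomes = [random.randint(0, 1) for _ in range(T)]
print(calibration_error([0.5] * T, outcomes))
```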
Lijie Chen, Jian Li, Mingda Qiao, 2017
In the Best-$k$-Arm problem, we are given $n$ stochastic bandit arms, each associated with an unknown reward distribution. We are required to identify the $k$ arms with the largest means by taking as few samples as possible. In this paper, we make progress towards a complete characterization of the instance-wise sample complexity bounds for the Best-$k$-Arm problem. On the lower bound side, we obtain a novel complexity term to measure the sample complexity that every Best-$k$-Arm instance requires. This is derived by an interesting and nontrivial reduction from the Best-$1$-Arm problem. We also provide an elimination-based algorithm that matches the instance-wise lower bound within doubly-logarithmic factors. The sample complexity of our algorithm strictly dominates the state-of-the-art for Best-$k$-Arm (modulo constant factors).
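For intuition about the problem setup, the sketch below runs a plain successive-elimination baseline for Best-$k$-Arm (a textbook baseline, not the paper's instance-optimal algorithm; all names are illustrative): sample every surviving arm, accept an arm once its lower confidence bound beats the upper confidence bounds of all but $k-1$ competitors, and eliminate it once $k$ arms are confidently better.

```python
import numpy as np

def best_k_arms_elimination(pull, n, k, delta=0.05, batch=50, max_rounds=500):
    """Successive-elimination baseline for Best-k-Arm.
    `pull(i)` returns one reward sample from arm i, with rewards in [0, 1]."""
    sums, counts = np.zeros(n), np.zeros(n, dtype=int)
    active, accepted = set(range(n)), set()
    for _ in range(max_rounds):
        for i in active:                                  # sample every surviving arm
            sums[i] += sum(pull(i) for _ in range(batch))
            counts[i] += batch
        means = sums / counts.clip(min=1)
        rad = np.sqrt(np.log(4 * n * counts.max() ** 2 / delta) / (2 * counts.clip(min=1)))
        lcb, ucb = means - rad, means + rad
        k_rem = k - len(accepted)
        # Accept: at most k_rem - 1 active competitors could still be better.
        to_accept = {i for i in active
                     if sum(ucb[j] >= lcb[i] for j in active if j != i) <= k_rem - 1}
        # Reject: at least k_rem active arms are confidently better.
        to_reject = {i for i in active
                     if sum(lcb[j] > ucb[i] for j in active if j != i) >= k_rem}
        accepted |= to_accept
        active -= to_accept | to_reject
        if len(accepted) == k:
            break
    return accepted, int(counts.sum())

# Toy instance with Bernoulli arms; the true top-3 set is {0, 1, 2}.
rng = np.random.default_rng(1)
means = np.array([0.9, 0.8, 0.7, 0.5, 0.45, 0.4])
best, samples = best_k_arms_elimination(lambda i: rng.binomial(1, means[i]), len(means), k=3)
print(best, samples)
```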
Suppose we are given a set of $n$ elements to be clustered into $k$ (unknown) clusters, and an oracle/expert labeler that can interactively answer pair-wise queries of the form, ``do two elements $u$ and $v$ belong to the same cluster?'' The goal is to recover the optimum clustering by asking the minimum number of queries. In this paper, we initiate a rigorous theoretical study of this basic problem of the query complexity of interactive clustering, and provide strong information-theoretic lower bounds, as well as nearly matching upper bounds. Most clustering problems come with a similarity matrix, which is used by an automated process to cluster similar points together. Our main contribution in this paper is to show the dramatic power of side information, i.e. the similarity matrix, in reducing the query complexity of clustering. A similarity matrix represents noisy pair-wise relationships, such as one computed by some function on attributes of the elements. A natural noise model is one where similarity values are drawn independently from some arbitrary probability distribution $f_+$ when the underlying pair of elements belong to the same cluster, and from some $f_-$ otherwise. We show that given such a similarity matrix, the query complexity reduces drastically from $\Theta(nk)$ (no similarity matrix) to $O\big(\frac{k^2 \log n}{\mathcal{H}^2(f_+ \| f_-)}\big)$, where $\mathcal{H}^2$ denotes the squared Hellinger divergence. Moreover, this is also information-theoretically optimal within an $O(\log n)$ factor. Our algorithms are all efficient and parameter free, i.e., they work without any knowledge of $k$, $f_+$ and $f_-$, and only depend logarithmically on $n$. Along the way, our work also reveals an intriguing connection to popular community detection models such as the \emph{stochastic block model}, significantly generalizes them, and opens up many venues for interesting future research.
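To make the $\Theta(nk)$ no-side-information baseline concrete, the sketch below (illustrative only, not the paper's algorithm) clusters by querying each new element against one representative of every existing cluster, which is exactly the regime that the similarity matrix improves upon.

```python
def cluster_with_oracle(elements, same_cluster):
    """Baseline interactive clustering without side information: each new element
    is queried against one representative of every existing cluster, which uses
    Theta(nk) queries in the worst case.  `same_cluster(u, v)` answers the
    pair-wise question "do u and v belong to the same cluster?"."""
    clusters, queries = [], 0
    for u in elements:
        for members in clusters:
            queries += 1
            if same_cluster(u, members[0]):   # compare against a representative
                members.append(u)
                break
        else:
            clusters.append([u])              # u starts a new cluster
    return clusters, queries

# Toy ground truth: 3 hidden clusters over 12 elements.
truth = {u: u % 3 for u in range(12)}
clusters, queries = cluster_with_oracle(range(12), lambda u, v: truth[u] == truth[v])
print(clusters, queries)
```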
Nearly a decade ago, Azrieli and Shmaya introduced the class of $\lambda$-Lipschitz games, in which every player's payoff function is $\lambda$-Lipschitz with respect to the actions of the other players. They showed that such games admit $\epsilon$-approximate pure Nash equilibria for certain settings of $\epsilon$ and $\lambda$. They left open, however, the question of how hard it is to find such an equilibrium. In this work, we develop a query-efficient reduction from more general games to Lipschitz games. We use this reduction to show a query lower bound for any randomized algorithm finding $\epsilon$-approximate pure Nash equilibria of $n$-player, binary-action, $\lambda$-Lipschitz games that is exponential in $\frac{n\lambda}{\epsilon}$. In addition, we introduce ``Multi-Lipschitz games,'' a generalization involving player-specific Lipschitz values, and provide a reduction from finding equilibria of these games to finding equilibria of Lipschitz games, showing that the value of interest is the sum of the individual Lipschitz parameters. Finally, we provide an exponential lower bound on the deterministic query complexity of finding $\epsilon$-approximate correlated equilibria of $n$-player, $m$-action, $\lambda$-Lipschitz games for strong values of $\epsilon$, motivating the consideration of explicitly randomized algorithms in the above results. Our proof is arguably simpler than those previously used to show similar results.
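As a small illustration of the objects involved (not the paper's reduction; all names are hypothetical), the sketch below checks whether a binary action profile is an $\epsilon$-approximate pure Nash equilibrium using $2n$ payoff queries, on a toy coordination game whose payoffs are $\frac{1}{n-1}$-Lipschitz in any single opponent's action.

```python
def is_eps_pure_nash(payoff, profile, eps):
    """Check that no player can gain more than eps by a unilateral deviation,
    using 2n payoff queries.  `payoff(i, a)` is player i's payoff under profile a."""
    queries = 0
    for i in range(len(profile)):
        flipped = profile[:i] + (1 - profile[i],) + profile[i + 1:]
        queries += 2
        if payoff(i, flipped) > payoff(i, profile) + eps:
            return False, queries
    return True, queries

# Toy coordination game: each player's payoff depends on its own action and the
# fraction of the other players playing 1, so it is 1/(n-1)-Lipschitz in any
# single opponent's action.
n = 8
def payoff(i, a):
    frac = sum(a[:i] + a[i + 1:]) / (n - 1)          # fraction of others playing 1
    return a[i] * frac + (1 - a[i]) * (1 - frac)     # reward for matching the majority

ok, q = is_eps_pure_nash(payoff, (1,) * n, eps=0.1)
print(ok, q)   # the all-ones profile is an exact (hence eps-approximate) pure Nash equilibrium
```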

