
Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning

Posted by: Ewin Tang
Publication date: 2019
Research field: Informatics Engineering
Paper language: English

We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang's breakthrough quantum-inspired algorithm for recommendation systems [STOC '19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén et al. [STOC '19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: $\ell^2$-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
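To make the closing claim concrete, below is a minimal NumPy sketch (function names ours) of length-square, i.e. $\ell^2$-norm, sampling for approximate matrix products, in the spirit of classical Monte Carlo matrix multiplication. Computing the sampling distribution from a dense array, as done here, is purely illustrative; the framework assumes a data structure that yields such samples without reading the whole matrix.

```python
import numpy as np

def l2_sample_rows(A, s, rng):
    """Draw s row indices of A with probability proportional to the
    squared row norms (length-square / ell^2-norm sampling)."""
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    p = row_norms_sq / row_norms_sq.sum()
    return rng.choice(A.shape[0], size=s, p=p), p

def approx_matrix_product(A, B, s=2000, seed=0):
    """Monte Carlo estimate of A.T @ B from s length-square samples.

    Each sampled index i contributes outer(A[i], B[i]) / (s * p[i]);
    the estimator is unbiased, and its Frobenius-norm error scales like
    ||A||_F * ||B||_F / sqrt(s) -- independent of the number of rows.
    """
    rng = np.random.default_rng(seed)
    idx, p = l2_sample_rows(A, s, rng)
    est = np.zeros((A.shape[1], B.shape[1]))
    for i in idx:
        est += np.outer(A[i], B[i]) / (s * p[i])
    return est

# Tiny demo on a random low-rank matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((10000, 5)) @ rng.standard_normal((5, 40))
exact, approx = A.T @ A, approx_matrix_product(A, A)
print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```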



Read also

Semidefinite programming (SDP) is a central topic in mathematical optimization with extensive studies on its efficient solvers. In this paper, we present a proof-of-principle sublinear-time algorithm for solving SDPs with low-rank constraints; specifically, given an SDP with $m$ constraint matrices, each of dimension $n$ and rank $r$, our algorithm can compute any entry and efficient descriptions of the spectral decomposition of the solution matrix. The algorithm runs in time $O(m \cdot \mathrm{poly}(\log n, r, 1/\varepsilon))$ given access to a sampling-based low-overhead data structure for the constraint matrices, where $\varepsilon$ is the precision of the solution. In addition, we apply our algorithm to a quantum state learning task as an application. Technically, our approach aligns with 1) SDP solvers based on the matrix multiplicative weight (MMW) framework by Arora and Kale [TOC '12]; 2) the sampling-based dequantizing framework pioneered by Tang [STOC '19]. In order to compute the matrix exponential required in the MMW framework, we introduce two new techniques that may be of independent interest:
- Weighted sampling: assuming sampling access to each individual constraint matrix $A_1, \ldots, A_\tau$, we propose a procedure that gives a good approximation of $A = A_1 + \cdots + A_\tau$.
- Symmetric approximation: we propose a sampling procedure that gives the spectral decomposition of a low-rank Hermitian matrix $A$. To the best of our knowledge, this is the first sampling-based algorithm for spectral decomposition, as previous works only give singular values and vectors.
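One way to picture the weighted-sampling primitive (this sketch is our illustration, not the paper's exact procedure): given $\ell^2$ samplers for each term of $A = A_1 + \cdots + A_\tau$, pick a term with probability proportional to its squared Frobenius norm, then $\ell^2$-sample a row of that term. By Cauchy-Schwarz, the induced distribution $q(i)$ satisfies $q(i) \ge \|A_i\|^2 / (\tau \sum_t \|A_t\|_F^2)$, i.e. it oversamples the rows of the sum by a bounded factor, which importance reweighting can then correct for.

```python
import numpy as np

def mixture_row_sampler(mats, rng):
    """Sample a row index whose distribution ell^2-oversamples the rows
    of A = sum(mats), using only per-term samplers (illustrative: the
    per-term distributions are computed densely here)."""
    fro_sq = np.array([np.sum(M * M) for M in mats])
    t = rng.choice(len(mats), p=fro_sq / fro_sq.sum())  # pick term by Frobenius weight
    M = mats[t]
    row_p = np.einsum("ij,ij->i", M, M) / fro_sq[t]
    return rng.choice(M.shape[0], p=row_p)              # ell^2-sample a row of A_t
```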
We present classical sublinear-time algorithms for solving low-rank linear systems of equations. Our algorithms are inspired by the HHL quantum algorithm for solving linear systems and the recent breakthrough by Tang of dequantizing the quantum algorithm for recommendation systems. Let $A \in \mathbb{C}^{m \times n}$ be a rank-$k$ matrix, and $b \in \mathbb{C}^m$ be a vector. We present two algorithms: a sampling algorithm that provides a sample from $A^{-1}b$ and a query algorithm that outputs an estimate of an entry of $A^{-1}b$, where $A^{-1}$ denotes the Moore-Penrose pseudo-inverse. Both of our algorithms have query and time complexity $O(\mathrm{poly}(k, \kappa, \|A\|_F, 1/\epsilon) \cdot \mathrm{polylog}(m, n))$, where $\kappa$ is the condition number of $A$ and $\epsilon$ is the precision parameter. Note that the algorithms we consider are sublinear time, so they cannot write and read the whole matrix or vectors. In this paper, we assume that $A$ and $b$ come with well-known low-overhead data structures such that entries of $A$ and $b$ can be sampled according to some natural probability distributions. Alternatively, when $A$ is positive semidefinite, our algorithms can be adapted so that the sampling assumption on $b$ is not required.
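The low-overhead data structure assumed in these results is typically a binary tree over squared entry magnitudes, supporting entry queries, entry updates, and $\ell^2$-norm sampling in $O(\log n)$ time each. A minimal sketch of such a structure (class and method names ours):

```python
import numpy as np

class L2SampleTree:
    """Binary tree over |v_i|^2, supporting O(log n) updates, entry
    queries, and sampling of i with probability |v_i|^2 / ||v||^2."""

    def __init__(self, v):
        n = 1
        while n < len(v):
            n *= 2                        # pad length to a power of two
        self.n = n
        self.v = np.zeros(n, dtype=complex)
        self.tree = np.zeros(2 * n)       # tree[1] holds ||v||^2
        for i, x in enumerate(v):
            self.update(i, x)

    def update(self, i, x):
        self.v[i] = x
        j = self.n + i
        self.tree[j] = abs(x) ** 2
        j //= 2
        while j >= 1:                     # propagate partial sums to the root
            self.tree[j] = self.tree[2 * j] + self.tree[2 * j + 1]
            j //= 2

    def query(self, i):
        return self.v[i]

    def sample(self, rng):
        j, r = 1, rng.uniform(0, self.tree[1])
        while j < self.n:                 # descend left/right by subtree mass
            if r < self.tree[2 * j]:
                j = 2 * j
            else:
                r -= self.tree[2 * j]
                j = 2 * j + 1
        return j - self.n

# E.g., for v = (3, 1, 0, 2), sample() returns indices 0, 1, 3 with
# probabilities 9/14, 1/14, 4/14.
T = L2SampleTree([3, 1, 0, 2])
print(T.sample(np.random.default_rng(0)))
```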
Low-rank tensor decomposition generalizes low-rank matrix approximation and is a powerful technique for discovering low-dimensional structure in high-dimensional data. In this paper, we study Tucker decompositions and use tools from randomized numerical linear algebra called ridge leverage scores to accelerate the core tensor update step in the widely-used alternating least squares (ALS) algorithm. Updating the core tensor, a severe bottleneck in ALS, is a highly-structured ridge regression problem where the design matrix is a Kronecker product of the factor matrices. We show how to use approximate ridge leverage scores to construct a sketched instance for any ridge regression problem such that the solution vector for the sketched problem is a $(1+\varepsilon)$-approximation to the original instance. Moreover, we show that classical leverage scores suffice as an approximation, which then allows us to exploit the Kronecker structure and update the core tensor in time that depends predominantly on the rank and the sketching parameters (i.e., sublinear in the size of the input tensor). We also give upper bounds for ridge leverage scores as rows are removed from the design matrix (e.g., if the tensor has missing entries), and we demonstrate the effectiveness of our approximate ridge regression algorithm for large, low-rank Tucker decompositions on both synthetic and real-world data.
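For a generic ridge regression instance, the $\lambda$-ridge leverage score of row $a_i$ is $\tau_i(\lambda) = a_i^\top (A^\top A + \lambda I)^{-1} a_i$, and sampling rows proportionally to (approximations of) these scores yields a small reweighted instance whose solution approximates the original. A hedged NumPy sketch (names ours; the scores are formed directly here, whereas the paper's point is to approximate them and exploit the Kronecker structure):

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """tau_i(lam) = a_i^T (A^T A + lam*I)^{-1} a_i, computed directly."""
    G = A.T @ A + lam * np.eye(A.shape[1])
    return np.einsum("ij,ij->i", A @ np.linalg.inv(G), A)

def sketched_ridge(A, b, lam, s, seed=0):
    """Row-sample (A, b) by ridge leverage scores and solve the smaller
    reweighted ridge problem; for s large enough this approximates
    argmin_x ||Ax - b||^2 + lam * ||x||^2."""
    rng = np.random.default_rng(seed)
    tau = ridge_leverage_scores(A, lam)
    p = tau / tau.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    w = 1.0 / np.sqrt(s * p[idx])         # importance-sampling reweighting
    SA, Sb = w[:, None] * A[idx], w * b[idx]
    return np.linalg.solve(SA.T @ SA + lam * np.eye(A.shape[1]), SA.T @ Sb)
```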
We provide a number of algorithmic results for the following family of problems: for a given binary $m \times n$ matrix $A$ and integer $k$, decide whether there is a simple binary matrix $B$ which differs from $A$ in at most $k$ entries. For an integer $r$, the simplicity of $B$ is characterized as follows.
- Binary r-Means: matrix $B$ has at most $r$ different columns. This problem is known to be NP-complete already for $r=2$. We show that the problem is solvable in time $2^{O(k \log k)} \cdot (nm)^{O(1)}$ and thus is fixed-parameter tractable parameterized by $k$. We prove that the problem admits a polynomial kernel when parameterized by $r$ and $k$, but it has no polynomial kernel when parameterized by $k$ only unless $\mathrm{NP} \subseteq \mathrm{coNP/poly}$. We also complement these results by showing that, when parameterized by $r$ and $k$, the problem admits an algorithm of running time $2^{O(r \cdot \sqrt{k \log (k+r)})} (nm)^{O(1)}$, which is subexponential in $k$ for $r \in O(k^{1/2-\epsilon})$ for any $\epsilon > 0$.
- Low GF(2)-Rank Approximation: matrix $B$ is of GF(2)-rank at most $r$. This problem is known to be NP-complete already for $r=1$. It is also known to be W[1]-hard when parameterized by $k$. Interestingly, when parameterized by $r$ and $k$, the problem is not only fixed-parameter tractable, but it is solvable in time $2^{O(r^{3/2} \cdot \sqrt{k \log k})} (nm)^{O(1)}$, which is subexponential in $k$.
- Low Boolean-Rank Approximation: matrix $B$ is of Boolean rank at most $r$. The problem is known to be NP-complete for $k=0$ as well as for $r=1$. We show that it is solvable in time $2^{O(r 2^r \cdot \sqrt{k \log k})} (nm)^{O(1)}$, which is subexponential in $k$.
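For readers less familiar with the rank notions above: over GF(2), arithmetic is mod 2, so the GF(2)-rank of a binary matrix can be strictly smaller than its real rank. A short illustrative check (independent of the parameterized algorithms in the abstract):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination
    with XOR row operations."""
    M = np.array(M, dtype=np.uint8) % 2
    rank, rows = 0, M.shape[0]
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # XOR = addition over GF(2)
        rank += 1
    return rank

# Row 3 is the XOR of rows 1 and 2, so GF(2)-rank is 2 while real rank is 3.
M = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
print(gf2_rank(M), np.linalg.matrix_rank(np.array(M)))
```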
We provide a randomized linear-time approximation scheme for a generic problem about clustering of binary vectors subject to additional constraints. The new constrained clustering problem encompasses a number of problems and, by solving it, we obtain the first linear-time approximation schemes for a number of well-studied fundamental problems concerning clustering of binary vectors and low-rank approximation of binary matrices. Among the problems solvable by our approach are Low GF(2)-Rank Approximation, Low Boolean-Rank Approximation, and various …
