
Low Rank Approximation in Simulations of Quantum Algorithms

Posted by Linjian Ma
Publication date: 2021
Research field: Informatics engineering
Paper language: English

Simulating quantum algorithms on classical computers is challenging when the system size, i.e., the number of qubits used in the quantum algorithm, is moderately large. However, some quantum algorithms and the corresponding quantum circuits can be simulated efficiently on a classical computer if the input quantum state is a low-rank tensor and all intermediate states of the quantum algorithm can be represented or approximated by low-rank tensors. In this paper, we examine the possibility of simulating a few quantum algorithms by using low-rank canonical polyadic (CP) decomposition to represent the input and all intermediate states of these algorithms. Two rank reduction algorithms are used to enable efficient simulation. We show that some of the algorithms preserve the low-rank structure of the input state and can thus be efficiently simulated on a classical computer. However, the rank of the intermediate states in other quantum algorithms can increase rapidly, making efficient simulation more difficult. To some extent, such difficulty reflects the advantage or superiority of a quantum computer over a classical computer. As a result, understanding the low-rank structure of a quantum algorithm allows us to identify algorithms that can benefit significantly from quantum computers.
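To make the rank argument concrete, here is a minimal NumPy sketch (not from the paper; the function names are illustrative) of an $n$-qubit state stored as CP factors, one $2 \times R$ matrix per qubit. A single-qubit gate multiplies only its own factor, so the CP rank is preserved exactly; a two-qubit gate such as CNOT, which splits as $|0\rangle\langle 0| \otimes I + |1\rangle\langle 1| \otimes X$, can double the rank, which is why rank-reduction steps are needed between circuit layers.

```python
import numpy as np

def cp_to_state(factors):
    """Contract CP factors (each of shape (2, R)) into the full 2^n state vector."""
    n, R = len(factors), factors[0].shape[1]
    state = np.zeros(2 ** n, dtype=complex)
    for r in range(R):
        term = np.ones(1, dtype=complex)
        for A in factors:
            term = np.kron(term, A[:, r])
        state += term
    return state

def apply_single_qubit_gate(factors, U, k):
    """A single-qubit gate touches one CP factor only, so the rank is unchanged."""
    new = [A.copy() for A in factors]
    new[k] = U @ new[k]
    return new

# Example: |+>|+>|+> is exactly rank 1 in CP form; applying H to qubit 0 keeps rank 1.
plus = (np.ones(2) / np.sqrt(2)).astype(complex)
factors = [plus.reshape(2, 1) for _ in range(3)]
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
print(cp_to_state(apply_single_qubit_gate(factors, H, 0)).round(3))  # |0>|+>|+>
```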

Read also

For high-dimensional data representation, nonnegative tensor ring (NTR) decomposition equipped with manifold learning has become a promising model for exploiting the multi-dimensional structure of tensor data and extracting features from it. However, existing methods such as graph regularized nonnegative tensor ring decomposition (GNTR) model only the pair-wise similarities of objects. For tensor data with a complex manifold structure, a graph cannot accurately capture the similarity relationships. In this paper, in order to effectively utilize the higher-order and complicated similarities among objects, we introduce hypergraphs into the NTR framework to further enhance feature extraction, upon which a hypergraph regularized nonnegative tensor ring decomposition (HGNTR) method is developed. To reduce the computational complexity and suppress noise, we apply a low-rank approximation trick to accelerate HGNTR (called LraHGNTR). Our experimental results show that, compared with other state-of-the-art algorithms, the proposed HGNTR and LraHGNTR achieve higher performance in clustering tasks; in addition, LraHGNTR greatly reduces running time without decreasing accuracy.
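As a hedged illustration of the hypergraph regularizer, here is a sketch of the standard hypergraph Laplacian of Zhou et al., which is the kind of penalty term HGNTR adds to the NTR objective (the exact formulation in the paper may differ):

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Unnormalized hypergraph Laplacian L = D_v - H W D_e^{-1} H^T,
    where H is the (n_vertices, n_edges) binary incidence matrix and
    w holds the hyperedge weights (all ones by default)."""
    n_e = H.shape[1]
    w = np.ones(n_e) if w is None else np.asarray(w, dtype=float)
    d_e = H.sum(axis=0)            # hyperedge degrees
    d_v = H @ w                    # vertex degrees
    return np.diag(d_v) - H @ np.diag(w / d_e) @ H.T

# Vertices 0-2 share one hyperedge; vertices 2 and 3 share another.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
L = hypergraph_laplacian(H)
# A regularizer tr(X^T L X) then penalizes feature vectors in X that
# disagree within a hyperedge.
print(L)
```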
Shi Jin, Xiantao Li (2020)
Random batch algorithms are constructed for quantum Monte Carlo simulations. The main objective is to alleviate the computational cost associated with the calculation of two-body interactions, including the pairwise interactions in the potential energy and the two-body terms in the Jastrow factor. In the framework of variational Monte Carlo methods, the random batch algorithm is constructed based on the over-damped Langevin dynamics, so that updating the position of each particle in an $N$-particle system requires only $\mathcal{O}(1)$ operations; thus for each time step the computational cost for $N$ particles is reduced from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$. For diffusion Monte Carlo methods, the random batch algorithm uses an energy decomposition to avoid the computation of the total energy in the branching step. The effectiveness of the random batch method is demonstrated using a system of liquid ${}^4$He atoms interacting with a graphite surface.
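A minimal sketch of the random-batch idea behind this cost reduction (the force law, batch size, and step size below are illustrative assumptions, not the paper's $^4$He-graphite setup): particles are shuffled into small batches at each step, interactions are summed only within a batch, and the sum is rescaled by $(N-1)/(p-1)$ so the force estimate stays unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_force(xi, xj):
    """Illustrative smooth repulsive pair force (a stand-in potential)."""
    d = xi - xj
    return d / (d @ d + 1.0) ** 1.5

def random_batch_forces(X, p=2):
    """O(N) per step: each particle interacts only with the p-1 others
    in its random batch, rescaled by (N-1)/(p-1) to stay unbiased."""
    N = X.shape[0]
    forces = np.zeros_like(X)
    perm = rng.permutation(N)
    scale = (N - 1) / (p - 1)
    for b in range(0, N, p):
        batch = perm[b:b + p]
        for i in batch:
            for j in batch:
                if i != j:
                    forces[i] += scale * pair_force(X[i], X[j])
    return forces

# One over-damped Langevin step driven by random-batch forces:
X, dt = rng.normal(size=(100, 3)), 1e-3
X = X + dt * random_batch_forces(X) + np.sqrt(2 * dt) * rng.normal(size=X.shape)
```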
We consider the problem of computing the rank of an $m \times n$ matrix $A$ over a field. We present a randomized algorithm to find a set of $r = \mathrm{rank}(A)$ linearly independent columns in $\tilde{O}(|A| + r^\omega)$ field operations, where $|A|$ denotes the number of nonzero entries in $A$ and $\omega < 2.38$ is the matrix multiplication exponent. Previously the best known algorithm to find a set of $r$ linearly independent columns was Gaussian elimination, with running time $O(mnr^{\omega-2})$. Our algorithm is faster when $r < \max(m,n)$, for instance when the matrix is rectangular. We also consider the problem of computing the rank of a matrix dynamically, supporting the operations of rank-one updates and additions and deletions of rows and columns. We present an algorithm that updates the rank in $\tilde{O}(mn)$ field operations. We show that these algorithms can be used to obtain faster algorithms for various problems in numerical linear algebra, combinatorial optimization, and dynamic data structures.
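The flavor of the approach can be sketched in floating-point arithmetic (an illustration only: the paper works over a general field and reaches $\tilde{O}(|A| + r^\omega)$ with more careful machinery than the Gaussian sketch plus pivoted QR used here):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)

def independent_columns(A, tol=1e-10):
    """Randomized sketch-then-pivot: compress the rows of A with a random
    Gaussian map, then pick pivot columns with rank-revealing QR."""
    m, n = A.shape
    k = min(m, n) + 10                         # oversampled sketch size
    B = (rng.standard_normal((k, m)) / np.sqrt(m)) @ A
    _, R, piv = qr(B, mode='economic', pivoting=True)
    d = np.abs(np.diag(R))
    r = int(np.sum(d > tol * d[0]))            # numerical rank of the sketch
    return piv[:r]

A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))  # rank 8
cols = independent_columns(A)
print(len(cols), np.linalg.matrix_rank(A[:, cols]))              # 8 8
```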
Yuning Yang (2019)
The epsilon alternating least squares ($\epsilon$-ALS) is developed and analyzed for the canonical polyadic decomposition (approximation) of a higher-order tensor where one or more of the factor matrices are assumed to be columnwise orthonormal. It is shown that the algorithm globally converges to a KKT point for all tensors without any assumption. For the original ALS, by further studying the properties of the polar decomposition, we also establish its global convergence under a reality assumption not stronger than those in the literature. These results completely address a question concerning global convergence raised in [L. Wang, M. T. Chu and B. Yu, SIAM J. Matrix Anal. Appl., 36 (2015), pp. 1--19]. In addition, an initialization procedure is proposed, which possesses a provable lower bound when the number of columnwise orthonormal factors is one. Armed with this initialization procedure, numerical experiments show that the $\epsilon$-ALS exhibits promising performance in terms of efficiency and effectiveness.
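Below is a plain ALS sweep with the polar-decomposition (orthogonal Procrustes) update for the orthonormal factor, sketched in NumPy (illustrative only: the $\epsilon$-ALS of the paper adds a small regularization term to each update, omitted here, and the tensor sizes and rank are made up):

```python
import numpy as np

def polar_orthonormal(M):
    """Closest matrix with orthonormal columns to M: U V^T from the thin
    SVD M = U S V^T (the orthogonal Procrustes / polar-decomposition update)."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def unfold(T, mode):
    """Mode-k unfolding with C-order column indexing."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Columnwise Kronecker product, rows indexed as (x, y) -> x*len(Y) + y."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3
T = rng.standard_normal((I, J, K))
A, B = rng.standard_normal((I, R)), rng.standard_normal((J, R))
C = polar_orthonormal(rng.standard_normal((K, R)))

for _ in range(20):
    # Unconstrained factors: least-squares updates via the normal equations.
    A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    # Orthonormal factor: Procrustes solution via polar decomposition.
    C = polar_orthonormal(unfold(T, 2) @ khatri_rao(A, B))
```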
One of the major challenges for low-rank multi-fidelity (MF) approaches is the assumption that low-fidelity (LF) and high-fidelity (HF) models admit similar low-rank kernel representations. Low-rank MF methods have traditionally attempted to exploit low-rank representations of linear kernels, which are kernel functions of the form $K(u,v) = v^T u$ for vectors $u$ and $v$. However, such linear kernels may not be able to capture low-rank behavior, and they may admit LF and HF kernels that are not similar. Such a situation renders a naive approach to low-rank MF procedures ineffective. In this paper, we propose a novel approach for the selection of a near-optimal kernel function for use in low-rank MF methods. The proposed framework is a two-step strategy wherein: (1) the hyperparameters of a library of kernel functions are optimized, and (2) a particular combination of the optimized kernels is selected, through either a convex mixture (Additive Kernels) or a data-driven optimization (Adaptive Kernels). Both resulting methods use only the available inexpensive low-fidelity data, so no evaluation of the high-fidelity simulation model is needed until a kernel is chosen. These proposed approaches are tested on five non-trivial problems, including multi-fidelity surrogate modeling for one- and two-species molecular systems, the gravitational many-body problem, associating polymer networks, plasmonic nano-particle arrays, and incompressible flow in channels with stenosis. The results of these numerical experiments demonstrate the numerical stability and efficiency of both proposed kernel selection procedures, as well as the high accuracy of the resulting predictive models for estimating quantities of interest. Comparisons against standard linear-kernel procedures also demonstrate the increased accuracy of the optimized kernel approaches.
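A minimal sketch of the Additive Kernels variant (the two-entry library, the kernel-ridge fit, and the simplex grid search are all illustrative assumptions; the paper's hyperparameter optimization is more elaborate, but likewise uses only low-fidelity data):

```python
import numpy as np
from itertools import product

def rbf(u, v, ell):
    """Nonlinear library kernel; the lengthscale ell is a tunable hyperparameter."""
    d2 = np.sum((u[:, None, :] - v[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * ell ** 2))

def linear(u, v, _):
    """The linear kernel K(u, v) = v^T u that traditional low-rank MF uses."""
    return u @ v.T

library = [(rbf, 0.5), (rbf, 2.0), (linear, None)]  # tiny illustrative library

def additive_kernel(w, u, v):
    """Convex mixture of the library kernels."""
    return sum(wi * k(u, v, p) for wi, (k, p) in zip(w, library))

def val_error(w, Xtr, ytr, Xva, yva, lam=1e-8):
    """Kernel-ridge validation error, computed from cheap LF data only."""
    K = additive_kernel(w, Xtr, Xtr)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return np.linalg.norm(additive_kernel(w, Xva, Xtr) @ alpha - yva)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 0]   # stand-in for low-fidelity outputs

# Pick the best convex combination over a coarse grid on the simplex.
simplex = [w for w in product(np.linspace(0, 1, 11), repeat=3)
           if abs(sum(w) - 1.0) < 1e-9]
best = min(simplex, key=lambda w: val_error(w, X[:40], y[:40], X[40:], y[40:]))
print(best)
```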