
Quantum-enhanced least-square support vector machine: simplified quantum algorithm and sparse solutions

Posted by Jie Lin
Publication date: 2019
Research language: English





Quantum algorithms can enhance machine learning in several respects. Here, we study the quantum-enhanced least-square support vector machine (LS-SVM). First, a novel quantum algorithm that uses a continuous variable to assist matrix inversion is introduced to simplify the quantum LS-SVM algorithm while retaining its exponential speed-up. Second, we propose a hybrid quantum-classical scheme for obtaining sparse solutions of the LS-SVM. By encoding a large dataset into a quantum state, a much smaller transformed dataset can be extracted with the quantum matrix toolbox and then processed by a classical SVM. We also incorporate kernel methods into the above quantum algorithms, exploiting both the exponentially large Hilbert space of qubits and the infinite dimensionality of continuous variables for quantum feature maps. The quantum LS-SVM thus uses quantum resources to address themes central to SVMs, such as sparsity and kernel methods, and highlights quantum advantages ranging from speed-up to the potential capacity to solve classically difficult machine learning tasks.
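
For orientation, classical LS-SVM training reduces to solving a single (n+1) × (n+1) linear system in the bias and the dual coefficients, and it is this matrix-inversion step that the quantum subroutines above are designed to accelerate. The following is a minimal NumPy sketch of that classical baseline; the RBF kernel and the parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal classical LS-SVM sketch (NumPy only), illustrating the linear system
# whose inversion the quantum algorithm is designed to accelerate.
# The kernel choice (RBF) and the regularization value `gamma` are
# illustrative assumptions, not taken from the paper.
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual system  F [b; alpha] = [0; y]  with
    F = [[0, 1^T], [1, K + I/gamma]]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    F = np.zeros((n + 1, n + 1))
    F[0, 1:] = 1.0
    F[1:, 0] = 1.0
    F[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(F, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    return np.sign(rbf_kernel(X_test, X_train, sigma) @ alpha + b)

# Toy usage: two Gaussian blobs labelled +1 / -1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(+1, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
print("training accuracy:", (lssvm_predict(X, alpha, b, X) == y).mean())
```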




Read also

The twin support vector machine and its extensions have achieved strong results on binary classification problems, but they still face difficulties such as model selection and quickly solving multi-class problems. This paper is devoted to a fast regularization-parameter tuning algorithm for the twin multi-class support vector machine. A new method for dividing the sample dataset is adopted, and the Lagrangian multipliers are proved to be piecewise linear with respect to the regularization parameters by combining linear equations and block-matrix theory. Eight kinds of events are defined to locate the starting event, and a solution-path algorithm is then designed, which greatly reduces the computational cost. In addition, only a few points are needed to complete the initialization, and the Lagrangian multipliers are proved to equal 1 as the regularization parameter tends to infinity. Simulation results on UCI datasets show that the proposed method achieves good classification performance while reducing the computational cost of grid search from an exponential to a constant level.
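
For contrast with the solution-path idea, the grid-search baseline that the paper seeks to replace refits one model per candidate regularization value. The sketch below uses scikit-learn's ordinary SVC rather than a twin SVM (no twin-SVM implementation is assumed), purely to make the cost of exhaustive tuning concrete; the grid and dataset are illustrative.

```python
# Grid-search baseline for regularization tuning: one full SVM fit per
# candidate value of C, in contrast to tracing a single solution path.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)              # small UCI-style multi-class set
grid = {"C": np.logspace(-3, 3, 13)}           # one model fit per value of C
search = GridSearchCV(SVC(kernel="linear"), grid, cv=5)
search.fit(X, y)
print("best C:", search.best_params_["C"], "cv accuracy:", search.best_score_)
```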
We study the quantum synchronization between a pair of two-level systems inside two coupled cavities. Using a digital-analog decomposition of the master equation that rules the system dynamics, we show that this approach leads to quantum synchronization between both two-level systems. Moreover, we can identify in this digital-analog block decomposition the fundamental elements of a quantum machine learning protocol, in which the agent and the environment (learning units) interact through a mediating system, namely, the register. If we additionally equip this algorithm with a classical feedback mechanism, consisting of projective measurements on the register, reinitialization of the register state, and local conditional operations on the agent and environment subspaces, a powerful and flexible quantum machine learning protocol emerges. Indeed, numerical simulations show that this protocol enhances the synchronization process, even when every subsystem experiences a different loss/decoherence mechanism, and gives us the flexibility to choose the synchronization state. Finally, we propose an implementation based on current technologies in superconducting circuits.
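
As a rough illustration of the synchronization phenomenon being studied (not the paper's digital-analog, cavity-mediated protocol), the QuTiP sketch below evolves two detuned, excitation-exchange-coupled qubits under different local decay rates and reports a crude synchronization witness; the Hamiltonian, rates, and observables are all assumptions chosen for illustration.

```python
# Toy open-system model of two dissipatively coupled qubits (QuTiP), meant only
# to make "synchronization under different loss mechanisms" concrete.
# This is a drastically simplified stand-in for the paper's two-cavity setup.
import numpy as np
from qutip import basis, tensor, qeye, sigmax, sigmaz, sigmam, mesolve

w1, w2, g = 1.0, 1.05, 0.05          # slightly detuned qubits, weak coupling
gamma1, gamma2 = 0.01, 0.03          # different local decay rates

sm1 = tensor(sigmam(), qeye(2))
sm2 = tensor(qeye(2), sigmam())
H = 0.5 * w1 * tensor(sigmaz(), qeye(2)) \
  + 0.5 * w2 * tensor(qeye(2), sigmaz()) \
  + g * (sm1.dag() * sm2 + sm2.dag() * sm1)      # excitation-exchange coupling
c_ops = [np.sqrt(gamma1) * sm1, np.sqrt(gamma2) * sm2]

psi0 = tensor((basis(2, 0) + basis(2, 1)).unit(), basis(2, 1))
tlist = np.linspace(0, 400, 2000)
result = mesolve(H, psi0, tlist, c_ops,
                 e_ops=[tensor(sigmax(), qeye(2)), tensor(qeye(2), sigmax())])

# A crude synchronization witness: the late-time correlation of <sigma_x>.
x1, x2 = result.expect
corr = np.corrcoef(x1[-500:], x2[-500:])[0, 1]
print("late-time <sx> correlation:", corr)
```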
We consider gradient-descent-like algorithms for Support Vector Machine (SVM) training when the data is in relational form. The gradient of the SVM objective cannot be efficiently computed by known techniques, as it suffers from the "subtraction problem". We first show that the subtraction problem cannot be surmounted, by proving that computing any constant approximation of the gradient of the SVM objective function is $\#P$-hard, even for acyclic joins. We circumvent the subtraction problem, however, by restricting our attention to stable instances, which intuitively are instances where a nearly optimal solution remains nearly optimal if the points are perturbed slightly. We give an efficient algorithm that computes a "pseudo-gradient" that guarantees convergence on stable instances at a rate comparable to that achieved with the actual gradient. We believe our results suggest that this sort of stability analysis would likely yield useful insight in designing algorithms on relational data for other learning problems in which the subtraction problem arises.
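
For reference, here is the SVM (sub)gradient computed on an explicitly materialized dataset; it is this quantity that, per the abstract, becomes #P-hard to approximate when the points are only given implicitly by a relational join. The regularization value, step-size schedule, and data are illustrative assumptions.

```python
# Subgradient descent on the standard soft-margin SVM objective, with the
# data materialized explicitly. The relational setting in the abstract is the
# case where this same gradient cannot be computed efficiently.
import numpy as np

def svm_subgradient(w, X, y, lam):
    """Subgradient of  lam/2 ||w||^2 + (1/n) sum_i max(0, 1 - y_i <w, x_i>)."""
    margins = y * (X @ w)
    active = margins < 1.0                        # points violating the margin
    return lam * w - (y[active, None] * X[active]).sum(0) / len(y)

def svm_train(X, y, lam=0.1, steps=1000, eta=0.1):
    w = np.zeros(X.shape[1])
    for t in range(1, steps + 1):
        w -= (eta / t) * svm_subgradient(w, X, y, lam)   # decaying step size
    return w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 3)), rng.normal(+1, 1, (50, 3))])
y = np.array([-1.0] * 50 + [1.0] * 50)
w = svm_train(X, y)
print("training accuracy:", (np.sign(X @ w) == y).mean())
```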
We present a quantum algorithm for simulating the dynamics of Hamiltonians that are not necessarily sparse. Our algorithm is based on the input model in which the entries of the Hamiltonian are stored in a data structure in a quantum random access memory (qRAM) that allows efficient preparation of states encoding the rows of the Hamiltonian. We use a linear combination of quantum walks to achieve poly-logarithmic dependence on precision. The time complexity of our algorithm, measured in terms of circuit depth, is $O(t\sqrt{N}\|H\|\,\mathrm{polylog}(N, t\|H\|, 1/\epsilon))$, where $t$ is the evolution time, $N$ is the dimension of the system, and $\epsilon$ is the error in the final state, which we call precision. Our algorithm can be directly applied as a subroutine for unitary implementation and for quantum linear-system solvers, achieving $\widetilde{O}(\sqrt{N})$ dependence for both applications.
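
As a point of comparison, the classical route to the same task, evolving a state under a dense N × N Hamiltonian, requires explicitly exponentiating the matrix, which scales polynomially in N. A brief SciPy sketch of that baseline follows; the random Hermitian matrix, dimension, and evolution time are illustrative assumptions.

```python
# Classical reference point: |psi(t)> = exp(-i H t) |psi(0)> for a dense,
# non-sparse Hamiltonian. Dense matrix exponentiation costs O(N^3) classically,
# which is the scaling the sqrt(N)-depth quantum algorithm is compared against.
import numpy as np
from scipy.linalg import expm

N, t = 64, 1.0
rng = np.random.default_rng(2)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2                     # dense Hermitian "Hamiltonian"

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0
psi_t = expm(-1j * H * t) @ psi0             # exact evolution

print("norm preserved:", np.allclose(np.linalg.norm(psi_t), 1.0))
```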
In 1998, Brassard, Hoyer, Mosca, and Tapp (BHMT) gave a quantum algorithm for approximate counting. Given a list of $N$ items, $K$ of them marked, their algorithm estimates $K$ to within relative error $\varepsilon$ by making only $O\left(\frac{1}{\varepsilon}\sqrt{\frac{N}{K}}\right)$ queries. Although this speedup is of Grover type, the BHMT algorithm has the curious feature of relying on the Quantum Fourier Transform (QFT), more commonly associated with Shor's algorithm. Is this necessary? This paper presents a simplified algorithm, which we prove achieves the same query complexity using Grover iterations only. We also generalize this to a QFT-free algorithm for amplitude estimation. Related approaches to approximate counting were sketched previously by Grover, Abrams and Williams, Suzuki et al., and Wie (the latter two as we were writing this paper), but in all cases without rigorous analysis.
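
To make the Grover-iterations-only idea concrete, the toy simulation below uses the fact that after r Grover iterations the probability of observing a marked item is sin²((2r+1)θ) with sin²θ = K/N, mimics the measurements by Bernoulli sampling, and recovers K by maximum likelihood in the spirit of the Suzuki et al. approach mentioned above; the iteration schedule and shot counts are assumptions, and this is not the paper's own algorithm.

```python
# Toy numerical illustration of QFT-free approximate counting: sample
# "measurement" outcomes at several Grover iteration counts and recover K
# by a grid maximum-likelihood estimate of theta.
import numpy as np

rng = np.random.default_rng(3)
N, K_true = 1 << 10, 37
theta_true = np.arcsin(np.sqrt(K_true / N))

schedule = [0, 1, 2, 4, 8, 16, 32]           # Grover iteration counts
shots = 100
hits = [rng.binomial(shots, np.sin((2 * r + 1) * theta_true) ** 2)
        for r in schedule]

# Grid maximum-likelihood estimate of theta (hence K) from the observed counts.
thetas = np.linspace(1e-4, np.pi / 2 - 1e-4, 20000)
loglik = np.zeros_like(thetas)
for r, h in zip(schedule, hits):
    p = np.clip(np.sin((2 * r + 1) * thetas) ** 2, 1e-12, 1 - 1e-12)
    loglik += h * np.log(p) + (shots - h) * np.log(1 - p)
K_est = N * np.sin(thetas[np.argmax(loglik)]) ** 2
print(f"true K = {K_true}, estimated K = {K_est:.1f}")
```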