
Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers

Published by: Alejandro Perdomo-Ortiz
Publication date: 2017
Language: English





With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears as one of the promising "killer applications". Despite significant effort, there has been a disconnect between most quantum ML proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection on "What would you do with 1000 qubits?", we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are struggling, such as generative models in unsupervised and semi-supervised learning, instead of the popular and more tractable supervised learning techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM), an attempt to use near-term quantum devices to tackle high-dimensional datasets of continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could benefit as well from this hybrid quantum-classical framework.
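The hybrid workflow described in the abstract, a deep network compressing continuous data into a small binary latent space while a quantum sampler assists training of the generative prior over that space, can be sketched classically. The sketch below is an illustration under stated assumptions, not the authors' QAHM: a toy linear encoder stands in for the deep network, and classical Gibbs sampling over an Ising energy stands in for the samples a quantum annealer would return; all dimensions and learning rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_enc):
    """Stand-in for the deep recognition network: map a continuous data
    vector to a low-dimensional +/-1 latent code that a relatively small
    quantum processor could represent."""
    return np.where(x @ W_enc >= 0, 1.0, -1.0)

def gibbs_sample_prior(J, h, n_steps=200):
    """Classical stand-in for samples a quantum annealer would draw from
    the Ising/Boltzmann prior E(z) = -0.5 z.J.z - h.z over the latents."""
    z = rng.choice([-1.0, 1.0], size=h.shape[0])
    for _ in range(n_steps):
        for i in range(len(z)):
            field = J[i] @ z + h[i]          # J has zero diagonal
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            z[i] = 1.0 if rng.random() < p_up else -1.0
    return z

# Toy dimensions: 64-dim continuous data, 8 binary latent units.
n_vis, n_lat = 64, 8
W_enc = rng.normal(scale=0.1, size=(n_vis, n_lat))
J = rng.normal(scale=0.1, size=(n_lat, n_lat))
J = 0.5 * (J + J.T)
np.fill_diagonal(J, 0.0)
h = np.zeros(n_lat)

x = rng.normal(size=n_vis)            # one "data point"
z_data = encode(x, W_enc)             # positive phase: latents from data
z_model = gibbs_sample_prior(J, h)    # negative phase: latents from prior
# Contrastive update nudges the prior toward the data's latent statistics:
J += 0.01 * (np.outer(z_data, z_data) - np.outer(z_model, z_model))
np.fill_diagonal(J, 0.0)
```

The division of labor matches the abstract: the quantum sampler only ever touches the low-dimensional binary latent space, never the raw high-dimensional continuous data.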


Read also

C. M. Wilson (2018)
Noisy intermediate-scale quantum computing devices are an exciting platform for the exploration of the power of near-term quantum applications. Performing nontrivial tasks in such devices requires a fundamentally different approach than what would be used on an error-corrected quantum computer. One such approach is to use hybrid algorithms, where problems are reduced to a parameterized quantum circuit that is often optimized in a classical feedback loop. Here we describe one such hybrid algorithm for machine learning tasks by building upon the classical algorithm known as random kitchen sinks. Our technique, called quantum kitchen sinks, uses quantum circuits to nonlinearly transform classical inputs into features that can then be used in a number of machine learning algorithms. We demonstrate the power and flexibility of this proposal by using it to solve binary classification problems for synthetic datasets as well as handwritten digits from the MNIST database. Using the Rigetti quantum virtual machine, we show that small quantum circuits provide significant performance lift over standard linear classical algorithms, reducing classification error rates from 50% to below 0.1% and from 4.1% to 1.4% in these two examples, respectively. Further, we are able to run the MNIST classification problem, using full-sized MNIST images, on a Rigetti quantum processing unit, finding a modest performance lift over the linear baseline.
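The feature-map construction is concrete enough to sketch with a small statevector simulation. The following is a minimal illustration of the quantum-kitchen-sinks idea, not the paper's pyQuil/Rigetti implementation: random affine maps turn an input vector into rotation angles for a toy 2-qubit circuit, and one measured bitstring per episode becomes a block of the nonlinear feature vector. The circuit shape, episode count, and dimensions are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def ry(theta):
    """Single-qubit Y-rotation (real-valued, so amplitudes stay real)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target (big-endian ordering).
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def episode_features(x, Omega, beta):
    """One 'kitchen sink' episode: angles = Omega @ x + beta drive
    RY rotations followed by a CNOT; the measured bits are the features."""
    a = Omega @ x + beta
    psi = CNOT @ np.kron(ry(a[0]), ry(a[1])) @ np.array([1.0, 0, 0, 0])
    probs = psi ** 2                      # real amplitudes
    outcome = rng.choice(4, p=probs / probs.sum())
    return np.array([(outcome >> 1) & 1, outcome & 1], dtype=float)

def qks_featurize(x, episodes):
    """Concatenate measured bits over many random episodes."""
    return np.concatenate([episode_features(x, O, b) for O, b in episodes])

# Toy usage: 4-dim inputs through 50 episodes -> 100 binary features,
# which can then feed any linear classifier (e.g. logistic regression).
dim_in, n_episodes = 4, 50
episodes = [(rng.normal(size=(2, dim_in)), rng.uniform(0, 2 * np.pi, size=2))
            for _ in range(n_episodes)]
features = qks_featurize(rng.normal(size=dim_in), episodes)
```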
The concept of quantum computing has inspired a whole new generation of scientists, including physicists, engineers, and computer scientists, to fundamentally change the landscape of information technology. With experimental demonstrations stretching back more than two decades, the quantum computing community has achieved a major milestone over the past few years: the ability to build systems that are stretching the limits of what can be classically simulated, and which enable cloud-based research for a wide range of scientists, thus increasing the pool of talent exploring early quantum systems. While such noisy near-term quantum computing systems fall far short of the requirements for fault-tolerant systems, they provide unique testbeds for exploring the opportunities for quantum applications. Here we highlight the facets associated with these systems, including quantum software, cloud access, benchmarking quantum systems, error correction and mitigation in such systems, and understanding the complexity of quantum circuits and how early quantum applications can run on near-term quantum computers.
Noisy, intermediate-scale quantum (NISQ) computing devices have become an industrial reality in the last few years, and cloud-based interfaces to these devices are enabling exploration of near-term quantum computing on a range of problems. As NISQ devices are too noisy for many of the algorithms with a known quantum advantage, discovering impactful applications for near-term devices is the subject of intense research interest. We explore quantum-assisted machine learning (QAML) on NISQ devices through the perspective of tensor networks (TNs), which offer a robust platform for designing resource-efficient and expressive machine learning models to be dispatched on quantum devices. In particular, we lay out a framework for designing and optimizing TN-based QAML models using classical techniques, and then compiling these models to be run on quantum hardware, with demonstrations for generative matrix product state (MPS) models. We put forth a generalized canonical form for MPS models that aids in compilation to quantum devices, and demonstrate greedy heuristics for compiling with a given topology and gate set that outperform known generic methods in terms of the number of entangling gates, e.g., CNOTs, in some cases by an order of magnitude. We present an exactly solvable benchmark problem for assessing the performance of MPS QAML models, and also present an application to the canonical MNIST handwritten digit dataset. The impacts of hardware topology and day-to-day experimental noise fluctuations on model performance are explored by analyzing both raw experimental counts and statistical divergences of inferred distributions. We also present parametric studies of the impact of depolarization and readout noise on model performance using hardware simulators.
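The generative MPS ("Born machine") models referenced here admit a compact classical description, which is what makes the classical-design/quantum-compile workflow possible. Below is a minimal NumPy sketch of perfect sampling from a right-canonical MPS; it is a toy stand-in, not the paper's generalized canonical form or compilation pipeline. The canonical condition is what lets conditional outcome probabilities be read off site by site during the sweep.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_right_canonical_mps(n_sites, chi):
    """Random MPS satisfying sum_s A[s] @ A[s].T = I at every site
    (right-canonical), so the environment to the right contracts to
    the identity and prefix marginals are exact during sampling."""
    tensors, dl = [], 1
    for i in range(n_sites):
        dr = min(chi, 2 ** (n_sites - i - 1))      # shrink at the right edge
        q, _ = np.linalg.qr(rng.normal(size=(2 * dr, dl)))  # orthonormal cols
        tensors.append(q.T.reshape(dl, 2, dr))
        dl = dr
    return tensors

def sample_bitstring(tensors):
    """Sweep left to right; the squared norms of the two branch vectors
    at each site are exactly the conditional outcome probabilities."""
    v, bits = np.ones(1), []
    for A in tensors:
        branches = [v @ A[:, s, :] for s in (0, 1)]
        p = np.array([b @ b for b in branches])
        s = rng.choice(2, p=p / p.sum())
        bits.append(int(s))
        v = branches[s] / np.linalg.norm(branches[s])
    return bits

mps = random_right_canonical_mps(n_sites=8, chi=4)
print(sample_bitstring(mps))   # one 8-bit sample from the Born distribution
```

In the paper's workflow, a classically trained model of this kind would then be compiled into entangling gates respecting the hardware topology; a canonical form like the one enforced above is what makes the site-by-site mapping to isometric circuit blocks natural.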
Quantum computers are capable of efficiently contracting unitary tensor networks, a task that is likely to remain difficult for classical computers. For instance, networks based on matrix product states or the multi-scale entanglement renormalization ansatz (MERA) can be contracted on a small quantum computer to aid the simulation of a large quantum system. However, without the ability to selectively reset qubits, the associated spatial cost can be exorbitant. In this paper, we propose a protocol that can unitarily reset qubits when the circuit has a common convolutional form, thus dramatically reducing the spatial cost of implementing the contraction algorithm on general near-term quantum computers. This protocol generates fresh qubits from used ones by partially applying the time-reversed quantum circuit over qubits that are no longer in use. In the absence of noise, we prove that the state of a subset of these qubits becomes $|0\ldots 0\rangle$, up to an error exponentially small in the number of gates applied. We also provide numerical evidence that the protocol works in the presence of noise, and formulate a condition under which this noise resilience follows rigorously.
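The identity underlying the protocol, that running a circuit's time reverse undoes it exactly, is easy to verify numerically. The sketch below demonstrates only that basic principle on a toy staircase circuit of random two-qubit gates; the paper's actual contribution, applying the reversed circuit only partially over retired qubits and bounding the resulting error, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(dim):
    """Random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def apply_two_qubit(psi, gate, i, n):
    """Apply a 4x4 gate to adjacent qubits (i, i+1) of an n-qubit state."""
    psi = psi.reshape(2 ** i, 4, 2 ** (n - i - 2))
    return np.einsum('ba,lar->lbr', gate, psi).reshape(-1)

n = 4
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0                                       # start in |0000>

gates = [(haar_unitary(4), i) for i in range(n - 1)]   # staircase circuit
for g, i in gates:                                 # forward evolution
    psi = apply_two_qubit(psi, g, i, n)
for g, i in reversed(gates):                       # time-reversed circuit
    psi = apply_two_qubit(psi, g.conj().T, i, n)

print(abs(psi[0]) ** 2)                            # ~1.0: reset to |0000>
```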
We present a quantum chemistry benchmark for noisy intermediate-scale quantum computers that leverages the variational quantum eigensolver, active space reduction, a reduced unitary coupled cluster ansatz, and reduced density purification as error mitigation. We demonstrate this benchmark on the 20-qubit IBM Tokyo and 16-qubit Rigetti Aspen processors via the simulation of alkali metal hydrides (NaH, KH, RbH), with the accuracy of the computed ground state energy serving as the primary benchmark metric. We further parameterize this benchmark suite on the trial circuit type, the level of symmetry reduction, and error mitigation strategies. Our results demonstrate the characteristically high noise level present in near-term superconducting hardware, but provide a relevant baseline for future improvement of the underlying hardware, and a means for comparison across near-term hardware types. We also demonstrate how to reduce the noise in post-processing with specific error mitigation techniques. In particular, the adaptation of McWeeny purification of noisy density matrices dramatically improves the accuracy of quantum computations, which, along with an adjustable active space, significantly extends the range of accessible molecular systems. We demonstrate that for specific benchmark settings, the accuracy metric can reach chemical accuracy when computing over the cloud on certain quantum computers.
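McWeeny purification itself is a short iteration: the map $\rho \to 3\rho^2 - 2\rho^3$ has idempotent matrices as its attracting fixed points, so iterating it squeezes small noise out of a nearly idempotent density matrix. The sketch below uses an invented 4x4 toy example and noise model, not data from the paper, purely to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(4)

def mcweeny_purify(rho, n_iter=10):
    """Iterate rho -> 3*rho^2 - 2*rho^3; eigenvalues near 0 or 1 are
    driven to exactly 0 or 1, restoring idempotency."""
    for _ in range(n_iter):
        rho = 3 * rho @ rho - 2 * rho @ rho @ rho
    return rho

# Hypothetical example: an exact pure-state density matrix ...
v = rng.normal(size=4)
v /= np.linalg.norm(v)
rho_exact = np.outer(v, v)

# ... corrupted by small symmetric "measurement" noise.
noise = rng.normal(scale=0.02, size=(4, 4))
rho_noisy = rho_exact + 0.5 * (noise + noise.T)

rho_pure = mcweeny_purify(rho_noisy)
print(np.linalg.norm(rho_noisy - rho_exact))   # error before purification
print(np.linalg.norm(rho_pure - rho_exact))    # typically smaller after
```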