
Quantum Kitchen Sinks: An algorithm for machine learning on near-term quantum computers

Added by: Marcus Silva
Publication date: 2018
Field: Physics
Language: English
Authors: C. M. Wilson





Noisy intermediate-scale quantum computing devices are an exciting platform for the exploration of the power of near-term quantum applications. Performing nontrivial tasks in such devices requires a fundamentally different approach than what would be used on an error-corrected quantum computer. One such approach is to use hybrid algorithms, where problems are reduced to a parameterized quantum circuit that is often optimized in a classical feedback loop. Here we describe one such hybrid algorithm for machine learning tasks by building upon the classical algorithm known as random kitchen sinks. Our technique, called quantum kitchen sinks, uses quantum circuits to nonlinearly transform classical inputs into features that can then be used in a number of machine learning algorithms. We demonstrate the power and flexibility of this proposal by using it to solve binary classification problems for synthetic datasets as well as handwritten digits from the MNIST database. Using the Rigetti quantum virtual machine, we show that small quantum circuits provide significant performance lift over standard linear classical algorithms, reducing classification error rates from $50\%$ to $<0.1\%$, and from $4.1\%$ to $1.4\%$ in these two examples, respectively. Further, we are able to run the MNIST classification problem, using full-sized MNIST images, on a Rigetti quantum processing unit, finding a modest performance lift over the linear baseline.
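The feature map sketched in this abstract can be illustrated entirely classically. The snippet below is a minimal sketch, not the paper's implementation: it assumes the random-projection encoding described above (Gaussian weights and uniform offsets mapped to rotation angles), a two-qubit RX-plus-CNOT circuit simulated as a state vector, and one measurement shot per "episode"; names such as `qks_features` are invented for this illustration.

```python
import numpy as np


def two_qubit_circuit_probs(theta):
    """Tiny state-vector simulation: RX(theta[0]) on qubit 0, RX(theta[1]) on
    qubit 1, then CNOT(0 -> 1). Returns the four basis-state probabilities."""
    def rx(a):
        return np.array([[np.cos(a / 2), -1j * np.sin(a / 2)],
                         [-1j * np.sin(a / 2), np.cos(a / 2)]])
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    psi = np.kron(rx(theta[0]), rx(theta[1])) @ np.array([1, 0, 0, 0], dtype=complex)
    psi = cnot @ psi
    return np.abs(psi) ** 2


def qks_features(X, n_episodes=100, scale=1.0, seed=0):
    """Map each input row of X to 2 * n_episodes binary features by drawing a
    random linear projection per episode and sampling one measurement shot."""
    rng = np.random.default_rng(seed)
    n_samples, d = X.shape
    omegas = rng.normal(0.0, scale, size=(n_episodes, 2, d))    # random weights
    betas = rng.uniform(0.0, 2 * np.pi, size=(n_episodes, 2))   # random offsets
    feats = np.zeros((n_samples, 2 * n_episodes))
    for i, x in enumerate(X):
        for e in range(n_episodes):
            theta = omegas[e] @ x + betas[e]       # encode input as rotation angles
            probs = two_qubit_circuit_probs(theta)
            outcome = rng.choice(4, p=probs)       # one shot of both qubits
            feats[i, 2 * e] = (outcome >> 1) & 1   # measured bit of qubit 0
            feats[i, 2 * e + 1] = outcome & 1      # measured bit of qubit 1
    return feats
```

The resulting binary features would then be passed to an off-the-shelf linear classifier (for example, logistic regression), mirroring the linear baselines the abstract compares against.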



Related research

With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears as one of the promising killer applications. Despite significant effort, there has been a disconnect between most quantum ML proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection on "What would you do with 1000 qubits?", we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are struggling, such as generative models in unsupervised and semi-supervised learning, instead of the popular and more tractable supervised learning techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM), an attempt to use near-term quantum devices to tackle high-dimensional datasets of continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could benefit as well from this hybrid quantum-classical framework.
Noisy intermediate-scale quantum (NISQ) computing devices have become an industrial reality in the last few years, and cloud-based interfaces to these devices are enabling exploration of near-term quantum computing on a range of problems. As NISQ devices are too noisy for many of the algorithms with a known quantum advantage, discovering impactful applications for near-term devices is the subject of intense research interest. We explore quantum-assisted machine learning (QAML) on NISQ devices through the perspective of tensor networks (TNs), which offer a robust platform for designing resource-efficient and expressive machine learning models to be dispatched on quantum devices. In particular, we lay out a framework for designing and optimizing TN-based QAML models using classical techniques, and then compiling these models to be run on quantum hardware, with demonstrations for generative matrix product state (MPS) models. We put forth a generalized canonical form for MPS models that aids in compilation to quantum devices, and demonstrate greedy heuristics for compiling with a given topology and gate set that outperform known generic methods in terms of the number of entangling gates, e.g., CNOTs, in some cases by an order of magnitude. We present an exactly solvable benchmark problem for assessing the performance of MPS QAML models, and also present an application for the canonical MNIST handwritten digit dataset. The impacts of hardware topology and day-to-day experimental noise fluctuations on model performance are explored by analyzing both raw experimental counts and statistical divergences of inferred distributions. We also present parametric studies of depolarization and readout noise impacts on model performance using hardware simulators.
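For concreteness, a generative MPS model of the kind referenced above assigns a Born-rule probability |ψ(x)|²/Z to each bit string x, where ψ(x) is obtained by contracting the chain of site tensors on that bit string. The snippet below is a purely classical sketch under that assumption (open boundaries, physical dimension 2, bond dimension D); it does not address the compilation to quantum hardware that the paper itself studies, and the function names are illustrative.

```python
import numpy as np


def mps_amplitude(tensors, bits):
    """Contract an open-boundary MPS (site tensors of shape (D_left, 2, D_right),
    boundary bond dimension 1) on a bit string to obtain its amplitude."""
    v = np.ones(1)
    for A, b in zip(tensors, bits):
        v = v @ A[:, b, :]          # fix the physical index, absorb the bond
    return v.item()


def mps_norm(tensors):
    """<psi|psi> via the usual transfer-matrix contraction."""
    E = np.ones((1, 1))
    for A in tensors:
        E = np.einsum('ab,aic,bid->cd', E, A.conj(), A)
    return E.item().real


def mps_probability(tensors, bits, norm):
    """Born-rule probability of a bit string under the generative MPS."""
    return abs(mps_amplitude(tensors, bits)) ** 2 / norm


# Example: a random 4-site MPS with bond dimension 3.
rng = np.random.default_rng(1)
D = 3
shapes = [(1, 2, D), (D, 2, D), (D, 2, D), (D, 2, 1)]
tensors = [rng.normal(size=s) for s in shapes]
Z = mps_norm(tensors)
print(mps_probability(tensors, [0, 1, 1, 0], Z))
```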
Even the most sophisticated artificial neural networks are built by aggregating substantially identical units called neurons. A neuron receives multiple signals, internally combines them, and applies a non-linear function to the resulting weighted sum. Several attempts to generalize neurons to the quantum regime have been proposed, but all proposals collided with the difficulty of implementing non-linear activation functions, which are essential for classical neurons, due to the linear nature of quantum mechanics. Here we propose a solution to this roadblock in the form of a small quantum circuit that naturally simulates neurons with threshold activation. Our quantum circuit defines a building block, the quantum neuron, that can reproduce a variety of classical neural network constructions while maintaining the ability to process superpositions of inputs and preserve quantum coherence and entanglement. In the construction of feedforward networks of quantum neurons, we provide numerical evidence that the network not only can learn a function when trained with a superposition of inputs and the corresponding output, but that this training suffices to learn the function on all individual inputs separately. When arranged to mimic Hopfield networks, quantum neural networks exhibit properties of associative memory. Patterns are encoded using the simple Hebbian rule for the weights, and we demonstrate attractor dynamics from corrupted inputs. Finally, the fact that our quantum model closely captures (traditional) neural network dynamics implies that the vast body of literature and results on neural networks becomes directly relevant in the context of quantum machine learning.
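The Hopfield behaviour described at the end of this abstract has a simple classical counterpart, which is what the quantum neurons are designed to reproduce. The sketch below shows only that classical reference point (Hebbian weights and sign-threshold updates), not the quantum circuit construction; the function names are invented for illustration.

```python
import numpy as np


def hebbian_weights(patterns):
    """Hebbian rule: W = (1/n) * sum_p x_p x_p^T with a zeroed diagonal.
    Each pattern is a vector of +/-1 entries."""
    P = np.asarray(patterns, dtype=float)
    n = P.shape[1]
    W = P.T @ P / n
    np.fill_diagonal(W, 0.0)
    return W


def hopfield_recall(W, state, steps=20):
    """Synchronous threshold updates s <- sign(W s); a corrupted input
    relaxes toward the nearest stored attractor."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s


# Example: store two patterns, corrupt one bit, and recover the original.
patterns = [[1, -1, 1, -1, 1, -1],
            [1, 1, 1, -1, -1, -1]]
W = hebbian_weights(patterns)
corrupted = [1, -1, 1, -1, 1, 1]        # last entry flipped
print(hopfield_recall(W, corrupted))    # returns the first stored pattern
```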
We present a quantum chemistry benchmark for noisy intermediate-scale quantum computers that leverages the variational quantum eigensolver, active space reduction, a reduced unitary coupled cluster ansatz, and reduced density purification as error mitigation. We demonstrate this benchmark on the 20-qubit IBM Tokyo and 16-qubit Rigetti Aspen processors via the simulation of alkali metal hydrides (NaH, KH, RbH), with the accuracy of the computed ground-state energy serving as the primary benchmark metric. We further parameterize this benchmark suite on the trial circuit type, the level of symmetry reduction, and error mitigation strategies. Our results demonstrate the characteristically high noise level present in near-term superconducting hardware, but provide a relevant baseline for future improvement of the underlying hardware, and a means for comparison across near-term hardware types. We also demonstrate how to reduce the noise in post-processing with specific error mitigation techniques. In particular, the adaptation of McWeeny purification of noisy density matrices dramatically improves the accuracy of quantum computations, which, along with adjustable active space, significantly extends the range of accessible molecular systems. We demonstrate that for specific benchmark settings, the accuracy metric can reach chemical accuracy when computing over the cloud on certain quantum computers.
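In its textbook form, the McWeeny purification mentioned above is the iteration P ← 3P² − 2P³, which pushes the eigenvalues of a noisy, nearly idempotent density matrix back toward 0 and 1. The sketch below implements only that generic iteration; how the benchmark adapts it to the measured reduced density matrices may differ in detail, and the two-orbital example is made up for illustration.

```python
import numpy as np


def mcweeny_purify(P, iterations=10):
    """McWeeny purification: repeatedly apply P <- 3 P^2 - 2 P^3, driving the
    eigenvalues of a nearly idempotent matrix toward 0 or 1 and thereby
    suppressing stochastic noise in the measured density matrix."""
    P = np.asarray(P)
    for _ in range(iterations):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * (P2 @ P)
    return P


# Example: a slightly noisy projector onto one orbital out of two.
clean = np.diag([1.0, 0.0])
noisy = clean + 0.05 * np.array([[-0.3, 0.4],
                                 [0.4, 0.2]])
print(mcweeny_purify(noisy, iterations=5))   # approximately a projector close to diag(1, 0)
```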
Readout errors are a significant source of noise for near-term quantum computers. A variety of methods have been proposed to mitigate these errors using classical post-processing. For a system with $n$ qubits, the entire readout error profile is specified by a $2^n \times 2^n$ matrix. Recent proposals to use sub-exponential approximations rely on small and/or short-ranged error correlations. In this paper, we introduce and demonstrate a methodology to categorize and quantify multiqubit readout error correlations. Two distinct types of error correlations are considered: sensitivity of the measurement of a given qubit to the state of nearby spectator qubits, and measurement operator covariances. We deploy this methodology on IBMQ quantum computers, finding that error correlations are indeed small compared to the single-qubit readout errors on IBMQ Melbourne (15 qubits) and IBMQ Manhattan (65 qubits), but that correlations on IBMQ Melbourne are long-ranged and do not decay with inter-qubit distance.
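As a point of reference for the correction discussed above, the snippet below sketches the standard full-assignment-matrix post-processing: estimate A[i, j] = Pr(read i | prepared j) from calibration circuits, then solve for the underlying distribution. The tensor-product calibration matrix in the example assumes uncorrelated single-qubit readout errors, precisely the assumption whose validity the paper's correlation analysis probes; the 3% and 4% flip rates are invented for illustration.

```python
import numpy as np


def correct_readout(p_measured, A):
    """Invert the readout-error model A @ p_true = p_measured by least squares,
    then clip and renormalize so the result is a valid distribution."""
    p_corr, *_ = np.linalg.lstsq(A, p_measured, rcond=None)
    p_corr = np.clip(p_corr, 0.0, None)
    return p_corr / p_corr.sum()


# Example: a 2-qubit assignment matrix built from per-qubit flip rates
# (columns are the prepared states, rows the recorded outcomes).
a0 = np.array([[0.97, 0.03],
               [0.03, 0.97]])
a1 = np.array([[0.96, 0.04],
               [0.04, 0.96]])
A = np.kron(a0, a1)                            # uncorrelated-error assumption
p_true = np.array([0.5, 0.0, 0.0, 0.5])        # ideal Bell-state statistics
p_measured = A @ p_true
print(correct_readout(p_measured, A))          # recovers ~[0.5, 0, 0, 0.5]
```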