
Nonlinear input transformations are ubiquitous in quantum reservoir computing

 Added by Luke Govia
 Publication date 2021
Fields: Physics
Language: English





The nascent computational paradigm of quantum reservoir computing presents an attractive use of near-term, noisy intermediate-scale quantum processors. To understand the potential power and use cases of quantum reservoir computing, it is necessary to define a conceptual framework to separate its constituent components and determine their impacts on performance. In this manuscript, we utilize such a framework to isolate the input encoding component of contemporary quantum reservoir computing schemes. We find that across the majority of schemes the input encoding implements a nonlinear transformation on the input data. As nonlinearity is known to be a key computational resource in reservoir computing, this calls into question the necessity and function of further, post-input, processing. Our findings will impact the design of future quantum reservoirs, as well as the interpretation of results and fair comparison between proposed designs.
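To illustrate the abstract's central point, here is a minimal sketch (not taken from the paper) of one common style of qubit input encoding: a scalar input s is loaded by a rotation of the qubit, so that even before any reservoir dynamics, the measured observable is already a nonlinear (trigonometric) function of s.

```python
import numpy as np

def encode(s):
    # Rotation-based input encoding: |psi(s)> = Ry(pi * s)|0>
    # (a representative choice, not the specific encoding of any one scheme)
    theta = np.pi * s
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def expect_z(psi):
    # Expectation value <Z> for a single-qubit state vector
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2

inputs = np.linspace(0.0, 1.0, 5)
features = np.array([expect_z(encode(s)) for s in inputs])
# <Z> = cos(pi * s): the encoding alone is nonlinear in the input s
```

The readout feature cos(pi * s) is nonlinear in s, so any downstream linear readout already has access to a nonlinear transformation of the input, which is the resource the abstract highlights.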

Related research

Realizing the promise of quantum information processing remains a daunting task, given the omnipresence of noise and error. Adapting noise-resilient classical computing modalities to quantum mechanics may be a viable path towards near-term applications in the noisy intermediate-scale quantum era. Here, we propose continuous variable quantum reservoir computing in a single nonlinear oscillator. Through numerical simulation of our model we demonstrate quantum-classical performance improvement, and identify its likely source: the nonlinearity of quantum measurement. Beyond quantum reservoir computing, this result may impact the interpretation of results across quantum machine learning. We study how the performance of our quantum reservoir depends on Hilbert space dimension, how it is impacted by injected noise, and briefly comment on its experimental implementation. Our results show that quantum reservoir computing in a single nonlinear oscillator is an attractive modality for quantum computing on near-term hardware.
Efficient quantum state measurement is important for maximizing the extracted information from a quantum system. For multi-qubit quantum processors in particular, the development of a scalable architecture for rapid and high-fidelity readout remains a critical unresolved problem. Here we propose reservoir computing as a resource-efficient solution to quantum measurement of superconducting multi-qubit systems. We consider a small network of Josephson parametric oscillators, which can be implemented with minimal device overhead and in the same platform as the measured quantum system. We theoretically analyze the operation of this Kerr network as a reservoir computer to classify stochastic time-dependent signals subject to quantum statistical features. We apply this reservoir computer to the task of multinomial classification of measurement trajectories from joint multi-qubit readout. For a two-qubit dispersive measurement under realistic conditions we demonstrate a classification fidelity reliably exceeding that of an optimal linear filter using only two to five reservoir nodes, while simultaneously requiring far less calibration data – as little as a single measurement per state. We understand this remarkable performance through an analysis of the network dynamics and develop an intuitive picture of reservoir processing generally. Finally, we demonstrate how to operate this device to perform two-qubit state tomography and continuous parity monitoring with equal effectiveness and ease of calibration. This reservoir processor avoids computationally intensive training common to other deep learning frameworks and can be implemented as an integrated cryogenic superconducting device for low-latency processing of quantum signals on the computational edge.
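The abstract above emphasizes that a reservoir computer needs only a lightweight, closed-form trained readout rather than iterative deep-learning training. As a generic, hedged sketch of that idea (the data and dimensions here are hypothetical stand-ins, not the Kerr-network signals from the paper), a linear readout over reservoir node features can be fit in one step with ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reservoir features: n_samples x n_nodes matrix of node
# signals (random stand-ins for the integrated oscillator outputs)
n_samples, n_nodes = 200, 5
X = rng.normal(size=(n_samples, n_nodes))

# Hypothetical binary state labels, weakly noisy but linearly separable
Y = np.where(X @ rng.normal(size=n_nodes)
             + 0.1 * rng.normal(size=n_samples) > 0, 1.0, -1.0)

# Linear readout trained in closed form (ridge regression):
# W = (X^T X + lam I)^{-1} X^T Y -- no iterative training loop
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_nodes), X.T @ Y)

predictions = np.sign(X @ W)
accuracy = np.mean(predictions == Y)
```

The single linear solve is the entire "training" step, which is why such readouts need far less calibration data than deep networks; the classification power comes from the reservoir dynamics, not the readout.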
Today's quantum processors composed of fifty or more qubits have allowed us to enter a computational era where the output results are not easily simulatable on the world's biggest supercomputers. What we have not seen yet, however, is whether or not such quantum complexity can ever be useful for any practical applications. A fundamental question behind this lies in the non-trivial relation between the complexity and its computational power. If we find a clue for how and what quantum complexity could boost the computational power, we might be able to directly utilize the quantum complexity to design quantum computation even in the presence of noise and errors. In this work we introduce a new reservoir computational model for pattern recognition showing a quantum advantage utilizing scale-free networks. This new scheme allows us to utilize the complexity inherent in the scale-free networks, meaning we do not require programming or optimization of the quantum layer even for other computational tasks. The simplicity of our approach illustrates the computational power in quantum complexity as well as providing new applications for such processors.
Closed quantum systems exhibit different dynamical regimes, like Many-Body Localization or thermalization, which determine the mechanisms of spread and processing of information. Here we address the impact of these dynamical phases in quantum reservoir computing, an unconventional computing paradigm recently extended into the quantum regime that exploits dynamical systems to solve nonlinear and temporal tasks. We establish that the thermal phase is naturally adapted to the requirements of quantum reservoir computing and report an increased performance at the thermalization transition for the studied tasks. Uncovering the underlying physical mechanisms behind optimal information processing capabilities of spin networks is essential for future experimental implementations and provides a new perspective on dynamical phases.
Quantum neuromorphic computing physically implements neural networks in brain-inspired quantum hardware to speed up their computation. In this perspective article, we show that this emerging paradigm could make the best use of the existing and near future intermediate size quantum computers. Some approaches are based on parametrized quantum circuits, and use neural network-inspired algorithms to train them. Other approaches, closer to classical neuromorphic computing, take advantage of the physical properties of quantum oscillator assemblies to mimic neurons and compute. We discuss the different implementations of quantum neuromorphic networks with digital and analog circuits, highlight their respective advantages, and review exciting recent experimental results.