
Quantum reservoir computation utilising scale-free networks

Posted by Akitada Sakurai
Publication date: 2021
Research field: Physics
Paper language: English





Today's quantum processors composed of fifty or more qubits have allowed us to enter a computational era where the output results are not easily simulatable on the world's biggest supercomputers. What we have not yet seen, however, is whether such quantum complexity can ever be useful for any practical application. A fundamental question behind this lies in the non-trivial relation between complexity and computational power. If we find a clue to how and what quantum complexity could boost computational power, we might be able to directly utilize that complexity to design quantum computation even in the presence of noise and errors. In this work we introduce a new reservoir computational model for pattern recognition showing a quantum advantage utilizing scale-free networks. This new scheme allows us to exploit the complexity inherent in scale-free networks, meaning we require neither programming nor optimization of the quantum layer, even for other computational tasks. The simplicity of our approach illustrates the computational power of quantum complexity as well as providing new applications for such processors.
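To make the reservoir-computing pipeline concrete, here is a minimal classical sketch in Python: a fixed, randomly weighted scale-free coupling graph plays the role of the untrained reservoir layer, and only a linear ridge-regression readout is fitted. The Barabási-Albert graph, the toy target y_t = sin(u_{t-1} u_{t-2}), and all parameter values are illustrative assumptions, not the quantum model of the paper.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Fixed "reservoir": a scale-free (Barabasi-Albert) coupling graph standing in
# for the untrained quantum layer; nothing inside this block is ever optimised.
N = 100
A = nx.to_numpy_array(nx.barabasi_albert_graph(N, 3, seed=1))
W = A * rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale for stable dynamics
w_in = rng.uniform(-0.5, 0.5, size=N)

def run_reservoir(u):
    """Drive the fixed network with input sequence u; collect node states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict y_t = sin(u_{t-1} * u_{t-2}), which needs both
# nonlinearity and short-term memory of the input stream.
T = 2000
u = rng.uniform(-1, 1, size=T)
y = np.sin(u[:-2] * u[1:-1])
X = run_reservoir(u)[2:]            # state at time t has seen u_{t-1}, u_{t-2}

# The only trained component: a ridge-regression linear readout.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```

The design point the sketch isolates is that the reservoir weights are generated once and never touched; switching to a different task only means refitting the cheap linear readout.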



Read also

Self-similarity is a property of fractal structures, a concept introduced by Mandelbrot and one of the fundamental mathematical results of the 20th century. The importance of fractal geometry stems from the fact that these structures were recognized in numerous examples in Nature, from the coexistence of liquid/gas at the critical point of evaporation of water, to snowflakes, to the tortuous coastline of the Norwegian fjords, to the behavior of many complex systems such as economic data, or the complex patterns of human agglomeration. Here we review the recent advances in self-similarity of complex networks and its relation to transport, diffusion, percolation and other topological properties such as degree distribution, modularity, and degree-degree correlations.
Efficient quantum state measurement is important for maximizing the extracted information from a quantum system. For multi-qubit quantum processors in particular, the development of a scalable architecture for rapid and high-fidelity readout remains a critical unresolved problem. Here we propose reservoir computing as a resource-efficient solution to quantum measurement of superconducting multi-qubit systems. We consider a small network of Josephson parametric oscillators, which can be implemented with minimal device overhead and in the same platform as the measured quantum system. We theoretically analyze the operation of this Kerr network as a reservoir computer to classify stochastic time-dependent signals subject to quantum statistical features. We apply this reservoir computer to the task of multinomial classification of measurement trajectories from joint multi-qubit readout. For a two-qubit dispersive measurement under realistic conditions we demonstrate a classification fidelity reliably exceeding that of an optimal linear filter using only two to five reservoir nodes, while simultaneously requiring far less calibration data, as little as a single measurement per state. We understand this remarkable performance through an analysis of the network dynamics and develop an intuitive picture of reservoir processing generally. Finally, we demonstrate how to operate this device to perform two-qubit state tomography and continuous parity monitoring with equal effectiveness and ease of calibration. This reservoir processor avoids computationally intensive training common to other deep learning frameworks and can be implemented as an integrated cryogenic superconducting device for low-latency processing of quantum signals on the computational edge.
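As a toy illustration of the calibration-light readout idea described above (not the Josephson/Kerr model itself), the sketch below pushes synthetic noisy "measurement trajectories" from two classes through a handful of fixed nonlinear nodes and trains only a least-squares linear readout on the resulting node features. The trajectory model, node count, and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# A handful of fixed, randomly coupled nonlinear nodes play the reservoir role;
# their weights are never trained.
K = 4                                   # "two to five reservoir nodes"
W = 0.6 * rng.normal(size=(K, K))
w_in = rng.normal(size=K)

def reservoir_features(traj):
    """Run one noisy trajectory through the fixed nodes; return time-averaged states."""
    x = np.zeros(K)
    history = []
    for s in traj:
        x = np.tanh(W @ x + w_in * s)
        history.append(x)
    return np.mean(history, axis=0)

def make_trajectories(n, mean, T=50, noise=1.0):
    # Invented surrogate for measurement records: class-dependent mean in white noise.
    return mean + noise * rng.normal(size=(n, T))

X0 = make_trajectories(200, mean=-0.3)
X1 = make_trajectories(200, mean=+0.3)
feats = np.array([reservoir_features(t) for t in np.vstack([X0, X1])])
labels = np.array([0.0] * 200 + [1.0] * 200)

# The only calibrated component: a least-squares linear readout on node features.
Phi = np.hstack([feats, np.ones((len(feats), 1))])
w_out, *_ = np.linalg.lstsq(Phi, labels, rcond=None)
accuracy = ((Phi @ w_out > 0.5).astype(float) == labels).mean()
print("readout training accuracy:", accuracy)
```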
Realizing the promise of quantum information processing remains a daunting task, given the omnipresence of noise and error. Adapting noise-resilient classical computing modalities to quantum mechanics may be a viable path towards near-term applications in the noisy intermediate-scale quantum era. Here, we propose continuous variable quantum reservoir computing in a single nonlinear oscillator. Through numerical simulation of our model we demonstrate quantum-classical performance improvement, and identify its likely source: the nonlinearity of quantum measurement. Beyond quantum reservoir computing, this result may impact the interpretation of results across quantum machine learning. We study how the performance of our quantum reservoir depends on Hilbert space dimension, how it is impacted by injected noise, and briefly comment on its experimental implementation. Our results show that quantum reservoir computing in a single nonlinear oscillator is an attractive modality for quantum computing on near-term hardware.
The nascent computational paradigm of quantum reservoir computing presents an attractive use of near-term, noisy-intermediate-scale quantum processors. To understand the potential power and use cases of quantum reservoir computing, it is necessary to define a conceptual framework to separate its constituent components and determine their impacts on performance. In this manuscript, we utilize such a framework to isolate the input encoding component of contemporary quantum reservoir computing schemes. We find that across the majority of schemes the input encoding implements a nonlinear transformation on the input data. As nonlinearity is known to be a key computational resource in reservoir computing, this calls into question the necessity and function of further, post-input, processing. Our findings will impact the design of future quantum reservoirs, as well as the interpretation of results and fair comparison between proposed designs.
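As a concrete illustration of the point above, a common single-qubit input encoding, rotating |0> by RY(x) and measuring Z, already maps the scalar input x to cos(x), a nonlinear feature, before any post-input processing. The short NumPy check below assumes this standard rotation encoding; it is not tied to any particular scheme discussed in the abstract.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

Z = np.diag([1.0, -1.0])

def encoded_expectation(x):
    # Encode the scalar x as |psi(x)> = RY(x)|0>, then read out <Z>.
    psi = ry(x) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

for x in np.linspace(-np.pi, np.pi, 5):
    # <Z> = cos(x): the encoding alone already applies a nonlinear map to x.
    print(f"x = {x:+.3f}   <Z> = {encoded_expectation(x):+.3f}   cos(x) = {np.cos(x):+.3f}")
```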
Recent studies introduced biased (degree-dependent) edge percolation as a model for failures in real-life systems. In this work, such a process is applied to networks consisting of two types of nodes with edges running only between nodes of unlike type. Such bipartite graphs appear in many social networks, for instance in affiliation networks and in sexual contact networks in which both types of nodes show the scale-free characteristic for the degree distribution. During the depreciation process, an edge between nodes with degrees k and q is retained with probability proportional to (kq)^(-alpha), where alpha is positive so that links between hubs are more prone to failure. The removal process is studied analytically by introducing a generating-functions theory. We deduce exact self-consistent equations describing the system at a macroscopic level and discuss the percolation transition. Critical exponents are obtained by exploiting the Fortuin-Kasteleyn construction, which provides a link between our model and a limit of the Potts model.
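A rough numerical sketch of the biased edge percolation described above, using networkx: build a bipartite configuration-model graph with power-law degree sequences, retain each edge between degree-k and degree-q endpoints with probability (kq)^(-alpha) (the proportionality constant is taken as 1, which is a valid probability since k, q >= 1), and report the surviving giant component. The degree exponent, network size, and alpha are illustrative choices, not values from the paper.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import bipartite

rng = np.random.default_rng(0)

def powerlaw_degrees(n, gamma, kmin=1, kmax=100):
    """Sample integer degrees from an approximate power law P(k) ~ k^(-gamma)."""
    ks = np.arange(kmin, kmax + 1)
    p = ks.astype(float) ** (-gamma)
    return rng.choice(ks, size=n, p=p / p.sum())

n = 2000
a_deg = powerlaw_degrees(n, 2.5)
b_deg = powerlaw_degrees(n, 2.5)
# The bipartite configuration model needs equal stub counts on both sides.
diff = int(a_deg.sum() - b_deg.sum())
if diff > 0:
    b_deg[0] += diff
else:
    a_deg[0] += -diff

# Simple bipartite graph (parallel edges collapsed) from the two degree sequences.
G = nx.Graph(bipartite.configuration_model(a_deg.tolist(), b_deg.tolist()))
deg = dict(G.degree())

# Biased removal: keep an edge between degree-k and degree-q endpoints with
# probability (k*q)**(-alpha); larger alpha removes hub-hub links more often.
alpha = 0.3
kept = [(u, v) for u, v in G.edges() if rng.random() < (deg[u] * deg[v]) ** (-alpha)]
H = nx.Graph()
H.add_nodes_from(G.nodes())
H.add_edges_from(kept)

giant = max(nx.connected_components(H), key=len)
print(f"fraction of nodes in the giant component: {len(giant) / H.number_of_nodes():.3f}")
```

Sweeping alpha in such a simulation gives a quick numerical counterpart to the percolation transition the abstract treats analytically with generating functions.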