
Extreme dimensionality reduction with quantum modelling

Added by Thomas Elliott
Publication date: 2019
Field: Physics
Language: English





Effective and efficient forecasting relies on identification of the relevant information contained in past observations -- the predictive features -- and isolating it from the rest. When the future of a process bears a strong dependence on its behaviour far into the past, there are many such features to store, necessitating complex models with extensive memories. Here, we highlight a family of stochastic processes whose minimal classical models must devote unboundedly many bits to tracking the past. For this family, we identify quantum models of equal accuracy that can store all relevant information within a single two-dimensional quantum system (qubit). This represents the ultimate limit of quantum compression and highlights an immense practical advantage of quantum technologies for the forecasting and simulation of complex systems.
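The memory saving described above can be illustrated in a few lines: encoding two memory states as non-orthogonal qubit vectors pushes the entropy of the memory register below the Shannon entropy of the corresponding classical states. A minimal numpy sketch; the angle `theta` and the uniform stationary distribution are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy (in bits) of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Two hypothetical memory states encoded as non-orthogonal qubit vectors.
theta = 0.4  # overlap angle (illustrative choice)
s0 = np.array([1.0, 0.0])
s1 = np.array([np.cos(theta), np.sin(theta)])

pi = np.array([0.5, 0.5])  # assumed stationary distribution over memory states

# Classical memory cost: Shannon entropy of the stationary distribution.
C_mu = float(-np.sum(pi * np.log2(pi)))

# Quantum memory cost: entropy of the mixture of non-orthogonal states.
rho = pi[0] * np.outer(s0, s0) + pi[1] * np.outer(s1, s1)
C_q = von_neumann_entropy(rho)

print(C_q < C_mu)  # non-orthogonality lowers the quantum memory cost
```

The closer `theta` is to zero, the more the two memory states overlap and the smaller the quantum memory entropy becomes, while the classical cost stays fixed at one bit.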




Related Research

Thomas J. Elliott (2021)
Stochastic modelling of complex systems plays an essential, yet often computationally intensive role across the quantitative sciences. Recent advances in quantum information processing have elucidated the potential for quantum simulators to exhibit memory advantages for such tasks. Heretofore, the focus has been on lossless memory compression, wherein the advantage is typically in terms of lessening the amount of information tracked by the model, while -- arguably more practical -- reductions in memory dimension are not always possible. Here we address the case of lossy compression for quantum stochastic modelling of continuous-time processes, introducing a method for coarse-graining in quantum state space that drastically reduces the requisite memory dimension for modelling temporal dynamics whilst retaining near-exact statistics. In contrast to classical coarse-graining, this compression is not based on sacrificing temporal resolution, and brings memory-efficient, high-fidelity stochastic modelling within reach of present quantum technologies.
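Coarse-graining in quantum state space can be caricatured as merging memory states that are nearly indistinguishable. The greedy fidelity-threshold merge below is an illustrative sketch only, not the paper's construction; the threshold value and the cluster of perturbed qubit states are assumptions:

```python
import numpy as np

def merge_states(states, fid_threshold=0.99):
    """Greedy coarse-graining: states whose overlap with an existing
    representative exceeds the threshold are absorbed by it, shrinking
    the memory dimension. (Illustrative only, not the paper's method.)"""
    reps = []
    for s in states:
        for r in reps:
            if abs(np.vdot(r, s)) ** 2 >= fid_threshold:
                break  # close enough to an existing representative
        else:
            reps.append(s)
    return reps

# Many nearly identical qubit memory states collapse to one representative.
rng = np.random.default_rng(0)
base = np.array([1.0, 0.0])
states = [base] + [
    np.array([np.cos(e), np.sin(e)]) for e in rng.normal(0, 0.01, size=50)
]
reps = merge_states(states)
print(len(reps), "representatives for", len(states), "states")
```

The point being illustrated is dimensional: the merged model tracks far fewer memory states while each retained representative still reproduces its cluster's statistics to high fidelity.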
A growing body of work has established the modelling of stochastic processes as a promising area of application for quantum technologies; it has been shown that quantum models are able to replicate the future statistics of a stochastic process whilst retaining less information about the past than any classical model must -- even for a purely classical process. Such memory-efficient models open a potential future route to study complex systems in greater detail than ever before, and suggest profound consequences for our notions of structure in their dynamics. Yet, to date, methods for constructing these quantum models have been based on prior knowledge of the optimal classical model. Here, we introduce a protocol for blind inference of the memory structure of quantum models -- tailored to take advantage of quantum features -- direct from time-series data, in the process highlighting the robustness of their structure to noise. This in turn provides a way to construct memory-efficient quantum models of stochastic processes whilst circumventing certain drawbacks that manifest solely as a result of classical information processing in classical inference protocols.
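The classical analogue of such blind inference is to estimate, directly from time-series data, which histories predict statistically identical futures and merge them into a single memory state. A hedged numpy sketch; the history length `L`, tolerance `tol`, and the biased-coin test process are all illustrative choices, not from the paper:

```python
import numpy as np
from collections import defaultdict

def infer_memory_states(series, L=2, tol=0.05):
    """Group length-L histories whose empirical next-symbol
    distributions agree within `tol` -- a classical sketch of
    inferring memory structure directly from data."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(series) - L):
        hist = tuple(series[i:i + L])
        counts[hist][int(series[i + L])] += 1
    dists = {}
    for h, c in counts.items():
        tot = sum(c.values())
        dists[h] = np.array([c.get(0, 0) / tot, c.get(1, 0) / tot])
    clusters = []
    for h, d in dists.items():
        for cl in clusters:
            if np.abs(dists[cl[0]] - d).max() <= tol:
                cl.append(h)  # same predictive future: share a memory state
                break
        else:
            clusters.append([h])
    return clusters

# A biased coin (i.i.d. process): every history predicts the same future,
# so all histories should merge into a single memory state.
rng = np.random.default_rng(1)
series = (rng.random(20000) < 0.3).astype(int)
clusters = infer_memory_states(series)
print(len(clusters))
```

For a genuinely history-dependent process the same routine would return several clusters, one per distinct predictive feature.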
In this work, we present a quantum neighborhood preserving embedding and a quantum local discriminant embedding for dimensionality reduction and classification. We demonstrate that these two algorithms have an exponential speedup over their respective classical counterparts. Along the way, we propose a variational quantum generalized eigenvalue solver that finds the generalized eigenvalues and eigenstates of a matrix pencil $(\mathcal{G},\mathcal{S})$. As a proof-of-principle, we implement our algorithm to solve $2^5\times2^5$ generalized eigenvalue problems. Finally, our results offer two optional outputs with quantum or classical form, which can be directly applied in another quantum or classical machine learning process.
Gilad Gour, Mark M. Wilde (2018)
The von Neumann entropy of a quantum state is a central concept in physics and information theory, having a number of compelling physical interpretations. There is a certain perspective that the most fundamental notion in quantum mechanics is that of a quantum channel, as quantum states, unitary evolutions, measurements, and discarding of quantum systems can each be regarded as certain kinds of quantum channels. Thus, an important goal is to define a consistent and meaningful notion of the entropy of a quantum channel. Motivated by the fact that the entropy of a state $\rho$ can be formulated as the difference of the number of physical qubits and the relative entropy distance between $\rho$ and the maximally mixed state, here we define the entropy of a channel $\mathcal{N}$ as the difference of the number of physical qubits of the channel output with the relative entropy distance between $\mathcal{N}$ and the completely depolarizing channel. We prove that this definition satisfies all of the axioms, recently put forward in [Gour, IEEE Trans. Inf. Theory 65, 5880 (2019)], required for a channel entropy function. The task of quantum channel merging, in which the goal is for the receiver to merge his share of the channel with the environment's share, gives a compelling operational interpretation of the entropy of a channel. The entropy of a channel can be negative for certain channels, but this negativity has an operational interpretation in terms of the channel merging protocol. We define Rényi and min-entropies of a channel and prove that they satisfy the axioms required for a channel entropy function. Among other results, we also prove that a smoothed version of the min-entropy of a channel satisfies the asymptotic equipartition property.
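The state-entropy identity that the channel definition generalizes, $S(\rho) = \log_2 d - D(\rho \,\|\, I/d)$, can be checked numerically. A sketch; the random two-qubit state is illustrative:

```python
import numpy as np

def entropy_bits(rho):
    """Von Neumann entropy S(rho) in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def rel_entropy_bits(rho, sigma):
    """Quantum relative entropy D(rho||sigma) in bits, for full-rank
    sigma, computed via the eigendecompositions of both operators."""
    er, Vr = np.linalg.eigh(rho)
    es, Vs = np.linalg.eigh(sigma)
    keep = er > 1e-12
    t1 = float(np.sum(er[keep] * np.log2(er[keep])))     # tr[rho log rho]
    P = Vr.conj().T @ Vs                                 # overlaps <r_i|s_j>
    weights = (np.abs(P) ** 2).T @ er                    # <s_j|rho|s_j>
    t2 = float(np.sum(weights * np.log2(es)))            # tr[rho log sigma]
    return t1 - t2

# Random 2-qubit density matrix.
d = 4
rng = np.random.default_rng(7)
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = X @ X.conj().T
rho /= np.trace(rho).real

# The identity quoted above: S(rho) = log2(d) - D(rho || I/d).
lhs = entropy_bits(rho)
rhs = np.log2(d) - rel_entropy_bits(rho, np.eye(d) / d)
print(abs(lhs - rhs) < 1e-9)
```

The channel-level definition replaces the maximally mixed state with the completely depolarizing channel, which requires channel divergences rather than this state-level calculation.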
In stochastic modeling, there has been a significant effort towards finding predictive models that predict a stochastic process's future using minimal information from its past. Meanwhile, in condensed matter physics, matrix product states (MPS) are known as a particularly efficient representation of 1D spin chains. In this Letter, we associate each stochastic process with a suitable quantum state of a spin chain. We then show that the optimal predictive model for the process leads directly to an MPS representation of the associated quantum state. Conversely, MPS methods offer a systematic construction of the best known quantum predictive models. This connection allows an improved method for computing the quantum memory needed for generating optimal predictions. We prove that this memory coincides with the entanglement of the associated spin chain across the past-future bipartition.
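The final claim can be illustrated on a toy spin chain: the entanglement entropy across a bipartition follows from the singular values of the reshaped state vector. A numpy sketch using product and Bell states; these example states are illustrative, not from the Letter:

```python
import numpy as np

def bipartite_entanglement(psi, dimA, dimB):
    """Entanglement entropy (bits) across a past/future-style cut,
    from the singular values of the reshaped state vector."""
    s = np.linalg.svd(psi.reshape(dimA, dimB), compute_uv=False)
    p = s ** 2            # Schmidt coefficients squared
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Product state: zero entanglement across the cut.
up = np.array([1.0, 0.0])
prod = np.kron(up, up)
# Bell state: exactly one bit of entanglement across the cut.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

e_prod = bipartite_entanglement(prod, 2, 2)
e_bell = bipartite_entanglement(bell, 2, 2)
print(e_prod, e_bell)
```

In the paper's correspondence, this bipartition entropy of the associated spin-chain state is exactly the quantum memory required for optimal prediction.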