A growing body of work has established the modelling of stochastic processes as a promising area of application for quantum technologies; it has been shown that quantum models can replicate the future statistics of a stochastic process whilst retaining less information about the past than any classical model must -- even for a purely classical process. Such memory-efficient models open a potential future route to studying complex systems in greater detail than ever before, and suggest profound consequences for our notions of structure in their dynamics. Yet, to date, methods for constructing these quantum models rely on prior knowledge of the optimal classical model. Here, we introduce a protocol for blind inference of the memory structure of quantum models -- tailored to take advantage of quantum features -- directly from time-series data, in the process highlighting the robustness of their structure to noise. This in turn provides a way to construct memory-efficient quantum models of stochastic processes whilst circumventing certain drawbacks that manifest solely as a result of classical information processing in classical inference protocols.
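To make the classical baseline for such inference concrete, here is a minimal sketch (an illustration, not the paper's protocol): it estimates the conditional next-symbol statistics of each length-L past in a binary time series and merges pasts with near-identical predictive distributions into candidate memory states. The function name, window length L, and tolerance tol are assumptions chosen for the example.

    # Minimal sketch: group length-L pasts of a binary series by the
    # similarity of their conditional future statistics.
    from collections import defaultdict
    import numpy as np

    def infer_memory_states(series, L=3, tol=0.05):
        # Count next-symbol occurrences for each length-L past.
        counts = defaultdict(lambda: np.zeros(2))
        for t in range(L, len(series)):
            past = tuple(series[t - L:t])
            counts[past][series[t]] += 1
        # Normalise counts to conditional distributions P(next | past).
        dists = {p: c / c.sum() for p, c in counts.items() if c.sum() > 0}
        # Greedily merge pasts whose predictive distributions agree within tol.
        states = []  # each entry: (representative distribution, member pasts)
        for past, d in dists.items():
            for rep, members in states:
                if np.abs(rep - d).sum() < tol:
                    members.append(past)
                    break
            else:
                states.append((d, [past]))
        return states

    rng = np.random.default_rng(0)
    series = rng.integers(0, 2, size=100_000)   # i.i.d. coin flips
    print(len(infer_memory_states(series)))     # expect a single memory state

A quantum-tailored protocol of the kind the abstract describes would replace the final merging step with one adapted to the overlap structure of quantum memory states, rather than demanding identical classical predictive distributions.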
Stochastic modelling of complex systems plays an essential, yet often computationally intensive role across the quantitative sciences. Recent advances in quantum information processing have elucidated the potential for quantum simulators to exhibit memory advantages for such tasks. Heretofore, the focus has been on lossless memory compression, wherein the advantage is typically in terms of lessening the amount of information tracked by the model, while -- arguably more practical -- reductions in memory dimension are not always possible. Here we address the case of lossy compression for quantum stochastic modelling of continuous-time processes, introducing a method for coarse-graining in quantum state space that drastically reduces the requisite memory dimension for modelling temporal dynamics whilst retaining near-exact statistics. In contrast to classical coarse-graining, this compression is not based on sacrificing temporal resolution, and brings memory-efficient, high-fidelity stochastic modelling within reach of present quantum technologies.
In stochastic modeling, there has been a significant effort towards finding predictive models that forecast a stochastic process's future using minimal information from its past. Meanwhile, in condensed matter physics, matrix product states (MPS) are known as a particularly efficient representation of 1D spin chains. In this Letter, we associate each stochastic process with a suitable quantum state of a spin chain. We then show that the optimal predictive model for the process leads directly to an MPS representation of the associated quantum state. Conversely, MPS methods offer a systematic construction of the best known quantum predictive models. This connection yields an improved method for computing the quantum memory needed to generate optimal predictions. We prove that this memory coincides with the entanglement of the associated spin chain across the past-future bipartition.
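Schematically, and in notation common to this literature (a hedged rendering rather than the Letter's own equations): a process with joint distribution P over bi-infinite output sequences is associated with the amplitude-encoded state

\[ |\Psi\rangle \;=\; \sum_{\overleftarrow{x},\,\overrightarrow{x}} \sqrt{P(\overleftarrow{x},\overrightarrow{x})}\; |\overleftarrow{x}\rangle \otimes |\overrightarrow{x}\rangle , \]

and the quantum memory required for optimal prediction is the entanglement across the past-future cut,

\[ C_q \;=\; S(\rho_{\mathrm{past}}) \;=\; -\,\mathrm{Tr}\,\rho_{\mathrm{past}} \log \rho_{\mathrm{past}}, \qquad \rho_{\mathrm{past}} \;=\; \mathrm{Tr}_{\mathrm{future}}\, |\Psi\rangle\langle\Psi| . \]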
Effective and efficient forecasting relies on identifying the relevant information contained in past observations -- the predictive features -- and isolating it from the rest. When the future of a process bears a strong dependence on its behaviour far into the past, there are many such features to store, necessitating complex models with extensive memories. Here, we highlight a family of stochastic processes whose minimal classical models must devote unboundedly many bits to tracking the past. For this family, we identify quantum models of equal accuracy that can store all relevant information within a single two-dimensional quantum system (qubit). This represents the ultimate limit of quantum compression and highlights an immense practical advantage of quantum technologies for the forecasting and simulation of complex systems.
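To illustrate the kind of encoding that makes this possible (a schematic, not the paper's specific construction): unboundedly many classical memory states \( s_0, s_1, s_2, \ldots \) can be mapped to non-orthogonal pure states of one qubit,

\[ s_k \;\mapsto\; |\sigma_k\rangle \;=\; \cos\theta_k\,|0\rangle + \sin\theta_k\,|1\rangle, \qquad k = 0, 1, 2, \ldots \]

with distinct angles \( \theta_k \). The memory dimension stays at two because the memory states need not be mutually distinguishable; they need only generate the correct future statistics.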
Stochastic modelling is an essential component of the quantitative sciences, with hidden Markov models (HMMs) often playing a central role. Concurrently, the rise of quantum technologies promises a host of advantages in computational problems, typically in terms of the scaling of requisite resources such as time and memory. HMMs are no exception to this, with recent results highlighting quantum implementations of deterministic HMMs that exhibit superior memory and thermal efficiency relative to their classical counterparts. In many contexts, however, non-deterministic HMMs are viable alternatives; compared to these, the advantages of current quantum implementations do not always hold. Here, we provide a systematic prescription for constructing quantum implementations of non-deterministic HMMs that re-establish the quantum advantages against this broader class. Crucially, we show that whenever the classical implementation suffers from thermal dissipation due to its need to process information in a time-local manner, our quantum implementations will both mitigate some of this dissipation and achieve an advantage in memory compression.
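For context, the sketch below illustrates the standard quantum encoding for deterministic (unifilar) HMMs -- the baseline that such prescriptions generalise to the non-deterministic case -- on a perturbed-coin process. The process and parameter values are illustrative assumptions, not drawn from the paper.

    # Standard quantum encoding for a deterministic (unifilar) HMM:
    # |sigma_j> = sum_{x,k} sqrt(T[x][k,j]) |x>|k>, with quantum memory cost
    # S(rho) for rho = sum_j pi_j |sigma_j><sigma_j|.
    import numpy as np

    p = 0.1
    # Perturbed coin, states {0,1}, symbols {0,1}; T[x][k, j] = P(x, k | j).
    T = [np.array([[1 - p, p], [0, 0]]),    # emit 0 (always lands in state 0)
         np.array([[0, 0], [p, 1 - p]])]    # emit 1 (always lands in state 1)

    M = sum(T)                               # state-transition matrix
    evals, evecs = np.linalg.eig(M)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()                           # stationary distribution

    n_sym, n_st = len(T), M.shape[0]
    sigma = np.zeros((n_st, n_sym * n_st))   # |sigma_j> in C^{|X| * |S|}
    for j in range(n_st):
        for x in range(n_sym):
            for k in range(n_st):
                sigma[j, x * n_st + k] = np.sqrt(T[x][k, j])

    rho = sum(pi[j] * np.outer(sigma[j], sigma[j]) for j in range(n_st))
    lam = np.linalg.eigvalsh(rho)
    S = -sum(l * np.log2(l) for l in lam if l > 1e-12)
    H = -sum(q * np.log2(q) for q in pi if q > 0)
    print(f"quantum S(rho) = {S:.3f} bits < classical H(pi) = {H:.3f} bits")

The compression arises because the memory states overlap (here \( \langle\sigma_0|\sigma_1\rangle = 2\sqrt{p(1-p)} \)), so the ensemble entropy falls below the Shannon entropy of the classical state distribution; extending such encodings to non-deterministic HMMs is precisely what requires the more careful prescription the abstract describes.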
This brief article gives an overview of quantum mechanics as a quantum probability theory. It begins with a review of the basic operator-algebraic elements that connect probability theory with quantum probability theory. Quantum stochastic processes are then formulated as a generalization of stochastic processes within the framework of quantum probability theory. Quantum Markov models from quantum optics are used to explicitly illustrate the underlying abstract concepts and their connection to the quantum regression theorem.
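For reference, in the Markovian setting such models occupy, the quantum regression theorem takes the standard form (stated here in a common convention, which may differ from the article's notation): for dynamics generated by a Lindbladian \( \mathcal{L} \), with \( \rho(t) = e^{\mathcal{L}t}\rho(0) \), two-time correlation functions follow from the same generator that propagates single-time states,

\[ \langle A(t+\tau)\, B(t) \rangle \;=\; \mathrm{Tr}\!\left[ A\; e^{\mathcal{L}\tau}\!\left( B\,\rho(t) \right) \right], \qquad \tau \ge 0 , \]

so the full hierarchy of multi-time statistics of a quantum Markov model is determined by its one-parameter semigroup.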