
Quantum coarse-graining for extreme dimension reduction in modelling stochastic temporal dynamics

Posted by: Thomas Elliott
Publication date: 2021
Research field: Physics
Language: English
Author: Thomas J. Elliott





Stochastic modelling of complex systems plays an essential, yet often computationally intensive role across the quantitative sciences. Recent advances in quantum information processing have elucidated the potential for quantum simulators to exhibit memory advantages for such tasks. Heretofore, the focus has been on lossless memory compression, wherein the advantage is typically in terms of lessening the amount of information tracked by the model, while -- arguably more practical -- reductions in memory dimension are not always possible. Here we address the case of lossy compression for quantum stochastic modelling of continuous-time processes, introducing a method for coarse-graining in quantum state space that drastically reduces the requisite memory dimension for modelling temporal dynamics whilst retaining near-exact statistics. In contrast to classical coarse-graining, this compression is not based on sacrificing temporal resolution, and brings memory-efficient, high-fidelity stochastic modelling within reach of present quantum technologies.
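The construction itself is in the paper; as a loose illustration of what coarse-graining in quantum state space can mean, the Python sketch below greedily merges memory states whose mutual overlap exceeds a threshold and re-expresses the survivors in the smaller space they span. Everything here (the greedy merge, the overlap_threshold parameter) is an illustrative assumption, not the authors' algorithm.

# Illustrative sketch only (Python/NumPy): lossy dimension reduction by
# merging near-indistinguishable quantum memory states. States with overlap
# close to 1 carry nearly the same predictive information, so they can share
# one representative; this conveys the flavour of state-space coarse-graining
# but is not the paper's construction.
import numpy as np

def coarse_grain_states(states, overlap_threshold=0.99):
    """Greedily merge unit vectors with |<psi_i|psi_j>| above the threshold.

    states: (n, d) array of n memory states in a d-dimensional Hilbert space.
    Returns the surviving representatives, re-expressed in the smaller space
    they span.
    """
    reps = []
    for psi in states:
        if all(abs(np.vdot(r, psi)) < overlap_threshold for r in reps):
            reps.append(psi)
    reps = np.array(reps)
    # Orthonormal basis for the span of the representatives (SVD row space).
    _, s, vh = np.linalg.svd(reps, full_matrices=False)
    basis = vh[s > 1e-12]                  # drop numerically null directions
    return reps @ basis.conj().T           # shape (m, d') with d' <= d

# Example: three states in 3 dimensions, two of them nearly identical.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.999, 0.045, 0.0]); b /= np.linalg.norm(b)
c = np.array([0.0, 0.0, 1.0])
print(coarse_grain_states(np.array([a, b, c])).shape)  # (2, 2): dim 3 -> 2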




Read also

Effective and efficient forecasting relies on identification of the relevant information contained in past observations -- the predictive features -- and isolating it from the rest. When the future of a process bears a strong dependence on its behaviour far into the past, there are many such features to store, necessitating complex models with extensive memories. Here, we highlight a family of stochastic processes whose minimal classical models must devote unboundedly many bits to tracking the past. For this family, we identify quantum models of equal accuracy that can store all relevant information within a single two-dimensional quantum system (qubit). This represents the ultimate limit of quantum compression and highlights an immense practical advantage of quantum technologies for the forecasting and simulation of complex systems.
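A common way to quantify such compression (a standard device in this literature, offered here as a hedged illustration rather than this paper's own derivation) is that the Hilbert-space dimension a quantum model needs equals the rank of the Gram matrix of its memory states, whose entries are Bhattacharyya overlaps of conditional future distributions. The sketch below approximates that rank with a finite future horizon; the 3-state machine is an invented toy.

# Hedged sketch: in the standard quantum constructions, the memory dimension
# is the rank of the Gram matrix G_ij = <sigma_i|sigma_j>, whose entries are
# Bhattacharyya overlaps sum_w sqrt(P(w|i) P(w|j)) over future words w.
# Futures are truncated at horizon L, so the rank is approximate.
import numpy as np
from itertools import product

def word_probs(T, state, L):
    """P(x_1..x_L | state) for emission-labelled transition matrices T[x],
    where T[x][i, j] = P(emit x and move i -> j)."""
    n = T[0].shape[0]
    out = []
    for word in product(range(len(T)), repeat=L):
        v = np.zeros(n); v[state] = 1.0    # weight over hidden states
        for x in word:
            v = v @ T[x]                   # joint weight of the path so far
        out.append(v.sum())
    return np.array(out)

def quantum_memory_dimension(T, L=6, tol=1e-8):
    n = T[0].shape[0]
    amp = np.sqrt([word_probs(T, s, L) for s in range(n)])
    gram = amp @ amp.T                     # G_ij ~ <sigma_i|sigma_j>
    return np.linalg.matrix_rank(gram, tol=tol)

# Deliberately simple toy: states 0 and 2 generate identical futures, so the
# Gram matrix is rank-deficient and 2 quantum dimensions suffice for a
# 3-state machine. (The family in the abstract achieves such drops with
# genuinely distinct classical states, which is the interesting case.)
T = [np.array([[0.5, 0.0, 0.0], [0.25, 0.0, 0.0], [0.5, 0.0, 0.0]]),
     np.array([[0.0, 0.5, 0.0], [0.0, 0.0, 0.75], [0.0, 0.5, 0.0]])]
print(quantum_memory_dimension(T))         # -> 2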
A growing body of work has established the modelling of stochastic processes as a promising area of application for quantum technologies; it has been shown that quantum models are able to replicate the future statistics of a stochastic process whilst retaining less information about the past than any classical model must -- even for a purely classical process. Such memory-efficient models open a potential future route to study complex systems in greater detail than ever before, and suggest profound consequences for our notions of structure in their dynamics. Yet, to date, methods for constructing these quantum models are based on having a prior knowledge of the optimal classical model. Here, we introduce a protocol for blind inference of the memory structure of quantum models -- tailored to take advantage of quantum features -- direct from time-series data, in the process highlighting the robustness of their structure to noise. This in turn provides a way to construct memory-efficient quantum models of stochastic processes whilst circumventing certain drawbacks that manifest solely as a result of classical information processing in classical inference protocols.
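As a rough sketch of what such blind inference could look like (not the authors' protocol; every parameter below is an assumption), one can estimate conditional future statistics directly from the series, embed each observed past as a square-root-probability vector, and merge pasts whose overlap is close to one; merging by overlap rather than by exact equality of distributions is what tolerates statistical noise in the data.

# Rough sketch (not the paper's protocol): infer memory structure straight
# from time-series data by counting P(future | recent past), embedding each
# past as a sqrt-probability vector, and clustering pasts by overlap.
import numpy as np
from collections import defaultdict

def infer_memory_states(series, past_len=3, future_len=2, overlap_thresh=0.9):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(past_len, len(series) - future_len + 1):
        past = tuple(series[i - past_len:i])
        future = tuple(series[i:i + future_len])
        counts[past][future] += 1
    futures = sorted({f for c in counts.values() for f in c})
    clusters = []                          # list of (representative, pasts)
    for past, c in counts.items():
        total = sum(c.values())
        v = np.sqrt([c.get(f, 0) / total for f in futures])
        for rep, members in clusters:
            if np.dot(rep, v) > overlap_thresh:
                members.append(past)       # close enough: same memory state
                break
        else:
            clusters.append((v, [past]))   # open a new memory state
    return clusters

# Noisy alternating sequence: despite 8 distinct length-3 pasts, the merge
# should recover roughly 2 memory states (one per phase of the alternation).
rng = np.random.default_rng(0)
series = [(t + (rng.random() < 0.05)) % 2 for t in range(5000)]
print(len(infer_memory_states(series)))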
Murray Gell-Mann (2006)
Our everyday descriptions of the universe are highly coarse-grained, following only a tiny fraction of the variables necessary for a perfectly fine-grained description. Coarse graining in classical physics is made natural by our limited powers of observation and computation. But in the modern quantum mechanics of closed systems, some measure of coarse graining is inescapable because there are no non-trivial, probabilistic, fine-grained descriptions. This essay explores the consequences of that fact. Quantum theory allows for various coarse-grained descriptions, some of which are mutually incompatible. For most purposes, however, we are interested in the small subset of "quasiclassical" descriptions defined by ranges of values of averages over small volumes of densities of conserved quantities such as energy and momentum and approximately conserved quantities such as baryon number. The near-conservation of these quasiclassical quantities results in approximate decoherence, predictability, and local equilibrium, leading to closed sets of equations of motion. In any description, information is sacrificed through the coarse graining that yields decoherence and gives rise to probabilities for histories. In quasiclassical descriptions, further information is sacrificed in exhibiting the emergent regularities summarized by classical equations of motion. An appropriate entropy measures the loss of information. For a "quasiclassical" realm this is connected with the usual thermodynamic entropy as obtained from statistical mechanics. It was low for the initial state of our universe and has been increasing since.
In stochastic modeling, there has been a significant effort towards finding predictive models that predict a stochastic process's future using minimal information from its past. Meanwhile, in condensed matter physics, matrix product states (MPS) are known as a particularly efficient representation of 1D spin chains. In this Letter, we associate each stochastic process with a suitable quantum state of a spin chain. We then show that the optimal predictive model for the process leads directly to an MPS representation of the associated quantum state. Conversely, MPS methods offer a systematic construction of the best known quantum predictive models. This connection allows an improved method for computing the quantum memory needed for generating optimal predictions. We prove that this memory coincides with the entanglement of the associated spin chain across the past-future bipartition.
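A hedged numerical sketch of the claimed correspondence, under stated assumptions: for a unifilar hidden Markov model, the overlaps of the quantum memory states obey a simple fixed-point relation, and the entropy of the stationary mixture of those states is the quantum memory that the Letter identifies with the past-future entanglement. The perturbed-coin example below is a standard toy in this literature, not taken from the Letter itself.

# Assumed inputs: emission probabilities P_emit[j, x] = P(x|j) and
# deterministic successors next_state[j, x] (unifilarity). The memory-state
# overlaps obey the fixed point
#   G[i, j] = sum_x sqrt(P(x|i) P(x|j)) * G[next(i,x), next(j,x)],
# and the entropy of the spectrum of sqrt(pi_i pi_j) * G[i, j] is the
# quantum memory / half-chain entanglement.
import numpy as np

def quantum_memory_entropy(P_emit, next_state, iters=500):
    n, n_sym = P_emit.shape
    G = np.ones((n, n))                    # iterate overlaps to a fixed point
    for _ in range(iters):
        G_new = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                G_new[i, j] = sum(
                    np.sqrt(P_emit[i, x] * P_emit[j, x])
                    * G[next_state[i, x], next_state[j, x]]
                    for x in range(n_sym))
        G = G_new
    T = np.zeros((n, n))                   # classical transition matrix
    for j in range(n):
        for x in range(n_sym):
            T[j, next_state[j, x]] += P_emit[j, x]
    evals, evecs = np.linalg.eig(T.T)      # stationary distribution pi
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()
    M = np.sqrt(np.outer(pi, pi)) * G      # same nonzero spectrum as rho
    lam = np.linalg.eigvalsh(M)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

# 'Perturbed coin' toy: flip state w.p. 0.25 each step and emit the new
# state. A classical model stores H(pi) = 1 bit; the quantum memory here
# comes out around 0.35 bits.
P_emit = np.array([[0.75, 0.25],
                   [0.25, 0.75]])
next_state = np.array([[0, 1],
                       [0, 1]])
print(quantum_memory_entropy(P_emit, next_state))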
In studying the predictability of emergent phenomena in complex systems, Israeli & Goldenfeld (Phys. Rev. Lett., 2004; Phys. Rev. E, 2006) showed how to coarse-grain (elementary) cellular automata (CA). Their algorithm for finding coarse-grainings of supercell size $N$ took doubly-exponential $2^{2^N}$-time, and thus only allowed them to explore supercell sizes $N \leq 4$. Here we introduce a new, more efficient algorithm for finding coarse-grainings between any two given CA that allows us to systematically explore all elementary CA with supercell sizes up to $N=7$, and to explore individual examples of even larger supercell size. Our algorithm is based on a backtracking search, similar to the DPLL algorithm with unit propagation for the NP-complete problem of Boolean Satisfiability.
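For supercell sizes this small, the commutation condition defining a coarse-graining can even be checked by brute force rather than by backtracking, which makes the definition concrete; the sketch below enumerates all projections and coarse rules for $N=2$ (the DPLL-style search in the paper is what makes larger $N$ feasible, and the function names here are illustrative).

# Brute-force search (feasible only for tiny N) for coarse-grainings of
# elementary CA with supercell size N = 2: a projection P: {0,1}^N -> {0,1}
# and a coarse rule g coarse-grain fine rule f if projecting after N fine
# time steps agrees with one coarse step on every local neighbourhood.
from itertools import product

def eca_step(cells, rule):
    """One step of an elementary CA on a finite window (ends drop off)."""
    return [(rule >> (cells[i-1]*4 + cells[i]*2 + cells[i+1])) & 1
            for i in range(1, len(cells) - 1)]

def coarse_grainings(f, N=2):
    blocks = list(product([0, 1], repeat=N))
    found = []
    for proj_bits in product([0, 1], repeat=2**N):      # all projections
        if len(set(proj_bits)) < 2:
            continue                                    # skip trivial maps
        P = dict(zip(blocks, proj_bits))
        for g in range(256):                            # all coarse ECA rules
            ok = True
            for fine in product([0, 1], repeat=3 * N):  # 3 supercells
                cells = list(fine)
                for _ in range(N):                      # N fine time steps
                    cells = eca_step(cells, f)
                lhs = P[tuple(cells)]                   # project the result
                coarse = tuple(P[fine[k*N:(k+1)*N]] for k in range(3))
                rhs = (g >> (coarse[0]*4 + coarse[1]*2 + coarse[2])) & 1
                if lhs != rhs:
                    ok = False
                    break
            if ok:
                found.append((proj_bits, g))
    return found

# Rule 128 (a cell survives only if its whole neighbourhood is 1) admits a
# coarse-graining to itself under the AND projection; print what turns up.
print(coarse_grainings(128))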