
Fisher informations and local asymptotic normality for continuous-time quantum Markov processes

Posted by: Madalin Guta
Publication date: 2014
Research field: Physics
Paper language: English





We consider the problem of estimating an arbitrary dynamical parameter of a quantum open system in the input-output formalism. For irreducible Markov processes, we show that in the limit of large times the system-output state can be approximated by a quantum Gaussian state whose mean is proportional to the unknown parameter. This approximation holds locally in a neighbourhood of size $t^{-1/2}$ in the parameter space, and provides an explicit expression for the asymptotic quantum Fisher information in terms of the Markov generator. Furthermore, we show that additive statistics of the counting and homodyne measurements also satisfy local asymptotic normality, and we compute the corresponding classical Fisher informations. The mathematical theorems are illustrated with the examples of a two-level system and the atom maser. Our results contribute towards a better understanding of the statistical and probabilistic properties of the output process, with relevance for quantum control engineering and the theory of non-equilibrium quantum open systems.
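
To make the scaling in the abstract concrete, here is a schematic statement of the local asymptotic normality result; the symbols $\varphi^{t}_{\theta}$ (joint system-output state at time $t$), $\Phi(u)$ (limiting Gaussian state) and $f(\theta)$ (asymptotic quantum Fisher information rate) are illustrative notation, not taken from the paper. Writing the unknown parameter as $\theta = \theta_0 + u\, t^{-1/2}$ with local parameter $u$,

$$ \varphi^{t}_{\theta_0 + u\, t^{-1/2}} \;\approx\; \Phi(u) \qquad (t \to \infty), $$

where $\Phi(u)$ is a quantum Gaussian state whose mean is proportional to $u$. Consequently the quantum Fisher information of the model grows linearly in time,

$$ F\big(\varphi^{t}_{\theta}\big) \;=\; t\, f(\theta) \;+\; o(t), $$

with a rate $f(\theta)$ given explicitly in terms of the Markov generator.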




Read also

Ming Xu, Jingyi Mei, Ji Guan (2021)
Verifying quantum systems has attracted a lot of interest in recent decades. In this paper, we initiate the model checking of quantum continuous-time Markov chains (QCTMC). As a real-time system, we specify the temporal properties on a QCTMC by signal temporal logic (STL). To effectively check the atomic propositions in STL, we develop a state-of-the-art real root isolation algorithm under Schanuel's conjecture; further, we check the general STL formula by interval operations in a bottom-up fashion, whose query complexity turns out to be linear in the size of the input formula by calling the real root isolation algorithm. A running example of an open quantum walk is provided to demonstrate our method.
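
The interval computation behind the abstract above can be sketched as follows (the notation $\mathcal{S}(\varphi)$ is ours, not the paper's). Once the real root isolation algorithm has determined, for each atomic proposition, the finite union of time intervals on which it holds, write $\mathcal{S}(\varphi) \subseteq [0,\infty)$ for the satisfaction set of a formula $\varphi$; the STL connectives then reduce to operations on such unions of intervals, evaluated bottom-up:

$$ \mathcal{S}(\neg\varphi) = [0,\infty)\setminus\mathcal{S}(\varphi), \qquad \mathcal{S}(\varphi_1 \wedge \varphi_2) = \mathcal{S}(\varphi_1)\cap\mathcal{S}(\varphi_2), $$

$$ \mathcal{S}\big(\varphi_1\,\mathbf{U}_{[a,b]}\,\varphi_2\big) = \big\{\, t : \exists\, t' \in [t+a,\,t+b] \text{ with } t' \in \mathcal{S}(\varphi_2) \text{ and } [t,t'] \subseteq \mathcal{S}(\varphi_1) \,\big\}. $$

One such pass per connective is consistent with the query complexity being linear in the size of the formula.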
The objective of this work is to study continuous-time Markov decision processes on a general Borel state space with both impulsive and continuous controls for the infinite-horizon discounted cost. The continuous-time controlled process is shown to be nonexplosive under appropriate hypotheses. The so-called Bellman equation associated to this control problem is studied. Sufficient conditions ensuring the existence and uniqueness of a bounded measurable solution to this optimality equation are provided. Moreover, it is shown that the value function of the optimization problem under consideration satisfies this optimality equation. Sufficient conditions are also presented to ensure, on one hand, the existence of an optimal control strategy and, on the other hand, the existence of an $\varepsilon$-optimal control strategy. The decomposition of the state space into two disjoint subsets is exhibited where, roughly speaking, one should apply a gradual action or an impulsive action, respectively, to obtain an optimal or $\varepsilon$-optimal strategy. An interesting consequence of our previous results is as follows: the set of strategies that allow interventions at time $t=0$ and only immediately after natural jumps is a sufficient set for the control problem under consideration.
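
As an illustration of the optimality equation discussed above, a schematic quasi-variational form of the discounted-cost Bellman equation with both control modes is (all notation here, $\alpha$, $c$, $c_I$, $\mathcal{L}^{a}$, $y(x,\xi)$, is ours, chosen for illustration):

$$ \min\Big\{ \inf_{a}\big[\, c(x,a) + \mathcal{L}^{a}V(x) \,\big] - \alpha V(x),\;\; \mathcal{M}V(x) - V(x) \Big\} = 0, \qquad \mathcal{M}V(x) = \inf_{\xi}\big[\, c_I(x,\xi) + V\big(y(x,\xi)\big) \,\big], $$

where $\alpha > 0$ is the discount rate, $c$ the running cost, $\mathcal{L}^{a}$ the controlled generator, $c_I$ the impulse cost and $y(x,\xi)$ the post-impulse state. The two disjoint subsets mentioned above then correspond to the region $\{x : V(x) < \mathcal{M}V(x)\}$, where a gradual action is applied, and its complement, where an impulse is applied.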
This paper extends to Continuous-Time Jump Markov Decision Processes (CTJMDPs) the classic result for Markov Decision Processes stating that, for a given initial state distribution, for every policy there is a (randomized) Markov policy, which can be defined in a natural way, such that at each time instance the marginal distributions of state-action pairs for these two policies coincide. It is shown in this paper that this equality holds for a CTJMDP if the corresponding Markov policy defines a nonexplosive jump Markov process. If this Markov process is explosive, then at each time instance the marginal probability that a state-action pair belongs to a measurable set of state-action pairs is not greater for the described Markov policy than the same probability for the original policy. These results are used in this paper to prove that, for expected discounted total costs and for average costs per unit time, for a given initial state distribution, for each policy for a CTJMDP the described Markov policy has the same or better performance.
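
Schematically, the marginal-matching property reads as follows (notation ours): for an initial distribution $\mu$ and an arbitrary policy $\pi$, the constructed Markov policy $\varphi$ satisfies, for every time $t \ge 0$ and all measurable sets $B$ of states and $C$ of actions,

$$ P^{\varphi}_{\mu}\big( X_t \in B,\; A_t \in C \big) \;=\; P^{\pi}_{\mu}\big( X_t \in B,\; A_t \in C \big) $$

when $\varphi$ defines a nonexplosive jump Markov process, with $=$ weakened to $\le$ (Markov policy on the left) in the explosive case. Since expected discounted total costs and average costs per unit time are functionals of exactly these marginals, the equality (or inequality) is what transfers performance from $\pi$ to $\varphi$.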
This paper describes the structure of solutions to Kolmogorov's equations for nonhomogeneous jump Markov processes and applications of these results to the control of jump stochastic systems. These equations were studied by Feller (1940), who clarified in 1945, in the errata to that paper, that some of its results covered only nonexplosive Markov processes. We present the results for possibly explosive Markov processes. The paper is based on the invited talk presented by the authors at the International Conference dedicated to the 200th anniversary of the birth of P. L. Chebyshev.
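
For orientation, a schematic form of the equations in question (notation ours): for a nonhomogeneous jump Markov process with jump intensity measure $q(x,s,\cdot)$ and total jump rate $q(x,s) = q(x,s,X\setminus\{x\})$, the transition function $P(s,x;t,B)$ satisfies the backward Kolmogorov equation

$$ \frac{\partial}{\partial s}\, P(s,x;t,B) \;=\; q(x,s)\, P(s,x;t,B) \;-\; \int_{X} P(s,y;t,B)\, q(x,s,dy), $$

together with an analogous forward equation in $t$. Feller's construction yields the minimal solution; the structural results described above concern the full solution set when the process may be explosive.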
In this paper, we generalize the property of local asymptotic normality (LAN) to an enlarged neighborhood, under the name of rescaled local asymptotic normality (RLAN). We obtain sufficient conditions for a regular parametric model to satisfy RLAN. We show that RLAN supports the construction of a statistically efficient estimator which maximizes a cubic approximation to the log-likelihood on this enlarged neighborhood. In the context of Monte Carlo inference, we find that this maximum cubic likelihood estimator can maintain its statistical efficiency in the presence of asymptotically increasing Monte Carlo error in likelihood evaluation.
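
To indicate the construction, a schematic version of the cubic surrogate (our notation, omitting the paper's precise conditions): around a suitable centring point $\tilde\theta_n$, the log-likelihood $\ell_n$ is replaced on the enlarged neighborhood by its third-order Taylor polynomial,

$$ \ell_n(\tilde\theta_n + h) - \ell_n(\tilde\theta_n) \;\approx\; h^{\top}\nabla \ell_n(\tilde\theta_n) \;+\; \tfrac{1}{2}\, h^{\top} \nabla^{2}\ell_n(\tilde\theta_n)\, h \;+\; \tfrac{1}{6}\, \nabla^{3}\ell_n(\tilde\theta_n)[h,h,h], $$

and the maximum cubic likelihood estimator maximizes this polynomial in $h$. The cubic term controls the approximation error over the larger neighborhood, which is what allows the estimator to tolerate asymptotically increasing Monte Carlo error in likelihood evaluation.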