
Robust Filtering and Propagation of Uncertainty in Hidden Markov Models

Added by Andrew Allan
Publication date: 2020
Language: English





We consider the filtering of continuous-time finite-state hidden Markov models in which the rate and observation matrices depend on unknown time-dependent parameters, for which no prior or stochastic model is available. We quantify and analyze how the induced uncertainty may be propagated through time as new observations are collected, and how it may be used simultaneously to provide robust estimates of the hidden signal and to learn the unknown parameters, via techniques based on pathwise filtering and new results on the optimal control of rough differential equations.
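For orientation, a minimal numerical sketch of the underlying filtering problem (without parameter uncertainty) is the classical Wonham filter for a finite-state chain observed in additive noise. Everything below, including the rate matrix Q, the observation function h, and the step sizes, is an illustrative assumption rather than the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state model (not the paper's setup):
# generator Q (rows sum to zero), observations dY_t = h(X_t) dt + dW_t.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
h = np.array([0.0, 1.0])      # observation drift per state
dt, n_steps = 1e-3, 5000

# Simulate the hidden chain and the observation increments.
X = 0
dY = np.empty(n_steps)
for k in range(n_steps):
    if rng.random() < -Q[X, X] * dt:   # jump with rate -Q[X, X]
        X = 1 - X
    dY[k] = h[X] * dt + np.sqrt(dt) * rng.normal()

# Prediction-correction (splitting) discretization of the filter:
# evolve with the generator, then reweight by the Gaussian likelihood.
pi = np.array([0.5, 0.5])
for k in range(n_steps):
    pi = pi + pi @ Q * dt                           # predict
    pi = pi * np.exp(h * dY[k] - 0.5 * h**2 * dt)   # correct
    pi /= pi.sum()                                  # normalize

print("final filter:", pi)
```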



Related research


Samuel N. Cohen (2016)
We consider the problem of filtering an unseen Markov chain from noisy observations, in the presence of uncertainty regarding the parameters of the processes involved. Using the theory of nonlinear expectations, we describe the uncertainty in terms of a penalty function, which can be propagated forward in time in the place of the filter.
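One way to picture the penalty approach above, in a heavily simplified discrete-time setting: run one filter per candidate parameter on a grid, record each candidate's log-likelihood, and use the likelihood gap as a penalty in a worst-case (penalized supremum) estimate. The parameter grid, emission model, and uncertainty weight k below are invented for illustration and are not Cohen's construction.

```python
import numpy as np

def hmm_filter_step(pi, A, obs_lik):
    """One discrete-time HMM filter step; returns the updated filter and
    the log normalizing constant (incremental log-likelihood)."""
    unnorm = (pi @ A) * obs_lik
    c = unnorm.sum()
    return unnorm / c, np.log(c)

# Candidate transition matrices (the uncertain parameter), illustrative:
thetas = [0.1, 0.3, 0.5]
models = [np.array([[1 - t, t], [t, 1 - t]]) for t in thetas]

# Gaussian emissions with state means 0 and 1 (assumed for the example).
def emission_lik(y):
    return np.exp(-0.5 * (y - np.array([0.0, 1.0]))**2)

ys = [0.9, 1.1, 0.2, 1.0]              # toy observations
filters = [np.array([0.5, 0.5]) for _ in models]
loglik = np.zeros(len(models))

for y in ys:
    for i, A in enumerate(models):
        filters[i], dl = hmm_filter_step(filters[i], A, emission_lik(y))
        loglik[i] += dl

# Penalty = likelihood gap to the best candidate; robust (upper) estimate
# of f(state) = indicator of state 1, with an assumed uncertainty weight k.
k = 1.0
penalty = loglik.max() - loglik
f = np.array([0.0, 1.0])
robust_upper = max(p_f @ f - k * p for p_f, p in zip(filters, penalty))
print("robust upper estimate:", robust_upper)
```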
To analyze whole-genome genetic data inherited in families, the likelihood is typically obtained from a Hidden Markov Model (HMM) with a state space of $2^n$ hidden states, where $n$ is the number of meioses, or edges, in the pedigree. There have been several attempts to speed up this calculation by reducing the state space of the HMM. One of these methods has been automated in a calculation that is more efficient than the naive HMM calculation; however, that method treats a special case, and the efficiency gain is available only for those rare pedigrees containing long chains of single-child lineages. The other existing state-space reduction method treats the general case, but the existing algorithm has super-exponential running time. We present three formulations of the state-space reduction problem, two dealing with groups and one with partitions. One of these problems, the maximum isometry group problem, was discussed in detail by Browning and Browning. We show that for pedigrees, all three of these problems have identical solutions. Furthermore, we are able to prove the uniqueness of the solution using the algorithm that we introduce. This algorithm leverages the insight provided by the equivalence between the partition and group formulations of the problem to quickly find the optimal state-space reduction for general pedigrees. We propose a new likelihood calculation that is a two-stage process: find the optimal state space, then run the HMM forward-backward algorithm on the optimal state space. In comparison with the one-stage HMM calculation, this new method more quickly calculates the exact pedigree likelihood.
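The second stage of the proposed two-stage calculation is a standard forward pass, which on any (already reduced) state space looks as follows; the matrices here are toy values, not a pedigree model.

```python
import numpy as np

def forward_loglik(init, trans, emit, obs):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed by the scaled forward algorithm on the given state space."""
    alpha = init * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy reduced state space: 3 states, 2 observation symbols.
init = np.array([0.5, 0.3, 0.2])
trans = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7]])
emit = np.array([[0.9, 0.1],
                 [0.5, 0.5],
                 [0.2, 0.8]])
print(forward_loglik(init, trans, emit, [0, 1, 1, 0]))
```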
D. Crisan, J. Diehl, P. K. Friz (2012)
In the late seventies, Clark [In Communication Systems and Random Process Theory (Proc. 2nd NATO Advanced Study Inst., Darlington, 1977) (1978) 721-734, Sijthoff & Noordhoff] pointed out that it would be natural for $\pi_t$, the solution of the stochastic filtering problem, to depend continuously on the observed data $Y=\{Y_s, s\in[0,t]\}$. Indeed, if the signal and the observation noise are independent one can show that, for any suitably chosen test function $f$, there exists a continuous map $\theta^f_t$, defined on the space of continuous paths $C([0,t],\mathbb{R}^d)$ endowed with the uniform convergence topology, such that $\pi_t(f)=\theta^f_t(Y)$, almost surely; see, for example, Clark [In Communication Systems and Random Process Theory (Proc. 2nd NATO Advanced Study Inst., Darlington, 1977) (1978) 721-734, Sijthoff & Noordhoff], Clark and Crisan [Probab. Theory Related Fields 133 (2005) 43-56], Davis [Z. Wahrsch. Verw. Gebiete 54 (1980) 125-139], Davis [Teor. Veroyatn. Primen. 27 (1982) 160-167], Kushner [Stochastics 3 (1979) 75-83]. As shown by Davis and Spathopoulos [SIAM J. Control Optim. 25 (1987) 260-278], Davis [In Stochastic Systems: The Mathematics of Filtering and Identification and Applications, Proc. NATO Adv. Study Inst. Les Arcs, Savoie, France 1980 505-528], [In The Oxford Handbook of Nonlinear Filtering (2011) 403-424 Oxford Univ. Press], this type of robust representation is also possible when the signal and the observation noise are correlated, provided the observation process is scalar. For a general correlated noise and multidimensional observations such a representation does not exist. By using the theory of rough paths we provide a solution to this deficiency: the observation process $Y$ is lifted to the process $\mathbf{Y}$ that consists of $Y$ and its corresponding Lévy area process, and we show that there exists a continuous map $\theta_t^f$, defined on a suitably chosen space of Hölder continuous paths, such that $\pi_t(f)=\theta_t^f(\mathbf{Y})$, almost surely.
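The lift described above can be approximated from discrete samples of a two-dimensional path: the Lévy area is $A_T = \frac{1}{2}\int_0^T (Y^1\,dY^2 - Y^2\,dY^1)$, computable by Riemann sums. A sketch with a synthetic path standing in for real observations:

```python
import numpy as np

def levy_area(path):
    """Approximate A_T = 0.5 * int_0^T (Y1 dY2 - Y2 dY1) for a sampled
    2-D path of shape (n, 2), using left-point Riemann sums."""
    y1, y2 = path[:, 0], path[:, 1]
    dy1, dy2 = np.diff(y1), np.diff(y2)
    return 0.5 * np.sum(y1[:-1] * dy2 - y2[:-1] * dy1)

# Synthetic 2-D Brownian-like path started at the origin.
rng = np.random.default_rng(1)
n, dt = 10_000, 1e-4
inc = np.sqrt(dt) * rng.normal(size=(n, 2))
path = np.vstack([np.zeros((1, 2)), np.cumsum(inc, axis=0)])
print("approximate Levy area:", levy_area(path))
```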
Jungyeul Park (2015)
Hidden Markov Models (HMMs) are learning methods for pattern recognition, and probabilistic HMMs have been among the most widely used techniques based on the Bayesian model. First-order probabilistic HMMs were adapted to the theory of belief functions, such that Bayesian probabilities were replaced with mass functions. In this paper, we present a second-order Hidden Markov Model using belief functions. Previous work on belief HMMs has focused on first-order models; we extend it to the second-order model.
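For intuition, a second-order HMM conditions each transition on the two preceding states. The probabilistic forward recursion below (with invented toy parameters) shows the bookkeeping; the paper's belief-function variant would replace these probabilities with mass functions, which is not reproduced here.

```python
import numpy as np

# Toy second-order HMM: 2 hidden states, 2 observation symbols (assumed).
pi1 = np.array([0.6, 0.4])                  # P(x_1)
A1 = np.array([[0.7, 0.3], [0.4, 0.6]])     # P(x_2 | x_1)
# P(x_t | x_{t-2}, x_{t-1}), indexed as A2[prev2, prev1, next]:
A2 = np.array([[[0.8, 0.2], [0.3, 0.7]],
               [[0.5, 0.5], [0.1, 0.9]]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])      # P(o | x)

def forward2(obs):
    """Likelihood of obs under the second-order HMM: alpha[i, j] is the
    joint probability of the observations so far and (x_{t-1}=i, x_t=j)."""
    alpha = (pi1 * B[:, obs[0]])[:, None] * (A1 * B[:, obs[1]][None, :])
    for o in obs[2:]:
        # marginalize x_{t-2}=i; the new pair is (x_{t-1}=j, x_t=k)
        alpha = np.einsum('ij,ijk->jk', alpha, A2) * B[:, o][None, :]
    return alpha.sum()

print(forward2([0, 1, 1, 0]))
```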
We provide a sufficient criterion for the unique parameter identification of combinatorially symmetric Hidden Markov Models, based on the structure of their transition matrix. If the observed states of the chain form a zero forcing set of the graph of the Markov model, then it is uniquely identifiable, and an explicit reconstruction method is given.
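The zero forcing condition itself is straightforward to check: repeatedly apply the color-change rule (a colored vertex with exactly one uncolored neighbor forces that neighbor to be colored) and test whether the whole graph ends up colored. A minimal sketch on a made-up example graph:

```python
def is_zero_forcing(adj, seed):
    """Check whether `seed` is a zero forcing set of the graph given by
    adjacency lists `adj` (dict: vertex -> set of neighbors)."""
    colored = set(seed)
    changed = True
    while changed:
        changed = False
        for v in list(colored):
            uncolored = adj[v] - colored
            if len(uncolored) == 1:      # color-change rule
                colored |= uncolored
                changed = True
    return colored == set(adj)

# Path graph 0-1-2-3: an endpoint alone is a zero forcing set.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_zero_forcing(path, {0}))   # True
print(is_zero_forcing(path, {1}))   # False: vertex 1 has two uncolored neighbors
```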
