To analyze whole-genome genetic data inherited in families, the likelihood is typically obtained from a Hidden Markov Model (HMM) with a state space of 2^n hidden states, where n is the number of meioses, or edges, in the pedigree. There have been several attempts to speed up this calculation by reducing the state space of the HMM. One of these methods has been automated in a calculation that is more efficient than the naive HMM calculation; however, it treats only a special case, and the efficiency gain is available only for those rare pedigrees that contain long chains of single-child lineages. The other existing state-space reduction method treats the general case, but the existing algorithm has super-exponential running time. We present three formulations of the state-space reduction problem, two dealing with groups and one with partitions. One of these problems, the maximum isometry group problem, was discussed in detail by Browning and Browning. We show that for pedigrees, all three problems have identical solutions. Furthermore, we prove the uniqueness of the solution using the algorithm that we introduce. This algorithm leverages the equivalence between the partition and group formulations of the problem to quickly find the optimal state-space reduction for general pedigrees. We propose a new likelihood calculation that proceeds in two stages: first find the optimal state space, then run the HMM forward-backward algorithm on that reduced state space. Compared with the one-stage HMM calculation, this new method computes the exact pedigree likelihood more quickly.
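As a toy illustration of the two-stage idea (generic state lumping, not the paper's pedigree-specific algorithm; all matrices, block choices, and names below are invented for the example), the following sketch collapses symmetric hidden states into equivalence classes and runs the forward pass on the quotient chain:

import numpy as np

def forward_loglik(pi, A, B, obs):
    # Scaled HMM forward algorithm; returns the exact log-likelihood.
    alpha = pi * B[:, obs[0]]
    s = alpha.sum(); loglik = np.log(s); alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum(); loglik += np.log(s); alpha = alpha / s
    return loglik

def lump(pi, A, B, blocks):
    # Quotient model for a lumpable partition: block-to-block transition mass
    # is constant within each block and emissions agree within each block,
    # so any representative row can stand in for its block.
    pi_q = np.array([pi[b].sum() for b in blocks])
    A_q = np.array([[A[bi[0], bj].sum() for bj in blocks] for bi in blocks])
    B_q = np.array([B[bi[0]] for bi in blocks])
    return pi_q, A_q, B_q

# 4-state chain in which states {0,1} and {2,3} are interchangeable.
A = np.array([[0.10, 0.20, 0.30, 0.40],
              [0.20, 0.10, 0.40, 0.30],
              [0.25, 0.25, 0.25, 0.25],
              [0.25, 0.25, 0.25, 0.25]])
B = np.array([[0.9, 0.1], [0.9, 0.1], [0.2, 0.8], [0.2, 0.8]])
pi = np.array([0.25, 0.25, 0.25, 0.25])
obs = [0, 1, 1, 0]
print(forward_loglik(pi, A, B, obs))                           # full 4-state model
print(forward_loglik(*lump(pi, A, B, [[0, 1], [2, 3]]), obs))  # 2-state quotient, same value

The two printed log-likelihoods agree, which is the point of the reduction: the quotient chain is smaller yet yields the exact likelihood.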
We consider the filtering of continuous-time finite-state hidden Markov models, where the rate and observation matrices depend on unknown time-dependent parameters for which no prior or stochastic model is available. We quantify and analyze how the …
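For orientation (standard background on this filtering setting, not the snippet's contribution, and one possible observation model among several): for a hidden chain $X_t$ on $\{1,\dots,d\}$ with rate matrix $Q(t)$ and observations $dY_t = h(X_t)\,dt + dW_t$, the conditional probabilities $\pi_t(i) = \mathbb{P}(X_t = i \mid Y_s,\, s \le t)$ satisfy the Wonham filter

$$ d\pi_t(i) = \sum_{j} \pi_t(j)\,Q_{ji}(t)\,dt + \pi_t(i)\bigl(h(i) - \hat h_t\bigr)\bigl(dY_t - \hat h_t\,dt\bigr), \qquad \hat h_t = \sum_{j} \pi_t(j)\,h(j). $$

In the snippet's setting, $Q(t)$ and the observation model additionally depend on unknown time-dependent parameters, which is what breaks the classical assumptions behind this equation.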
For concertgoers, musical interpretation is the most important factor in determining whether we enjoy a classical performance. Every performance includes mistakes (intonation issues, a lost note, an unpleasant sound), but these are all easily …
Labeling of sequential data is a prevalent meta-problem for a wide range of real-world applications. While the first-order Hidden Markov Model (HMM) provides a fundamental approach to unsupervised sequential labeling, the basic model does not show …
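For concreteness (a generic first-order HMM decoding step, not the snippet's extended model; all matrices below are toy values), a minimal Viterbi labeler:

import numpy as np

def viterbi(pi, A, B, obs):
    # Most likely hidden-label path, computed in the log domain.
    T, S = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # scores[i, j]: best path ending in j via i
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

A = np.array([[0.7, 0.3], [0.4, 0.6]])   # label-transition probabilities
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
pi = np.array([0.6, 0.4])
print(viterbi(pi, A, B, [0, 0, 1, 1, 1]))   # -> [0, 0, 1, 1, 1]

The first-order limitation is visible in the recursion: the score at time t depends on earlier labels only through the single previous label, which is exactly what richer labeling models aim to relax.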
This paper addresses the question of predicting when a positive self-similar Markov process X attains its pathwise global supremum or infimum before hitting zero for the first time (if it does at all). This problem has been studied in Glover et al. (…
Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically …
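As background on the gap between worst-case hardness and practice in this literature: the heuristic practitioners most commonly reach for is Baum-Welch (EM), which improves the likelihood locally but carries no global guarantee. A minimal single-update sketch for a discrete-observation HMM (illustrative only; all names are ours):

import numpy as np

def baum_welch_step(pi, A, B, obs):
    # One EM update: scaled forward-backward (E-step), then count re-normalization (M-step).
    S, T = len(pi), len(obs)
    alpha = np.zeros((T, S)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.zeros((T, S)); beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t+1]] * beta[t+1])) / c[t+1]
    gamma = alpha * beta                      # posterior marginals P(X_t = i | obs)
    xi = np.zeros((S, S))                     # expected transition counts
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, obs[t+1]] * beta[t+1]) * A / c[t+1]
    new_B = np.zeros_like(B)
    for t in range(T):
        new_B[:, obs[t]] += gamma[t]
    return gamma[0], xi / xi.sum(1, keepdims=True), new_B / new_B.sum(1, keepdims=True)

Iterating this update from a random initialization converges only to a stationary point of the likelihood, which is precisely the local-optimum behavior that motivates the hardness discussion.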