
Optimal State-Space Reduction for Pedigree Hidden Markov Models

Published by: Bonnie Kirkpatrick
Publication date: 2012
Research field: Informatics Engineering
Paper language: English





To analyze whole-genome genetic data inherited in families, the likelihood is typically obtained from a Hidden Markov Model (HMM) having a state space of 2^n hidden states, where n is the number of meioses or edges in the pedigree. There have been several attempts to speed up this calculation by reducing the state space of the HMM. One of these methods has been automated in a calculation that is more efficient than the naive HMM calculation; however, that method treats a special case, and the efficiency gain is available only for those rare pedigrees containing long chains of single-child lineages. The other existing state-space reduction method treats the general case, but the existing algorithm has super-exponential running time. We present three formulations of the state-space reduction problem, two dealing with groups and one with partitions. One of these problems, the maximum isometry group problem, was discussed in detail by Browning and Browning. We show that for pedigrees, all three of these problems have identical solutions. Furthermore, we are able to prove the uniqueness of the solution using the algorithm that we introduce. This algorithm leverages the insight provided by the equivalence between the partition and group formulations of the problem to quickly find the optimal state-space reduction for general pedigrees. We propose a new likelihood calculation that is a two-stage process: find the optimal state space, then run the HMM forward-backward algorithm on that state space. In comparison with the one-stage HMM calculation, this new method calculates the exact pedigree likelihood more quickly.
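As a rough illustration of the two-stage idea, the sketch below first collapses an HMM onto the blocks of a given state partition and then runs a standard scaled forward-backward pass on the reduced model. It is a minimal sketch only: the partition is assumed to have already been computed (for example by the isometry-group algorithm the abstract describes), the within-block averaging in `lump_hmm` is exact only for genuinely lumpable chains, and all names and shapes are ours rather than the paper's.

```python
import numpy as np

def lump_hmm(T, E, pi, partition):
    """Collapse an HMM onto the blocks of a state partition.

    T : (K, K) row-stochastic transition matrix
    E : (K, M) emission matrix, pi : (K,) initial distribution
    partition : list of lists of original state indices.
    For a lumpable chain every state in a block has the same total transition
    probability into each other block; averaging within blocks is exact then.
    """
    B = len(partition)
    T_r = np.zeros((B, B))
    E_r = np.zeros((B, E.shape[1]))
    pi_r = np.zeros(B)
    for a, block_a in enumerate(partition):
        pi_r[a] = pi[block_a].sum()
        E_r[a] = E[block_a].mean(axis=0)
        for b, block_b in enumerate(partition):
            T_r[a, b] = T[np.ix_(block_a, block_b)].sum(axis=1).mean()
    return T_r, E_r, pi_r

def forward_backward(T, E, pi, obs):
    """Scaled forward-backward; returns log-likelihood and state posteriors."""
    K, n = T.shape[0], len(obs)
    alpha, beta, scale = np.zeros((n, K)), np.zeros((n, K)), np.zeros(n)
    alpha[0] = pi * E[:, obs[0]]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ T) * E[:, obs[t]]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    beta[-1] = 1.0
    for t in range(n - 2, -1, -1):
        beta[t] = (T @ (E[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return np.log(scale).sum(), gamma
```

The saving comes entirely from the first stage: with n meioses the full chain has 2^n states, so running the same forward-backward routine on the lumped matrices rather than the full ones is what makes the exact likelihood cheaper to compute.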




Read also

Andrew L. Allan (2020)
We consider the filtering of continuous-time finite-state hidden Markov models, where the rate and observation matrices depend on unknown time-dependent parameters, for which no prior or stochastic model is available. We quantify and analyze how the induced uncertainty may be propagated through time as we collect new observations, and used to simultaneously provide robust estimates of the hidden signal and to learn the unknown parameters, via techniques based on pathwise filtering and new results on the optimal control of rough differential equations.
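For orientation only, the sketch below is an Euler discretisation of the classical finite-state filter with known rate matrix Q and observation function h under dY_t = h(X_t) dt + dW_t. It does not attempt the paper's actual contribution, which is propagating uncertainty about unknown, time-varying parameters via pathwise and rough-path techniques; the function name and interface are illustrative.

```python
import numpy as np

def finite_state_filter_euler(Q, h, pi0, dY, dt):
    """Euler step of the classical finite-state (Wonham-type) filter.

    Q   : (K, K) generator matrix, rows summing to zero
    h   : (K,) observation function values, with dY_t = h(X_t) dt + dW_t
    pi0 : (K,) initial state distribution
    dY  : (n,) observation increments on a grid of step dt
    Returns the (n + 1, K) path of filtered state probabilities.
    """
    pis = [np.asarray(pi0, dtype=float)]
    for dy in dY:
        pi = pis[-1]
        hbar = pi @ h                        # filtered mean of h
        drift = pi @ Q                       # prior Kolmogorov dynamics
        gain = pi * (h - hbar)               # innovation gain
        pi_new = pi + drift * dt + gain * (dy - hbar * dt)
        pi_new = np.clip(pi_new, 0.0, None)  # Euler step can go slightly negative
        pis.append(pi_new / pi_new.sum())
    return np.array(pis)
```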
For concertgoers, musical interpretation is the most important factor in determining whether or not we enjoy a classical performance. Every performance includes mistakes---intonation issues, a lost note, an unpleasant sound---but these are all easily forgotten (or unnoticed) when a performer engages her audience, imbuing a piece with novel emotional content beyond the vague instructions inscribed on the printed page. While music teachers use imagery or heuristic guidelines to motivate interpretive decisions, combining these vague instructions to create a convincing performance remains the domain of the performer, subject to the whims of the moment, technical fluency, and taste. In this research, we use data from the CHARM Mazurka Project---forty-six professional recordings of Chopin's Mazurka Op. 63 No. 3 by consummate artists---with the goal of elucidating musically interpretable performance decisions. Using information on the inter-onset intervals of the note attacks in the recordings, we apply functional data analysis techniques enriched with prior information gained from music theory to discover relevant features and perform hierarchical clustering. The resulting clusters suggest methods for informing music instruction, discovering listening preferences, and analyzing performances.
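A minimal sketch of the clustering step, assuming a precomputed array of inter-onset intervals with one row per recording; the file name and cluster count are placeholders, and the paper's functional-data smoothing and music-theory-informed features are omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical (n_recordings, n_notes - 1) array of inter-onset intervals,
# aligned to the same note positions across recordings.
ioi = np.loadtxt("mazurka_ioi.csv", delimiter=",")   # placeholder file name

# Normalise each performance by its own mean so clusters reflect the shape
# of the timing (rubato) rather than the overall tempo.
curves = ioi / ioi.mean(axis=1, keepdims=True)

# Ward linkage on the tempo curves, then cut the tree into a few clusters.
Z = linkage(curves, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")
print(labels)
```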
Labeling of sequential data is a prevalent meta-problem for a wide range of real world applications. While the first-order Hidden Markov Model (HMM) provides a fundamental approach for unsupervised sequential labeling, the basic model does not show satisfying performance when it is directly applied to real world problems, such as part-of-speech tagging (PoS tagging) and optical character recognition (OCR). Aiming to improve performance, important extensions of HMM have been proposed in the literature. One of the common key features in these extensions is the incorporation of proper prior information. In this paper, we propose a new extension of HMM, termed diversified Hidden Markov Models (dHMM), which utilizes a diversity-encouraging prior over the state-transition probabilities and thus facilitates more dynamic sequential labelings. Specifically, the diversity is modeled by a continuous determinantal point process prior, which we apply to both unsupervised and supervised scenarios. Learning and inference algorithms for dHMM are derived. Empirical evaluations on benchmark datasets for unsupervised PoS tagging and supervised OCR confirmed the effectiveness of dHMM, with performance competitive with the state-of-the-art.
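As a hedged illustration of the diversity-encouraging idea, the sketch below scores the rows of a transition matrix with a determinantal-point-process log-density built from an RBF similarity kernel: the log-determinant is large when the rows are mutually dissimilar. The kernel and bandwidth here are illustrative choices, not necessarily those used in dHMM.

```python
import numpy as np

def dpp_log_prior(A, sigma=0.5):
    """Log-density (up to a constant) of a DPP-style diversity prior over the
    rows of a transition matrix A of shape (K, K).

    L_ij = k(a_i, a_j) with an RBF kernel; log det(L) rewards transition
    distributions that differ from one another.
    """
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    L = np.exp(-d2 / (2.0 * sigma ** 2))                   # similarity kernel
    sign, logdet = np.linalg.slogdet(L)
    return logdet if sign > 0 else -np.inf                 # degenerate rows -> -inf
```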
This paper addresses the question of predicting when a positive self-similar Markov process X attains its pathwise global supremum or infimum before hitting zero for the first time (if it does at all). This problem has been studied in Glover et al. (2013) under the assumption that X is a positive transient diffusion. We extend their result to the class of positive self-similar Markov processes by establishing a link to Baurdoux and van Schaik (2013), where the same question is studied for a Lévy process drifting to minus infinity. The connection to Baurdoux and van Schaik (2013) relies on the so-called Lamperti transformation, which links the class of positive self-similar Markov processes with that of Lévy processes. Our approach will reveal that the results in Glover et al. (2013) for Bessel processes can also be seen as a consequence of self-similarity.
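For reference, the Lamperti representation mentioned above is usually stated as follows (generic notation, not taken from the paper):

```latex
% Lamperti representation linking positive self-similar Markov processes
% and Levy processes (standard statement, generic notation).
X_t \;=\; x \, \exp\!\bigl( \xi_{\tau(t x^{-\alpha})} \bigr),
\qquad
\tau(t) \;=\; \inf\Bigl\{ s \ge 0 : \int_0^s e^{\alpha \xi_u}\, du > t \Bigr\},
```

where X is a positive self-similar Markov process of index alpha started at x > 0 and xi is a (possibly killed) Lévy process.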
Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics, which suffer from the usual local optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations---it implicitly depends on this quantity through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observations is sometimes the set of words in a language. The algorithm is also simple, employing only a singular value decomposition and matrix multiplications.
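A compact sketch of that SVD-plus-matrix-multiplication recipe, assuming empirical estimates of the low-order joint probability matrices are already available; the function and variable names are ours, and the construction is only a sketch of the observable-operator parameterisation the abstract alludes to.

```python
import numpy as np

def spectral_hmm(P1, P21, P3x1, m):
    """Observable-operator form of an HMM from low-order statistics.

    P1   : (N,)   P1[i]      = Pr[x1 = i]
    P21  : (N, N) P21[i, j]  = Pr[x2 = i, x1 = j]
    P3x1 : dict   P3x1[x][i, j] = Pr[x3 = i, x2 = x, x1 = j]
    m    : number of hidden states (rank retained from the SVD)
    Returns a function mapping an observation sequence to its probability.
    """
    U, _, _ = np.linalg.svd(P21)
    U = U[:, :m]                                   # top-m left singular vectors
    pinv_UP21 = np.linalg.pinv(U.T @ P21)
    b1 = U.T @ P1                                  # initial vector
    binf = np.linalg.pinv(P21.T @ U) @ P1          # normalisation vector
    B = {x: U.T @ P3x1[x] @ pinv_UP21 for x in P3x1}   # one operator per symbol

    def seq_prob(obs):
        """Pr[x_1, ..., x_t] = binf^T B_{x_t} ... B_{x_1} b1."""
        v = b1
        for x in obs:
            v = B[x] @ v
        return float(binf @ v)

    return seq_prob
```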