Bayesian nonparametric hierarchical priors are highly effective in providing flexible models for latent data structures that share information within and across groups. Most prominent are the Hierarchical Dirichlet Process (HDP) and its subsequent variants, which model latent clustering within and across groups. The HDP may be viewed as a more flexible extension of Latent Dirichlet Allocation (LDA) models and has been applied to, for example, topic modelling, natural language processing, and datasets arising in healthcare. We focus on analogous latent feature allocation models, where the data structures correspond to multisets or unbounded sparse matrices. The fundamental development in this regard is the Hierarchical Indian Buffet Process (HIBP), which utilizes a hierarchy of Beta processes over J groups, where each group generates binary random matrices, reflecting within-group sharing of features, according to beta-Bernoulli IBP priors. To encompass HI
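The beta-Bernoulli IBP prior underlying the HIBP admits a simple sequential ("Indian buffet restaurant") generative scheme for a single group. A minimal sketch, assuming the standard one-parameter IBP (the function name, `alpha`, and the toy call are illustrative, not from the paper):

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    """Draw a binary feature matrix Z from the Indian Buffet Process.

    Each row is a 'customer' (observation); each column a latent feature
    ('dish'). Customer i takes each existing dish k with probability
    m_k / i, where m_k is the number of earlier customers who took dish k,
    then samples Poisson(alpha / i) brand-new dishes of its own.
    """
    rng = np.random.default_rng(rng)
    rows = []
    counts = []  # counts[k] = number of customers so far who took dish k
    for i in range(1, n_customers + 1):
        # Revisit existing dishes with popularity-proportional probability.
        row = [int(rng.random() < m / i) for m in counts]
        for k, z in enumerate(row):
            counts[k] += z
        # Sample new dishes; their count shrinks as i grows.
        n_new = rng.poisson(alpha / i)
        row.extend([1] * n_new)
        counts.extend([1] * n_new)
        rows.append(row)
    # Pad rows to a common width K (later dishes are 0 for earlier rows).
    K = len(counts)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(10, alpha=2.0, rng=0)
```

In the hierarchical (HIBP) setting, each of the J groups would run such a scheme against feature weights drawn from a shared base Beta process, producing the between-group feature sharing described above.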
Suppose that particles are randomly distributed in $\mathbb{R}^d$, and they are subject to identical stochastic motion independently of each other. The Smoluchowski process describes fluctuations of the number of particles in an observation region over time.
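The fluctuation mechanism can be illustrated by direct simulation: independent particles performing a discretized Brownian motion, with the count inside a fixed window recorded at each step. A minimal sketch (all parameter values and the unit-square window are illustrative assumptions, not from the abstract):

```python
import numpy as np

def smoluchowski_counts(n_particles=500, n_steps=200, sigma=0.1,
                        box=5.0, rng=0):
    """Simulate count fluctuations in an observation region.

    Particles start uniformly in [-box, box]^2 and perform independent
    Gaussian random walks (a discrete Brownian motion with step scale
    `sigma`). At each step we record how many particles lie inside the
    observation window [-1, 1]^2.
    """
    rng = np.random.default_rng(rng)
    pos = rng.uniform(-box, box, size=(n_particles, 2))
    counts = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        pos += rng.normal(0.0, sigma, size=pos.shape)
        counts[t] = np.sum(np.all(np.abs(pos) <= 1.0, axis=1))
    return counts

counts = smoluchowski_counts()
```

The resulting `counts` series is the (discretized) Smoluchowski process for this toy setup: particles drift in and out of the window, so successive counts are positively correlated.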
We place an Indian Buffet process (IBP) prior over the structure of a Bayesian Neural Network (BNN), thus allowing the complexity of the BNN to increase and decrease automatically. We further extend this model such that the prior on the structure of
Latent feature models are attractive for image modeling, since images generally contain multiple objects. However, many latent feature models ignore that objects can appear at different locations or require pre-segmentation of images. While the trans
In mathematical finance, Lévy processes are widely used for their ability to model both continuous variation and abrupt, discontinuous jumps. These jumps are practically relevant, so reliable inference on the feature that controls jump frequencies an
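A standard concrete example of such a process is a jump diffusion: Brownian motion with drift plus a compound Poisson jump component, where the intensity `lam` controls jump frequency. A minimal simulation sketch (Merton-style; all parameter values are illustrative, and at most one jump per step is assumed via a small time step):

```python
import numpy as np

def jump_diffusion_path(T=1.0, n=1000, mu=0.05, sigma=0.2, lam=3.0,
                        jump_mu=0.0, jump_sigma=0.1, rng=0):
    """Simulate one path of a simple Levy process on [0, T]:
    drifted Brownian motion plus compound Poisson jumps.

    `lam` is the jump intensity (expected jumps per unit time) -- the
    quantity one would target when inferring jump frequency.
    """
    rng = np.random.default_rng(rng)
    dt = T / n
    increments = (
        mu * dt                                       # deterministic drift
        + sigma * np.sqrt(dt) * rng.normal(size=n)    # Brownian part
        + rng.normal(jump_mu, jump_sigma, size=n)     # jump sizes ...
          * rng.poisson(lam * dt, size=n)             # ... times jump counts
    )
    return np.concatenate([[0.0], np.cumsum(increments)])

path = jump_diffusion_path()
```

With `lam * dt` small, each step carries zero jumps with high probability, so the path looks diffusive with occasional visible discontinuities.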
In the sparse normal means model, coverage of adaptive Bayesian posterior credible sets associated with spike-and-slab prior distributions is considered. The key sparsity hyperparameter is calibrated via marginal maximum likelihood empirical Bayes. Fir
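The empirical Bayes calibration described here can be sketched concretely: observe $x_i \sim N(\theta_i, 1)$, place the prior $\theta_i \sim (1-w)\,\delta_0 + w\,N(0,\tau^2)$, and pick the sparsity weight $w$ by maximizing the marginal likelihood. A minimal sketch with a Gaussian slab and a grid search (the slab variance, grid, and toy data are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def norm_pdf(x, var):
    """Density of N(0, var) at x."""
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def empirical_bayes_spike_slab(x, tau2=4.0, grid=None):
    """Sparse normal means with a spike-and-slab prior.

    Marginally, x_i ~ (1 - w) N(0, 1) + w N(0, 1 + tau2). Calibrate w
    by marginal maximum likelihood over a grid, then return the MMLE
    w_hat and the posterior inclusion probabilities P(theta_i != 0 | x).
    """
    grid = np.linspace(0.01, 0.99, 99) if grid is None else grid
    spike = norm_pdf(x, 1.0)        # marginal density if theta_i = 0
    slab = norm_pdf(x, 1.0 + tau2)  # marginal density under the slab
    loglik = [np.sum(np.log((1 - w) * spike + w * slab)) for w in grid]
    w_hat = grid[int(np.argmax(loglik))]
    post_incl = w_hat * slab / ((1 - w_hat) * spike + w_hat * slab)
    return w_hat, post_incl

# Toy sparse signal: 90 null coordinates and 10 strong ones.
x = np.concatenate([np.zeros(90), np.full(10, 5.0)])
w_hat, p = empirical_bayes_spike_slab(x)
```

Credible sets are then read off the resulting posterior; the coverage question studied in the abstract concerns how such sets behave when `w_hat` is plugged in rather than known.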