
Posterior distributions for Hierarchical Spike and Slab Indian Buffet processes

Published by Lancelot F. James
Publication date: 2021
Research field:
Language: English





Bayesian nonparametric hierarchical priors are highly effective in providing flexible models for latent data structures exhibiting sharing of information between and across groups. Most prominent is the Hierarchical Dirichlet Process (HDP), and its subsequent variants, which model latent clustering between and across groups. The HDP may be viewed as a more flexible extension of Latent Dirichlet Allocation (LDA) models, and has been applied to, for example, topic modelling, natural language processing, and datasets arising in health-care. We focus on analogous latent feature allocation models, where the data structures correspond to multisets or unbounded sparse matrices. The fundamental development in this regard is the Hierarchical Indian Buffet Process (HIBP), which utilizes a hierarchy of Beta processes over J groups, where each group generates binary random matrices, reflecting within-group sharing of features, according to beta-Bernoulli IBP priors. To encompass HI…
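For readers unfamiliar with the beta-Bernoulli IBP prior that each group uses, here is a minimal sketch of the standard one-parameter IBP generative scheme. It covers only the within-group matrix; the hierarchical tying across the J groups via a shared Beta process, which is the paper's contribution, is omitted, and the function and parameter names are illustrative.

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng=None):
    """Draw a binary feature matrix Z from a one-parameter
    Indian Buffet Process with mass parameter alpha."""
    rng = np.random.default_rng(rng)
    dish_counts = []          # m_k: how many customers chose dish k so far
    rows = []
    for n in range(1, n_customers + 1):
        # existing dish k is taken with probability m_k / n
        row = [rng.random() < m / n for m in dish_counts]
        for k, taken in enumerate(row):
            dish_counts[k] += int(taken)
        n_new = rng.poisson(alpha / n)            # Poisson number of new dishes
        row += [True] * n_new
        dish_counts += [1] * n_new
        rows.append(row)
    Z = np.zeros((n_customers, len(dish_counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(n_customers=10, alpha=2.0, rng=0)
print(Z)   # rows: observations, columns: latent features (unbounded in K)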




Read also

Suppose that particles are randomly distributed in $\mathbb{R}^d$ and are subject to identical stochastic motion independently of each other. The Smoluchowski process describes fluctuations of the number of particles in an observation region over time. This paper studies properties of Smoluchowski processes and considers related statistical problems. In the first part of the paper we revisit probabilistic properties of the Smoluchowski process in a unified and principled way: explicit formulas for generating functionals and moments are derived, conditions for stationarity and Gaussian approximation are discussed, and relations to other stochastic models are highlighted. The second part deals with statistics of Smoluchowski processes. We consider two different models of the particle displacement process: undeviated uniform motion (when a particle moves with random constant velocity along a straight line) and Brownian motion displacement. In the setting of undeviated uniform motion we study the problems of estimating the mean speed and the speed distribution, while for the Brownian displacement model the problem of estimating the diffusion coefficient is considered. In all these settings we develop estimators with provable accuracy guarantees.
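A minimal simulation sketch of the particle-count process this abstract describes, using Brownian displacements and an illustrative periodic box; all parameter values and the boundary convention are assumptions, not from the paper.

```python
import numpy as np

def smoluchowski_counts(n_particles, box=10.0, window=1.0,
                        sigma=0.1, n_steps=200, rng=None):
    """Simulate the count process N(t): particles start uniformly in
    [0, box)^2, take independent Brownian steps, and we record how
    many fall inside the observation region [0, window)^2 each step."""
    rng = np.random.default_rng(rng)
    pos = rng.uniform(0.0, box, size=(n_particles, 2))
    counts = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        pos += rng.normal(0.0, sigma, size=pos.shape)  # Brownian displacement
        pos %= box                                     # periodic boundary
        counts[t] = np.sum(np.all(pos < window, axis=1))
    return counts

counts = smoluchowski_counts(n_particles=500, rng=0)
print(counts[:10])   # fluctuating occupancy of the observation region
```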
We place an Indian Buffet process (IBP) prior over the structure of a Bayesian Neural Network (BNN), thus allowing the complexity of the BNN to increase and decrease automatically. We further extend this model such that the prior on the structure of each hidden layer is shared globally across all layers, using a Hierarchical-IBP (H-IBP). We apply this model to the problem of resource allocation in Continual Learning (CL) where new tasks occur and the network requires extra resources. Our model uses online variational inference with reparameterisation of the Bernoulli and Beta distributions, which constitute the IBP and H-IBP priors. As we automatically learn the number of weights in each layer of the BNN, overfitting and underfitting problems are largely overcome. We show empirically that our approach offers a competitive edge over existing methods in CL.
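The abstract mentions reparameterising the Beta and Bernoulli distributions for variational inference but does not spell out the construction. A common choice in the IBP variational-inference literature, used here as an assumption rather than the paper's exact method, is a Kumaraswamy stand-in for the Beta and a Concrete (Gumbel-softmax) relaxation of the Bernoulli:

```python
import numpy as np

def kumaraswamy_sample(a, b, rng):
    """Pathwise (reparameterised) sample approximating Beta(a, b);
    the Kumaraswamy distribution is a common differentiable stand-in."""
    u = rng.uniform(size=np.shape(a))
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def relaxed_bernoulli(pi, temperature, rng):
    """Concrete relaxation of Bernoulli(pi): differentiable in pi,
    sharpens to {0, 1} as temperature -> 0."""
    u = rng.uniform(size=np.shape(pi))
    logistic = np.log(u) - np.log1p(-u)
    logit = np.log(pi) - np.log1p(-pi)
    return 1.0 / (1.0 + np.exp(-(logit + logistic) / temperature))

rng = np.random.default_rng(0)
v = kumaraswamy_sample(a=2.0, b=3.0, rng=rng)          # stick-breaking weight
z = relaxed_bernoulli(pi=v, temperature=0.5, rng=rng)  # soft weight mask
print(v, z)
```

Because both samplers are smooth functions of their parameters given external noise, gradients can flow through them, which is what makes the online variational scheme described above feasible.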
Ke Zhai (2012)
Latent feature models are attractive for image modeling, since images generally contain multiple objects. However, many latent feature models ignore that objects can appear at different locations or require pre-segmentation of images. While the transformed Indian buffet process (tIBP) provides a method for modeling transformation-invariant features in unsegmented binary images, its current form is inappropriate for real images because of its computational cost and modeling assumptions. We combine the tIBP with likelihoods appropriate for real images and develop an efficient inference scheme, using the cross-correlation between images and features, that is theoretically and empirically faster than existing inference techniques. Our method discovers reasonable components and achieves effective image reconstruction in natural images.
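The speed-up from using "the cross-correlation between images and features" can be illustrated with a short sketch: one FFT-based correlation scores a feature against every translation of an image at once, instead of evaluating each shift separately. The scipy usage is illustrative; the paper's likelihood model is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def translation_scores(image, feature):
    """score[dy, dx] measures how well `feature` matches `image`
    when shifted by (dy, dx); correlation is convolution with the
    feature flipped along both axes."""
    return fftconvolve(image, feature[::-1, ::-1], mode="same")

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))
feature = rng.normal(size=(8, 8))
scores = translation_scores(image, feature)
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best alignment (row, col):", best)
```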
Zhe Wang, Ryan Martin (2021)
In mathematical finance, Lévy processes are widely used for their ability to model both continuous variation and abrupt, discontinuous jumps. These jumps are practically relevant, so reliable inference on the feature that controls jump frequencies and magnitudes, namely, the Lévy density, is of critical importance. A specific obstacle to carrying out model-based (e.g., Bayesian) inference in such problems is that, for general Lévy processes, the likelihood is intractable. To overcome this obstacle, here we adopt a Gibbs posterior framework that updates a prior distribution using a suitable loss function instead of a likelihood. We establish asymptotic posterior concentration rates for the proposed Gibbs posterior. In particular, in the most interesting and practically relevant case, we give conditions under which the Gibbs posterior concentrates at (nearly) the minimax optimal rate, adaptive to the unknown smoothness of the true Lévy density.
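As a generic illustration of the Gibbs-posterior idea (a loss substituted for the likelihood, tempered by a learning-rate parameter), the sketch below runs random-walk Metropolis on a toy scalar problem. The loss, prior, and learning rate are placeholders, not the paper's Lévy-density construction.

```python
import numpy as np

def gibbs_posterior_mh(loss, log_prior, theta0, omega=1.0,
                       step=0.1, n_iter=5000, rng=None):
    """Random-walk Metropolis targeting the Gibbs posterior
    pi_n(theta) ∝ exp(-omega * loss(theta)) * prior(theta);
    `omega` is the learning-rate / temperature parameter."""
    rng = np.random.default_rng(rng)
    theta, samples = theta0, []
    cur = -omega * loss(theta) + log_prior(theta)
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        new = -omega * loss(prop) + log_prior(prop)
        if np.log(rng.uniform()) < new - cur:   # Metropolis accept/reject
            theta, cur = prop, new
        samples.append(theta)
    return np.array(samples)

# Illustrative only: squared-error loss on toy data, N(0, 1) prior.
data = np.random.default_rng(1).normal(2.0, 1.0, size=50)
draws = gibbs_posterior_mh(loss=lambda t: np.sum((data - t) ** 2) / 2,
                           log_prior=lambda t: -t ** 2 / 2,
                           theta0=0.0, rng=0)
print(draws[1000:].mean())   # Gibbs posterior mean, near 2.0
```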
In the sparse normal means model, coverage of adaptive Bayesian posterior credible sets associated to spike and slab prior distributions is considered. The key sparsity hyperparameter is calibrated via marginal maximum likelihood empirical Bayes. First, adaptive posterior contraction rates are derived with respect to $d_q$-type distances for $q \leq 2$. Next, under so-called excessive-bias conditions, credible sets are constructed that have coverage of the true parameter at the prescribed $1-\alpha$ confidence level and are at the same time of optimal diameter. We also prove that these conditions cannot be significantly weakened from the minimax perspective.
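A minimal sketch of this setup, assuming a Gaussian slab for illustration (the paper's slab and calibration details may differ): observations $x_i \sim N(\theta_i, 1)$, prior $\theta_i \sim (1-w)\,\delta_0 + w\,N(0, \tau^2)$, with the sparsity weight $w$ fitted by marginal maximum likelihood and posterior inclusion probabilities read off by Bayes' rule.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

tau2 = 4.0   # illustrative slab variance

def neg_marginal_loglik(w, x):
    slab = norm.pdf(x, scale=np.sqrt(1.0 + tau2))  # marginal under the slab
    spike = norm.pdf(x)                            # marginal under the spike
    return -np.sum(np.log((1 - w) * spike + w * slab))

rng = np.random.default_rng(0)
theta = np.concatenate([rng.normal(0, 2, 10), np.zeros(190)])  # sparse truth
x = theta + rng.normal(size=theta.size)

# Empirical Bayes: maximise the marginal likelihood over the weight w.
res = minimize_scalar(neg_marginal_loglik, bounds=(1e-4, 1 - 1e-4),
                      args=(x,), method="bounded")
w_hat = res.x
slab, spike = norm.pdf(x, scale=np.sqrt(1 + tau2)), norm.pdf(x)
incl_prob = w_hat * slab / (w_hat * slab + (1 - w_hat) * spike)
print(f"estimated sparsity w = {w_hat:.3f}; "
      f"{np.sum(incl_prob > 0.5)} coordinates flagged nonzero")
```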