
Bayesian Learning in Undirected Graphical Models: Approximate MCMC algorithms

Posted by Iain Murray
Publication date: 2012
Paper language: English





Bayesian learning in undirected graphical models, that is, computing posterior distributions over parameters and predictive quantities, is exceptionally difficult. We conjecture that for general undirected models, there are no tractable MCMC (Markov Chain Monte Carlo) schemes giving the correct equilibrium distribution over parameters. While this intractability, due to the partition function, is familiar to those performing parameter optimisation, Bayesian learning of posterior distributions over undirected model parameters has been unexplored and poses novel challenges. We propose several approximate MCMC schemes and test them on fully observed binary models (Boltzmann machines), using a small coronary heart disease data set and larger artificial systems. While an approximation must perform well on the model, its interaction with the sampling scheme is also important. Samplers based on variational mean-field approximations generally performed poorly; more advanced methods using loopy propagation, brief sampling, and stochastic dynamics led to acceptable parameter posteriors. Finally, we demonstrate these techniques on a Markov random field with hidden variables.
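To see where the difficulty lies, note that for a fully observed Boltzmann machine with parameters $\theta$ and sufficient statistics $s(x)$, the posterior is $p(\theta \mid D) \propto p(\theta) \prod_{n=1}^{N} e^{\theta^\top s(x_n)} / Z(\theta)^N$, so a Metropolis-Hastings acceptance ratio for a move $\theta \to \theta'$ contains the factor $(Z(\theta)/Z(\theta'))^N$: the sampler itself runs into the partition function. The sketch below is ours, not the paper's code; the $\{-1,+1\}$ parameterisation, omitted biases, and all names are illustrative. It shows one approximation in the spirit of the brief sampling the abstract mentions: a short-run importance estimate of the partition-function ratio plugged into an otherwise standard MH step.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(x, W):
    """One Gibbs sweep over a fully visible Boltzmann machine with
    log p(x) = 0.5 * x' W x - log Z(W), x in {-1,+1}^d (biases omitted)."""
    for i in range(len(x)):
        field = W[i] @ x - W[i, i] * x[i]            # input from the other units
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(x_i = +1 | rest)
        x[i] = 1 if rng.random() < p_plus else -1
    return x

def log_ratio_Z(W_new, W_old, x0, K):
    """Brief-sampling estimate of log Z(W_new) - log Z(W_old), using
    Z(W')/Z(W) = E_{x ~ p(.|W)}[exp(0.5 x'(W'-W)x)] with K deliberately
    short Gibbs runs started at a data point (hence 'brief')."""
    ws = []
    for _ in range(K):
        xk = gibbs_sweep(x0.copy(), W_old)           # a single sweep: brief
        ws.append(0.5 * xk @ (W_new - W_old) @ xk)
    ws = np.array(ws)
    return ws.max() + np.log(np.mean(np.exp(ws - ws.max())))

def mh_step(W, data, step=0.05, K=20):
    """One approximate Metropolis-Hastings update of the couplings under a
    N(0,1) prior. The exact acceptance ratio contains (Z(W)/Z(W'))^N, so we
    substitute the estimate above; detailed balance then holds only
    approximately."""
    W_prop = W + step * rng.standard_normal(W.shape)
    W_prop = 0.5 * (W_prop + W_prop.T)               # keep couplings symmetric
    np.fill_diagonal(W_prop, 0.0)
    N = len(data)
    d_loglik = 0.5 * sum(x @ (W_prop - W) @ x for x in data)
    d_logZ = log_ratio_Z(W_prop, W, data[rng.integers(N)], K)
    d_logprior = -0.5 * (np.sum(W_prop**2) - np.sum(W**2))
    if np.log(rng.random()) < d_loglik - N * d_logZ + d_logprior:
        return W_prop
    return W
```

Because the ratio is only estimated, the equilibrium distribution is perturbed, which is exactly why the interaction between approximation and sampling scheme matters in the experiments.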




Read also

Despite having various attractive qualities such as high prediction accuracy and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix Factorization has not been widely adopted because of the prohibitive cost of inference. In this paper, we propose a scalable distributed Bayesian matrix factorization algorithm using stochastic gradient MCMC. Our algorithm, based on Distributed Stochastic Gradient Langevin Dynamics, can not only match the prediction accuracy of standard MCMC methods like Gibbs sampling, but at the same time is as fast and simple as stochastic gradient descent. In our experiments, we show that our algorithm can achieve the same level of prediction accuracy as Gibbs sampling an order of magnitude faster. We also show that our method reduces the prediction error as fast as distributed stochastic gradient descent, achieving a 4.1% improvement in RMSE for the Netflix dataset and a 1.8% improvement for the Yahoo music dataset.
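As a rough illustration of the building block, here is a minimal single-machine SGLD step for probabilistic matrix factorisation, under our own notational assumptions (Gaussian likelihood with precision `tau`, Gaussian priors with precision `lam`); the paper's distributed algorithm adds parallelisation and further refinements on top of this. The key point is that the injected noise has variance equal to the step size, so the trajectory samples from (an approximation to) the posterior instead of collapsing to a point estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(U, V, batch, n_obs, eps, tau=1.0, lam=1.0):
    """One SGLD step for probabilistic matrix factorisation:
    r_ij ~ N(u_i . v_j, 1/tau), with N(0, 1/lam) priors on rows of U and V.
    `batch` is a list of observed (i, j, r) triples; `n_obs` is the total
    number of observations, used to rescale the mini-batch gradient."""
    gU = -lam * U                        # gradient of the log-prior
    gV = -lam * V
    scale = n_obs / len(batch)           # unbiased rescaling of the mini-batch
    for i, j, r in batch:
        err = r - U[i] @ V[j]
        gU[i] += scale * tau * err * V[j]
        gV[j] += scale * tau * err * U[i]
    # half step along the stochastic gradient, plus N(0, eps) injected noise:
    # this noise is what turns SGD into a posterior sampler
    U += 0.5 * eps * gU + np.sqrt(eps) * rng.standard_normal(U.shape)
    V += 0.5 * eps * gV + np.sqrt(eps) * rng.standard_normal(V.shape)
    return U, V
```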
We study batch normalisation in the context of variational inference methods in Bayesian neural networks, such as mean-field or MC Dropout. We show that batch normalisation does not affect the optimum of the evidence lower bound (ELBO). Furthermore, we study the Monte Carlo Batch Normalisation (MCBN) algorithm, proposed as an approximate inference technique parallel to MC Dropout, and show that for larger batch sizes, MCBN fails to capture epistemic uncertainty. Finally, we provide insights into what is required to fix this failure, namely having to view the mini-batch size as a variational parameter in MCBN. We comment on the asymptotics of the ELBO with respect to this variational parameter, showing that as the dataset size increases towards infinity, the batch size must increase towards infinity as well for MCBN to be a valid approximate inference technique.
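Read operationally, MCBN-style prediction looks like the sketch below, which is based on our reading of the abstract; the architecture and `batch_size` are placeholders. Batch-norm layers are kept in training mode at test time, a freshly drawn training mini-batch sets the normalisation statistics on each pass, and the spread of predictions across passes is read off as epistemic uncertainty:

```python
import torch
import torch.nn as nn

def mcbn_predict(model, x_test, x_train, batch_size=32, T=50):
    """Monte Carlo Batch Normalisation prediction: the only stochasticity
    is which training mini-batch sets the BN statistics on each pass.
    (Running-statistic updates during these passes are ignored here, and
    the test inputs ride along with the batch for simplicity.)"""
    model.train()                       # BN uses current-batch statistics
    preds = []
    with torch.no_grad():
        for _ in range(T):
            idx = torch.randperm(x_train.shape[0])[:batch_size]
            batch = torch.cat([x_train[idx], x_test])
            preds.append(model(batch)[batch_size:])  # keep test outputs only
    preds = torch.stack(preds)          # shape: (T, n_test, out_dim)
    return preds.mean(0), preds.var(0)  # predictive mean, epistemic variance

# Hypothetical regression net with one BN layer, purely for illustration.
model = nn.Sequential(nn.Linear(8, 64), nn.BatchNorm1d(64),
                      nn.ReLU(), nn.Linear(64, 1))
```

The abstract's negative result is visible here: as `batch_size` approaches the full dataset, the sampled statistics become deterministic and `preds.var(0)` collapses towards zero.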
The hardcore model on a graph $G$ with parameter $\lambda>0$ is a probability measure on the collection of all independent sets of $G$, that assigns to each independent set $I$ a probability proportional to $\lambda^{|I|}$. In this paper we consider the problem of estimating the parameter $\lambda$ given a single sample from the hardcore model on a graph $G$. To bypass the computational intractability of the maximum likelihood method, we use the maximum pseudo-likelihood (MPL) estimator, which for the hardcore model has a surprisingly simple closed-form expression. We show that for any sequence of graphs $\{G_N\}_{N\geq 1}$, where $G_N$ is a graph on $N$ vertices, the MPL estimate of $\lambda$ is $\sqrt{N}$-consistent, whenever the graph sequence has uniformly bounded average degree. We then derive sufficient conditions under which the MPL estimate of the activity parameters is $\sqrt{N}$-consistent given a single sample from a general $H$-coloring model, in which restrictions between adjacent colors are encoded by a constraint graph $H$. We verify the sufficient conditions for models where there is at least one unconstrained color, as long as the graph sequence has uniformly bounded average degree. This applies to many $H$-coloring examples, such as the Widom-Rowlinson and multi-state hard-core models. On the other hand, for the $q$-coloring model, which falls outside this class, we show that consistent estimation may be impossible even for graphs with bounded average degree. Nevertheless, we show that the MPL estimate is $\sqrt{N}$-consistent in the $q$-coloring model when $\{G_N\}_{N\geq 1}$ has bounded average double neighborhood. The presence of hard constraints, as opposed to soft constraints, leads to new challenges, and our proofs entail applications of the method of exchangeable pairs as well as combinatorial arguments that employ the probabilistic method.
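For intuition on the "surprisingly simple closed form", here is a short derivation of our own (not quoted from the paper). Conditionally on the rest of the configuration, a vertex with an occupied neighbour is forced to be empty, while an "unblocked" vertex (no occupied neighbour) is occupied with probability $\lambda/(1+\lambda)$. Writing $U$ for the number of unblocked vertices in the observed sample and $|I|$ for the number of occupied vertices (every occupied vertex is unblocked, so $|I| \leq U$), the log pseudo-likelihood reduces to

$$\log \mathrm{PL}(\lambda) = |I| \log \lambda - U \log(1+\lambda),$$

and setting the derivative to zero yields the closed form

$$\hat{\lambda}_{\mathrm{MPL}} = \frac{|I|}{U - |I|}.$$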
Graphical models are useful tools for describing structured high-dimensional probability distributions. The development of efficient algorithms for learning graphical models from the least amount of data remains an active research topic. Reconstruction of graphical models that describe the statistics of discrete variables is a particularly challenging problem, for which the maximum likelihood approach is intractable. In this work, we provide the first sample-efficient method based on the Interaction Screening framework that allows one to provably learn fully general discrete factor models with node-specific discrete alphabets and multi-body interactions, specified in an arbitrary basis. We identify a single condition related to model parametrization that leads to rigorous guarantees on the recovery of model structure and parameters in any error norm, and is readily verifiable for a large class of models. Importantly, our bounds make an explicit distinction between parameters that are proper to the model and priors used as an input to the algorithm. Finally, we show that the Interaction Screening framework includes all models previously considered in the literature as special cases, for which our analysis shows a systematic improvement in sample complexity.
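For orientation, the simplest instance of the framework is the pairwise Ising case from the original Interaction Screening literature, which the paper above generalises to arbitrary alphabets and multi-body terms. For each node $u$ one minimises the convex objective $S_n(\theta_u) = \frac{1}{n}\sum_i \exp\big(-x_u^{(i)} \sum_{j \neq u} \theta_{uj} x_j^{(i)}\big)$ plus an $\ell_1$ penalty; at the true parameters the exponent "screens" the interactions. A minimal sketch of ours, with plain subgradient descent standing in for a proper convex solver:

```python
import numpy as np

def iso_node(X, u, lam=0.01, lr=0.1, steps=500):
    """Regularised Interaction Screening for node u of a pairwise Ising
    model, X in {-1,+1}^{n x d}. Minimises S_n(theta) + lam*||theta||_1
    by plain (sub)gradient descent; S_n is convex, so this is well behaved."""
    n, d = X.shape
    others = np.delete(np.arange(d), u)
    A = -X[:, u:u+1] * X[:, others]      # row i holds -x_u^{(i)} x_j^{(i)}
    theta = np.zeros(d - 1)
    for _ in range(steps):
        w = np.exp(A @ theta)            # per-sample screening weights
        grad = (A * w[:, None]).mean(axis=0) + lam * np.sign(theta)
        theta -= lr * grad
    return others, theta                 # estimated couplings theta_{uj}
```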
In this paper, we propose an analytical method for performing tractable approximate Gaussian inference (TAGI) in Bayesian neural networks. The method enables the analytical Gaussian inference of the posterior mean vector and diagonal covariance matrix for weights and biases. The method proposed has a computational complexity of $\mathcal{O}(n)$ with respect to the number of parameters $n$, and the tests performed on regression and classification benchmarks confirm that, for the same network architecture, it matches the performance of existing methods relying on gradient backpropagation.
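The "analytical Gaussian inference" rests on propagating means and diagonal variances through the network in closed form. Below is our illustration of the forward ingredient for a single linear layer, using the exact moments of a product of independent Gaussians; it is not the paper's full scheme, which also performs the analytical backward update of the weight posterior:

```python
import numpy as np

def linear_gaussian_forward(mx, vx, mW, vW, mb, vb):
    """Moments of z = W x + b when the entries of W, x and b are mutually
    independent Gaussians with elementwise means m* and variances v*
    (diagonal covariances throughout). Uses the exact identity
    Var(w*x) = vW*vx + vW*mx^2 + mW^2*vx for independent w and x."""
    mz = mW @ mx + mb
    vz = vW @ vx + vW @ (mx ** 2) + (mW ** 2) @ vx + vb
    return mz, vz
```

Each moment pass touches every weight a constant number of times, consistent with the stated $\mathcal{O}(n)$ scaling.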
