
Sampling from manifold-restricted distributions using tangent bundle projections

Published by: Alvin Chua
Publication date: 2018
Paper language: English
Author: Alvin J. K. Chua





A common problem in Bayesian inference is the sampling of target probability distributions at sufficient resolution and accuracy to estimate the probability density, and to compute credible regions. Often by construction, many target distributions can be expressed as some higher-dimensional closed-form distribution with parametrically constrained variables, i.e., one that is restricted to a smooth submanifold of Euclidean space. I propose a derivative-based importance sampling framework for such distributions. A base set of $n$ samples from the target distribution is used to map out the tangent bundle of the manifold, and to seed $nm$ additional points that are projected onto the tangent bundle and weighted appropriately. The method essentially acts as an upsampling complement to any standard algorithm. It is designed for the efficient production of approximate high-resolution histograms from manifold-restricted Gaussian distributions, and can provide large computational savings when sampling directly from the target distribution is expensive.
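The upsampling idea described above can be sketched on a toy problem: a 2-D Gaussian restricted to the unit circle. All names here (`ambient_logpdf`, the noise scale, the way base samples are drawn) are illustrative assumptions, not the paper's implementation; the point is only the mechanics of seeding $nm$ points along tangent directions and weighting them by the closed-form ambient density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a 2-D Gaussian restricted to the unit circle (the manifold).
# The unnormalised closed-form ambient log-density:
def ambient_logpdf(x, mu=np.array([1.0, 0.0]), sigma=0.5):
    return -np.sum((x - mu) ** 2, axis=-1) / (2.0 * sigma**2)

# Base set of n samples on the manifold. Here they are drawn cheaply for
# illustration; in practice they come from any standard sampler.
n, m = 200, 10
theta = rng.uniform(-np.pi, np.pi, n)
base = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Tangent direction at each base point (derivative of the circle embedding).
tangent = np.stack([-np.sin(theta), np.cos(theta)], axis=1)

# Seed m extra points per base sample along each tangent line, then weight
# them by the ambient density (a crude importance weight).
eps = rng.normal(0.0, 0.1, (n, m, 1))
upsampled = base[:, None, :] + eps * tangent[:, None, :]
weights = np.exp(ambient_logpdf(upsampled.reshape(-1, 2)))
weights /= weights.sum()
```

The `n` base samples thus yield `n * m` weighted points that can be histogrammed at higher resolution without further calls to the (potentially expensive) exact sampler.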


Read also

Kamran Javid (2020)
Metropolis Hastings nested sampling evolves a Markov chain, accepting new points along the chain according to a version of the Metropolis Hastings acceptance ratio, which has been modified to satisfy the nested sampling likelihood constraint. The geometric nested sampling algorithm I present here is based on the Metropolis Hastings method, but treats parameters as though they represent points on certain geometric objects, namely circles, tori and spheres. For parameters which represent points on a circle or torus, the trial distribution is wrapped around the domain of the posterior distribution such that samples cannot be rejected automatically when evaluating the Metropolis ratio due to being outside the sampling domain. Furthermore, this enhances the mobility of the sampler. For parameters which represent coordinates on the surface of a sphere, the algorithm transforms the parameters into a Cartesian coordinate system before sampling, which again ensures no samples are automatically rejected, and provides a physically intuitive way of sampling the parameter space.
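The wrapped-proposal idea for circular parameters can be sketched as follows. This is a plain Metropolis chain (not the nested-sampling variant), with an assumed von Mises-like target; `wrap`, the step size, and the target are all illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def wrap(theta):
    """Wrap an angle into [-pi, pi), so trial points never fall outside
    the sampling domain and are never auto-rejected for that reason."""
    return (theta + np.pi) % (2.0 * np.pi) - np.pi

# Unnormalised von Mises-like log-posterior on the circle, peaked at 0.
def logpost(theta, kappa=2.0):
    return kappa * np.cos(theta)

# Metropolis chain with a Gaussian proposal wrapped around the domain.
theta, chain = 0.0, []
for _ in range(5000):
    prop = wrap(theta + rng.normal(0.0, 1.0))
    if np.log(rng.uniform()) < logpost(prop) - logpost(theta):
        theta = prop
    chain.append(theta)
chain = np.asarray(chain)
```

Because the proposal is wrapped rather than truncated, moves that would cross the domain boundary land on the far side of the circle instead of being discarded, which is what improves the sampler's mobility.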
Manifold hypotheses are typically used for tasks such as dimensionality reduction, interpolation, or improving classification performance. In the less common problem of manifold estimation, the task is to characterize the geometric structure of the manifold in the original ambient space from a sample. We focus on the role that tangent bundle learners (TBL) can play in estimating the underlying manifold from which data is assumed to be sampled. Since the unbounded tangent spaces natively represent a poor manifold estimate, the problem reduces to one of estimating regions in the tangent space where it acts as a relatively faithful linear approximator to the surface of the manifold. Local PCA methods, such as the Mixtures of Probabilistic Principal Component Analyzers method of Tipping and Bishop, produce a subset of the tangent bundle of the manifold along with an assignment function that assigns points in the training data used by the TBL to elements of the estimated tangent bundle. We formulate three methods that use the data assigned to each tangent space to estimate the underlying bounded subspaces for which the tangent space is a faithful estimate of the manifold, and offer thoughts on how this perspective is theoretically grounded in the manifold assumption. We seek to explore the conceptual and technical challenges that arise in trying to utilize simple TBL methods to arrive at reliable estimates of the underlying manifold.
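A minimal local-PCA tangent estimate, in the spirit of the tangent bundle learners described above, can be sketched as follows. The k-nearest-neighbour scheme and the noisy circle data are assumptions for illustration only; they stand in for the mixture-model assignment function used by methods like MPPCA.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy sample from a 1-D manifold (the unit circle) in 2-D ambient space.
t = rng.uniform(0.0, 2.0 * np.pi, 500)
X = np.stack([np.cos(t), np.sin(t)], axis=1) + rng.normal(0.0, 0.02, (500, 2))

def local_tangent(X, center, k=30):
    """Estimate the tangent direction at `center` by PCA on its k nearest
    sample points: the leading principal direction of the local covariance
    spans the estimated 1-D tangent space."""
    idx = np.argsort(np.sum((X - center) ** 2, axis=1))[:k]
    nbrs = X[idx]
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, -1]                 # leading eigenvector

v = local_tangent(X, np.array([1.0, 0.0]))
# At (1, 0) the true tangent of the circle points along (0, +/-1).
```

The unboundedness problem the abstract mentions is visible here: this tangent line is a good linear surrogate for the circle only within a neighbourhood of `(1, 0)`, and estimating where that neighbourhood ends is exactly the task the three proposed methods address.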
In real-world Bayesian inference applications, prior assumptions regarding the parameters of interest may be unrepresentative of their actual values for a given dataset. In particular, if the likelihood is concentrated far out in the wings of the assumed prior distribution, this can lead to extremely inefficient exploration of the resulting posterior by nested sampling algorithms, with unnecessarily high associated computational costs. Simple solutions such as broadening the prior range in such cases might not be appropriate or possible in real-world applications, for example when one wishes to assume a single standardised prior across the analysis of a large number of datasets for which the true values of the parameters of interest may vary. This work therefore introduces a posterior repartitioning (PR) method for nested sampling algorithms, which addresses the problem by redefining the likelihood and prior while keeping their product fixed, so that the posterior inferences and evidence estimates remain unchanged but the efficiency of the nested sampling process is significantly increased. Numerical results show that the PR method provides a simple yet powerful refinement for nested sampling algorithms to address the issue of unrepresentative priors.
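The core invariant of posterior repartitioning, that the likelihood-prior product is unchanged, can be sketched with a tempered prior. This is a simplified, unnormalised illustration (in actual nested sampling the new prior must be normalised, with the constant absorbed into the likelihood); `repartition`, `beta`, and the toy densities are all assumptions of this sketch.

```python
import numpy as np

def repartition(loglike, logprior, beta=0.5):
    """Return a new (loglike', logprior') pair satisfying, for every x,
    loglike'(x) + logprior'(x) == loglike(x) + logprior(x),
    so the posterior and evidence are unchanged. The new prior is a
    tempered power of the old one (broader for beta < 1); the factor
    removed from the prior is absorbed into the likelihood."""
    def new_logprior(x):
        return beta * logprior(x)
    def new_loglike(x):
        return loglike(x) + (1.0 - beta) * logprior(x)
    return new_loglike, new_logprior

# Toy example: a likelihood peaked at x = 5, far in the wings of a
# narrow prior centred at 0 (the unrepresentative-prior scenario).
loglike = lambda x: -0.5 * (x - 5.0) ** 2
logprior = lambda x: -0.5 * x ** 2
new_loglike, new_logprior = repartition(loglike, logprior, beta=0.2)
```

With `beta = 0.2` the effective prior is five times wider in variance, so the nested sampler's live points reach the likelihood peak far sooner, while every posterior quantity is provably identical.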
Rui Albuquerque (2016)
We find a new class of invariant metrics existing on the tangent bundle of any given almost-Hermitian manifold. We focus here on the case of Riemannian surfaces, which yield new examples of Kählerian Ricci-flat manifolds in four real dimensions.