
Void Probabilities and Cauchy-Schwarz Divergence for Generalized Labeled Multi-Bernoulli Models

Published by: Michael Beard
Publication date: 2015
Research field: Mathematical Statistics
Paper language: English





The generalized labeled multi-Bernoulli (GLMB) is a family of tractable models that alleviates the limitations of the Poisson family in dynamic Bayesian inference of point processes. In this paper, we derive closed-form expressions for the void probability functional and the Cauchy-Schwarz divergence for GLMBs. The proposed analytic void probability functional is a necessary and sufficient statistic that uniquely characterizes a GLMB, while the proposed analytic Cauchy-Schwarz divergence provides a tractable measure of similarity between GLMBs. We demonstrate the use of both results on a partially observed Markov decision process for GLMBs, with a Cauchy-Schwarz divergence-based reward and a void probability constraint.
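For reference, the two quantities named above have standard definitions for general point processes. The notation below is generic (a schematic summary under assumed symbols, not the paper's closed-form GLMB expressions):

```latex
% Void probability functional of a point process \Phi:
% the probability that a region S contains no points.
\[
  B(S) \;=\; \Pr\{\Phi(S) = 0\}, \qquad S \subseteq \mathcal{X}.
\]
% Cauchy-Schwarz divergence between two probability densities f and g:
\[
  D_{\mathrm{CS}}(f, g)
  \;=\; -\ln \frac{\int f(x)\, g(x)\, dx}
                  {\sqrt{\int f(x)^{2}\, dx \;\int g(x)^{2}\, dx}}
  \;\geq\; 0,
\]
% with equality to zero if and only if f = g almost everywhere.
```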




Read also

A multiple maneuvering target system can be viewed as a Jump Markov System (JMS), in the sense that each target's movement is modeled by switching between several motion models, with the transitions between models following a Markov chain. This paper describes a Generalized Labelled Multi-Bernoulli (GLMB) filter for tracking maneuvering targets whose movement can be modeled via such a JMS. The proposed filter is validated on two maneuvering target tracking examples, one linear and one nonlinear.
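As an illustration of the jump Markov system idea in the abstract above, here is a minimal sketch assuming two standard motion models (nearly-constant velocity and coordinated turn) and an assumed model-switching matrix; the model names, parameters, and noise levels are illustrative choices, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a jump Markov system for a single maneuvering target.
# The active motion model switches according to a Markov chain, and the
# state evolves under the currently active model.

rng = np.random.default_rng(0)
DT = 1.0  # sampling period [s]

def constant_velocity(x):
    """Nearly-constant-velocity model: state = [px, py, vx, vy]."""
    F = np.array([[1, 0, DT, 0],
                  [0, 1, 0, DT],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    return F @ x + rng.normal(0, 0.1, size=4)

def coordinated_turn(x, omega=0.2):
    """Constant-turn-rate model with turn rate omega [rad/s]."""
    s, c = np.sin(omega * DT), np.cos(omega * DT)
    F = np.array([[1, 0,  s / omega, -(1 - c) / omega],
                  [0, 1, (1 - c) / omega,   s / omega],
                  [0, 0,  c, -s],
                  [0, 0,  s,  c]], dtype=float)
    return F @ x + rng.normal(0, 0.1, size=4)

MODELS = [constant_velocity, coordinated_turn]

# Markov chain over motion models: row i gives the switching
# probabilities from model i (assumed values for illustration).
TRANSITION = np.array([[0.9, 0.1],
                       [0.1, 0.9]])

def simulate(x0, steps=20, model=0):
    """Simulate one target trajectory under the jump Markov system."""
    x, traj = np.asarray(x0, dtype=float), []
    for _ in range(steps):
        model = rng.choice(len(MODELS), p=TRANSITION[model])  # model jump
        x = MODELS[model](x)                                   # state update
        traj.append((model, x.copy()))
    return traj

trajectory = simulate([0.0, 0.0, 1.0, 0.5])
```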
Recent work in unsupervised learning has focused on efficient inference and learning in latent variable models. Training these models by maximizing the evidence (marginal likelihood) is typically intractable. Thus, a common approximation is to maximize the Evidence Lower BOund (ELBO) instead. Variational autoencoders (VAE) are a powerful and widely used class of generative models that optimize the ELBO efficiently for large datasets. However, the VAE's default Gaussian choice for the prior imposes a strong constraint on its ability to represent the true posterior, thereby degrading overall performance. A Gaussian mixture model (GMM) would be a richer prior, but cannot be handled efficiently within the VAE framework because of the intractability of the Kullback-Leibler divergence for GMMs. We deviate from the common VAE framework in favor of one with an analytical solution for a Gaussian mixture prior. To perform efficient inference for GMM priors, we introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs. This new objective allows us to incorporate richer, multi-modal priors into the autoencoding framework. We provide empirical studies on a range of datasets and show that our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
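The key ingredient here is that the Cauchy-Schwarz divergence is available in closed form for Gaussian mixtures, because the product of two Gaussian densities integrates to another Gaussian evaluation. A minimal sketch of that computation, with toy mixtures and variable names chosen purely for illustration (not the paper's code), might look like this:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Analytic Cauchy-Schwarz divergence between two Gaussian mixture models,
# using the identity  ∫ N(x; m1, S1) N(x; m2, S2) dx = N(m1; m2, S1 + S2),
# which makes every cross term available in closed form.

def gmm_cross_integral(weights_a, means_a, covs_a, weights_b, means_b, covs_b):
    """Compute ∫ p(x) q(x) dx for two GMMs p and q."""
    total = 0.0
    for wa, ma, Sa in zip(weights_a, means_a, covs_a):
        for wb, mb, Sb in zip(weights_b, means_b, covs_b):
            total += wa * wb * multivariate_normal.pdf(ma, mean=mb, cov=Sa + Sb)
    return total

def cauchy_schwarz_divergence(p, q):
    """D_CS(p, q) = -log ∫pq + 0.5 log ∫p² + 0.5 log ∫q²  (always >= 0)."""
    pq = gmm_cross_integral(*p, *q)
    pp = gmm_cross_integral(*p, *p)
    qq = gmm_cross_integral(*q, *q)
    return -np.log(pq) + 0.5 * np.log(pp) + 0.5 * np.log(qq)

# Toy two-component GMMs in 2-D: (weights, means, covariances).
p = ([0.5, 0.5],
     [np.zeros(2), np.array([2.0, 0.0])],
     [np.eye(2), np.eye(2)])
q = ([0.7, 0.3],
     [np.array([0.5, 0.5]), np.array([2.5, -0.5])],
     [np.eye(2), 2.0 * np.eye(2)])

print(cauchy_schwarz_divergence(p, q))   # small positive value
print(cauchy_schwarz_divergence(p, p))   # ~0, since the mixtures coincide
```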
We introduce a notion of complexity for systems of linear forms called sequential Cauchy-Schwarz complexity, which is parametrized by two positive integers $k,\ell$ and refines the notion of Cauchy-Schwarz complexity introduced by Green and Tao. We prove that if a system of linear forms has sequential Cauchy-Schwarz complexity at most $(k,\ell)$ then any average of 1-bounded functions over this system is controlled by the $2^{1-\ell}$-th power of the Gowers $U^{k+1}$-norms of the functions. For $\ell=1$ this agrees with Cauchy-Schwarz complexity, but for $\ell>1$ there are families of systems that have sequential Cauchy-Schwarz complexity at most $(k,\ell)$ whereas their Cauchy-Schwarz complexity is greater than $k$. For instance, for $p$ prime and $k\in \mathbb{N}$, the system of forms $\big\{\phi_{z_1,z_2}(x,t_1,t_2)= x+z_1 t_1+z_2 t_2 \;\big|\; z_1,z_2\in [0,p-1],\ z_1+z_2<k\big\}$ can be viewed as a $2$-dimensional analogue of arithmetic progressions of length $k$. We prove that this system has sequential Cauchy-Schwarz complexity at most $(k-2,\ell)$ for some $\ell=O_{k,p}(1)$, even for $p<k$, whereas its Cauchy-Schwarz complexity can be strictly greater than $k-2$. In fact we prove this for the $M$-dimensional analogues of these systems for any $M\geq 2$, obtaining polynomial true-complexity bounds for these and other families of systems. In a separate paper, we use these results to give a new proof of the inverse theorem for Gowers norms on vector spaces $\mathbb{F}_p^n$, and applications concerning ergodic actions of $\mathbb{F}_p^{\omega}$.
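Schematically, and with notation assumed here rather than taken from the paper (in particular the precise quantifier over the functions), the control statement in the abstract takes a form like the following:

```latex
% If the system (\phi_1,\dots,\phi_m) has sequential Cauchy-Schwarz
% complexity at most (k,\ell), then for 1-bounded functions f_1,\dots,f_m:
\[
  \Bigl|\, \mathbb{E}_{x}\; \prod_{i=1}^{m} f_i\bigl(\phi_i(x)\bigr) \Bigr|
  \;\le\; \min_{1 \le i \le m} \; \| f_i \|_{U^{k+1}}^{\,2^{1-\ell}} ,
\]
% so that \ell = 1 recovers the usual Cauchy-Schwarz complexity bound.
```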
BDSAR is an R package which estimates distances between probability distributions and facilitates a dynamic and powerful analysis of diagnostics for Bayesian models from the class of Simultaneous Autoregressive (SAR) spatial models. The package offers a new, refined plot for comparing models and works intuitively, allowing any analyst to easily build such plots. These are helpful for gaining insight into influential observations in the data.
In applications of imprecise probability, analysts must compute lower (or upper) expectations, defined as the infimum of an expectation over a set of parameter values. Monte Carlo methods consistently approximate expectations at fixed parameter values, but can be costly to implement in a grid search to locate minima over large subsets of the parameter space. We investigate the use of stochastic iterative root-finding methods for efficiently computing lower expectations. In two examples we illustrate the use of various stochastic approximation methods and demonstrate their superior performance in comparison to grid search.
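A minimal sketch contrasting the two approaches on a toy lower-expectation problem, assuming a simple Gaussian parametric model and a Robbins-Monro style update (illustrative only, not the methods or examples studied in the paper):

```python
import numpy as np

# Toy lower expectation:  E_low[f] = inf_theta E_{X ~ N(theta, 1)}[ X^2 ],
# with theta restricted to the interval [-1, 2]. The model, step sizes,
# and sample sizes are assumptions chosen for demonstration.

rng = np.random.default_rng(1)
f = lambda x: x ** 2
LOW, HIGH = -1.0, 2.0          # parameter interval

def mc_expectation(theta, n=2_000):
    """Monte Carlo estimate of E[f(X)] at a fixed parameter value."""
    return f(rng.normal(theta, 1.0, size=n)).mean()

# --- Grid search: accurate but needs a Monte Carlo run per grid point. ------
grid = np.linspace(LOW, HIGH, 61)
grid_lower = min(mc_expectation(t) for t in grid)

# --- Stochastic approximation: noisy gradient steps on theta. ---------------
# The gradient is estimated by a finite difference of single-sample
# evaluations, with a decreasing Robbins-Monro step size.
theta = 1.5
for k in range(1, 2_000):
    step = 0.5 / k
    eps = 0.1
    grad = (f(rng.normal(theta + eps, 1.0)) -
            f(rng.normal(theta - eps, 1.0))) / (2 * eps)
    theta = np.clip(theta - step * grad, LOW, HIGH)   # projected update

sa_lower = mc_expectation(theta)

print(f"grid search lower expectation ~ {grid_lower:.3f}")
print(f"stochastic approximation: theta ~ {theta:.3f}, "
      f"lower expectation ~ {sa_lower:.3f}")
```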
