Circular variables arise in a multitude of data-modelling contexts, ranging from robotics to the social sciences, but they have been largely overlooked by the machine learning community. This paper partially redresses this imbalance by extending some standard probabilistic modelling tools to the circular domain. First, we introduce a new multivariate distribution over circular variables, called the multivariate Generalised von Mises (mGvM) distribution. This distribution can be constructed by restricting and renormalising a general multivariate Gaussian distribution to the unit hyper-torus. Previously proposed multivariate circular distributions are shown to be special cases of this construction. Second, we introduce a new probabilistic model for circular regression that is inspired by Gaussian Processes, and a method for probabilistic principal component analysis with circular hidden variables. These models can leverage standard modelling tools (e.g. covariance functions and methods for automatic relevance determination). Third, we show that the posterior distribution in these models is an mGvM distribution, which enables the development of an efficient variational free-energy scheme for performing approximate inference and approximate maximum-likelihood learning.
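To make the restrict-and-renormalise construction concrete, here is a minimal NumPy sketch (the function name and the per-angle (cos, sin) embedding convention are our own illustration, not the paper's code): each angle is mapped to a point on the unit circle, and the underlying Gaussian's log-density is evaluated there up to the normalising constant.

```python
import numpy as np

def mgvm_log_density_unnorm(theta, mu, Sigma):
    """Unnormalised log-density of a Gaussian restricted to the unit
    hyper-torus, evaluated at angles theta (shape (d,)).

    Each angle theta_i is embedded as (cos theta_i, sin theta_i),
    giving a vector x in R^{2d}; mu (shape (2d,)) and Sigma
    (shape (2d, 2d)) are the mean and covariance of the Gaussian.
    """
    x = np.ravel(np.column_stack([np.cos(theta), np.sin(theta)]))
    r = x - mu
    return -0.5 * r @ np.linalg.solve(Sigma, r)
```

The intractable part, as the abstract notes, is the normaliser over the hyper-torus, which is what motivates the variational free-energy scheme.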
Deheuvels [J. Multivariate Anal. 11 (1981) 102--113] and Genest and Rémillard [Test 13 (2004) 335--369] have shown that powerful rank tests of multivariate independence can be based on combinations of asymptotically independent Cramér--von Mises
Robust estimation of the location and concentration parameters of the von Mises-Fisher distribution is discussed. A key reparametrisation is achieved by expressing the two parameters as one vector in Euclidean space. With this representation, we fir
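One natural way to realise such a reparametrisation (whether this matches the paper's exact construction is an assumption on our part) is to fold the unit mean direction mu and the concentration kappa > 0 into the single Euclidean vector eta = kappa * mu, which is recoverable by taking the norm and the direction:

```python
import numpy as np

def vmf_to_euclidean(mu, kappa):
    # fold the unit mean direction mu and concentration kappa > 0
    # into a single unconstrained vector eta = kappa * mu
    return kappa * np.asarray(mu, dtype=float)

def vmf_from_euclidean(eta):
    # recover (mu, kappa): kappa is the norm, mu the unit direction
    kappa = np.linalg.norm(eta)
    return eta / kappa, kappa
```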
This work develops a rigorous theoretical basis for the fact that deep Bayesian neural networks (BNNs) are an effective tool for high-dimensional variable selection with rigorous uncertainty quantification. We develop new Bayesian non-parametric theorems
We revisit empirical Bayes in the absence of a tractable likelihood function, as is typical in scientific domains relying on computer simulations. We investigate how the empirical Bayesian can make use of neural density estimators first to use all no
Let $F_N$ and $F$ be the empirical and limiting spectral distributions of an $N \times N$ Wigner matrix. The Cramér-von Mises (CvM) statistic is a classical goodness-of-fit statistic that characterizes the distance between $F_N$ and $F$ in $\ell^2$-norm
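For reference, the classical one-sample CvM statistic $W^2 = n \int (F_n(x) - F(x))^2 \, dF(x)$ admits a closed form over the order statistics, which the sketch below computes against a known CDF (the function name and SciPy usage are illustrative; the paper's spectral setting, comparing $F_N$ with the limiting law $F$, is not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def cramer_von_mises(sample, cdf):
    """One-sample Cramer-von Mises statistic
    W^2 = 1/(12n) + sum_i (F(x_(i)) - (2i - 1)/(2n))^2."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    u = cdf(x)                       # F at the order statistics
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)

# e.g. testing a standard-normal sample against the normal CDF:
# cramer_von_mises(np.random.randn(100), norm.cdf)
```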