Latent feature models are widely used to decompose data into a small number of components. Bayesian nonparametric variants of these models, which use the Indian buffet process (IBP) as a prior over latent features, allow the number of features to be determined from the data. We present a generalization of the IBP, the distance dependent Indian buffet process (dd-IBP), for modeling non-exchangeable data. It relies on distances defined between data points, biasing nearby data to share more features. The choice of distance measure allows for many kinds of dependencies, including temporal and spatial. Further, the original IBP is a special case of the dd-IBP. In this paper, we develop the dd-IBP and theoretically characterize its feature-sharing properties. We derive a Markov chain Monte Carlo sampler for a linear Gaussian model with a dd-IBP prior and study its performance on several non-exchangeable data sets.
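As a concrete reference point, the standard IBP (the exchangeable special case recovered by the dd-IBP) can be simulated with the usual culinary construction: customer n takes each previously sampled dish k with probability m_k/n, then tries Poisson(α/n) new dishes. The sketch below is generic and is not the paper's dd-IBP or its MCMC sampler:

```python
import numpy as np

def sample_ibp(num_customers, alpha, rng=None):
    """Draw a binary feature matrix Z from the standard Indian buffet process.

    Row n records customer n's dishes: each existing dish k is taken with
    probability m_k / (n + 1), where m_k counts previous takers, and then
    Poisson(alpha / (n + 1)) brand-new dishes are sampled.
    """
    rng = np.random.default_rng(rng)
    dishes = []  # dishes[k] = list of customers who took dish k
    for n in range(num_customers):
        # revisit existing dishes in proportion to their popularity
        for takers in dishes:
            if rng.random() < len(takers) / (n + 1):
                takers.append(n)
        # sample a Poisson number of brand-new dishes
        for _ in range(rng.poisson(alpha / (n + 1))):
            dishes.append([n])
    Z = np.zeros((num_customers, len(dishes)), dtype=int)
    for k, takers in enumerate(dishes):
        Z[takers, k] = 1
    return Z
```

The dd-IBP replaces the popularity weights m_k/n with distance-based weights, so that nearby data points are more likely to share dishes.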
We propose a probabilistic model to infer supervised latent variables in the Hamming space from observed data. Our model allows simultaneous inference of the number of binary latent variables and of their values. The latent variables preserve the neighbourhood structure of the data, in the sense that objects in the same semantic concept have similar latent values, and objects in different concepts have dissimilar latent values. We formulate the supervised infinite latent variable problem based on an intuitive principle of pulling objects together if they are of the same type, and pushing them apart if they are not. We then combine this principle with a flexible Indian Buffet Process prior on the latent variables. We show that the inferred supervised latent variables can be used directly to perform a nearest neighbour search for the purpose of retrieval. We introduce a new application of dynamically extending hash codes, and show how to effectively couple the structure of the hash codes with the continuously growing structure of the neighbourhood-preserving infinite latent feature space.
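Once binary latent variables have been inferred, retrieval reduces to a nearest neighbour search under Hamming distance (the number of disagreeing bits). A minimal sketch, with hypothetical function and argument names:

```python
import numpy as np

def hamming_nn(codes, query, k=5):
    """Return indices of the k binary codes closest to `query` in Hamming distance.

    codes: (N, B) array of 0/1 hash codes for the database items;
    query: (B,) array of 0/1 bits for the query item.
    """
    # Hamming distance = number of positions where the bits disagree
    dists = np.count_nonzero(codes != query, axis=1)
    # stable sort keeps database order among ties
    return np.argsort(dists, kind="stable")[:k]
```

In practice binary codes are packed into machine words and compared with XOR and popcount, which is what makes Hamming-space retrieval fast.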
Statistical models with latent structure have a history going back to the 1950s and have seen widespread use in the social sciences and, more recently, in computational biology and in machine learning. Here we study the basic latent class model, proposed originally by the sociologist Paul F. Lazarsfeld for categorical variables, and we explain its geometric structure. We draw parallels between the statistical and geometric properties of latent class models, and we illustrate geometrically the causes of many problems associated with maximum likelihood estimation and related statistical inference. In particular, we focus on issues of non-identifiability and determination of the model dimension, on maximization of the likelihood function, and on the effect of symmetric data. We illustrate these phenomena with a variety of synthetic and real-life tables of varying dimension and complexity. Much of the motivation for this work stems from the 100 Swiss Francs problem, which we introduce and describe in detail.
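For reference, the latent class model posits that the observed categorical variables are conditionally independent given a discrete latent class, so the observed distribution is a finite mixture of product distributions. A minimal log-likelihood sketch under this standard formulation (variable names are ours):

```python
import numpy as np

def lcm_loglik(X, pi, theta):
    """Log-likelihood of categorical data under a latent class model.

    X: (N, J) integer array of categorical responses;
    pi: (H,) mixing weights over latent classes;
    theta: (H, J, C) per-class response probabilities.
    The model: p(x) = sum_h pi_h * prod_j theta[h, j, x_j].
    """
    N, J = X.shape
    # logp[n, h] = log pi_h + sum_j log theta[h, j, X[n, j]]
    logp = np.log(pi)[None, :] + sum(
        np.log(theta[:, j, X[:, j]]).T for j in range(J)
    )
    # log-sum-exp over latent classes, then sum over observations
    return np.logaddexp.reduce(logp, axis=1).sum()
```

The geometric difficulties the abstract describes (non-identifiability, multimodal likelihood) arise because this mixture surface is invariant under permuting the H classes and can be singular at boundary parameter values.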
In this paper we derive locally D-optimal designs for discrete choice experiments based on multinomial probit models. These models include several discrete explanatory variables as well as a quantitative one. The commonly used multinomial logit model assumes independent utilities for different choice options. Thus, D-optimal designs for such multinomial logit models may comprise choice sets consisting, for example, of alternatives which are identical in all discrete attributes but differ only in the quantitative variable. Obviously, such designs are not appropriate for many empirical choice experiments. It will be shown that locally D-optimal designs for multinomial probit models assuming independent utilities contain counterintuitive choice sets as well. However, locally D-optimal designs for multinomial probit models allowing for dependent utilities turn out to be reasonable for analyzing decisions in discrete choice studies.
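The D-criterion maximizes the determinant of the information matrix of the design. The sketch below shows the generic log-determinant criterion for a model matrix; for probit models the information matrix is the expected Fisher information and depends on the assumed parameter values (hence "locally" optimal), which this simplified sketch omits:

```python
import numpy as np

def d_criterion(F):
    """log-det of the information matrix F^T F for a candidate design.

    F: (n, p) model matrix of the chosen design points.  A D-optimal design
    maximises this quantity; a singular information matrix scores -inf.
    """
    sign, logdet = np.linalg.slogdet(F.T @ F)
    return logdet if sign > 0 else -np.inf
```

Degenerate choice sets, such as alternatives identical in all discrete attributes, tend to produce nearly collinear rows of F and thus a poor (or singular) D-criterion under dependent-utility models, which is the intuition behind the paper's result.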
Latent tree models are graphical models defined on trees, in which only a subset of variables is observed. They were first discussed by Judea Pearl as tree-decomposable distributions generalising star-decomposable distributions such as the latent class model. Latent tree models, or their submodels, are widely used in phylogenetic analysis, network tomography, computer vision, causal modeling, and data clustering. They also contain other well-known classes of models, such as hidden Markov models, the Brownian motion tree model, the Ising model on a tree, and many popular models used in phylogenetics. This article offers a concise introduction to the theory of latent tree models. We emphasise the role of tree metrics in the structural description of this model class, in designing learning algorithms, and in understanding fundamental limits of what can be learned, and when.
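A classical characterisation of tree metrics is the four-point condition: a dissimilarity is realisable by a tree if and only if, for every quadruple of points, the two largest of the three pairwise-sum combinations coincide. A minimal check of this condition (a generic sketch, not an algorithm from the article):

```python
import itertools
import numpy as np

def is_tree_metric(D, tol=1e-9):
    """Four-point condition check on a symmetric distance matrix D.

    D is a tree metric iff for every quadruple i, j, k, l the two largest of
    D[i,j]+D[k,l], D[i,k]+D[j,l], D[i,l]+D[j,k] are equal (up to tol).
    """
    n = D.shape[0]
    for i, j, k, l in itertools.combinations(range(n), 4):
        s = sorted([D[i, j] + D[k, l], D[i, k] + D[j, l], D[i, l] + D[j, k]])
        if s[2] - s[1] > tol:
            return False
    return True
```

Distance-based structure learning for latent trees (e.g., neighbor joining in phylogenetics) exploits exactly this property: quartet sums reveal which pairs of observed variables sit on the same side of an internal edge.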
The drawbacks in the formulations of random infinite divisibility in Sandhya (1991, 1996), Gnedenko and Korolev (1996), Klebanov and Rachev (1996), Bunge (1996), and Kozubowski and Panorska (1996) are pointed out. For any given Laplace transform, we conceive random (N) infinite divisibility with respect to a class of probability generating functions derived from the Laplace transform itself. This formulation overcomes the said drawbacks, and the class of probability generating functions is useful in transfer theorems for sums and maxima in general. Generalizing the concepts of attraction (and partial attraction) in the classical and geometric summation setups to our formulation, we show that the domains of attraction (and partial attraction) in all these setups are the same. We also establish a necessary and sufficient condition for the convergence to infinitely divisible laws from that of an N-sum, and conversely, which is an analogue of Theorem 4.6.5 in Gnedenko and Korolev (1996, p. 149). The role played by the divisibility of N and of the Laplace transform in this formulation is also discussed.
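The setup can be sketched as follows; this is our reading of the formulation, with our own notation, and the paper's exact construction may differ in detail:

```latex
% Given a Laplace transform \varphi, consider the family of probability
% generating functions indexed by a \in (0, 1):
P_a(s) \;=\; \varphi\!\bigl(a^{-1}\,\varphi^{-1}(s)\bigr),
  \qquad 0 < a < 1, \quad s \in [0, 1].
% A random variable X is N-infinitely divisible with respect to this class
% if for every a there exist i.i.d. X_1^{(a)}, X_2^{(a)}, \ldots,
% independent of a counting variable N_a with pgf P_a, such that
X \;\stackrel{d}{=}\; \sum_{i=1}^{N_a} X_i^{(a)}.
% Sanity check: the exponential transform \varphi(t) = (1+t)^{-1} has
% \varphi^{-1}(s) = (1-s)/s, giving
P_a(s) \;=\; \frac{a s}{1 - (1-a)s},
% the pgf of a geometric law on \{1, 2, \ldots\},
% so geometric infinite divisibility is recovered as a special case.
```

The geometric special case above is the verifiable anchor: substituting the exponential Laplace transform into the displayed family does yield the geometric probability generating function.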