In this paper, we develop asymptotic theory for a class of latent variable models for large-scale multi-relational networks. In particular, we establish consistency results and asymptotic error bounds for the (penalized) maximum likelihood estimators as the size of the network tends to infinity. The key technique is to derive a non-asymptotic error bound for the maximum likelihood estimators through a large-deviations analysis of random fields. We also show that these estimators are nearly optimal in terms of minimax risk.
We study the statistical properties of stochastic evolution equations driven by space-only noise, either additive or multiplicative. While forward problems, such as existence, uniqueness, and regularity of the solution, for such equations have been studied, little is known about inverse problems for these equations. We exploit the somewhat unusual structure of the observations coming from these equations that leads to an interesting interplay between classical and non-traditional statistical models. We derive several types of estimators for the drift and/or diffusion coefficients of these equations, and prove their relevant properties.
This work presents a statistical analysis of a class of jointly optimized beamformer-assisted acoustic echo cancelers (AECs), with the beamformer (BF) implemented in the Generalized Sidelobe Canceler (GSC) form and adapted using the least-mean-square (LMS) algorithm. The analysis considers the possibility of independent convergence control for the BF and the AEC. The resulting models permit the study of system performance under typical handling of double-talk and channel changes. We show that the joint optimization of the BF-AEC is equivalent to a linearly constrained minimum variance problem; hence, the derived analytical model can be used to predict the transient performance of general adaptive wideband beamformers. We study the transient and steady-state behavior of the residual mean echo power for stationary Gaussian inputs. A convergence analysis leads to stability bounds for the step-size matrix, and design guidelines are derived from the analytical models. Monte Carlo simulations illustrate the accuracy of the theoretical models and the applicability of the proposed design guidelines, including operation under mild degrees of nonstationarity. Finally, we show how a high convergence rate can be achieved using a quasi-Newton adaptation scheme in which the step-size matrix is designed to whiten the combined input vector.
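As a rough illustration of the adaptation structure described in this abstract (not the paper's actual BF-AEC implementation), the sketch below contrasts an LMS update with a fixed step-size matrix against a quasi-Newton variant whose step-size matrix is a scaled inverse of an estimated input autocorrelation matrix, so that the correlated input vector is effectively whitened. The filter length, forgetting factor, regularization, and signal model are all assumptions chosen for the example.

```python
# Hedged sketch: matrix step-size LMS vs. a quasi-Newton variant whose
# step-size matrix whitens the (correlated) input via an inverse
# autocorrelation estimate. All constants are illustrative assumptions.
import numpy as np

def lms_step(w, u, d, mu_matrix):
    """One adaptation step: w <- w + MU u e, with a matrix step size MU."""
    e = d - w @ u                       # a priori error (residual-echo analogue)
    return w + mu_matrix @ u * e, e

rng = np.random.default_rng(0)
M = 8                                     # length of the combined coefficient vector
L = np.tril(rng.standard_normal((M, M)))  # mixing matrix -> correlated Gaussian input
w_true = rng.standard_normal(M)           # unknown system to identify
w_lms, w_qn = np.zeros(M), np.zeros(M)
R = 1e-3 * np.eye(M)                      # running estimate of E[u u^T]
mu_fixed = 0.02 * np.eye(M)               # fixed (scaled-identity) step-size matrix

for n in range(5000):
    u = L @ rng.standard_normal(M)        # stationary correlated Gaussian input vector
    d = w_true @ u + 0.01 * rng.standard_normal()
    R = 0.99 * R + 0.01 * np.outer(u, u)
    mu_qn = 0.02 * np.linalg.inv(R + 1e-3 * np.eye(M))  # whitening step-size matrix
    w_lms, _ = lms_step(w_lms, u, d, mu_fixed)
    w_qn, _ = lms_step(w_qn, u, d, mu_qn)

print("fixed-step LMS error:", np.linalg.norm(w_true - w_lms))
print("quasi-Newton error:  ", np.linalg.norm(w_true - w_qn))
```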
Many, if not most, network analysis algorithms have been designed specifically for single-relational networks; that is, networks in which all edges are of the same type. For example, edges may represent friendship, kinship, or collaboration, but not a combination of these in the same network. In contrast, a multi-relational network has a heterogeneous set of edge labels and can therefore represent relationships of various types in a single data structure. While multi-relational networks are more expressive in the variety of relationships they can capture, there is a need for a general framework for transferring the many single-relational network analysis algorithms to the multi-relational domain. It is not sufficient to execute a single-relational network analysis algorithm on a multi-relational network by simply ignoring edge labels. This article presents an algebra for mapping multi-relational networks to single-relational networks, thereby exposing them to single-relational network analysis algorithms.
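As a small, hedged illustration of the general idea (one possible mapping, not the article's full algebra, which is richer than what is shown here), the sketch below stores a multi-relational network as one adjacency matrix per edge label and merges a chosen set of labels into a single-relational adjacency matrix that standard algorithms can consume. The function and variable names are assumptions made for the example.

```python
# Illustrative sketch: merge selected edge labels of a multi-relational
# network into one single-relational adjacency matrix.
import numpy as np

def to_single_relational(adjacency_by_label, labels, weights=None):
    """Combine the adjacency matrices of the chosen edge labels into one matrix."""
    weights = weights or {lab: 1.0 for lab in labels}
    combined = sum(weights[lab] * adjacency_by_label[lab] for lab in labels)
    return (combined > 0).astype(int)   # unweighted single-relational network

# Toy network over 3 vertices with two edge labels.
A = {
    "friendship":    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]]),
    "collaboration": np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]]),
}
print(to_single_relational(A, ["friendship", "collaboration"]))
```

The resulting matrix can then be handed to any single-relational algorithm (e.g., a centrality or clustering routine) without that algorithm needing to know about edge labels.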
There are various approaches to the problem of how one is supposed to conduct a statistical analysis. In some problems, different analyses can lead to contradictory conclusions, which is not a satisfactory state of affairs. All approaches seem to appeal to the evidence in the data concerning the questions of interest as a justification for the methodology employed. It is fair to say, however, that none of the most commonly used methodologies is explicit about how statistical evidence is to be characterized and measured. We discuss the general problem of statistical reasoning and the development of a theory for it that is based on being precise about statistical evidence. This is shown to lead to the resolution of a number of problems.
We study the problem of exact support recovery from noisy observations and present Refined Least Squares (RLS). Given noisy measurements $$\mathbf{y} = \mathbf{X}\boldsymbol{\theta}^* + \boldsymbol{\omega},$$ where $\mathbf{X} \in \mathbb{R}^{N \times D}$ is a (known) Gaussian matrix and $\boldsymbol{\omega} \in \mathbb{R}^N$ is an (unknown) Gaussian noise vector, our goal is to recover the support of the (unknown) sparse vector $\boldsymbol{\theta}^* \in \{-1,0,1\}^D$. To recover the support of $\boldsymbol{\theta}^*$, we use an average of multiple least squares solutions, each computed from a subset of the full set of equations. The support is estimated by identifying the most significant coefficients of the averaged least squares solution. We demonstrate that in a wide variety of settings our method outperforms state-of-the-art support recovery algorithms.
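The following sketch illustrates the subset-averaging idea described above under assumed choices (subsets of half the equations, a fixed number of subsets, and thresholding by the largest magnitudes at a known sparsity level); the paper's exact Refined Least Squares procedure may differ in these details.

```python
# Hedged sketch of subset-averaged least squares for support recovery.
# Subset size, number of subsets, and thresholding rule are assumptions.
import numpy as np

def rls_support_estimate(X, y, sparsity, n_subsets=10, rng=None):
    """Estimate supp(theta*) by averaging least squares solutions on row subsets."""
    rng = np.random.default_rng(rng)
    N, D = X.shape
    avg = np.zeros(D)
    for _ in range(n_subsets):
        rows = rng.choice(N, size=N // 2, replace=False)   # subset of the equations
        sol, *_ = np.linalg.lstsq(X[rows], y[rows], rcond=None)
        avg += sol
    avg /= n_subsets
    # Support = indices of the most significant averaged coefficients.
    return np.sort(np.argsort(np.abs(avg))[-sparsity:])

# Synthetic example: theta* has entries in {-1, 0, 1}.
rng = np.random.default_rng(0)
N, D, K = 200, 50, 5
theta = np.zeros(D)
support = rng.choice(D, K, replace=False)
theta[support] = rng.choice([-1.0, 1.0], K)
X = rng.standard_normal((N, D))
y = X @ theta + 0.5 * rng.standard_normal(N)
print(rls_support_estimate(X, y, K), np.sort(support))
```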