Conditional autoregressive (CAR) models are commonly used to capture spatial correlation in areal unit data, and are typically specified as a prior distribution for a set of random effects, as part of a hierarchical Bayesian model. The spatial correlation structure induced by these models is determined by geographical adjacency, so that two areas have correlated random effects if they share a common border. However, this correlation structure is too simplistic for real data, which are instead likely to include sub-regions of strong correlation as well as locations at which the response exhibits a step-change. This paper therefore proposes an extension to CAR priors that can capture such localised spatial correlation. The proposed approach takes the form of an iterative algorithm, which sequentially updates the spatial correlation structure in the data while estimating the remaining model parameters. The efficacy of the approach is assessed by simulation, and its utility is illustrated in a disease mapping context, using data on respiratory disease risk in Greater Glasgow, Scotland.
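As a point of reference, the following is a minimal sketch of the standard proper CAR prior that such extensions build on, assuming the usual precision parameterisation Q = tau(D - rho W) with a binary border-sharing adjacency matrix W; the paper's iterative re-estimation of the adjacency structure is not reproduced here.

```python
import numpy as np

def car_precision(W, tau=1.0, rho=0.9):
    """Precision matrix of a proper CAR prior, Q = tau * (D - rho * W),
    where W is a symmetric 0/1 adjacency matrix (W[i, j] = 1 when areas
    i and j share a border) and D = diag(row sums of W)."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Toy map of 4 areas in a row: each area borders only its neighbours.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = car_precision(W)

# One draw of the random effects phi ~ N(0, Q^{-1}).
rng = np.random.default_rng(0)
phi = rng.multivariate_normal(np.zeros(len(W)), np.linalg.inv(Q))
```

Under this prior, the correlation between two areas' random effects is driven entirely by whether they share a border, which is exactly the rigidity the proposed extension relaxes.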
This paper is devoted to adaptive long autoregressive spectral analysis when (i) very few data are available and (ii) prior information exists concerning the spectral smoothness and time continuity of the analyzed signals. The contribution builds on two papers by Kitagawa and Gersch: the first deals with spectral smoothness in a regularization framework, while the second is devoted to time continuity in the Kalman formalism. The present paper proposes an original synthesis of the two contributions: a new regularized criterion is introduced that accounts for both forms of prior information. The criterion is efficiently optimized by a Kalman smoother. One of the major features of the method is that it is entirely unsupervised: the problem of automatically adjusting the hyperparameters that balance data-based versus prior-based information is solved by maximum likelihood. The improvement is quantified in the field of meteorological radar.
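A sketch of the regularization half of the idea, under simplifying assumptions: the spectral-smoothness prior is represented by a second-difference penalty on a long AR coefficient vector, and the balance hyperparameter lam is fixed by hand here, whereas the method described above adjusts it automatically by maximum likelihood and handles time continuity with a Kalman smoother.

```python
import numpy as np

def regularized_long_ar(y, order, lam):
    """Fit long-AR coefficients by penalized least squares:
    minimise ||y_t - sum_k a_k * y_{t-k}||^2 + lam * ||D a||^2,
    where D is the second-difference operator encoding smoothness."""
    n = len(y)
    # Lagged design matrix: column k holds y_{t-(k+1)} for t = order..n-1.
    X = np.column_stack([y[order - k - 1 : n - k - 1] for k in range(order)])
    target = y[order:]
    D = np.diff(np.eye(order), n=2, axis=0)  # (order-2, order) differences
    a = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ target)
    return a
```

The AR spectrum then follows from the fitted coefficients in the usual way; larger lam yields smoother spectra at the cost of fidelity to the data.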
High-dimensional generative models have many applications including image compression, multimedia generation, anomaly detection and data completion. State-of-the-art estimators for natural images are autoregressive, decomposing the joint distribution over pixels into a product of conditionals parameterized by a deep neural network, e.g. a convolutional neural network such as the PixelCNN. However, PixelCNNs model only a single decomposition of the joint distribution, and only a single generation order is efficient. For tasks such as image completion, these models are unable to use much of the observed context. To generate data in arbitrary orders, we introduce LMConv: a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image. Using LMConv, we learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation (2.89 bpd on unconditional CIFAR10), as well as globally coherent image completions. Our code is available at https://ajayjain.github.io/lmconv.
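A plain-NumPy sketch of the locally masked convolution idea, assuming a per-location binary mask over the flattened kernel entries; the actual implementation applies the masks efficiently in an im2col-style formulation, and the construction of masks from a chosen generation order is omitted here.

```python
import numpy as np

def locally_masked_conv2d(x, weight, masks):
    """2D convolution whose kernel is masked differently at every spatial
    location. x: (C_in, H, W); weight: (C_out, C_in, k, k);
    masks: (H, W, C_in*k*k) binary array derived from the generation order."""
    C_in, H, W = x.shape
    C_out, _, k, _ = weight.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    w_flat = weight.reshape(C_out, -1)  # (C_out, C_in*k*k)
    out = np.zeros((C_out, H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, i:i + k, j:j + k].reshape(-1)
            out[:, i, j] = w_flat @ (patch * masks[i, j])  # mask, then mix
    return out
```

Because the mask varies per location, the same shared weights can realise many different autoregressive orderings, which is what allows an ensemble of orderings to be trained jointly.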
We present a graph neural network model for solving graph-to-graph learning problems. Most deep learning on graphs considers "simple" problems such as graph classification or regressing real-valued graph properties. For such tasks, the main requirement for intermediate representations of the data is to maintain the structure needed for output, i.e., keeping classes separated or maintaining the order indicated by the regressor. However, a number of learning tasks, such as regressing graph-valued output, generative models, or graph autoencoders, aim to predict a graph-structured output. In order to successfully do this, the learned representations need to preserve far more structure. We present a conditional auto-regressive model for graph-to-graph learning and illustrate its representational capabilities via experiments on challenging subgraph predictions from graph algorithmics; as a graph autoencoder for reconstruction and visualization; and on pretraining representations that allow graph classification with limited labeled data.
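To illustrate what a conditional auto-regressive graph decoder means operationally, here is a toy sketch: edges of the output graph are sampled one at a time, each conditioned on node embeddings and on the partial graph generated so far. The scoring rule is a hypothetical stand-in, not the paper's architecture.

```python
import numpy as np

def sample_graph_autoregressive(h, rng):
    """Generate the upper triangle of an adjacency matrix entry by entry.
    h: (n, d) node embeddings produced by an upstream encoder (assumed).
    Each edge probability depends on the node pair and on edges already
    generated, giving the auto-regressive factorisation of P(graph)."""
    n = h.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Hypothetical score: pair affinity plus a weak term in the
            # current degrees, i.e. the partial graph generated so far.
            logit = h[i] @ h[j] + 0.1 * (A[i].sum() + A[j].sum())
            p = 1.0 / (1.0 + np.exp(-logit))
            A[i, j] = A[j, i] = float(rng.random() < p)
    return A
```

The key property is the factorisation: each conditional can be made arbitrarily expressive without losing a tractable likelihood for training.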
In this paper, I construct a new test of conditional moment inequalities, which is based on studentized kernel estimates of moment functions with many different values of the bandwidth parameter. The test automatically adapts to the unknown smoothness of the moment functions and has uniformly correct asymptotic size. It also has high power in a large class of models with conditional moment inequalities. In a certain class of these models, some existing tests have nontrivial power against n^{-1/2}-local alternatives, whereas my method only allows for nontrivial testing against (n/log n)^{-1/2}-local alternatives. In other classes of models with conditional moment inequalities, however, those tests have much lower power than the test developed in this paper.
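A rough sketch of the statistic's shape, under stated assumptions: a Gaussian kernel, a crude plug-in studentization, and a fixed grid of evaluation points; the paper's critical values and its precise studentization are not reproduced here.

```python
import numpy as np

def adaptive_test_statistic(X, M, bandwidths, points):
    """Multi-bandwidth studentized kernel statistic for testing
    H0: E[m(W) | X = x] <= 0 for all x. X: (n,) conditioning variable;
    M: (n,) values m(W_i). The max over (x, h) pairs is what lets the
    test adapt to the unknown smoothness of the moment function."""
    stats = []
    for h in bandwidths:
        for x in points:
            w = np.exp(-0.5 * ((X - x) / h) ** 2)  # kernel weights
            num = np.sum(w * M)
            denom = np.sqrt(np.sum((w * M) ** 2) + 1e-12)  # studentization
            stats.append(num / denom)
    return max(stats)  # reject if this exceeds the critical value
```

Small bandwidths detect sharply localized violations of the inequality while large bandwidths detect smooth ones; taking the maximum over both is what delivers the adaptivity claimed above.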
Rescaled spike-and-slab models are a new Bayesian variable selection method for linear regression. In high-dimensional orthogonal settings, such models have been shown to possess optimal model selection properties. We review the background theory and discuss applications of rescaled spike-and-slab models to prediction problems involving orthogonal polynomials. We first consider global smoothing and discuss its potential weaknesses. Some of these deficiencies are remedied by using local regression. The local regression approach relies on an intimate connection between locally weighted regression and weighted generalized ridge regression. An important implication is that one can trace the effective degrees of freedom of a curve as a way to visualize and classify curvature. Several motivating examples are presented.
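Since the argument hinges on the connection between locally weighted regression and weighted generalized ridge regression, here is a minimal sketch of the latter, assuming kernel weights w and a per-coefficient penalty lam; the effective degrees of freedom is read off the trace of the hat matrix, which is the quantity traced along a curve.

```python
import numpy as np

def weighted_generalized_ridge(X, y, w, lam):
    """Solve min_b (y - X b)' W (y - X b) + b' diag(lam) b with W = diag(w).
    Returns the coefficient estimate and the effective degrees of freedom,
    edf = trace of the hat matrix mapping y to fitted values."""
    Wd = np.diag(w)
    A = X.T @ Wd @ X + np.diag(lam)
    beta = np.linalg.solve(A, X.T @ Wd @ y)
    H = X @ np.linalg.solve(A, X.T @ Wd)  # hat matrix
    return beta, np.trace(H)
```

Plotting the local effective degrees of freedom across the design space is one way to visualize where a fitted curve spends its flexibility, i.e., to classify curvature as described above.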