Rank correlations have found many innovative applications in the last decade. In particular, suitable rank correlations have been used for consistent tests of independence between pairs of random variables. Using ranks is especially appealing for continuous data, as the resulting tests become distribution-free. However, the traditional concept of ranks relies on ordering data and is thus tied to univariate observations. As a result, it has long remained unclear how one may construct distribution-free yet consistent tests of independence between random vectors. This is the problem addressed in this paper, in which we lay out a general framework for designing dependence measures that yield tests of multivariate independence which are not only consistent and distribution-free but also, as we prove, statistically efficient. Our framework leverages the recently introduced concept of center-outward ranks and signs, a multivariate generalization of traditional ranks, and adopts a common standard form for dependence measures that encompasses many popular examples. In a unified study, we derive a general asymptotic representation of center-outward rank-based test statistics under independence, extending to the multivariate setting the classical Hájek asymptotic representation results. This representation permits direct calculation of limiting null distributions and facilitates a local power analysis that provides strong support for the center-outward approach by establishing, for the first time, the nontrivial power of center-outward rank-based tests over root-$n$ neighborhoods within the class of quadratic mean differentiable alternatives.
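To make the idea concrete, the sketch below illustrates (in Python) the general recipe described in the abstract, under simplifying assumptions of our own: empirical center-outward ranks are obtained by optimally coupling the sample with a fixed grid on the unit ball (here a simplified radii-times-random-directions grid rather than the regular grids used in the literature), and dependence is then measured by a distance-covariance statistic computed on those ranks. This is not the authors' implementation or their standard form for dependence measures, only a minimal illustration of the center-outward rank-based testing principle; the permutation calibration is likewise for illustration only, since the whole point of rank-based statistics is that their null distribution does not depend on the underlying (continuous) distributions.

```python
# Illustrative sketch of a center-outward rank-based independence test.
# Assumptions (not from the paper): a simplified unit-ball grid, distance
# covariance as the dependence measure, and permutation calibration.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist


def unit_ball_grid(n, dim, rng):
    """Simplified grid on the unit ball: equispaced radii times random directions."""
    directions = rng.standard_normal((n, dim))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = (np.arange(1, n + 1) / (n + 1))[:, None]
    return radii * directions


def center_outward_ranks(sample, rng):
    """Empirical center-outward ranks: optimal assignment of the sample to the grid."""
    n, dim = sample.shape
    grid = unit_ball_grid(n, dim, rng)
    cost = cdist(sample, grid, metric="sqeuclidean")
    _, col = linear_sum_assignment(cost)   # minimize total squared transport cost
    return grid[col]                       # grid point coupled with each observation


def distance_covariance(x, y):
    """Squared sample distance covariance via double-centered distance matrices."""
    a, b = cdist(x, x), cdist(y, y)
    a -= a.mean(0) + a.mean(1)[:, None] - a.mean()
    b -= b.mean(0) + b.mean(1)[:, None] - b.mean()
    return (a * b).mean()


def rank_dcov_test(x, y, n_perm=499, seed=0):
    """Permutation p-value for distance covariance of the center-outward ranks."""
    rng = np.random.default_rng(seed)
    rx, ry = center_outward_ranks(x, rng), center_outward_ranks(y, rng)
    stat = distance_covariance(rx, ry)
    null = [distance_covariance(rx, ry[rng.permutation(len(ry))]) for _ in range(n_perm)]
    return stat, (1 + sum(s >= stat for s in null)) / (n_perm + 1)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal((200, 2))
    # y is 3-dimensional and depends on x only through its first coordinate
    y = np.column_stack([x[:, 0] + 0.5 * rng.standard_normal(200),
                         rng.standard_normal((200, 2))])
    print(rank_dcov_test(x, y))
```

Because the statistic is a function of the center-outward ranks alone, its null distribution is the same for all absolutely continuous marginal distributions of the two vectors, which is what makes critical values distribution-free in the sense of the abstract.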