Single index models provide an effective dimension reduction tool in regression, especially for high-dimensional data, by projecting a general multivariate predictor onto a direction vector. We propose a novel single-index model for regression settings where metric space-valued random object responses are coupled with multivariate Euclidean predictors. The responses in this regression model comprise complex, non-Euclidean data such as covariance matrices, graph Laplacians of networks, and univariate probability distribution functions, among other objects that lie in abstract metric spaces. Fréchet regression provides an approach for modeling the conditional mean of such random objects given multivariate Euclidean predictors, but it does not yield regression parameters such as slopes or intercepts, since the metric space-valued responses are not amenable to linear operations. We show here that, for the case of multivariate Euclidean predictors, the parameters that define the single index and its associated projection vector can substitute for the inherent absence of parameters in Fréchet regression. Specifically, we derive the asymptotic consistency of suitable estimates of these parameters subject to an identifiability condition. Consistent estimation of the link function of the single-index Fréchet regression model is obtained through local Fréchet regression. We demonstrate the finite-sample performance of estimation for the proposed single-index Fréchet regression model through simulation studies, including the special cases of probability distributions and graph adjacency matrices. The method is also illustrated for resting-state functional Magnetic Resonance Imaging (fMRI) data from the ADNI study.
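To make the estimation idea concrete, the following is a minimal sketch (in Python, not taken from the paper) for the special case of probability-distribution responses equipped with the Wasserstein-2 metric. It assumes each response is stored as a quantile function on a fixed grid, so that the squared Wasserstein distance is the mean squared difference of quantile functions and the Fréchet mean of weighted distributions is a weighted average of quantile functions. For brevity it uses a Nadaraya-Watson-style kernel-weighted Fréchet mean rather than the local linear Fréchet regression analyzed in the paper, and the helper names (kernel_frechet_fit, criterion) are purely illustrative.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p = 200, 4
grid = np.linspace(0.05, 0.95, 25)          # quantile levels defining the response representation

# True index direction (unit norm, positive first entry for identifiability)
theta0 = np.array([2.0, 1.0, -1.0, 0.5])
theta0 /= np.linalg.norm(theta0)

X = rng.normal(size=(n, p))
u = X @ theta0
# Distributional response: quantile function of N(sin(pi*u/4), 0.25) for each subject
Y = norm.ppf(grid[None, :], loc=np.sin(np.pi * u[:, None] / 4), scale=0.5)

def kernel_frechet_fit(index, Y, bandwidth):
    # Kernel-weighted Wasserstein Fréchet means at each observed index value.
    # Nonnegative weights keep the weighted quantile functions monotone.
    d = (index[:, None] - index[None, :]) / bandwidth
    W = np.exp(-0.5 * d ** 2)                # Gaussian kernel weights
    np.fill_diagonal(W, 0.0)                 # leave-one-out weights to avoid trivial overfitting
    W /= W.sum(axis=1, keepdims=True)
    return W @ Y                             # weighted average of quantile functions

def criterion(b, X, Y, bandwidth):
    # Mean squared Wasserstein-2 distance between observed and fitted responses,
    # with the direction parameterized as (1, b) / ||(1, b)|| (first entry assumed nonzero).
    theta = np.concatenate(([1.0], b))
    theta /= np.linalg.norm(theta)
    fit = kernel_frechet_fit(X @ theta, Y, bandwidth)
    return np.mean((fit - Y) ** 2)

res = minimize(criterion, x0=np.zeros(p - 1), args=(X, Y, 0.3), method="Nelder-Mead")
theta_hat = np.concatenate(([1.0], res.x))
theta_hat /= np.linalg.norm(theta_hat)
print("true direction     :", np.round(theta0, 3))
print("estimated direction:", np.round(theta_hat, 3))

The quantile-function representation is what makes this sketch short: under the Wasserstein-2 metric the weighted Fréchet mean reduces to a weighted average of quantile functions, so the smoothing step is a single matrix multiplication. For other object spaces (covariance matrices, graph Laplacians) the inner Fréchet mean would instead be computed by a metric-specific minimization.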