High-dimensional variable selection is an important issue in many scientific fields, such as genomics. In this paper, we develop a sure independence feature screening procedure based on kernel canonical correlation analysis (KCCA-SIS, for short). KCCA-SIS is easy to implement and apply. Compared to the sure independence screening procedure based on the Pearson correlation (SIS, for short) developed by Fan and Lv [2008], KCCA-SIS can handle nonlinear dependencies among variables. Compared to the sure independence screening procedure based on the distance correlation (DC-SIS, for short) proposed by Li et al. [2012], KCCA-SIS is scale free and distribution free, and has better approximation results based on the universal property of the Gaussian kernel (Micchelli et al. [2006]). KCCA-SIS is more general than SIS and DC-SIS in the sense that SIS and DC-SIS correspond to particular choices of kernels. Compared to the sure independence screening procedure based on the supremum of the Hilbert-Schmidt independence criterion (sup-HSIC-SIS, for short) developed by Balasubramanian et al. [2013], KCCA-SIS is scale free, removing the marginal variation of the features and the response variables. No model assumption between the response and the predictors is needed to apply KCCA-SIS, and it can be used in ultrahigh-dimensional data analysis. Similar to DC-SIS and sup-HSIC-SIS, KCCA-SIS can also be applied directly to screen grouped predictors and multivariate response variables. We show that KCCA-SIS has the sure screening property and demonstrate its better performance through simulation studies. We applied KCCA-SIS to study autism genes in a spatiotemporal gene expression dataset for human brain development and obtained better results, based on gene ontology enrichment analysis, compared to the other existing methods.
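To make the marginal screening step concrete, the following is a minimal sketch of kernel-CCA screening under one common regularization convention: each predictor is scored by a regularized kernel canonical correlation with the response, computed from centered Gaussian Gram matrices with a median-heuristic bandwidth, and the top-ranked predictors are kept. The function names, the regularization parameter kappa, and the screening size n/log(n) are illustrative assumptions, not the exact estimator or tuning used in the paper.

```python
import numpy as np

def centered_gaussian_gram(x, sigma=None):
    """Centered Gaussian (RBF) Gram matrix of a one-dimensional sample."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    d2 = (x - x.T) ** 2
    if sigma is None:                              # median heuristic for the bandwidth
        positive = d2[d2 > 0]
        sigma = np.sqrt(np.median(positive) / 2.0) if positive.size else 1.0
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    return H @ K @ H

def kcca_score(x, y, kappa=1e-2):
    """First regularized kernel canonical correlation between x and y."""
    Kx = centered_gaussian_gram(x)
    Ky = centered_gaussian_gram(y)
    n = Kx.shape[0]
    Rx = np.linalg.solve(Kx + n * kappa * np.eye(n), Kx)   # (Kx + n*kappa*I)^{-1} Kx
    Ry = np.linalg.solve(Ky + n * kappa * np.eye(n), Ky)
    # Under this regularization, the score is the top singular value of Rx @ Ry.
    return np.linalg.svd(Rx @ Ry, compute_uv=False)[0]

def kcca_screen(X, y, d=None):
    """Keep the d predictors with the largest marginal KCCA score."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))                     # a common screening size in the SIS literature
    scores = np.array([kcca_score(X[:, j], y) for j in range(p)])
    return np.argsort(scores)[::-1][:d], scores
```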
This paper proposes a canonical-correlation-based filter method for feature selection. The sum of squared canonical correlation coefficients is adopted as the feature ranking criterion. The proposed method boosts the computational speed of the ranking …
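The ranking criterion in this abstract can be illustrated directly: for each candidate feature block, compute the canonical correlations with the response block and score the block by the sum of their squares. The sketch below uses a QR-based computation; the function names are assumptions for illustration, and the paper's own implementation may take a different numerical route.

```python
import numpy as np

def canonical_correlations(A, B):
    """Canonical correlations between column blocks A (n x p) and B (n x q)."""
    Ac = A - A.mean(axis=0)
    Bc = B - B.mean(axis=0)
    Qa, _ = np.linalg.qr(Ac)                      # orthonormal basis of each column space
    Qb, _ = np.linalg.qr(Bc)
    rho = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.clip(rho, 0.0, 1.0)

def cca_filter_score(feature_block, Y):
    """Sum of squared canonical correlations, used as the ranking criterion."""
    return float(np.sum(canonical_correlations(feature_block, Y) ** 2))

# Example: score each single feature of X (n x p) against a multivariate response Y (n x q)
# scores = [cca_filter_score(X[:, [j]], Y) for j in range(X.shape[1])]
```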
Variable selection in high-dimensional space characterizes many contemporary problems in scientific discovery and decision making. Many frequently used techniques are based on independence screening; examples include correlation ranking (Fan and Lv, 2008) …
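As a point of reference for the correlation ranking mentioned here, a bare-bones marginal Pearson screening step looks roughly like the sketch below; the function name and the cutoff d are placeholders rather than anything prescribed by the cited work.

```python
import numpy as np

def correlation_ranking(X, y, d):
    """SIS-style screening: keep the d predictors most correlated with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(np.abs(corr))[::-1][:d]
```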
Canonical correlation analysis (CCA) is a classical and important multivariate technique for exploring the relationship between two sets of continuous variables. CCA has applications in many fields, such as genomics and neuroimaging. It can extract …
Classical canonical correlation analysis (CCA) requires the data matrices to be low dimensional, i.e., the number of features cannot exceed the sample size. Recent developments in CCA have mainly focused on the high-dimensional setting, where the number of features …
Canonical correlation analysis investigates linear relationships between two sets of variables, but often works poorly on modern data sets due to high dimensionality and mixed data types such as continuous, binary and zero-inflated. To overcome these …