In this paper we consider the problem of grouped variable selection in high-dimensional regression using $\ell_1$-$\ell_q$ regularization ($1 \leq q \leq \infty$), which can be viewed as a natural generalization of the $\ell_1$-$\ell_2$ regularization (the group Lasso). The key condition is that the dimensionality $p_n$ can increase much faster than the sample size $n$, i.e. $p_n \gg n$ (in our case $p_n$ is the number of groups), while the number of relevant groups remains small. The main conclusion is that many good properties of $\ell_1$ regularization (the Lasso) naturally carry over to the $\ell_1$-$\ell_q$ cases ($1 \leq q \leq \infty$), even if the number of variables within each group also increases with the sample size. With a fixed design, we show that the whole family of estimators is both estimation consistent and variable selection consistent under different conditions. We also establish a persistency result for random designs under a much weaker condition. These results provide a unified treatment for the whole family of estimators, ranging from $q=1$ (the Lasso) to $q=\infty$ (iCAP), with $q=2$ (the group Lasso) as a special case. When no group structure is available, the analysis reduces to existing results for the Lasso estimator ($q=1$).
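The $\ell_1$-$\ell_q$ penalty described above is simply the sum, over groups, of the $\ell_q$ norm of each group's coefficient subvector. A minimal sketch in NumPy (the function name and toy data are illustrative, not taken from the paper):

```python
import numpy as np

def l1_lq_penalty(beta, groups, q):
    """Sum over groups of the l_q norm of each group's coefficients.

    beta   : 1-D coefficient vector
    groups : list of index lists, one per group
    q      : norm order in [1, inf]; q=1 -> Lasso, q=2 -> group Lasso,
             q=np.inf -> iCAP-style max-norm penalty
    """
    return sum(np.linalg.norm(beta[g], ord=q) for g in groups)

beta = np.array([3.0, 4.0, 0.0, 5.0])
groups = [[0, 1], [2, 3]]

l1_lq_penalty(beta, groups, 2)       # group Lasso: ||(3,4)||_2 + ||(0,5)||_2 = 10.0
l1_lq_penalty(beta, groups, 1)       # plain Lasso: 3 + 4 + 0 + 5 = 12.0
l1_lq_penalty(beta, groups, np.inf)  # iCAP-style: max(3,4) + max(0,5) = 9.0
```

Note how all three special cases mentioned in the abstract arise from the single parameter $q$; with singleton groups, every choice of $q$ collapses to the ordinary Lasso penalty.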
This paper develops a novel stochastic tree ensemble method for nonlinear regression, which we refer to as XBART, short for Accelerated Bayesian Additive Regression Trees. By combining regularization and stochastic search strategies from Bayesian mod
Online forecasting under a changing environment has been a problem of increasing importance in many real-world applications. In this paper, we consider the meta-algorithm presented in \citet{zhang2017dynamic} combined with different subroutines. We sh
Single index models provide an effective dimension reduction tool in regression, especially for high dimensional data, by projecting a general multivariate predictor onto a direction vector. We propose a novel single-index model for regression models
This paper investigates high-dimensional linear regression with highly correlated covariates. In this setup, the traditional sparsity assumption on the regression coefficients often fails to hold, and consequently many model selection procedures
A continuous-time nonlinear regression model with a Lévy-driven linear noise process is considered. Sufficient conditions for the consistency and asymptotic normality of the Whittle estimator of the noise spectral density parameter are obtained in the paper.
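For context, the Whittle estimator referred to above is a frequency-domain estimator that matches the periodogram to a parametric spectral density. A standard textbook form (the specific objective in the paper may differ, e.g. in the integration range for the continuous-time setting) is:

```latex
\hat{\theta}_n
  = \arg\min_{\theta \in \Theta}
    \int \left( \log f(\lambda, \theta)
      + \frac{I_n(\lambda)}{f(\lambda, \theta)} \right) d\lambda,
```

where $f(\lambda, \theta)$ is the parametric spectral density of the noise process and $I_n(\lambda)$ is the periodogram computed from the observed residuals.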