In this paper, we develop uniform inference methods for the conditional mode based on quantile regression. Specifically, we propose to estimate the conditional mode by minimizing the derivative of the estimated conditional quantile function defined by smoothing the linear quantile regression estimator, and develop two bootstrap methods, a novel pivotal bootstrap and the nonparametric bootstrap, for our conditional mode estimator. Building on high-dimensional Gaussian approximation techniques, we establish the validity of simultaneous confidence rectangles constructed from the two bootstrap methods for the conditional mode. We also extend the preceding analysis to the case where the dimension of the covariate vector is increasing with the sample size. Finally, we conduct simulation experiments and a real data analysis using U.S. wage data to demonstrate the finite sample performance of our inference method.
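As a rough illustration of the estimation idea only (not the paper's exact procedure), the sketch below fits linear quantile regressions over a grid of quantile levels, kernel-smooths the fitted conditional quantile curve in the level tau, and locates the conditional mode at the level minimizing the derivative of that curve, i.e., maximizing the conditional density. The grid, Gaussian kernel, and bandwidth are illustrative choices.

```python
# A minimal sketch, assuming a Gaussian local-linear smoother and an
# illustrative bandwidth; the paper's estimator and tuning may differ.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)      # true conditional mode: 1 + 2x
X = np.column_stack([np.ones(n), x])

taus = np.linspace(0.05, 0.95, 91)
betas = np.array([QuantReg(y, X).fit(q=t).params for t in taus])
x0 = np.array([1.0, 0.5])                       # evaluate at x = 0.5
qhat = betas @ x0                               # tau -> estimated Q(tau | x0)

def smooth_q(t, h=0.10):
    """Local-linear smooth of the quantile curve at level t.

    Returns the smoothed quantile and its tau-derivative (the sparsity
    function 1/f(Q(t|x0)), whose minimizer locates the mode)."""
    sw = np.exp(-0.25 * ((taus - t) / h) ** 2)  # sqrt of a Gaussian kernel
    A = np.column_stack([np.ones_like(taus), taus - t])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], qhat * sw, rcond=None)
    return coef[0], coef[1]

grid = np.linspace(0.2, 0.8, 61)
derivs = [smooth_q(t)[1] for t in grid]
tau_star = grid[int(np.argmin(derivs))]
print("estimated conditional mode at x=0.5:", round(smooth_q(tau_star)[0], 3))
# truth is 2.0 for this Gaussian design
```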
In fields such as clinical trials, biomedical surveys, marketing, and banking, where the response variable is dichotomous, logistic regression is a convenient alternative to linear regression. In this paper, we develop a novel bootstrap technique based on perturbation resampling for approximating the distribution of the maximum likelihood estimator (MLE) of the regression parameter vector. We establish second order correctness of the proposed bootstrap method after proper studentization and smoothing, and show that inferences drawn from it are more accurate than those based on asymptotic normality. The main challenge in establishing second order correctness lies in the fact that, the response variable being binary, the MLE has a lattice structure; we show that the direct bootstrapping approach fails even after studentization. We therefore adopt the smoothing technique developed in Lahiri (1993) to ensure that the smoothed studentized version of the MLE has a density, and employ a similar smoothing strategy for the bootstrap version to achieve a second order correct approximation.
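The resampling scheme itself is easy to sketch: each observation's log-likelihood contribution is multiplied by an i.i.d. nonnegative weight with mean 1 and variance 1, and the weighted likelihood is re-maximized. The sketch below uses Exp(1) multipliers and a hand-rolled Newton solver; the studentization and smoothing steps on which the second order results depend are omitted.

```python
# A minimal sketch of perturbation resampling for the logistic MLE; the
# weight law and solver are illustrative choices, not the paper's.
import numpy as np

def logistic_mle(X, y, w):
    """Maximize the w-weighted logistic log-likelihood by Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(50):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))
        hess = (X * (w * p * (1.0 - p))[:, None]).T @ X
        step = np.linalg.solve(hess, grad)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-10:
            break
    return beta

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

beta_hat = logistic_mle(X, y, np.ones(n))

# Perturbation bootstrap: reweight each log-likelihood term by an i.i.d.
# Exp(1) multiplier (mean 1, variance 1) and re-maximize; the spread of the
# perturbed estimates around beta_hat mimics the sampling error.
B = 1000
boot = np.array([logistic_mle(X, y, rng.exponential(1.0, n)) for _ in range(B)])
print("perturbation-bootstrap SEs:", np.round(boot.std(axis=0), 3))
```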
Selecting the important covariates in a high-dimensional regression model, and dropping the unimportant ones, is a long-standing problem and has therefore received much attention in the last two decades. After selecting the correct model, it is also important to estimate the parameters corresponding to the important covariates properly. In this spirit, Fan and Li (2001) proposed the oracle property as a desired feature of a variable selection method. The oracle property has two parts: variable selection consistency (VSC) and asymptotic normality. Keeping VSC fixed and strengthening the second part, Fan and Lv (2008) introduced the strong oracle property. In this paper, we consider different penalized regression techniques that achieve VSC and classify them according to the oracle and strong oracle properties. We show that both the residual and the perturbation bootstrap methods are second order correct for any such penalized estimator, irrespective of its class. Most interesting of all is the Lasso, introduced by Tibshirani (1996): although the Lasso is VSC, it is not asymptotically normal and hence fails to satisfy the oracle property.
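For concreteness, the residual bootstrap template for the Lasso is sketched below: rebuild responses from the fitted model plus centered, resampled residuals and re-estimate. This is only the plain scheme with illustrative tuning; the paper's second order results concern suitably studentized versions, which are not shown here.

```python
# A minimal sketch of the residual bootstrap for the Lasso; penalty level
# and design are illustrative, and the studentization step is omitted.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])         # sparse truth
y = X @ beta_true + rng.standard_normal(n)

lam = 0.1                                                # illustrative penalty
fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
resid = y - X @ fit.coef_
resid -= resid.mean()                                    # center the residuals

# Residual bootstrap: the law of beta* - beta_hat across resamples
# approximates that of beta_hat - beta.
B = 500
boot = np.empty((B, p))
for b in range(B):
    y_star = X @ fit.coef_ + rng.choice(resid, size=n, replace=True)
    boot[b] = Lasso(alpha=lam, fit_intercept=False).fit(X, y_star).coef_
print("bootstrap SEs:", np.round(boot.std(axis=0), 3))
```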
Penalization procedures often suffer from their dependence on multiplying factors, whose optimal values are either unknown or hard to estimate from the data. We propose a completely data-driven calibration algorithm for this parameter in the least-squares regression framework, without assuming a particular shape for the penalty. Our algorithm relies on the concept of minimal penalty, recently introduced by Birgé and Massart (2007) in the context of penalized least squares for Gaussian homoscedastic regression. On the positive side, the minimal penalty can be evaluated from the data themselves, leading to a data-driven estimate of an optimal penalty that can be used in practice; on the negative side, their approach relies heavily on the homoscedastic Gaussian nature of their stochastic framework. The purpose of this paper is twofold: to state a more general heuristic for designing a data-driven penalty (the slope heuristics) and to prove that it works for penalized least-squares regression with a random design, even for heteroscedastic non-Gaussian data. For technical reasons, some exact mathematical results will be proved only for regressogram bin-width selection. This is at least a first step towards further results, since the approach and the method that we use are indeed general.
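A toy version of the resulting calibration, for regressogram bin-width selection, is sketched below: sweep a penalty multiplier kappa over a grid, detect the "dimension jump" at which the selected model size collapses (the minimal penalty), and select with twice that critical multiplier. The penalty shape D/n and all grids are illustrative assumptions.

```python
# A minimal sketch of the dimension-jump calibration under the slope
# heuristics; penalty shape and grids are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def risk(D):
    """Least-squares empirical risk of the regressogram with D equal bins."""
    bins = np.minimum((x * D).astype(int), D - 1)
    yhat = np.zeros(n)
    for d in range(D):
        m = bins == d
        if m.any():
            yhat[m] = y[m].mean()
    return np.mean((y - yhat) ** 2)

dims = np.arange(1, 101)
risks = np.array([risk(D) for D in dims])

# Dimension jump: as the multiplier kappa grows, the selected dimension
# D(kappa) collapses abruptly at the minimal penalty; the slope heuristics
# then prescribe selecting with twice that critical multiplier.
kappas = np.linspace(0.0, 1.0, 2001)
selected = np.array([dims[np.argmin(risks + k * dims / n)] for k in kappas])
jump = int(np.argmax(-np.diff(selected)))          # largest drop in D(kappa)
kappa_min = kappas[jump + 1]
D_final = dims[np.argmin(risks + 2.0 * kappa_min * dims / n)]
print(f"kappa_min = {kappa_min:.3f}, selected dimension = {D_final}")
```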
We apply Gaussian process (GP) regression, which provides a powerful non-parametric probabilistic method of relating inputs to outputs, to survival data consisting of time-to-event and covariate measurements. In this context, the covariates are regarded as the 'inputs' and the event times are the 'outputs'. This allows for highly flexible inference of non-linear relationships between covariates and event times. Many existing methods, such as the ubiquitous Cox proportional hazards model, focus primarily on the hazard rate which is typically assumed to take some parametric or semi-parametric form. Our proposed model belongs to the class of accelerated failure time models where we focus on directly characterising the relationship between covariates and event times without any explicit assumptions on what form the hazard rates take. It is straightforward to include various types and combinations of censored and truncated observations. We apply our approach to both simulated and experimental data. We then apply multiple output GP regression, which can handle multiple potentially correlated outputs for each input, to competing risks survival data where multiple event types can occur. By tuning one of the model parameters we can control the extent to which the multiple outputs (the time-to-event for each risk) are dependent thus allowing the specification of correlated risks. Simulation studies suggest that in some cases assuming dependence can lead to more accurate predictions.
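In the simplest fully observed case, the accelerated failure time view reduces to ordinary GP regression of log event times on the covariates, as in the sketch below with an RBF-plus-noise kernel; censoring, truncation, and the multiple-output competing-risks extension require the machinery developed in the paper.

```python
# A minimal sketch of an AFT-style GP fit on uncensored data; the kernel
# choice and simulated data are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
n = 200
X = rng.uniform(0, 2, (n, 1))                       # covariate ("input")
log_t = 0.5 + np.sin(2.0 * X[:, 0]) + 0.2 * rng.standard_normal(n)

# Nonparametric regression of log event time ("output") on the covariates,
# with no assumption on the induced hazard shape.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, log_t)

X_new = np.linspace(0, 2, 5).reshape(-1, 1)
mean, sd = gp.predict(X_new, return_std=True)
print("predicted event times:", np.round(np.exp(mean), 2))
```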
From an optimizer's perspective, achieving the global optimum for a general nonconvex problem is often provably NP-hard under classical worst-case analysis. In the case of Cox's proportional hazards model, by taking its statistical model structure into account, we identify local strong convexity near the global optimum, motivated by which we propose to use two convex programs to optimize the folded-concave penalized Cox proportional hazards regression. Theoretically, we investigate the statistical and computational tradeoffs of the proposed algorithm and establish the strong oracle property of the resulting estimators. Numerical studies and real data analysis lend further support to our algorithm and theory.
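One natural reading of the two convex programs is a local linear approximation (LLA) scheme: an initial l1-penalized Cox fit, followed by a weighted-l1 refit with weights given by the derivative of the folded-concave (here SCAD) penalty at the initial solution. The proximal-gradient solver, step size, and penalty level below are illustrative assumptions, not necessarily the paper's algorithm.

```python
# A minimal sketch of LLA for SCAD-penalized Cox regression; solver and
# tuning are illustrative, and ties in the event times are ignored.
import numpy as np

def cox_grad(beta, X, time, event):
    """Gradient of the (1/n)-scaled negative log partial likelihood (Breslow)."""
    order = np.argsort(-time)                        # decreasing time
    Xs, es = X[order], np.exp(X[order] @ beta)
    csum_e = np.cumsum(es)                           # risk-set sums
    csum_xe = np.cumsum(Xs * es[:, None], axis=0)
    d = event[order].astype(bool)
    return -(Xs[d] - csum_xe[d] / csum_e[d, None]).sum(axis=0) / len(time)

def weighted_l1_cox(X, time, event, lam, w, step=0.1, iters=3000):
    """Convex program: Cox loss + weighted l1 penalty, by proximal gradient."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = beta - step * cox_grad(beta, X, time, event)
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return beta

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty at |t|."""
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

rng = np.random.default_rng(5)
n, p = 300, 6
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0])
t = rng.exponential(np.exp(-X @ beta_true))          # latent event times
c = rng.exponential(2.0, n)                          # censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)

# LLA as two convex programs. Step 1: plain lasso-penalized Cox;
# step 2: weighted lasso with SCAD-derivative weights from step 1.
lam = 0.1                                            # illustrative penalty level
b1 = weighted_l1_cox(X, time, event, lam, np.ones(p))
b2 = weighted_l1_cox(X, time, event, lam, scad_deriv(np.abs(b1), lam) / lam)
print("LLA estimate:", np.round(b2, 2))
```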