We propose a novel conditional quantile prediction method based on complete subset averaging (CSA) for quantile regressions. All models under consideration are potentially misspecified, and the dimension of the regressors grows with the sample size. Because we average over the complete subsets, the number of models is much larger than in usual model averaging methods, which adopt sophisticated weighting schemes. We instead use equal weights and select the size of the complete subset by leave-one-out cross-validation. Building on the theory of Lu and Su (2015), we investigate the large-sample properties of CSA and establish its asymptotic optimality in the sense of Li (1987). We examine the finite-sample performance via Monte Carlo simulations and empirical applications.
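The CSA recipe described above (equal weights over all size-k subsets, with k selected by leave-one-out cross-validated check loss) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the quantile regressions are solved via the standard linear-programming formulation of the check-loss problem, and all function names are ours.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def fit_quantile(X, y, tau):
    """Linear quantile regression via the standard LP formulation:
    min tau*1'u + (1-tau)*1'v  s.t.  X b + u - v = y,  u, v >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

def csa_predict(X, y, X_new, tau, k):
    """Equal-weight average of quantile predictions over all size-k regressor subsets."""
    p = X.shape[1]
    preds = [X_new[:, S] @ fit_quantile(X[:, S], y, tau)
             for S in map(list, combinations(range(p), k))]
    return np.mean(preds, axis=0)

def choose_k_loo(X, y, tau):
    """Select the subset size k by leave-one-out cross-validated check loss."""
    n, p = X.shape
    losses = []
    for k in range(1, p + 1):
        r = np.array([y[i] - csa_predict(np.delete(X, i, 0), np.delete(y, i),
                                         X[i:i + 1], tau, k)[0] for i in range(n)])
        losses.append(np.mean(r * (tau - (r < 0))))
    return int(np.argmin(losses)) + 1
```

The LP solver returns the exact check-loss minimizer, so the averaging and cross-validation steps are the only approximations relative to a full implementation.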
We propose a two-stage least squares (2SLS) estimator whose first stage is the equal-weighted average over a complete subset with $k$ instruments among the $K$ available, which we call the complete subset averaging (CSA) 2SLS. The approximate mean squared error (MSE) is derived as a function of the subset size $k$ by the Nagar (1959) expansion, and the subset size is chosen by minimizing the sample counterpart of the approximate MSE. We show that this method achieves asymptotic optimality among the class of estimators with different subset sizes. To deal with averaging over a growing set of irrelevant instruments, we generalize the approximate MSE and find that the optimal $k$ is larger than it would be otherwise. An extensive simulation experiment shows that the CSA-2SLS estimator outperforms alternative estimators when the instruments are correlated. As an empirical illustration, we estimate the logistic demand function in Berry, Levinsohn, and Pakes (1995) and find that the CSA-2SLS estimate is better supported by economic theory than the alternative estimates.
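The averaging step itself is straightforward. Below is a minimal numpy sketch (ours, not the authors' code) of the equal-weighted 2SLS average over all size-$k$ instrument subsets for a given $k$; the MSE-based choice of $k$, which is the substantive contribution, is omitted here.

```python
import numpy as np
from itertools import combinations

def tsls(y, X, Z):
    """Standard 2SLS: beta = (X'PzX)^{-1} X'Pz y, with Pz the projection onto Z."""
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    return np.linalg.solve(X.T @ PzX, PzX.T @ y)

def csa_2sls(y, X, Z, k):
    """Equal-weight average of 2SLS estimates over all size-k instrument subsets.
    Requires k >= number of endogenous regressors for each subset to identify beta."""
    K = Z.shape[1]
    betas = [tsls(y, X, Z[:, list(S)]) for S in combinations(range(K), k)]
    return np.mean(betas, axis=0)
```

Each subset estimator is consistent as long as the retained instruments are valid, so the equal-weight average inherits consistency while smoothing over the instrument-selection step.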
We develop monitoring procedures for cointegrating regressions, testing the null of no break against the alternatives that there is either a change in the slope or a change to non-cointegration. After observing the regression for a calibration sample of size $m$, we study a CUSUM-type statistic to detect a change during the monitoring horizon $m+1, \ldots, T$. Our procedures use a class of boundary functions that depend on a parameter whose value affects the delay in detecting a possible break. Technically, these procedures are based on almost sure limit theorems whose derivation is not straightforward. We therefore define a monitoring function which, at every point in time, diverges to infinity under the null and drifts to zero under the alternatives. We cast this sequence in a randomised procedure to construct an i.i.d. sequence, which we then employ to define the detector function. Our monitoring procedure rejects the null of no break with small probability when the null is true, whilst it rejects with probability one over the monitoring horizon in the presence of a break.
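As a rough illustration of CUSUM-type monitoring with a parametric boundary, consider the generic detector below. It is in the spirit of the sequential-testing literature, not the paper's randomised construction, and the tuning constants `lam` and `gamma` are illustrative assumptions, not the paper's critical values.

```python
import numpy as np

def monitor_cusum(resid, m, lam=6.0, gamma=0.25):
    """Flag the first time t > m at which the cumulative sum of post-calibration
    residuals, scaled by a variance estimate from the calibration sample, crosses
    an illustrative boundary b(t) = lam * sqrt(m) * (t/m)^gamma. The exponent
    gamma governs how fast the boundary grows and hence the detection delay.
    Returns the detection time (index) or None if no crossing occurs."""
    T = len(resid)
    sigma = resid[:m].std(ddof=1)   # scale estimated on the calibration sample
    s = 0.0
    for t in range(m, T):
        s += resid[t]
        if abs(s) / sigma > lam * np.sqrt(m) * (t / m) ** gamma:
            return t
    return None
```

With a conservative `lam`, the detector stays silent on break-free residuals but crosses the boundary shortly after a mean shift begins, illustrating the delay/size trade-off controlled by the boundary parameter.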
This paper provides a method to construct simultaneous confidence bands for quantile functions and quantile effects in nonlinear network and panel models with unobserved two-way effects, strictly exogenous covariates, and possibly discrete outcome variables. The method is based upon projection of simultaneous confidence bands for distribution functions constructed from fixed effects distribution regression estimators. These fixed effects estimators are debiased to deal with the incidental parameter problem. Under asymptotic sequences where both dimensions of the data set grow at the same rate, the confidence bands for the quantile functions and effects have correct joint coverage in large samples. An empirical application to gravity models of trade illustrates the applicability of the methods to network data.
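The projection step, inverting a band for the distribution function into a band for the quantile function via generalized inverses, can be sketched in a few lines. The band construction itself (debiased fixed effects distribution regression plus a simultaneous critical value) is taken as given here, and the function name is ours.

```python
import numpy as np

def quantile_band_from_cdf_band(ygrid, lower_cdf, upper_cdf, taus):
    """Project a confidence band [lower_cdf, upper_cdf] for a CDF, evaluated on an
    increasing grid ygrid, into a band for the quantile function: for each tau,
    Q_lower(tau) = inf{y : upper_cdf(y) >= tau} and
    Q_upper(tau) = inf{y : lower_cdf(y) >= tau} (generalized inverses)."""
    taus = np.atleast_1d(taus)
    i_lo = np.minimum(np.searchsorted(upper_cdf, taus, side="left"), len(ygrid) - 1)
    i_hi = np.minimum(np.searchsorted(lower_cdf, taus, side="left"), len(ygrid) - 1)
    return ygrid[i_lo], ygrid[i_hi]
```

Because the projection is monotone, joint coverage of the CDF band carries over to joint coverage of the resulting quantile band, which is the logic behind the construction described above.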
Datasets from field experiments with covariate-adaptive randomizations (CARs) usually contain extra baseline covariates in addition to the strata indicators. We propose to incorporate these extra covariates via auxiliary regressions in the estimation and inference of unconditional quantile treatment effects (QTEs) under CARs. We establish the consistency, limiting distribution, and validity of the multiplier bootstrap of the regression-adjusted QTE estimator. The auxiliary regression may be estimated parametrically, nonparametrically, or via regularization when the data are high-dimensional. Even when the auxiliary regression is misspecified, the proposed bootstrap inferential procedure still achieves the nominal rejection probability in the limit under the null. When the auxiliary regression is correctly specified, the regression-adjusted estimator achieves the minimum asymptotic variance. We also derive the optimal pseudo-true values for the potentially misspecified parametric model, namely those that minimize the asymptotic variance of the corresponding QTE estimator. We demonstrate the finite-sample performance of the new estimation and inferential methods using simulations and provide an empirical application to a well-known dataset in education.
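One common regression-adjusted construction, close in spirit to (though not identical with) the estimator described above, augments each arm's empirical CDF with a linear auxiliary regression of the outcome indicator on covariates and then inverts the adjusted CDFs. The following is a hypothetical numpy sketch under simple randomization, with all names ours.

```python
import numpy as np

def adjusted_cdf(Y, A, X, a, ygrid):
    """Regression-adjusted estimate of the CDF of the potential outcome in arm a:
    for each grid point y, fit a linear auxiliary regression of 1{Y <= y} on X
    within arm a, then combine it with the inverse-propensity term."""
    n = len(Y)
    mask = (A == a)
    pa = mask.mean()
    Xc = np.column_stack([np.ones(n), X])
    F = np.empty(len(ygrid))
    for j, y in enumerate(ygrid):
        d = (Y <= y).astype(float)
        b, *_ = np.linalg.lstsq(Xc[mask], d[mask], rcond=None)
        m = Xc @ b                       # auxiliary regression prediction
        F[j] = np.mean(m + mask / pa * (d - m))
    return np.clip(np.maximum.accumulate(F), 0.0, 1.0)  # monotonize

def qte(Y, A, X, taus, ygrid):
    """Unconditional QTE at quantile levels taus: difference of the inverted
    regression-adjusted CDFs of the two arms."""
    F1 = adjusted_cdf(Y, A, X, 1, ygrid)
    F0 = adjusted_cdf(Y, A, X, 0, ygrid)
    q1 = ygrid[np.minimum(np.searchsorted(F1, taus), len(ygrid) - 1)]
    q0 = ygrid[np.minimum(np.searchsorted(F0, taus), len(ygrid) - 1)]
    return q1 - q0
```

The adjustment term has mean zero whenever the randomization is valid, so the estimator remains consistent even if the linear auxiliary regression is misspecified, mirroring the robustness property stated in the abstract.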
Dynamic model averaging (DMA) combines the forecasts of a large number of dynamic linear models (DLMs) to predict the future value of a time series. The performance of DMA critically depends on the appropriate choice of two forgetting factors. The first of these controls the speed of adaptation of the coefficient vector of each DLM, while the second enables time variation in the model averaging stage. In this paper we develop a novel, adaptive dynamic model averaging (ADMA) methodology. The proposed methodology employs a stochastic optimisation algorithm that sequentially updates the forgetting factor of each DLM, and uses a state-of-the-art non-parametric model combination algorithm from the prediction with expert advice literature, which offers finite-time performance guarantees. An empirical application to quarterly UK house price data suggests that ADMA produces more accurate forecasts than the benchmark autoregressive model, as well as competing DMA specifications.
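The two forgetting factors enter the standard DMA recursions (in the style of Raftery et al.) as shown below. This sketch assumes a known observation variance and factors held fixed over time, whereas ADMA updates them adaptively; all names are ours.

```python
import numpy as np

def dma_forecast(y, Xlist, alpha=0.99, delta=0.95, var_obs=1.0):
    """One-step-ahead DMA forecasts from a set of DLMs (one regressor matrix per
    model in Xlist). alpha discounts each model's state covariance, controlling
    how fast its coefficient vector adapts; delta flattens the model
    probabilities, enabling time variation in the averaging stage."""
    J, n = len(Xlist), len(y)
    th = [np.zeros(X.shape[1]) for X in Xlist]   # state means
    P = [np.eye(X.shape[1]) for X in Xlist]      # state covariances
    prob = np.full(J, 1.0 / J)
    preds = np.zeros(n)
    for t in range(n):
        w = prob ** delta                        # model-probability forgetting
        w /= w.sum()
        f, lik = np.zeros(J), np.zeros(J)
        for j, X in enumerate(Xlist):
            x = X[t]
            R = P[j] / alpha                     # state-covariance forgetting
            f[j] = x @ th[j]
            Q = x @ R @ x + var_obs              # predictive variance
            e = y[t] - f[j]
            lik[j] = np.exp(-0.5 * e * e / Q) / np.sqrt(2 * np.pi * Q)
            K = R @ x / Q                        # Kalman gain
            th[j] = th[j] + K * e
            P[j] = R - np.outer(K, x @ R)
        preds[t] = w @ f                         # probability-weighted forecast
        prob = w * lik + 1e-300                  # guard against underflow
        prob /= prob.sum()
    return preds
```

Setting `alpha` and `delta` closer to one makes both the coefficients and the model weights more persistent; the ADMA methodology in the abstract replaces this manual choice with sequential updating.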