
On rank estimators in increasing dimensions

Posted by Fang Han
Publication date: 2019
Research field: Economics
Paper language: English





The family of rank estimators, including Han's maximum rank correlation (Han, 1987) as a notable example, has been widely exploited in studying regression problems. For these estimators, although the linear index is introduced to alleviate the impact of dimensionality, the effect of large dimension on inference is rarely studied. This paper fills this gap by studying the statistical properties of a larger family of M-estimators, whose objective functions are formulated as U-processes and may be discontinuous, in an increasing-dimension setup where the number of parameters, $p_{n}$, in the model is allowed to increase with the sample size, $n$. First, we find that in estimation, as long as $p_{n}/n \rightarrow 0$, a $(p_{n}/n)^{1/2}$ rate of convergence is obtainable. Second, we establish Bahadur-type bounds and study the validity of the normal approximation, which we find often requires a much stronger scaling requirement than $p_{n}^{2}/n \rightarrow 0$. Third, we state conditions under which the numerical derivative estimator of the asymptotic covariance matrix is consistent, and show that the step size in implementing the covariance estimator has to be adjusted with respect to $p_{n}$. All theoretical results are further backed up by simulation studies.
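To make the setting concrete, below is a minimal sketch of Han's (1987) maximum rank correlation (MRC) estimator, the abstract's running example. The helper names (`mrc_objective`, `fit_mrc`), the first-coordinate normalization, and the use of Nelder-Mead are illustrative choices, not the paper's implementation; the point is that the objective is a discontinuous second-order U-process, so a derivative-free optimizer is natural.

```python
# Minimal sketch of Han's (1987) maximum rank correlation estimator.
# Names and optimizer choice are illustrative, not from the paper.
import numpy as np
from scipy.optimize import minimize

def mrc_objective(beta, X, y):
    """U-process objective: fraction of concordant pairs
    1{y_i > y_j} 1{x_i'beta > x_j'beta}, i != j."""
    idx = X @ beta
    conc = (y[:, None] > y[None, :]) & (idx[:, None] > idx[None, :])
    n = len(y)
    return conc.sum() / (n * (n - 1))

def fit_mrc(X, y):
    """Maximize the MRC objective under the usual scale normalization:
    the first coefficient is fixed at 1, the rest are free."""
    p = X.shape[1]
    def neg_obj(theta):
        return -mrc_objective(np.concatenate(([1.0], theta)), X, y)
    res = minimize(neg_obj, x0=np.zeros(p - 1), method="Nelder-Mead")
    return np.concatenate(([1.0], res.x))

# Toy monotone transformation model y = g(x'beta + eps), g = exp.
rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.standard_normal((n, p))
beta0 = np.array([1.0, 2.0, -1.0])
y = np.exp(X @ beta0 + rng.standard_normal(n))
print(fit_mrc(X, y))
```

The paper's numerical derivative estimator of the asymptotic covariance would differentiate a smoothed version of this objective numerically, with a step size that, per the abstract, must be tuned to $p_{n}$.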




Read also

This paper studies inference in linear models whose parameter of interest is a high-dimensional matrix. We focus on the case where the high-dimensional matrix parameter is well-approximated by a "spiked" low-rank matrix whose rank grows slowly compared to its dimensions and whose nonzero singular values diverge to infinity. We show that this framework covers a broad class of latent-variable models which can accommodate matrix completion problems, factor models, varying coefficient models, principal components analysis with missing data, and heterogeneous treatment effects. For inference, we propose a new "rotation-debiasing" method for product parameters initially estimated using nuclear norm penalization. We present general high-level results under which our procedure provides asymptotically normal estimators. We then present low-level conditions under which we verify the high-level conditions in a treatment effects example.
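As a rough illustration of the first stage mentioned above, the sketch below performs nuclear norm penalized estimation via singular value soft-thresholding on a toy spiked low-rank signal. The function name `svt` and the threshold level are our choices; the paper's rotation-debiasing step is not reproduced here.

```python
# Nuclear norm penalized estimation via singular value thresholding,
# the proximal operator of the nuclear norm. Illustrative sketch only.
import numpy as np

def svt(M, lam):
    """prox_{lam * nuclear norm}(M): soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Toy spiked rank-2 signal plus noise; SVT recovers the low-rank part.
rng = np.random.default_rng(1)
n1, n2, r = 60, 40, 2
L = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2)) * 5.0
Y = L + rng.standard_normal((n1, n2))
L_hat = svt(Y, lam=2.0 * np.sqrt(max(n1, n2)))
print(np.linalg.matrix_rank(L_hat),
      np.linalg.norm(L_hat - L) / np.linalg.norm(L))
```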
Robust methods, though ubiquitous in practice, are yet to be fully understood in the context of regularized estimation and high dimensions. Even simple questions become challenging very quickly. For example, classical statistical theory identifies equivalence between model-averaged and composite quantile estimation. However, little to nothing is known about such equivalence between methods that encourage sparsity. This paper provides a toolbox to further study robustness in these settings and focuses on prediction. In particular, we study optimally weighted model-averaged as well as composite $l_1$-regularized estimation. Optimal weights are determined by minimizing the asymptotic mean squared error. This approach incorporates the effects of regularization, without the assumption of perfect selection, as is often used in practice. Such weights are then optimal for prediction quality. Through an extensive simulation study, we show that no single method systematically outperforms others. We find, however, that model-averaged and composite quantile estimators often outperform least-squares methods, even in the case of Gaussian model noise. A real-data application demonstrates the methods' practical use through the reconstruction of compressed audio signals.
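A hedged sketch of one of the two schemes compared above: model-averaged $l_1$-regularized quantile estimation, here with uniform weights standing in for the paper's MSE-optimal weights (which require asymptotic-variance estimates). The composite variant would instead minimize a single pooled check loss with one shared slope, which needs a joint solver and is omitted.

```python
# Model-averaged l1-penalized quantile estimation: fit each quantile
# level separately, then average the slope vectors. Uniform weights
# here are a placeholder for the paper's MSE-optimal weights.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(2)
n, p = 300, 10
X = rng.standard_normal((n, p))
beta0 = np.r_[np.ones(3), np.zeros(p - 3)]   # sparse truth
y = X @ beta0 + rng.standard_normal(n)

taus = [0.25, 0.5, 0.75]
slopes = [QuantileRegressor(quantile=t, alpha=0.01).fit(X, y).coef_
          for t in taus]
beta_avg = np.average(slopes, axis=0,
                      weights=np.ones(len(taus)) / len(taus))
print(np.round(beta_avg, 2))
```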
Recently, Kabaila and Wijethunga assessed the performance of a confidence interval centred on a bootstrap smoothed estimator, with width proportional to an estimator of Efron's delta method approximation to the standard deviation of this estimator. They used a testbed situation consisting of two nested linear regression models, with error variance assumed known, and model selection using a preliminary hypothesis test. This assessment was in terms of coverage and scaled expected length, where the scaling is with respect to the expected length of the usual confidence interval with the same minimum coverage probability. They found that this confidence interval has scaled expected length that (a) has a maximum value that may be much greater than 1 and (b) is greater than a number slightly less than 1 when the simpler model is correct. We therefore ask the following question. For a confidence interval, centred on the bootstrap smoothed estimator, does there exist a formula for its data-based width such that, in this testbed situation, it has the desired minimum coverage and scaled expected length that (a) has a maximum value that is not too much larger than 1 and (b) is substantially less than 1 when the simpler model is correct? Using a recent decision-theoretic performance bound due to Kabaila and Kong, it is shown that the answer to this question is 'no' for a wide range of scenarios.
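For concreteness, the sketch below constructs the type of interval assessed above: centred on a bootstrap smoothed (bagged) pretest estimator, with width proportional to Efron's (2014) delta-method standard deviation estimate. The testbed here, a pretest between $\mu = 0$ and an unrestricted mean, is our simplification of Kabaila and Wijethunga's nested-regression setup.

```python
# CI centred on a bootstrap smoothed pretest estimator, with Efron's
# (2014) delta-method SD. Simplified testbed: pretest of mu = 0.
import numpy as np

def pretest_mean(y, crit=1.96):
    """Sample mean if |t| exceeds crit, else the simpler model mu = 0."""
    t = np.sqrt(len(y)) * y.mean() / y.std(ddof=1)
    return y.mean() if abs(t) > crit else 0.0

rng = np.random.default_rng(3)
y = rng.standard_normal(50) + 0.3
n, B = len(y), 2000
counts = np.zeros((B, n))    # N_bi: times obs i appears in rep b
stats = np.zeros(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)          # nonparametric bootstrap
    counts[b] = np.bincount(idx, minlength=n)
    stats[b] = pretest_mean(y[idx])

smoothed = stats.mean()                        # bootstrap smoothed estimator
# Efron's delta-method SD: sqrt(sum_i cov(N_i*, t*)^2).
cov_i = ((counts - counts.mean(0)) * (stats - smoothed)[:, None]).mean(0)
sd = np.sqrt((cov_i ** 2).sum())
print(f"CI: [{smoothed - 1.96*sd:.3f}, {smoothed + 1.96*sd:.3f}]")
```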
This paper considers inference for a function of a parameter vector in a partially identified model with many moment inequalities. This framework allows the number of moment conditions to grow with the sample size, possibly at exponential rates. Our main motivating application is subvector inference, i.e., inference on a single component of the partially identified parameter vector associated with a treatment effect or a policy variable of interest. Our inference method compares a MinMax test statistic (minimum over parameters satisfying $H_0$ and maximum over moment inequalities) against critical values that are based on bootstrap approximations or analytical bounds. We show that this method controls asymptotic size uniformly over a large class of data generating processes despite the partially identified, many-moment-inequality setting. The finite sample analysis allows us to obtain explicit rates of convergence on the size control. Our results are based on combining non-asymptotic approximations and new high-dimensional central limit theorems for the MinMax of the components of random matrices. Unlike the previous literature on functional inference in partially identified models, our results do not rely on weak convergence results based on Donsker class assumptions and, in fact, our test statistic may not even converge in distribution. Our bootstrap approximation requires the choice of a tuning parameter sequence that can avoid the excessive concentration of our test statistic. To this end, we propose an asymptotically valid data-driven method to select this tuning parameter sequence. This method generalizes the selection of tuning parameter sequences to problems outside the Donsker class assumptions and may also be of independent interest. Our procedures based on self-normalized moderate deviation bounds are relatively more conservative but easier to implement.
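The sketch below computes a MinMax statistic of the kind described above over a parameter grid, together with a simple multiplier-bootstrap critical value. The moment functions, grid, and bootstrap scheme are illustrative stand-ins; in particular, the paper's tuning-parameter selection and size-control analysis are not reflected here.

```python
# MinMax statistic: min over H0 parameters, max over standardized
# moment inequalities E[m_j(W, theta)] <= 0. Illustrative sketch.
import numpy as np

rng = np.random.default_rng(4)
n = 500
W = rng.standard_normal((n, 2)) + np.array([0.2, 0.4])

def moments(W, theta):
    """Two inequalities implying theta <= E[W1] and theta >= E[W2] - 1."""
    return np.column_stack([theta - W[:, 0], (W[:, 1] - 1.0) - theta])

def minmax_stat(W, theta_grid):
    vals = []
    for theta in theta_grid:
        m = moments(W, theta)
        t = np.sqrt(n) * m.mean(0) / m.std(0, ddof=1)
        vals.append(t.max())              # max over inequalities
    return min(vals)                      # min over H0 parameters

theta_grid = np.linspace(-1.0, 1.0, 41)   # hypothesized parameter set
stat = minmax_stat(W, theta_grid)

# Crude multiplier bootstrap of the same min-max functional.
B, crit = 500, []
for _ in range(B):
    e = rng.standard_normal(n)
    boot = []
    for theta in theta_grid:
        m = moments(W, theta)
        mc = m - m.mean(0)
        t = np.sqrt(n) * (e[:, None] * mc).mean(0) / m.std(0, ddof=1)
        boot.append(t.max())
    crit.append(min(boot))
print(stat, np.quantile(crit, 0.95))
```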
We consider batch size selection for a general class of multivariate batch means variance estimators, which are computationally viable for high-dimensional Markov chain Monte Carlo simulations. We derive the asymptotic mean squared error for this class of estimators. Further, we propose a parametric technique for estimating optimal batch sizes and discuss practical issues regarding the estimating process. Vector auto-regressive, Bayesian logistic regression, and Bayesian dynamic space-time examples illustrate the quality of the estimation procedure, where the proposed optimal batch sizes outperform current batch size selection methods.
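A minimal sketch of the estimator class discussed above: the multivariate batch means estimator of the asymptotic covariance of MCMC averages. The square-root batch size rule `b = int(n ** 0.5)` is a common default used here for illustration, not the paper's optimal batch size.

```python
# Multivariate batch means: split a length-n chain into a = n // b
# non-overlapping batches of size b and use the scaled sample
# covariance of the batch means.
import numpy as np

def multivariate_batch_means(chain, b):
    """chain: (n, p) array of MCMC draws; b: batch size."""
    n, p = chain.shape
    a = n // b                                  # number of batches
    means = chain[: a * b].reshape(a, b, p).mean(axis=1)
    centered = means - chain[: a * b].mean(axis=0)
    return b * (centered.T @ centered) / (a - 1)

# Toy p-dimensional AR(1) chain; its true asymptotic covariance is
# (1 + rho) / (1 - rho) * I, about 5.67 * I for rho = 0.7.
rng = np.random.default_rng(5)
n, p, rho = 20_000, 3, 0.7
chain = np.zeros((n, p))
for t in range(1, n):
    chain[t] = rho * chain[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(p)
Sigma_hat = multivariate_batch_means(chain, b=int(n ** 0.5))
print(np.round(Sigma_hat, 2))
```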