
Consistency of $\ell_1$ Penalized Negative Binomial Regressions

Published by Fang Xie
Publication date: 2020
Research field: Mathematical statistics
Paper language: English





We prove the consistency of the $\ell_1$ penalized negative binomial regression (NBR). A real-data application to German health care demand shows that the $\ell_1$ penalized NBR produces a more concise yet more accurate model compared with the classical NBR.
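The abstract does not include code, but the estimator it studies can be sketched concretely. Below is a minimal NumPy illustration of $\ell_1$ penalized NB2 regression fitted by proximal gradient descent (ISTA); the fixed dispersion parameter `theta`, the step size, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nb_nll(beta, X, y, theta):
    """NB2 negative log-likelihood with fixed dispersion theta
    (terms constant in beta are dropped)."""
    mu = np.exp(X @ beta)
    return np.sum((theta + y) * np.log(theta + mu) - y * np.log(mu))

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_nbr(X, y, lam, theta=1.0, step=1e-3, n_iter=2000):
    """ISTA for argmin_beta  nb_nll(beta) + lam * ||beta||_1."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        # gradient of the NB2 NLL: sum_i x_i * theta*(mu_i - y_i)/(theta + mu_i)
        grad = X.T @ (theta * (mu - y) / (theta + mu))
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```

With a suitably small step size each iteration decreases the penalized objective, and a large `lam` drives weak coefficients exactly to zero, which is the sparsity mechanism the consistency result concerns.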




Read also

282 - Jian Huang, Huiliang Xie, 2007
We study the asymptotic properties of the SCAD-penalized least squares estimator in sparse, high-dimensional, linear regression models when the number of covariates may increase with the sample size. We are particularly interested in the use of this estimator for simultaneous variable selection and estimation. We show that under appropriate conditions, the SCAD-penalized least squares estimator is consistent for variable selection and that the estimators of nonzero coefficients have the same asymptotic distribution as they would have if the zero coefficients were known in advance. Simulation studies indicate that this estimator performs well in terms of variable selection and estimation.
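For reference, the SCAD penalty this abstract refers to (Fan and Li, 2001) has a standard closed form: linear near zero like the lasso, quadratic in a transition region, then constant, so large coefficients are not shrunk. A small sketch, using the conventional default $a = 3.7$:

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001), applied elementwise to |t|.

    Pieces: lam*|t| for |t| <= lam; a quadratic bridge for
    lam < |t| <= a*lam; the constant lam^2*(a+1)/2 beyond a*lam.
    """
    t = np.abs(np.asarray(t, dtype=float))
    linear = lam * t
    quad = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    const = lam ** 2 * (a + 1) / 2
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, const))
```

The flat tail is what yields the oracle property cited in the abstract: unlike the $\ell_1$ penalty, SCAD imposes no bias on large nonzero coefficients.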
Let $b(x)$ be the probability that a sum of independent Bernoulli random variables with parameters $p_1, p_2, p_3, \ldots \in [0,1)$ equals $x$, where $\lambda := p_1 + p_2 + p_3 + \cdots$ is finite. We prove two inequalities for the maximal ratio $b(x)/\pi_\lambda(x)$, where $\pi_\lambda$ is the weight function of the Poisson distribution with parameter $\lambda$.
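The two pmfs in this abstract are easy to compute exactly for finitely many $p_i$: $b$ is the Poisson-binomial pmf, obtained by sequential convolution, and $\pi_\lambda$ is the Poisson pmf. A small sketch of the ratio $b(x)/\pi_\lambda(x)$ being bounded:

```python
import math

def bernoulli_sum_pmf(ps):
    """pmf b(x) of a sum of independent Bernoulli(p_i) variables
    (the Poisson-binomial distribution), by sequential convolution."""
    b = [1.0]  # pmf of the empty sum: point mass at 0
    for p in ps:
        new = [0.0] * (len(b) + 1)
        for x, bx in enumerate(b):
            new[x] += (1 - p) * bx   # this Bernoulli equals 0
            new[x + 1] += p * bx     # this Bernoulli equals 1
        b = new
    return b

def poisson_pmf(x, lam):
    """Poisson weight function pi_lambda(x)."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

ps = [0.2, 0.5, 0.3]              # illustrative parameters
lam = sum(ps)                     # matching Poisson parameter
b = bernoulli_sum_pmf(ps)
ratios = [b[x] / poisson_pmf(x, lam) for x in range(len(b))]
```

Since $b$ has finite support while $\pi_\lambda$ spreads mass over all of $\mathbb{N}$, the maximal ratio always exceeds 1; the paper's contribution is sharp upper bounds on it.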
We prove uniform consistency of Random Survival Forests (RSF), a newly introduced forest ensemble learner for analysis of right-censored survival data. Consistency is proven under general splitting rules, bootstrapping, and random selection of variables--that is, under true implementation of the methodology. A key assumption made is that all variables are factors. Although this assumes that the feature space has finite cardinality, in practice the space can be extremely large--indeed, current computational procedures do not properly deal with this setting. An indirect consequence of this work is the introduction of new computational methodology for dealing with factors with an unlimited number of labels.
This paper proves the asymptotic normality of a statistic for detecting the existence of heteroscedasticity in linear regression models, without assuming randomness of covariates, when the sample size $n$ tends to infinity and the number of covariates $p$ is either fixed or tends to infinity. Moreover, our approach indicates that its asymptotic normality holds even without homoscedasticity.
Two existing approaches to functional principal components analysis (FPCA) are due to Rice and Silverman (1991) and Silverman (1996), both based on maximizing variance but introducing penalization in different ways. In this article we propose an alternative approach to FPCA using penalized rank one approximation to the data matrix. Our contributions are four-fold: (1) by considering invariance under scale transformation of the measurements, the new formulation sheds light on how regularization should be performed for FPCA and suggests an efficient power algorithm for computation; (2) it naturally incorporates spline smoothing of discretized functional data; (3) the connection with smoothing splines also facilitates construction of cross-validation or generalized cross-validation criteria for smoothing parameter selection that allows efficient computation; (4) different smoothing parameters are permitted for different FPCs. The methodology is illustrated with a real data example and a simulation.
