
Law of the Iterated Logarithm and Model Selection Consistency for GLMs with Independent and Dependent Responses

Posted by Huiming Zhang
Publication date: 2019
Language: English





We study the law of the iterated logarithm (LIL) for the maximum likelihood estimation of the parameters (as a convex optimization problem) in generalized linear models with independent or weakly dependent ($\rho$-mixing, $m$-dependent) responses under mild conditions. The LIL is useful for deriving asymptotic bounds for the discrepancy between the empirical process of the log-likelihood function and the true log-likelihood. As an application of the LIL, the strong consistency of some penalized-likelihood-based model selection criteria can be shown. Under some regularity conditions, the model selection criterion selects the simplest correct model almost surely when the penalty term increases with the model dimension and has an order higher than $O(\log\log n)$ but lower than $O(n)$. Simulation studies are implemented to verify the selection consistency of BIC.
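To make the model-selection claim concrete, here is a minimal simulation sketch (not the paper's code; the helper `select_model`, the candidate sets, and the data-generating design are illustrative assumptions). It compares nested logistic-regression models by a penalized criterion of the form $-2\,\mathrm{loglik} + \lambda_n \cdot \mathrm{dim}$; BIC's penalty $\lambda_n = \log n$ lies between the $O(\log\log n)$ and $O(n)$ orders discussed above, while an $O(1)$ penalty such as AIC's does not.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def select_model(y, X, candidate_sets, penalty):
    """Pick the candidate covariate set minimizing -2 * loglik + penalty * dimension."""
    best_crit, best_set = np.inf, None
    for S in candidate_sets:
        res = sm.GLM(y, sm.add_constant(X[:, S]), family=sm.families.Binomial()).fit()
        crit = -2.0 * res.llf + penalty * (len(S) + 1)   # +1 for the intercept
        if crit < best_crit:
            best_crit, best_set = crit, S
    return best_set

# True model: logistic regression depending only on the first two of four covariates
n = 2000
X = rng.standard_normal((n, 4))
eta = 0.5 + 1.0 * X[:, 0] - 1.5 * X[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

candidates = [(0,), (0, 1), (0, 1, 2), (0, 1, 2, 3)]
print("BIC choice:", select_model(y, X, candidates, penalty=np.log(n)))  # penalty of order log n
print("AIC choice:", select_model(y, X, candidates, penalty=2.0))        # O(1) penalty, may overfit
```

Over repeated runs with growing $n$, the BIC choice should concentrate on the true covariate set $(0, 1)$, whereas the $O(1)$ penalty can keep selecting redundant covariates.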




Read also

Lutz Duembgen, Jon A. Wellner, 2015
In this note we prove the following law of the iterated logarithm for the Grenander estimator of a monotone decreasing density: if $f(t_0) > 0$, $f'(t_0) < 0$, and $f'$ is continuous in a neighborhood of $t_0$, then \begin{eqnarray*} \limsup_{n\rightarrow\infty} \left( \frac{n}{2\log\log n} \right)^{1/3} \left( \widehat{f}_n(t_0) - f(t_0) \right) = \left| f(t_0) f'(t_0)/2 \right|^{1/3} 2M \end{eqnarray*} almost surely, where $M \equiv \sup_{g \in \mathcal{G}} T_g = (3/4)^{1/3}$ and $T_g \equiv \operatorname{argmax}_u \{ g(u) - u^2 \}$; here $\mathcal{G}$ is the two-sided Strassen limit set on $\mathbb{R}$. The proof relies on laws of the iterated logarithm for local empirical processes, Groeneboom's switching relation, and properties of Strassen's limit set analogous to distributional properties of Brownian motion.
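As an aside, the Grenander estimator appearing in this statement is the left derivative of the least concave majorant of the empirical CDF, which can be computed as a weighted decreasing isotonic regression of the empirical slopes. The sketch below is not from the note; the function `grenander`, the exponential example, and the assumption of distinct observations are illustrative. It also prints the cube-root LIL scaling at a fixed point.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def grenander(sample):
    """Grenander estimator of a decreasing density: left derivative of the least
    concave majorant of the empirical CDF, computed here as a weighted decreasing
    isotonic regression of the empirical slopes (distinct observations assumed)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    gaps = np.diff(np.concatenate(([0.0], x)))   # X_(i) - X_(i-1), with X_(0) = 0
    slopes = (1.0 / n) / gaps                    # slopes of the empirical CDF
    iso = IsotonicRegression(increasing=False)
    fhat = iso.fit_transform(np.arange(n), slopes, sample_weight=gaps)
    return x, fhat                               # fhat[i] is the estimate on (X_(i-1), X_(i)]

# Illustration of the cube-root LIL scaling at a fixed point t0
rng = np.random.default_rng(0)
n = 100_000
x, fhat = grenander(rng.exponential(size=n))     # true density f(t) = exp(-t), decreasing
t0 = 1.0
f_t0_hat = fhat[np.searchsorted(x, t0)]
scale = (n / (2.0 * np.log(np.log(n)))) ** (1.0 / 3.0)
print("scaled deviation at t0:", scale * (f_t0_hat - np.exp(-t0)))
```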
We present a general law of the iterated logarithm for stochastic processes on the open unit interval having subexponential tails in a locally uniform fashion. It applies to the standard Brownian bridge but also to suitably standardized empirical distribution functions. This leads to new goodness-of-fit tests and confidence bands which refine the procedures of Berk and Jones (1979) and Owen (1995). Roughly speaking, the high power and accuracy of the latter procedures in the tail regions of distributions are essentially preserved while gaining considerably in the central region.
We prove the consistency of the Power-Law Fit (PLFit) method proposed by Clauset et al. (2009) to estimate the power-law exponent in data coming from a distribution function with regularly varying tail. In the complex systems community, PLFit has emerged as the method of choice to estimate the power-law exponent. Yet, its mathematical properties are still poorly understood. The difficulty in PLFit is that it is a minimum-distance estimator. It first chooses a threshold that minimizes the Kolmogorov-Smirnov distance between the data points larger than the threshold and the Pareto tail, and then applies the Hill estimator to this restricted data. Since the number of order statistics used is random, the general theory of consistency of power-law exponents from extreme value theory does not apply. Our proof consists in first showing that the Hill estimator is consistent for general intermediate sequences for the number of order statistics used, even when that number is random. Here, we call a sequence intermediate when it grows to infinity while remaining much smaller than the sample size. The second, and most involved, step is to prove that the optimizer in PLFit is with high probability an intermediate sequence, unless the distribution has a Pareto tail above a certain value. For the latter special case, we give a separate proof.
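The following is a minimal sketch of the PLFit idea described above (not the reference implementation of Clauset et al.; the helper names and the Pareto example are illustrative assumptions): scan candidate thresholds, fit the tail exponent by the Hill/ML estimator for each, and keep the threshold minimizing the Kolmogorov-Smirnov distance between the empirical tail and the fitted Pareto.

```python
import numpy as np

def hill_estimator(tail, threshold):
    """ML (Hill-type) estimate of the continuous power-law exponent alpha above a threshold."""
    return 1.0 + len(tail) / np.sum(np.log(tail / threshold))

def plfit_continuous(data):
    """Minimum-KS-distance threshold selection followed by the Hill estimator."""
    x = np.sort(np.asarray(data, dtype=float))
    best = (np.inf, None, None)                       # (KS distance, threshold, alpha)
    for i in range(len(x) - 1):
        xmin, tail = x[i], x[i:]
        alpha = hill_estimator(tail, xmin)
        ecdf = np.arange(1, len(tail) + 1) / len(tail)       # empirical CDF of the tail
        pareto_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)    # fitted Pareto CDF
        ks = np.max(np.abs(ecdf - pareto_cdf))
        if ks < best[0]:
            best = (ks, xmin, alpha)
    return best

# Example: sample with survival function x^{-1.5}, i.e. density exponent alpha = 2.5
rng = np.random.default_rng(0)
sample = (1.0 - rng.random(2000)) ** (-1.0 / 1.5)
ks, xmin, alpha_hat = plfit_continuous(sample)
print(f"threshold={xmin:.3f}, alpha_hat={alpha_hat:.3f}")
```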
Tepmony Sim, 2021
The class of observation-driven models (ODMs) includes many models of non-linear time series which, in a fashion similar to, yet different from, hidden Markov models (HMMs), involve hidden variables. Interestingly, in contrast to most HMMs, ODMs enjoy likelihoods that can be computed exactly with computational complexity of the same order as the number of observations, making maximum likelihood estimation the privileged approach for statistical inference for these models. A celebrated example of general order ODMs is the GARCH$(p,q)$ model, for which ergodicity and inference have been studied extensively. However, little is known about more general models, in particular integer-valued ones such as the log-linear Poisson GARCH or the NBIN-GARCH of order $(p,q)$, for which most of the existing results seem restricted to the case $p=q=1$. Here we fill this gap and derive ergodicity conditions for general ODMs. The consistency and the asymptotic normality of the maximum likelihood estimator (MLE) can then be derived using the method already developed for first-order ODMs.
Chunlin Wang, 2008
In this paper, we study the asymptotic normality of the conditional maximum likelihood (ML) estimators for the truncated regression model and the Tobit model. We show that, under the general setting assumed in Hayashi (2000), the conjectures made there (see pages 516 and 520 of Hayashi (2000)) about the asymptotic normality of the conditional ML estimators for both models are true; namely, a sufficient condition is the nonsingularity of $\mathbf{x}_t\mathbf{x}_t'$.