
Least Product Relative Error Estimation

Published by: Zhanfeng Wang
Publication date: 2013
Research field: Mathematical Statistics
Paper language: English





A least product relative error criterion is proposed for multiplicative regression models. It is invariant under scale transformations of the outcome and covariates. In addition, the objective function is smooth and convex, resulting in a simple and uniquely defined estimator of the regression parameter. It is shown that the estimator is asymptotically normal and that the simple plug-in variance estimator is valid. Simulation results confirm that the proposed method performs well. An application to body fat calculation is presented to illustrate the new method.
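The abstract does not write the criterion out, so the following is a minimal sketch under the usual multiplicative-model assumption Y_i = exp(X_i'beta) * eps_i: the product of the two relative errors |Y_i - exp(X_i'beta)|/Y_i and |Y_i - exp(X_i'beta)|/exp(X_i'beta) simplifies to the smooth, convex term Y_i*exp(-X_i'beta) + exp(X_i'beta)/Y_i - 2. The loss form, data generation, and optimizer choice below are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of least product relative error (LPRE) estimation for a
# multiplicative model Y_i = exp(X_i' beta) * eps_i (assumed form).
import numpy as np
from scipy.optimize import minimize

def lpre_loss(beta, X, y):
    # product of the two relative errors simplifies to a smooth, convex sum
    eta = X @ beta
    return np.sum(y * np.exp(-eta) + np.exp(eta) / y - 2.0)

def lpre_fit(X, y):
    beta0 = np.zeros(X.shape[1])
    return minimize(lpre_loss, beta0, args=(X, y), method="BFGS").x

# illustrative usage with simulated positive outcomes
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([0.5, -1.0, 0.3])
y = np.exp(X @ beta_true + rng.normal(scale=0.2, size=200))
print(lpre_fit(X, y))  # should be close to beta_true
```

Because the objective is convex in beta, any standard smooth optimizer recovers the unique minimizer, which is what makes the estimator simple and uniquely defined.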




Read also

A product relative error estimation method for the single index regression model is proposed as an alternative to absolute error methods such as least squares estimation and least absolute deviation estimation. It is scale invariant for the outcome and covariates in the model. Regression coefficients are estimated via a two-stage procedure, and their statistical properties such as consistency and normality are studied. Numerical studies, including simulation and a body fat example, show that the proposed method performs well.
Relative error approaches are preferable to absolute error ones, such as least squares and least absolute deviation, when scale invariance of the output variable is required, for example when analyzing stock and survival data. An h-relative error estimation method via the h-likelihood is developed to avoid heavy and intractable integration for a multiplicative regression model with a random effect. Statistical properties of the parameters and random effect in the model are studied. To estimate the parameters, we propose an h-relative error computation procedure. Numerical studies, including simulation and real examples, show that the proposed method performs well.
Ben Boukai, Yue Zhang, 2018
We consider a resampling scheme for parameter estimates in nonlinear regression models. We provide an estimation procedure which recycles, via random weighting, the relevant parameter estimates to construct consistent estimates of the sampling distribution of the various estimates. We establish the asymptotic normality of the resampled estimates and demonstrate the applicability of the recycling approach in a small simulation study and via an example.
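The abstract does not spell out the weighting scheme, so the sketch below illustrates one common random-weighting (weighted-bootstrap) variant for a nonlinear regression fit; the exponential weights, the mean function f, and all variable names are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of random-weighting resampling for a nonlinear regression model
# y = f(x, theta) + error; weights and mean function are assumptions.
import numpy as np
from scipy.optimize import least_squares

def f(theta, x):
    return theta[0] * np.exp(theta[1] * x)  # hypothetical mean function

def weighted_fit(x, y, w, theta0):
    # weighted least squares: minimize sum_i w_i * (y_i - f(theta, x_i))^2
    return least_squares(lambda th: np.sqrt(w) * (y - f(th, x)), theta0).x

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, size=100)
y = f(np.array([1.0, 0.8]), x) + rng.normal(scale=0.1, size=100)
theta_hat = weighted_fit(x, y, np.ones_like(x), np.array([0.5, 0.5]))

# "recycle" the fit: refit under random exponential weights to approximate
# the sampling distribution of theta_hat
draws = np.array([weighted_fit(x, y, rng.exponential(size=x.size), theta_hat)
                  for _ in range(500)])
print(theta_hat, draws.std(axis=0))  # point estimate and resampled std errors
```

Each resample reuses the original point estimate as the starting value, which is the "recycling" idea: no new data are drawn, only the weights change.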
Model uncertainty quantification is an essential component of effective data assimilation. Model errors associated with sub-grid scale processes are often represented through stochastic parameterizations of the unresolved process. Many existing stochastic parameterization schemes are only applicable when knowledge of the true sub-grid scale process or full observations of the coarse scale process are available, which is typically not the case in real applications. We present a methodology for estimating the statistics of sub-grid scale processes for the more realistic case that only partial observations of the coarse scale process are available. Model error realizations are estimated over a training period by minimizing their conditional sum of squared deviations given some informative covariates (e.g. the state of the system), constrained by available observations and assuming that the observation errors are smaller than the model errors. From these realizations a conditional probability distribution of additive model errors given these covariates is obtained, allowing for complex non-Gaussian error structures. Random draws from this density are then used in actual ensemble data assimilation experiments. We demonstrate the efficacy of the approach through numerical experiments with the multi-scale Lorenz 96 system using both small and large time scale separations between slow (coarse scale) and fast (fine scale) variables. The resulting error estimates and forecasts obtained with this new method are superior to those from two existing methods.
We consider parameter estimation of ordinary differential equation (ODE) models from noisy observations. For this problem, one conventional approach is to fit numerical solutions (e.g., Euler, Runge-Kutta) of ODEs to data. However, such a method does not account for the discretization error in numerical solutions and has limited estimation accuracy. In this study, we develop an estimation method that quantifies the discretization error based on data. The key idea is to model the discretization error as random variables and estimate their variance simultaneously with the ODE parameter. The proposed method has the form of iteratively reweighted least squares, where the discretization error variance is updated with the isotonic regression algorithm and the ODE parameter is updated by solving a weighted least squares problem using the adjoint system. Experimental results demonstrate that the proposed method attains robust estimation with at least comparable accuracy to the conventional method by successfully quantifying the reliability of the numerical solutions.
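A highly simplified sketch of the iteratively reweighted least squares idea described above is given below: the ODE parameter is refit by weighted least squares while a monotone (isotonic) estimate of the residual variance supplies the weights. The toy ODE dy/dt = -theta*y, the Euler solver, the monotonicity direction, and the use of finite-difference gradients instead of the adjoint system are all illustrative assumptions, not the paper's algorithm.

```python
# Simplified IRLS sketch: weighted least squares for the ODE parameter,
# isotonic regression for a monotone variance estimate of the residuals.
import numpy as np
from scipy.optimize import least_squares
from sklearn.isotonic import IsotonicRegression

def euler_solution(theta, t, y0=1.0):
    # coarse Euler integration of the toy ODE dy/dt = -theta * y
    y = np.empty_like(t)
    y[0] = y0
    for k in range(1, t.size):
        y[k] = y[k - 1] + (t[k] - t[k - 1]) * (-theta[0] * y[k - 1])
    return y

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 21)
data = np.exp(-1.5 * t) + rng.normal(scale=0.02, size=t.size)  # exact ODE + noise

theta, weights = np.array([1.0]), np.ones_like(t)
for _ in range(10):
    # (1) weighted least squares update of the ODE parameter
    theta = least_squares(
        lambda th: np.sqrt(weights) * (data - euler_solution(th, t)), theta).x
    # (2) isotonic update of the error variance over time, then new weights
    var = IsotonicRegression(y_min=1e-8).fit_transform(
        t, (data - euler_solution(theta, t)) ** 2)
    weights = 1.0 / var
print(theta)
```

The point of the reweighting is that observations where the (discretization plus noise) error is estimated to be large are down-weighted, so the parameter fit is not dragged by the least reliable parts of the numerical solution.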