
Multiscale inference about a density

Posted by Lutz Duembgen
Publication date: 2008
Research field: Mathematical Statistics
Paper language: English





We introduce a multiscale test statistic based on local order statistics and spacings that provides simultaneous confidence statements for the existence and location of local increases and decreases of a density or a failure rate. The procedure provides guaranteed finite-sample significance levels, is easy to implement and possesses certain asymptotic optimality and adaptivity properties.
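The abstract does not spell out the construction, but the core idea of a local test based on order statistics and spacings can be sketched as follows. The window choice, the function beta(u) = 2u - 1 and the normalization below are illustrative assumptions, not the paper's exact statistic:

```python
import numpy as np

def local_sign_statistic(x_sorted, j, k):
    """Illustrative local statistic on the order statistics X_(j) < ... < X_(k):
    rescale the interior points to (0, 1) and sum beta(u) = 2u - 1.
    If the density is (locally) constant on the window, the rescaled points are
    approximately uniform, so the standardized sum is roughly N(0, 1); large
    positive values hint at a local increase, large negative values at a decrease."""
    inner = x_sorted[j + 1:k]                        # interior order statistics
    u = (inner - x_sorted[j]) / (x_sorted[k] - x_sorted[j])
    m = len(inner)
    return np.sqrt(3.0 / m) * np.sum(2.0 * u - 1.0)  # mean 0, variance ~1 under uniformity

# toy usage: scan non-overlapping windows of a fixed width over a simulated sample
rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=500))
width = 50
stats = [local_sign_statistic(x, j, j + width) for j in range(0, len(x) - width, width)]
print(np.round(stats, 2))
```

In a genuinely multiscale procedure such statistics would be evaluated over all window locations and widths and compared against a single simultaneous critical value, which is what yields confidence statements with guaranteed finite-sample significance level.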




Read also

C. L. Winter, D. Nychka (2008)
Given a collection of computational models that all estimate values of the same natural process, we compare the performance of the average of the collection to the individual member whose estimates are nearest a given set of observations. Performance is the ability of a model, or average, to reproduce a sequence of observations of the process. We identify a condition that determines if a single model performs better than the average. That result also yields a necessary condition for when the average performs better than any individual model. We also give sharp bounds for the performance of the average on a given interval. Since the observation interval is fixed, performance is evaluated in a vector space, and we can add intuition to our results by explaining them geometrically. We conclude with some comments on directions statistical tests of performance might take.
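As a toy numerical illustration of the comparison being made (synthetic data and a hypothetical model ensemble), one can place the individual estimates and their average in the same vector space over a fixed observation interval and measure Euclidean distance to the observations:

```python
import numpy as np

# Hypothetical setup: each row of `models` holds one model's estimates at the
# observation times, `obs` holds the observations on the same fixed interval.
rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0, 2 * np.pi, 100))
models = obs + rng.normal(scale=0.3, size=(5, 100))   # five imperfect models

def distance(est, obs):
    """Euclidean distance on the fixed observation interval (the vector-space view)."""
    return np.linalg.norm(est - obs)

best_single = min(distance(m, obs) for m in models)
ensemble_avg = distance(models.mean(axis=0), obs)
print(f"nearest individual model: {best_single:.3f}")
print(f"ensemble average:         {ensemble_avg:.3f}")
```

Whether the average beats the nearest individual member depends on how the model error vectors are arranged around the observation vector, which is the geometric picture the abstract alludes to.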
Inspired by sample splitting and the reusable holdout introduced in the field of differential privacy, we consider selective inference with a randomized response. We discuss two major advantages of using a randomized response for model selection. First, the selectively valid tests are more powerful after randomized selection. Second, it allows consistent estimation and weak convergence of selective inference procedures. Under independent sampling, we prove a selective (or privatized) central limit theorem that transfers procedures valid under asymptotic normality without selection to their corresponding selective counterparts. This allows selective inference in nonparametric settings. Finally, we propose a framework of inference after combining multiple randomized selection procedures. We focus on the classical asymptotic setting, leaving the interesting high-dimensional asymptotic questions for future work.
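A minimal sketch of the sample-splitting baseline that motivates this line of work (simulated data; this is not the randomized-response procedure itself): spend one half of the data on selection, then test the selected quantity on the untouched half, so the test remains valid after selection.

```python
import numpy as np
from scipy import stats

# Sample splitting as a baseline for selective inference (illustrative only):
# half the data is used to select a coordinate, the other half gives a valid test.
rng = np.random.default_rng(4)
data = rng.normal(loc=[0.0, 0.0, 0.5], size=(200, 3))   # only the third mean is nonzero

select, infer = data[:100], data[100:]
chosen = int(np.argmax(select.mean(axis=0)))             # model selection step
t_stat, p_value = stats.ttest_1samp(infer[:, chosen], popmean=0.0)
print(f"selected coordinate {chosen}, p-value {p_value:.3f}")
```

The abstract's point is that randomizing the selection step, instead of discarding half the data, can retain validity while giving more powerful selective tests.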
In many applications it is desirable to infer coarse-grained models from observational data. The observed process often corresponds only to a few selected degrees of freedom of a high-dimensional dynamical system with multiple time scales. In this work we consider the inference problem of identifying an appropriate coarse-grained model from a single time series of a multiscale system. It is known that estimators such as the maximum likelihood estimator or the quadratic variation of the path estimator can be strongly biased in this setting. Here we present a novel parametric inference methodology for problems with linear parameter dependency that does not suffer from this drawback. Furthermore, we demonstrate through a wide spectrum of examples that our methodology can be used to derive appropriate coarse-grained models from time series of partial observations of a multiscale system in an effective and systematic fashion.
We consider the problem of statistical inference for the effective dynamics of multiscale diffusion processes with (at least) two widely separated characteristic time scales. More precisely, we seek to determine parameters in the effective equation describing the dynamics on the longer diffusive time scale, i.e. in a homogenization framework. We examine the case where both the drift and the diffusion coefficients in the effective dynamics are space-dependent and depend on multiple unknown parameters. It is known that classical estimators, such as Maximum Likelihood and Quadratic Variation of the Path Estimators, fail to obtain reasonable estimates for parameters in the effective dynamics when based on observations of the underlying multiscale diffusion. We propose a novel algorithm for estimating both the drift and diffusion coefficients in the effective dynamics based on a semi-parametric framework. We demonstrate by means of extensive numerical simulations of a number of selected examples that the algorithm performs well when applied to data from a multiscale diffusion. These examples also illustrate that the algorithm can be used effectively to obtain accurate and unbiased estimates.
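The failure of the quadratic-variation estimator mentioned in the last two abstracts can be seen in a toy homogenization example. The model, its parameters, and the subsampling remedy below are illustrative only and are not the proposed semi-parametric algorithm:

```python
import numpy as np

# Toy multiscale model: dX = (Y/eps) dt, dY = -(Y/eps^2) dt + (sqrt(2)/eps) dW,
# whose homogenized limit is a Brownian motion with effective diffusion D = 1.
rng = np.random.default_rng(2)
dt, n, eps = 1e-4, 200_000, 0.05
noise = rng.normal(size=n) * np.sqrt(dt)
x, y, path = 0.0, 0.0, np.empty(n)
for i in range(n):
    y += -(y / eps**2) * dt + (np.sqrt(2.0) / eps) * noise[i]
    x += (y / eps) * dt
    path[i] = x

def qv_diffusion(path, dt, stride=1):
    """Estimate D from squared increments sampled every stride*dt."""
    inc = np.diff(path[::stride])
    return np.sum(inc**2) / (2.0 * len(inc) * stride * dt)

print("finest scale:", qv_diffusion(path, dt, stride=1))    # strongly biased toward 0
print("subsampled  :", qv_diffusion(path, dt, stride=500))  # close to D = 1
```

In this toy model the slow path is smooth at the finest sampling scale, so the quadratic variation there drives the estimate toward zero, while subsampling at an intermediate scale sees the effective diffusion; the papers above go further and handle space-dependent, multi-parameter drift and diffusion coefficients.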
Selective inference is a recent research topic that tries to perform valid inference after using the data to select a reasonable statistical model. We propose MAGIC, a new method for selective inference that is general, powerful and tractable. MAGIC is a method for selective inference after solving a convex optimization problem with smooth loss and $\ell_1$ penalty. Randomization is incorporated into the optimization problem to boost statistical power. Through reparametrization, MAGIC reduces the problem into a sampling problem with simple constraints. MAGIC applies to many $\ell_1$ penalized optimization problems, including the Lasso, logistic Lasso and neighborhood selection in graphical models, all of which we consider in this paper.
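To make the randomization idea concrete, here is a small sketch of an $\ell_1$-penalized selection problem with a random linear perturbation added to the objective, in the easy orthonormal-design case where the solution reduces to a closed-form soft-thresholding step. Everything here is a hypothetical illustration, not the MAGIC procedure:

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form solution of min_b 0.5*(z - b)^2 + lam*|b| applied elementwise."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Randomized l1 selection (illustrative): minimize
#   0.5 * ||y - X b||^2 + lam * ||b||_1 - omega^T b,   omega random.
# For orthonormal X this is soft-thresholding of X^T y + omega.
rng = np.random.default_rng(3)
n, p, lam = 100, 10, 2.0
X, _ = np.linalg.qr(rng.normal(size=(n, p)))       # orthonormal design columns
beta = np.zeros(p); beta[:2] = 3.0                 # two truly active coordinates
y = X @ beta + rng.normal(size=n)
omega = rng.normal(scale=1.0, size=p)              # randomization term
beta_hat = soft_threshold(X.T @ y + omega, lam)
selected = np.flatnonzero(beta_hat)
print("selected coordinates:", selected)
```

Inference conditional on the selected set must then account for the randomization omega, which is the part the abstract says MAGIC turns into a tractable constrained sampling problem after reparametrization.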