Change Acceleration and Detection

Published by Yanglei Song
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English

A novel sequential change detection problem is proposed, in which the change should be not only detected but also accelerated. Specifically, it is assumed that the sequentially collected observations are responses to treatments selected in real time. The assigned treatments not only determine the pre-change and post-change distributions of the responses, but also influence when the change happens. The problem is to find a treatment assignment rule and a stopping rule that minimize the expected total number of observations subject to a user-specified bound on the false alarm probability. The optimal solution to this problem is obtained under a general Markovian change-point model. Moreover, an alternative procedure is proposed, whose applicability is not restricted to Markovian change-point models and whose design requires minimal computation. For a large class of change-point models, the proposed procedure is shown to achieve the optimal performance in an asymptotic sense. Finally, in two simulation studies its performance is found to be close to optimal, uniformly with respect to the error probability.
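The abstract specifies the procedure only at the level of a treatment assignment rule plus a stopping rule under a false-alarm budget. The following Python sketch illustrates that structure under hypothetical assumptions (Gaussian responses, a greedy assignment rule, and a log(1/alpha) CUSUM threshold); it is not the paper's optimal solution.

```python
import numpy as np
from scipy.stats import norm

def detect_with_treatments(stream, treatments, alpha=0.01):
    """Hedged sketch of change acceleration + detection.
    stream(t, a): response at time t under treatment a (user-supplied).
    treatments: {action: (mu_pre, mu_post, sigma)} -- hypothetical model."""
    threshold = np.log(1.0 / alpha)  # standard heuristic, not the paper's design
    # Greedy rule: always assign the treatment with the largest standardized
    # mean shift -- a crude stand-in for the optimal assignment rule.
    a = max(treatments,
            key=lambda k: abs(treatments[k][1] - treatments[k][0]) / treatments[k][2])
    mu0, mu1, sigma = treatments[a]
    cusum, t = 0.0, 0
    while True:
        t += 1
        x = stream(t, a)
        # Log-likelihood ratio of post-change vs. pre-change for this response,
        # accumulated as a reflected (CUSUM) random walk.
        cusum = max(0.0, cusum + norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma))
        if cusum > threshold:
            return t, a  # stopping time and the treatment that was assigned
```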

Read also

The aim of online monitoring is to issue an alarm as soon as there is significant evidence in the collected observations to suggest that the underlying data generating mechanism has changed. This work is concerned with open-end, nonparametric procedures that can be interpreted as statistical tests. The proposed monitoring schemes consist of computing the so-called retrospective CUSUM statistic (or minor variations thereof) after the arrival of each new observation. After proposing suitable threshold functions for the chosen detectors, the asymptotic validity of the procedures is investigated in the special case of monitoring for changes in the mean, both under the null hypothesis of stationarity and relevant alternatives. To carry out the sequential tests in practice, an approach based on an asymptotic regression model is used to estimate high quantiles of relevant limiting distributions. Monte Carlo experiments demonstrate the good finite-sample behavior of the proposed monitoring schemes and suggest that they are superior to existing competitors as long as changes do not occur at the very beginning of the monitoring. Extensions to statistics exhibiting an asymptotic mean-like behavior are briefly discussed. Finally, the application of the derived sequential change-point detection tests is succinctly illustrated on temperature anomaly data.
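As a rough illustration of the scheme described above, the sketch below recomputes a retrospective CUSUM statistic after each arrival and alarms at a threshold crossing. The constant threshold and the plain sample standard deviation are simplifying assumptions; the paper instead uses threshold functions and estimated high quantiles of the limiting distribution.

```python
import numpy as np

def retro_cusum(x):
    """Max absolute deviation of the partial sums from their expected path."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    return np.max(np.abs(s - k * s[-1] / n)) / np.sqrt(n)

def monitor(training, stream, threshold=3.0):
    """training: stationary learning sample; stream: incoming observations.
    Recomputes the detector after every arrival (illustrative threshold)."""
    sigma = np.std(training, ddof=1)  # a long-run variance estimator in practice
    x = list(training)
    for t, obs in enumerate(stream, start=1):
        x.append(obs)
        if retro_cusum(np.array(x)) / sigma > threshold:
            return t  # alarm after the t-th monitoring observation
    return None       # open-end: in practice monitoring simply continues
```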
Xiao Fang, David Siegmund (2020)
We study the maximum score statistic to detect and estimate local signals in the form of change-points in the level, slope, or other property of a sequence of observations, and to segment the sequence when there appear to be multiple changes. We find that when observations are serially dependent, the change-points can lead to upwardly biased estimates of autocorrelations, resulting in a sometimes serious loss of power. Examples involving temperature variations, the level of atmospheric greenhouse gases, suicide rates and daily incidence of COVID-19 illustrate the general theory.
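The claimed bias is easy to reproduce numerically. The toy check below (an illustration, not the paper's analysis) shows that a level shift in otherwise independent noise inflates the estimated lag-1 autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
noise = rng.normal(size=n)
shifted = noise.copy()
shifted[n // 2:] += 2.0  # change-point in the level at the midpoint

def lag1(y):
    """Sample lag-1 autocorrelation."""
    y = y - y.mean()
    return float(y[:-1] @ y[1:] / (y @ y))

print(lag1(noise))    # near 0: no serial dependence
print(lag1(shifted))  # strongly positive: the unmodeled shift mimics autocorrelation
```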
Shanshan Cao, Yao Xie (2016)
From a sequence of similarity networks, with edges representing certain similarity measures between nodes, we are interested in detecting a change-point that alters the statistical properties of the networks. After the change, a subset of anomalous nodes emerges that compares dissimilarly with the normal nodes. We study a simple sequential change detection procedure based on node-wise average similarity measures and analyze its theoretical properties. Simulation and real-data examples demonstrate that such a simple stopping procedure has reasonably good performance. We further discuss faulty sensor isolation (estimating the anomalous nodes) using community detection.
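A minimal sketch of a node-wise average-similarity detector in the spirit of this description; the baseline normalization, the CUSUM-style update, and the threshold are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def node_means(W):
    """Average similarity of each node to all the other nodes."""
    n = W.shape[0]
    return (W.sum(axis=1) - np.diag(W)) / (n - 1)

def detect(snapshots, n_train=20, threshold=4.0):
    """snapshots: iterable of similarity matrices over time.
    Per-node baselines come from the first n_train snapshots."""
    snapshots = iter(snapshots)
    train = np.array([node_means(next(snapshots)) for _ in range(n_train)])
    mu, sd = train.mean(axis=0), train.std(axis=0, ddof=1)
    stat = np.zeros_like(mu)
    for t, W in enumerate(snapshots, start=n_train + 1):
        z = (mu - node_means(W)) / sd            # per-node drop in similarity
        stat = np.maximum(0.0, stat + z - 0.5)   # reflected CUSUM-type update
        if stat.max() > threshold:
            return t, np.flatnonzero(stat > threshold)  # alarm time, suspect nodes
    return None, None
```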
Xinran Li, Peng Ding (2019)
Randomization is a basis for the statistical inference of treatment effects without strong assumptions on the outcome-generating process. Appropriately using covariates further yields more precise estimators in randomized experiments. R. A. Fisher suggested blocking on discrete covariates in the design stage or conducting analysis of covariance (ANCOVA) in the analysis stage. We can embed blocking into a wider class of experimental design called rerandomization, and extend the classical ANCOVA to more general regression adjustment. Rerandomization trumps complete randomization in the design stage, and regression adjustment trumps the simple difference-in-means estimator in the analysis stage. It is then intuitive to use both rerandomization and regression adjustment. Under the randomization-inference framework, we establish a unified theory allowing the designer and analyzer to have access to different sets of covariates. We find that asymptotically (a) for any given estimator with or without regression adjustment, rerandomization never hurts either the sampling precision or the estimated precision, and (b) for any given design with or without rerandomization, our regression-adjusted estimator never hurts the estimated precision. Therefore, combining rerandomization and regression adjustment yields better coverage properties and thus improves statistical inference. To theoretically quantify these statements, we discuss optimal regression-adjusted estimators in terms of the sampling precision and the estimated precision, and then measure the additional gains of the designer and the analyzer. We finally suggest using rerandomization in the design and regression adjustment in the analysis followed by the Huber-White robust standard error.
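The final recommendation can be sketched in a few lines. Below is an illustrative implementation, under simplifying assumptions, of regression adjustment with treatment-covariate interactions (Lin's estimator) together with a Huber-White (HC0) robust standard error; the rerandomization step in the design stage is not shown.

```python
import numpy as np

def adjusted_ate(y, z, x):
    """y: outcomes; z: 0/1 treatment indicator; x: covariate matrix (n x p).
    Returns the regression-adjusted ATE estimate and its robust SE."""
    xc = x - x.mean(axis=0)            # center covariates
    zc = z - z.mean()                  # center treatment indicator
    # With centered z, centered x, and interactions, the coefficient on zc
    # is the interacted regression-adjusted (Lin) estimator of the ATE.
    X = np.column_stack([np.ones_like(y), zc, xc, zc[:, None] * xc])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * resid[:, None] ** 2)   # Huber-White "sandwich" filling
    V = bread @ meat @ bread
    return beta[1], float(np.sqrt(V[1, 1]))
```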
The inferential model (IM) framework produces data-dependent, non-additive degrees of belief about the unknown parameter that are provably valid. The validity property guarantees, among other things, that inference procedures derived from the IM control frequentist error rates at the nominal level. A technical complication is that IMs are built on a relatively unfamiliar theory of random sets. Here we develop an alternative, and practically equivalent, formulation based on a theory of possibility measures, which is simpler in many respects. This new perspective also sheds light on the relationship between IMs and Fisher's fiducial inference, as well as on the construction of optimal IMs.
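As a concrete toy instance of a possibility contour (an illustrative construction based on a two-sided p-value, not the authors' formulation), consider the normal mean: the contour peaks at 1 at the sample mean, and the level set {theta : pi(theta) > alpha} is a valid 1 - alpha region.

```python
import numpy as np
from scipy.stats import norm

def contour(theta, xbar, sigma, n):
    """Possibility contour: one minus the two-sided p-value at theta."""
    z = (xbar - theta) * np.sqrt(n) / sigma
    return 1.0 - np.abs(2.0 * norm.cdf(z) - 1.0)

x = np.random.default_rng(1).normal(0.3, 1.0, size=50)
grid = np.linspace(-1.0, 1.0, 401)
pi = contour(grid, x.mean(), 1.0, x.size)
inside = grid[pi > 0.05]
print(inside.min(), inside.max())  # endpoints of a 95% possibility region
```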