
Limitations on intermittent forecasting

Posted by Gusztav Morvai
Publication date: 2007
Research field: Informatics Engineering
Paper language: English





Bailey showed that the general pointwise forecasting problem for stationary and ergodic time series has a negative solution. It is known, however, that for Markov chains the problem can be solved. Morvai showed that there is a sequence of stopping times $\{\lambda_n\}$ such that $P(X_{\lambda_n+1}=1\mid X_0,\dots,X_{\lambda_n})$ can be estimated from the samples $(X_0,\dots,X_{\lambda_n})$ in such a way that the difference between the conditional probability and the estimate vanishes along these stopping times for all stationary and ergodic binary time series. We show that it is not possible to estimate this conditional probability along a sequence of stopping times for all stationary and ergodic binary time series in a pointwise sense in such a way that, if the time series turns out to be a Markov chain, the predictor eventually predicts for all $n$.
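As a concrete illustration of the kind of intermittent prediction discussed here and in the related work below, the following sketch (not the construction analysed in the paper) estimates $P(X_{n+1}=1\mid X_0,\dots,X_n)$ for a binary sequence by the empirical frequency with which the current length-$k$ context was previously followed by a 1, and issues an estimate only once that context has been seen often enough; those times play the role of the stopping times. The function name `intermittent_forecast` and the parameters `k` and `min_count` are illustrative choices, not taken from the paper.

```python
import numpy as np

def intermittent_forecast(x, k=1, min_count=10):
    """Illustrative intermittent forecaster for a binary sequence x.

    Estimates P(X_{n+1}=1 | X_0,...,X_n) by the empirical frequency with
    which the current length-k context was followed by a 1, and only issues
    an estimate once that context has been seen at least min_count times.
    The times at which an estimate is issued play the role of stopping times.
    """
    predictions = []                              # pairs (n, estimate)
    for n in range(k - 1, len(x) - 1):
        context = tuple(x[n - k + 1:n + 1])       # last k observed symbols
        seen = ones = 0
        for t in range(k - 1, n):                 # earlier occurrences of the context
            if tuple(x[t - k + 1:t + 1]) == context:
                seen += 1
                ones += x[t + 1]                  # symbol that followed it
        if seen >= min_count:                     # enough evidence: predict now
            predictions.append((n, ones / seen))
    return predictions

# Toy example: a binary 2-state Markov chain, the case in which such
# estimates are known to converge.
rng = np.random.default_rng(0)
x, state = [], 0
for _ in range(2000):
    state = int(rng.choice(2, p=[0.8, 0.2] if state == 0 else [0.3, 0.7]))
    x.append(state)
print(intermittent_forecast(x, k=1, min_count=20)[-3:])
```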




Read also

G. Morvai, B. Weiss (2007)
Let $\{X_n\}_{n=0}^{\infty}$ be a stationary real-valued time series with unknown distribution. Our goal is to estimate the conditional expectation of $X_{n+1}$ based on the observations $X_i$, $0\le i\le n$, in a strongly consistent way. Bailey and Ryabko proved that this is not possible even for ergodic binary time series if one estimates at all values of $n$. We propose a very simple algorithm which will make prediction infinitely often at carefully selected stopping times chosen by our rule. We show that under certain conditions our procedure is strongly (pointwise) consistent, and $L_2$ consistent without any condition. An upper bound on the growth of the stopping times is also presented in this paper.
L. Gyorfi, G. Morvai (2007)
This study concerns problems of time-series forecasting under the weakest of assumptions. Related results are surveyed and are points of departure for the developments here, some of which are new and others are new derivations of previous findings. The contributions in this study are all negative, showing that various plausible prediction problems are unsolvable, or in other cases, are not solvable by predictors which are known to be consistent when mixing conditions hold.
The forecasting problem for a stationary and ergodic binary time series $\{X_n\}_{n=0}^{\infty}$ is to estimate the probability that $X_{n+1}=1$ based on the observations $X_i$, $0\le i\le n$, without prior knowledge of the distribution of the process $\{X_n\}$. It is known that this is not possible if one estimates at all values of $n$. We present a simple procedure which will attempt to make such a prediction infinitely often at carefully selected stopping times chosen by the algorithm. We show that the proposed procedure is consistent under certain conditions, and we estimate the growth rate of the stopping times.
Let $\{(X_i,Y_i)\}$ be a stationary ergodic time series with $(X,Y)$ values in the product space $R^d\bigotimes R$. This study offers what is believed to be the first strongly consistent (with respect to pointwise, least-squares, and uniform distance) algorithm for inferring $m(x)=E[Y_0|X_0=x]$ under the presumption that $m(x)$ is uniformly Lipschitz continuous. Auto-regression, or forecasting, is an important special case, and as such our work extends the literature of nonparametric, nonlinear forecasting by circumventing customary mixing assumptions. The work is motivated by a time series model in stochastic finance and by perspectives of its contribution to the issues of universal time series estimation.
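The paper's own strongly consistent algorithm is not reproduced here; as a point of reference for the quantity being inferred, the sketch below computes a standard Nadaraya-Watson kernel estimate of $m(x)=E[Y_0|X_0=x]$ from observed pairs $(X_i,Y_i)$. The Gaussian kernel, the bandwidth, and the toy autoregressive example are illustrative assumptions.

```python
import numpy as np

def kernel_regression(X, Y, x, bandwidth=0.5):
    """Nadaraya-Watson estimate of m(x) = E[Y_0 | X_0 = x].

    X : (n, d) array of observed X_i
    Y : (n,)   array of the corresponding Y_i
    x : (d,)   query point
    A plain kernel smoother; the uniform Lipschitz continuity of m assumed
    in the paper is what makes local averaging of this kind sensible.
    """
    dists = np.linalg.norm(X - x, axis=1)
    weights = np.exp(-0.5 * (dists / bandwidth) ** 2)   # Gaussian kernel
    if weights.sum() == 0.0:
        return 0.0                                      # no nearby data
    return float(np.dot(weights, Y) / weights.sum())

# Toy example: Y_i = sin(X_i) + noise, with X in R^1.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
print(kernel_regression(X, Y, np.array([1.0]), bandwidth=0.3))  # roughly sin(1), about 0.84
```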
Trace reconstruction considers the task of recovering an unknown string $x \in \{0,1\}^n$ given a number of independent traces, i.e., subsequences of $x$ obtained by randomly and independently deleting every symbol of $x$ with some probability $p$. The information-theoretic limit on the number of traces needed to recover a string of length $n$ is still unknown. This limit is essentially the same as the number of traces needed to determine, given strings $x$ and $y$ and traces of one of them, which string is the source. The most studied class of algorithms for the worst-case version of the problem are mean-based algorithms. These are a restricted class of distinguishers that only use the mean value of each coordinate over the given samples. In this work we study limitations of mean-based algorithms on strings at small Hamming or edit distance. We show on the one hand that distinguishing strings that are nearby in Hamming distance is easy for such distinguishers. On the other hand, we show that distinguishing strings that are nearby in edit distance is hard for mean-based algorithms. Along the way we also describe a connection to the famous Prouhet-Tarry-Escott (PTE) problem, which shows a barrier to finding explicit hard-to-distinguish strings: namely, such strings would imply explicit short solutions to the PTE problem, a well-known difficult problem in number theory. Our techniques rely on complex analysis arguments that involve careful trigonometric estimates, and algebraic techniques that include applications of Descartes' rule of signs for polynomials over the reals.
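To make the restricted class just described concrete, the following sketch (an illustration, not an algorithm from the paper) implements a mean-based distinguisher: it zero-pads the traces to length $n$, forms the empirical mean of each coordinate, and compares it with the exact coordinate means of $x$ and $y$ under the deletion channel. The deletion probability, the example strings, and the helper names `expected_trace_means` and `mean_based_distinguish` are all illustrative.

```python
import numpy as np
from math import comb

def expected_trace_means(x, p):
    """Exact E[zero-padded trace coordinates] for string x under deletion prob p.

    The symbol at position i survives with prob q = 1-p and lands at trace
    position j iff exactly j of the i symbols before it also survive.
    """
    n, q = len(x), 1.0 - p
    mu = np.zeros(n)
    for j in range(n):
        mu[j] = sum(x[i] * comb(i, j) * q ** (j + 1) * p ** (i - j)
                    for i in range(j, n))
    return mu

def sample_trace(x, p, rng):
    """One trace: delete each symbol of x independently with probability p."""
    return [b for b in x if rng.random() > p]

def mean_based_distinguish(traces, x, y, p):
    """Decide whether the traces came from x or y, using only the empirical
    mean of each (zero-padded) coordinate of the traces."""
    n = len(x)
    padded = np.zeros((len(traces), n))
    for i, t in enumerate(traces):
        padded[i, :len(t)] = t
    emp = padded.mean(axis=0)
    dx = np.linalg.norm(emp - expected_trace_means(x, p))
    dy = np.linalg.norm(emp - expected_trace_means(y, p))
    return 'x' if dx <= dy else 'y'

rng = np.random.default_rng(2)
x = [1, 0, 1, 1, 0, 0, 1, 0]
y = [1, 1, 0, 1, 0, 0, 1, 0]   # nearby in Hamming distance
traces = [sample_trace(x, 0.3, rng) for _ in range(5000)]
print(mean_based_distinguish(traces, x, y, 0.3))   # expected: 'x'
```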