
Structural causal models for macro-variables in time-series

Posted by Dominik Janzing
Publication date: 2018
Research field: Mathematical Statistics
Paper language: English





We consider a bivariate time series $(X_t,Y_t)$ that is given by a simple linear autoregressive model. Assuming that the equations describing each variable as a linear combination of past values are considered structural equations, there is a clear meaning of how intervening on one particular $X_t$ influences $Y_{t'}$ at later times $t'>t$. In the present work, we describe conditions under which one can define a causal model between variables that are coarse-grained in time, thus admitting statements like `setting $X$ to $x$ changes $Y$ in a certain way' without referring to specific time instances. We show that particularly simple statements follow in the frequency domain, thus providing meaning to interventions on frequencies.
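As a minimal sketch of the setting (the lag order one, the coefficient names $a,b,c,d$ and the noise terms are assumptions made here for illustration, not taken from the paper), the structural equations of a bivariate linear autoregression can be written as
$$X_t = a\,X_{t-1} + b\,Y_{t-1} + \epsilon^X_t, \qquad Y_t = c\,X_{t-1} + d\,Y_{t-1} + \epsilon^Y_t,$$
with jointly independent noise variables $\epsilon^X_t,\epsilon^Y_t$. Reading these equations structurally, the intervention $do(X_t=x)$ replaces the equation for $X_t$ by the constant assignment $X_t=x$ while leaving all other equations unchanged, and its effect on $Y_{t'}$ for $t'>t$ is obtained by propagating $x$ through the unchanged equations.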


Read also

In many applications it is desirable to infer coarse-grained models from observational data. The observed process often corresponds only to a few selected degrees of freedom of a high-dimensional dynamical system with multiple time scales. In this work we consider the inference problem of identifying an appropriate coarse-grained model from a single time series of a multiscale system. It is known that estimators such as the maximum likelihood estimator or the quadratic variation of the path estimator can be strongly biased in this setting. Here we present a novel parametric inference methodology for problems with linear parameter dependency that does not suffer from this drawback. Furthermore, we demonstrate through a wide spectrum of examples that our methodology can be used to derive appropriate coarse-grained models from time series of partial observations of a multiscale system in an effective and systematic fashion.
Causal models with unobserved variables impose nontrivial constraints on the distributions over the observed variables. When a common cause of two variables is unobserved, it is impossible to uncover the causal relation between them without making additional assumptions about the model. In this work, we consider causal models with a promise that unobserved variables have known cardinalities. We derive inequality constraints implied by d-separation in such models. Moreover, we explore the possibility of leveraging this result to study causal influence in models that involve quantum systems.
We consider the problem of finding confidence intervals for the risk of forecasting the future of a stationary, ergodic stochastic process, using a model estimated from the past of the process. We show that a bootstrap procedure provides valid confidence intervals for the risk, when the data source is sufficiently mixing, and the loss function and the estimator are suitably smooth. Autoregressive (AR(d)) models estimated by least squares obey the necessary regularity conditions, even when mis-specified, and simulations show that the finite-sample coverage of our bounds quickly converges to the theoretical, asymptotic level. As an intermediate step, we derive sufficient conditions for asymptotic independence between empirical distribution functions formed by splitting a realization of a stochastic process, a result of independent interest.
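As a hedged illustration of the general recipe (not the authors' exact construction: the AR order, the train/test split, the squared-error loss, the block length, and the circular block bootstrap are all choices made here for the sketch), one can fit an AR(d) model by least squares on the past of a series and bootstrap the held-out forecasting risk:

```python
# Sketch: bootstrap confidence interval for the one-step forecasting risk of an AR(d)
# model fit by least squares. Illustrative only; all tuning choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fit_ar_least_squares(x, d):
    """Least-squares fit of x_t = a_1 x_{t-1} + ... + a_d x_{t-d} + noise."""
    X = np.column_stack([x[d - k: len(x) - k] for k in range(1, d + 1)])
    y = x[d:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def block_bootstrap_means(values, block_len, n_boot, rng):
    """Circular block bootstrap of the mean of a dependent sequence."""
    n = len(values)
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = []
        while len(idx) < n:
            start = rng.integers(n)
            idx.extend(((start + np.arange(block_len)) % n).tolist())
        means[b] = values[np.asarray(idx[:n])].mean()
    return means

# Simulated AR(2) data standing in for the observed past of the process.
n = 600
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.2 * x[t - 2] + rng.normal()

d = 2
train, test = x[:400], x[400:]
coef = fit_ar_least_squares(train, d)

# One-step-ahead squared prediction errors on the held-out part.
errs = np.array([
    (test[t] - coef @ test[t - d:t][::-1]) ** 2
    for t in range(d, len(test))
])

boot = block_bootstrap_means(errs, block_len=20, n_boot=2000, rng=rng)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimated risk {errs.mean():.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```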
Prediction for high-dimensional time series is a challenging task due to the curse of dimensionality. Classical parametric models like ARIMA or VAR require strong modeling assumptions and time stationarity and are often overparametrized. This paper offers a new flexible approach using recent ideas of manifold learning. The considered model includes linear models such as the central subspace model and ARIMA as particular cases. The proposed procedure combines manifold denoising techniques with a simple nonparametric prediction by local averaging. The resulting procedure demonstrates very reasonable performance on real-life econometric time series. We also provide a theoretical justification of the manifold estimation procedure.
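A minimal sketch of the local-averaging step only (the manifold-denoising stage described above is omitted, and the window length and neighbourhood size are arbitrary illustrative choices): embed the series into lagged windows and predict the next value as the average of the successors of the nearest past windows.

```python
# Sketch: one-step prediction by local averaging over lagged windows (nearest
# neighbours in delay-embedding space). The manifold-denoising stage is omitted.
import numpy as np

def local_average_forecast(x, window=5, k=10):
    """Average the successors of the k past windows closest to the latest window."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Historical windows x[s:s+window], each with a known successor x[s+window].
    hist = np.stack([x[s:s + window] for s in range(n - window)])
    succ = x[window:]
    query = x[n - window:]                       # the most recent window
    dists = np.linalg.norm(hist - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return succ[nearest].mean()

# Toy nonlinear autoregressive series standing in for an econometric time series.
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = np.sin(y[t - 1]) + 0.3 * rng.normal()
print("one-step forecast:", local_average_forecast(y, window=5, k=15))
```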
We study high-dimensional linear models with error-in-variables. Such models are motivated by various applications in econometrics, finance and genetics. These models are challenging because of the need to account for measurement errors, so as to avoid non-vanishing biases, in addition to handling the high dimensionality of the parameters. A recent growing literature has proposed various estimators that achieve good rates of convergence. Our main contribution complements this literature with the construction of simultaneous confidence regions for the parameters of interest in such high-dimensional linear models with error-in-variables. These confidence regions are based on the construction of moment conditions that have an additional orthogonality property with respect to nuisance parameters. We provide a construction that requires us to estimate an additional high-dimensional linear model with error-in-variables for each component of interest. We use a multiplier bootstrap to compute critical values for simultaneous confidence intervals for a subset $S$ of the components. We show its validity despite possible model selection mistakes, while allowing the cardinality of $S$ to be larger than the sample size. We apply and discuss the implications of our results in two examples and conduct Monte Carlo simulations to illustrate the performance of the proposed procedure.
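The orthogonal moment construction is the substantive contribution of that paper and is not reproduced here; the snippet below only sketches the generic multiplier-bootstrap step that turns already-computed per-component scores into a critical value for simultaneous intervals, with Gaussian multipliers and a studentized sup-statistic assumed for the sketch.

```python
# Sketch: multiplier bootstrap critical value for simultaneous confidence intervals.
# `scores` is assumed to be an (n x |S|) array of orthogonalized influence-function
# values for the components in S; constructing it is the hard part and is not shown.
import numpy as np

def multiplier_bootstrap_critval(scores, alpha=0.05, n_boot=2000, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    n, _ = scores.shape
    centered = scores - scores.mean(axis=0)
    standardized = centered / centered.std(axis=0)   # studentize each component
    sup_stats = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(n)                   # Gaussian multipliers
        sup_stats[b] = np.abs(standardized.T @ e).max() / np.sqrt(n)
    return np.quantile(sup_stats, 1 - alpha)

# Simultaneous intervals would then be theta_hat_j +/- c * sd_j / sqrt(n), where c is
# the returned critical value and sd_j the j-th score's standard deviation.
demo_scores = np.random.default_rng(2).standard_normal((200, 8))
print("critical value:", multiplier_bootstrap_critval(demo_scores))
```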