
Generalized Stochastic Processes as Linear Transformations of White Noise

Posted by Ricardo Carrizo Vergara
Publication date: 2021
Paper language: English





We show that any (real) generalized stochastic process over $\mathbb{R}^{d}$ can be expressed as a linear transformation of a White Noise process over $\mathbb{R}^{d}$. The procedure uses the regularity theorem for tempered distributions to obtain a mean-square continuous stochastic process, which is then expressed through a Karhunen-Loève expansion with respect to a convenient Hilbert space. This result also allows us to conclude that any generalized stochastic process can be expressed as a series expansion of deterministic tempered distributions weighted by uncorrelated random variables with square-summable variances. A result specifying when a generalized stochastic process can be linearly transformed into a White Noise is also presented.
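As a purely illustrative sketch of the series expansion mentioned above (not the paper's construction), the following snippet draws one realization of a truncated expansion $\sum_n \sigma_n \xi_n e_n$ tested against a Schwartz function, using i.i.d. standard normal $\xi_n$, assumed square-summable weights $\sigma_n = 1/(n+1)$, and Hermite functions as the orthonormal system.

```python
# Minimal sketch of the series representation X(phi) = sum_n sigma_n * xi_n * <phi, e_n>.
# The Hermite basis, the weights sigma_n and the test function are illustrative choices.
import math
import numpy as np
from numpy.polynomial.hermite import hermval

rng = np.random.default_rng(0)
N = 20                                    # truncation level of the expansion
x = np.linspace(-10.0, 10.0, 4001)        # quadrature grid on the real line
dx = x[1] - x[0]

def hermite_function(n, pts):
    """Orthonormal Hermite function e_n, a standard basis adapted to S(R)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * np.exp(-pts**2 / 2.0) * hermval(pts, c)

phi = np.exp(-x**2)                       # a test function in the Schwartz space
sigma = 1.0 / np.arange(1, N + 1)         # square-summable weights (assumed)
xi = rng.standard_normal(N)               # uncorrelated (here i.i.d. N(0,1)) variables

# One realization of the truncated expansion applied to phi.
X_phi = sum(sigma[n] * xi[n] * np.sum(phi * hermite_function(n, x)) * dx
            for n in range(N))
print("X(phi) for this realization:", X_phi)
```

The truncation level, weights and basis are all choices made for the demo; the paper works with a general White Noise process and a Hilbert space adapted to it.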




Read also

We derive consistent and asymptotically normal estimators for the drift and volatility parameters of the stochastic heat equation driven by an additive space-only white noise when the solution is sampled discretely in the physical domain. We consider both the full space and the bounded domain. We establish the exact spatial regularity of the solution, which in turn, using power-variation arguments, allows us to build the desired estimators. We show that naive approximations of the derivatives appearing in the power-variation based estimators may create nontrivial biases, which we compute explicitly. The proofs are rooted in the Malliavin-Stein method.
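As a rough, hypothetical analogue of the power-variation mechanism (not the estimators of that paper), the snippet below recovers a volatility parameter from the realized quadratic variation of a discretely sampled Brownian path; the process, grid, and parameter values are assumptions made for illustration.

```python
# Toy power-variation computation: realized quadratic variation of a sampled
# Brownian path recovers its volatility parameter.  This only conveys the
# mechanism; the paper's estimators act on spatial samples of the SPDE solution.
import numpy as np

rng = np.random.default_rng(1)
sigma_true, n, T = 2.0, 100_000, 1.0
dt = T / n
increments = sigma_true * np.sqrt(dt) * rng.standard_normal(n)
path = np.concatenate(([0.0], np.cumsum(increments)))

realized_qv = np.sum(np.diff(path) ** 2)     # sum of squared increments
sigma_hat = np.sqrt(realized_qv / T)         # quadratic-variation estimator
print(f"true sigma = {sigma_true}, estimated sigma = {sigma_hat:.4f}")
```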
A continuous-time nonlinear regression model with a Lévy-driven linear noise process is considered. Sufficient conditions for consistency and asymptotic normality of the Whittle estimator of the parameter of the noise spectral density are obtained in the paper.
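For readers unfamiliar with Whittle estimation, here is a minimal sketch in a simpler, assumed setting: a discrete-time AR(1) noise with known unit innovation variance, where the parameter is recovered by minimizing the Whittle contrast built from the periodogram. The continuous-time Lévy-driven model of the paper is not implemented here.

```python
# Whittle-type estimation of an AR(1) parameter from the periodogram.
# The spectral model and parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n, theta_true = 5000, 0.6
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):                      # AR(1): x_t = theta * x_{t-1} + eps_t
    x[t] = theta_true * x[t - 1] + eps[t]

lam = 2.0 * np.pi * np.arange(1, n // 2) / n                  # Fourier frequencies
periodogram = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2.0 * np.pi * n)

def whittle_contrast(theta):
    # AR(1) spectral density with unit innovation variance
    f = 1.0 / (2.0 * np.pi * np.abs(1.0 - theta * np.exp(1j * lam)) ** 2)
    return np.mean(np.log(f) + periodogram / f)

theta_hat = minimize_scalar(whittle_contrast, bounds=(-0.99, 0.99),
                            method="bounded").x
print(f"true theta = {theta_true}, Whittle estimate = {theta_hat:.3f}")
```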
Suppose that a random variable $X$ of interest is observed perturbed by independent additive noise $Y$. This paper concerns the least favorable perturbation $\hat Y_\epsilon$, which maximizes the prediction error $E(X-E(X|X+Y))^2$ in the class of $Y$ with $\mathrm{var}(Y)\leq \epsilon$. We find a characterization of the answer to this question, and show by example that it can be surprisingly complicated. However, in the special case where $X$ is infinitely divisible, the solution is complete and simple. We also explore the conjecture that noisier $Y$ makes prediction worse.
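The prediction error $E(X-E(X|X+Y))^2$ has a closed form when $X$ and $Y$ are independent Gaussians, which makes for a quick sanity check; the Gaussian assumption and variance values below are illustrative and unrelated to the least favorable perturbation studied in the paper.

```python
# Monte Carlo check of the prediction error for independent Gaussian X and Y,
# where E(X | X+Y) is linear and the error equals sx2*sy2/(sx2+sy2).
import numpy as np

rng = np.random.default_rng(3)
sx2, sy2, n = 1.0, 0.5, 1_000_000
X = rng.normal(0.0, np.sqrt(sx2), n)
Y = rng.normal(0.0, np.sqrt(sy2), n)
Z = X + Y

X_hat = (sx2 / (sx2 + sy2)) * Z              # E(X | X+Y) for independent Gaussians
mc_error = np.mean((X - X_hat) ** 2)         # Monte Carlo prediction error
closed_form = sx2 * sy2 / (sx2 + sy2)
print(f"Monte Carlo: {mc_error:.4f}, closed form: {closed_form:.4f}")
```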
To extend several known centered Gaussian processes, we introduce a new centered mixed self-similar Gaussian process called the mixed generalized fractional Brownian motion, which could serve as a good model for a larger class of natural phenomena. This process generalizes both the well-known mixed fractional Brownian motion introduced by Cheridito [10] and the generalized fractional Brownian motion introduced by Zili [31]. We study its main stochastic properties, its non-Markovian and non-stationarity characteristics, and the conditions under which it is not a semimartingale. We prove the long-range dependence properties of this process.
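As an illustration of the simplest special case named above, the sketch below simulates Cheridito's mixed fractional Brownian motion $aB(t)+bB_H(t)$ on a grid via a Cholesky factorization of its covariance, assuming $B$ and $B_H$ independent; the coefficients, Hurst index, and grid are arbitrary choices for the demo, and the mixed generalized fBm itself is not implemented.

```python
# Simulate one path of a mixed fractional Brownian motion a*B(t) + b*B_H(t)
# (independent components assumed) by factorizing its covariance matrix.
import numpy as np

rng = np.random.default_rng(4)
a, b, H, n = 1.0, 0.8, 0.7, 300
t = np.linspace(1e-6, 1.0, n)
s, u = np.meshgrid(t, t, indexing="ij")

cov_bm = np.minimum(s, u)                                       # covariance of B
cov_fbm = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))    # covariance of B_H
cov = a**2 * cov_bm + b**2 * cov_fbm                            # independence assumed

L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # small jitter for numerical stability
path = L @ rng.standard_normal(n)                # one sample path on the grid
print("simulated path, first 5 values:", np.round(path[:5], 4))
```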
The approximation of integral functionals with respect to a stationary Markov process by a Riemann-sum estimator is studied. Stationarity and the functional calculus of the infinitesimal generator of the process are used to get a better understanding of the estimation error and to prove a general error bound. The presented approach admits general integrands and gives a unifying explanation for different rates obtained in the literature. Several examples demonstrate how the general bound can be related to well-known function spaces.
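A minimal sketch of the Riemann-sum idea, under assumed specifics: an Ornstein-Uhlenbeck path stands in for the stationary Markov process and $f(x)=x^2$ for the integrand, so the coarse-grid Riemann sum can be compared with a fine-grid reference value.

```python
# Riemann-sum approximation of the integral functional int_0^T f(X_s) ds for a
# stationary OU process with f(x) = x^2 (illustrative choices throughout).
import numpy as np

rng = np.random.default_rng(5)
T, n_fine, n_coarse = 10.0, 100_000, 500
dt = T / n_fine

# Euler simulation of a stationary OU process dX = -X dt + dW, X_0 ~ N(0, 1/2).
x = np.empty(n_fine + 1)
x[0] = rng.normal(0.0, np.sqrt(0.5))
for i in range(n_fine):
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(dt) * rng.standard_normal()

f = lambda v: v ** 2
reference = np.sum(f(x[:-1])) * dt                 # fine-grid "true" integral
coarse = x[:: n_fine // n_coarse]                  # coarse observations X_{t_i}
riemann_estimator = np.sum(f(coarse[:-1])) * (T / n_coarse)
print(f"fine-grid integral: {reference:.4f}, Riemann-sum estimator: {riemann_estimator:.4f}")
```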