
The least favorable noise

Added by Philip Ernst
Publication date: 2021
Research language: English





Suppose that a random variable $X$ of interest is observed perturbed by independent additive noise $Y$. This paper concerns the least favorable perturbation $\hat Y_\epsilon$, which maximizes the prediction error $E(X - E(X \mid X+Y))^2$ in the class of $Y$ with $\operatorname{var}(Y) \le \epsilon$. We find a characterization of the answer to this question, and show by example that it can be surprisingly complicated. However, in the special case where $X$ is infinitely divisible, the solution is complete and simple. We also explore the conjecture that noisier $Y$ makes prediction worse.
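For intuition about the objective being maximized: when $X \sim N(0, \sigma^2)$ and the noise is itself Gaussian with variance $\epsilon$, the conditional expectation $E(X \mid X+Y)$ is linear and the prediction error equals $\sigma^2\epsilon/(\sigma^2+\epsilon)$. The sketch below verifies this closed form by Monte Carlo; it illustrates the functional only, not the paper's characterization of the least favorable $Y$, and the Gaussian choice is an assumption made here.

```python
# Monte Carlo check of the prediction error E(X - E(X|X+Y))^2 when both X
# and the noise Y are Gaussian (an assumption made for this illustration).
import numpy as np

rng = np.random.default_rng(0)
sigma2, eps, n = 1.0, 0.5, 10**6

X = rng.normal(0.0, np.sqrt(sigma2), n)
Y = rng.normal(0.0, np.sqrt(eps), n)
Z = X + Y                                    # the observed, perturbed variable

# For jointly Gaussian (X, Z): E(X|Z) = Cov(X, Z) / Var(Z) * Z.
cond_mean = (sigma2 / (sigma2 + eps)) * Z
mc_error = np.mean((X - cond_mean) ** 2)

closed_form = sigma2 * eps / (sigma2 + eps)  # standard Gaussian formula
print(mc_error, closed_form)                 # the two should nearly agree
```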



Related research

63 - R. Carrizo Vergara 2021
We show that any (real) generalized stochastic process over $\mathbb{R}^{d}$ can be expressed as a linear transformation of a White Noise process over $\mathbb{R}^{d}$. The procedure uses the regularity theorem for tempered distributions to obtain a mean-square continuous stochastic process, which is then expressed in a Karhunen-Loève expansion with respect to a convenient Hilbert space. This result also allows us to conclude that any generalized stochastic process can be expressed as a series expansion of deterministic tempered distributions weighted by uncorrelated random variables with square-summable variances. A result specifying when a generalized stochastic process can be linearly transformed into a White Noise is also presented.
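A familiar concrete instance of such a series representation, deterministic functions weighted by uncorrelated random variables, is the classical Karhunen-Loève expansion of Brownian motion on $[0,1]$. The sketch below simulates a truncated expansion; it is only a finite-regularity analogue of the abstract's much more general result for tempered distributions over $\mathbb{R}^{d}$.

```python
# Truncated Karhunen-Loeve expansion of Brownian motion on [0, 1]:
#   W_t = sum_k Z_k * sqrt(2) * sin((k - 1/2) * pi * t) / ((k - 1/2) * pi),
# with Z_k i.i.d. N(0, 1) -- uncorrelated weights, deterministic basis.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 501)
K = 200                                    # truncation level

freqs = (np.arange(1, K + 1) - 0.5) * np.pi
Z = rng.standard_normal(K)                 # uncorrelated random weights
basis = np.sqrt(2.0) * np.sin(np.outer(t, freqs)) / freqs
W = basis @ Z                              # one approximate sample path

# Sanity check: the eigenvalue sum 2/freqs^2 tends to Var(W_1) = 1 as K grows.
print(W[-1], np.sum(2.0 / freqs**2))
```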
271 - Gilles Pagès 2020
In this paper, we focus on non-asymptotic bounds related to the Euler scheme of an ergodic diffusion with a possibly multiplicative diffusion term (non-constant diffusion coefficient). More precisely, the objective is to control the distance between the standard Euler scheme with decreasing step (usually called the Unadjusted Langevin Algorithm in the Monte Carlo literature) and the invariant distribution of such an ergodic diffusion. In an appropriate Lyapunov setting and under uniform ellipticity assumptions on the diffusion coefficient, we establish (or improve) such bounds for the Total Variation and $L^1$-Wasserstein distances in both the multiplicative and additive frameworks. These bounds rely on weak error expansions using stochastic analysis adapted to the decreasing-step setting.
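For intuition, here is a minimal sketch of the decreasing-step Euler scheme in the additive (Langevin) case, targeting the $N(0,1)$ invariant law of $dX_t = -X_t\,dt + \sqrt{2}\,dW_t$. The step schedule and the step-weighted empirical measure are illustrative choices made here, not the paper's bounds or assumptions.

```python
# Decreasing-step Euler scheme for dX = -X dt + sqrt(2) dW, whose invariant
# law is N(0, 1). The empirical measure uses the step sizes as weights.
import numpy as np

rng = np.random.default_rng(2)
n_steps = 200_000
gamma = 0.5 * np.arange(1, n_steps + 1) ** (-1.0 / 3.0)  # decreasing steps

x = 0.0
samples = np.empty(n_steps)
for k in range(n_steps):
    g = gamma[k]
    x += -x * g + np.sqrt(2.0 * g) * rng.standard_normal()  # Euler step
    samples[k] = x

# Step-weighted empirical moments approximate those of the invariant law.
w = gamma / gamma.sum()
print(np.sum(w * samples), np.sum(w * samples**2))  # approx. 0 and 1
```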
A continuous-time nonlinear regression model with a Lévy-driven linear noise process is considered. Sufficient conditions for consistency and asymptotic normality of the Whittle estimator of the parameter of the noise spectral density are obtained in the paper.
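As a toy illustration of the Whittle contrast, in the classical discrete-time setting rather than the paper's continuous-time, Lévy-driven one, the sketch below recovers the parameter of an AR(1) spectral density from the periodogram. The AR(1) model and all tuning choices are assumptions made for this example.

```python
# Whittle estimation for a simulated AR(1) series: minimize the contrast
#   mean_j [ log f_theta(lam_j) + I(lam_j) / f_theta(lam_j) ]
# over the Fourier frequencies, with periodogram I and spectral density f.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n, theta_true = 4096, 0.6

x = np.zeros(n)                            # AR(1): x_t = theta x_{t-1} + e_t
for t in range(1, n):
    x[t] = theta_true * x[t - 1] + rng.standard_normal()

lam = 2.0 * np.pi * np.arange(1, n // 2) / n
I = np.abs(np.fft.fft(x)[1 : n // 2]) ** 2 / (2.0 * np.pi * n)  # periodogram

def whittle_contrast(theta):
    # AR(1) spectral density with unit innovation variance.
    f = 1.0 / (2.0 * np.pi * (1.0 - 2.0 * theta * np.cos(lam) + theta**2))
    return np.mean(np.log(f) + I / f)

res = minimize_scalar(whittle_contrast, bounds=(-0.99, 0.99), method="bounded")
print(res.x)                               # should be close to theta_true
```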
We derive consistent and asymptotically normal estimators for the drift and volatility parameters of the stochastic heat equation driven by an additive space-only white noise when the solution is sampled discretely in the physical domain. We consider both the full space and a bounded domain. We establish the exact spatial regularity of the solution, which in turn allows us to build the desired estimators using power-variation arguments. We show that naive approximations of the derivatives appearing in the power-variation-based estimators may create nontrivial biases, which we compute explicitly. The proofs are rooted in the Malliavin-Stein method.
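The simplest power-variation argument recovers a volatility parameter from realized quadratic variation. The sketch below does this for a Brownian path; the paper applies the same principle to spatial increments of the heat-equation solution, where the exact regularity exponent established there replaces the Brownian scaling used in this toy example.

```python
# Toy power-variation estimator: for X_t = sigma * W_t sampled on a grid,
# the realized quadratic variation sum (dX_i)^2 converges to sigma^2 * T.
import numpy as np

rng = np.random.default_rng(4)
sigma_true, n, T = 1.5, 100_000, 1.0
dt = T / n

increments = sigma_true * np.sqrt(dt) * rng.standard_normal(n)
qv = np.sum(increments**2)                 # realized quadratic variation
print(np.sqrt(qv / T))                     # consistent estimate of sigma_true
```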
161 - Qiang Sun, Rui Mao, Wen-Xin Zhou 2021
This paper proposes the capped least squares regression with an adaptive resistance parameter, hence the name: adaptive capped least squares regression. The key observation is that, by taking the resistance parameter to be data-dependent, the proposed estimator achieves full asymptotic efficiency without losing the resistance property: it asymptotically achieves the maximum breakdown point. Computationally, we formulate the proposed regression problem as a quadratic mixed-integer programming problem, which becomes computationally expensive when the sample size grows large. The data-dependent resistance parameter, however, makes the loss function more convex-like for larger-scale problems, which makes a fast, randomly initialized gradient descent algorithm possible for global optimization. Numerical examples indicate the superiority of the proposed estimator compared with classical methods. Three data applications, to cancer cell lines, stationary background recovery in video surveillance, and blind image inpainting, showcase its broad applicability.
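A minimal sketch of the capped least squares idea with a fixed (non-adaptive) cap and randomly initialized gradient descent follows; the adaptive, data-dependent resistance parameter and the mixed-integer formulation are the paper's contributions and are not reproduced here, so all tuning constants below are assumptions.

```python
# Capped least squares in one dimension: the loss min(r^2, tau^2) ignores
# residuals beyond the cap tau, which protects the fit against outliers.
import numpy as np

rng = np.random.default_rng(5)
n, beta_true, tau, lr = 500, 2.0, 3.0, 0.5

x = rng.normal(size=n)
y = beta_true * x + rng.normal(size=n)
y[:25] += 50.0                             # plant gross outliers

def capped_grad(beta):
    r = y - beta * x
    active = np.abs(r) <= tau              # capped residuals contribute zero
    return -2.0 * np.sum(x * r * active) / n

best_beta, best_loss = None, np.inf
for _ in range(20):                        # random restarts for a non-convex loss
    beta = rng.uniform(-10.0, 10.0)
    for _ in range(300):
        beta -= lr * capped_grad(beta)
    loss = np.mean(np.minimum((y - beta * x) ** 2, tau**2))
    if loss < best_loss:
        best_beta, best_loss = beta, loss

print(best_beta)                           # close to 2.0 despite the outliers
```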
