
Robust Estimation of Effective Diffusions from Multiscale Data

 Added by Giacomo Garegnani
 Publication date 2021
Language: English





We present a methodology based on filtered data and moving averages for robustly estimating effective dynamics from observations of multiscale systems. We show, in a semi-parametric framework of Langevin type, that the proposed method is asymptotically unbiased with respect to homogenization theory. Moreover, we demonstrate through a series of numerical experiments that it outperforms traditional techniques for extracting coarse-grained dynamics from data, such as subsampling, in terms of both bias and robustness.
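A minimal sketch of the idea in Python (an illustration under assumptions, not the authors' exact estimator): simulate a two-scale overdamped Langevin path, smooth it with a moving average of width delta, and fit the drift of the homogenized model by regressing increments of the raw path on the filtered one. The quadratic potential, the fast perturbation p(y) = sin(y), and all parameter values are assumptions made for this example.

```python
import numpy as np

# Multiscale model (assumed for this sketch):
#   dX = -alpha * V'(X) dt - (1/eps) * p'(X/eps) dt + sqrt(2*sigma) dW,
# with V(x) = x^2/2 and p(y) = sin(y).
rng = np.random.default_rng(0)
eps, sigma, alpha = 0.1, 0.5, 1.0
dt, T = 1e-3, 200.0
n = int(T / dt)

X = np.empty(n + 1)
X[0] = 0.0
for i in range(n):
    drift = -alpha * X[i] - np.cos(X[i] / eps) / eps
    X[i + 1] = X[i] + drift * dt + np.sqrt(2 * sigma * dt) * rng.standard_normal()

# Moving-average filter of width delta: Z_t ~ (1/delta) int_{t-delta}^t X_s ds
delta = 1.0
w = int(delta / dt)
Z = np.convolve(X, np.ones(w) / w, mode="valid")  # trailing window average
Xa = X[w - 1:]                                    # raw path aligned with Z

# Drift of the homogenized model dX = -A X dt + sqrt(2*Sigma) dW, estimated
# by correlating the filtered process with increments of the raw path.
dX = np.diff(Xa)
A_hat = -np.sum(Z[:-1] * dX) / (dt * np.sum(Z[:-1] * Xa[:-1]))
print(f"estimated effective drift A: {A_hat:.3f}")
```

Because the filter suppresses the fast scale, the regression sees the homogenized drift; a plain least-squares fit on the unfiltered path would instead be biased toward the coefficient of the full multiscale dynamics.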



Related Research

We study the problem of drift estimation for two-scale continuous time series. We set ourselves in the framework of overdamped Langevin equations, for which a single-scale surrogate homogenized equation exists. In this setting, estimating the drift coefficient of the homogenized equation requires pre-processing of the data, often in the form of subsampling; this is because the two-scale equation and the homogenized single-scale equation are incompatible at small scales, generating mutually singular measures on the path space. We avoid subsampling and work instead with filtered data, found by application of an appropriate kernel function, and compute maximum likelihood estimators based on the filtered process. We show that the estimators we propose are asymptotically unbiased and demonstrate numerically the advantages of our method with respect to subsampling. Finally, we show how our filtered data methodology can be combined with Bayesian techniques and provide a full uncertainty quantification of the inference procedure.
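To make the comparison with subsampling concrete, here is a rough companion sketch (same assumed two-scale model as above; the kernel and parameters are our choices, not the paper's): the naive maximum likelihood estimator of a linear effective drift is computed at several subsampling rates and contrasted with an estimator built from an exponentially filtered path.

```python
import numpy as np

# Same assumed two-scale model as in the previous sketch.
rng = np.random.default_rng(1)
eps, sigma, alpha = 0.1, 0.5, 1.0
dt, T = 1e-3, 200.0
n = int(T / dt)

X = np.empty(n + 1)
X[0] = 0.0
for i in range(n):
    drift = -alpha * X[i] - np.cos(X[i] / eps) / eps
    X[i + 1] = X[i] + drift * dt + np.sqrt(2 * sigma * dt) * rng.standard_normal()

def mle_drift(path, h):
    """Naive MLE of a linear drift -A x from observations at spacing h."""
    dY = np.diff(path)
    return -np.sum(path[:-1] * dY) / (h * np.sum(path[:-1] ** 2))

# Subsampling: keep every k-th observation; the estimate is sensitive to k.
for k in (1, 100, 1000):
    print(f"subsampling k={k:4d}: A_hat = {mle_drift(X[::k], k * dt):.3f}")

# Filtered data: causal exponential kernel of width beta, applied recursively
# and used in place of the raw path in the estimator (no subsampling needed).
beta = 1.0
Z = np.empty_like(X)
Z[0] = X[0]
for i in range(n):
    Z[i + 1] = Z[i] + (dt / beta) * (X[i] - Z[i])   # dZ = (X - Z)/beta dt
A_filt = -np.sum(Z[:-1] * np.diff(X)) / (dt * np.sum(Z[:-1] * X[:-1]))
print(f"filtered data:       A_hat = {A_filt:.3f}")
```

In line with the known behavior of subsampled estimators, the first estimate interpolates between the full-model and homogenized drift coefficients as k varies, whereas the filtered estimator has no sampling-rate parameter to tune; only the kernel width must be chosen.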
We propose a novel method for drift estimation of multiscale diffusion processes when a sequence of discrete observations is given. For the Langevin dynamics in a two-scale potential, our approach relies on the eigenvalues and eigenfunctions of the homogenized dynamics. Our first estimator is derived from a martingale estimating function of the generator of the homogenized diffusion process. However, the unbiasedness of this estimator depends on the rate at which the observations are sampled. We therefore introduce a second estimator, which also relies on filtering the data, and we prove that it is asymptotically unbiased independently of the sampling rate. A series of numerical experiments illustrates the reliability and efficiency of our different estimators.
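For a linear effective drift the eigenfunction idea admits a closed form, sketched below under the same assumed model as in the previous snippets (an illustration of the general mechanism, not the paper's exact scheme): the first nontrivial eigenfunction of the homogenized generator is phi(x) = x with eigenvalue A, so E[X_{t+D} | X_t] = exp(-A D) X_t, and the martingale estimating function can be solved explicitly.

```python
import numpy as np

# Same assumed two-scale model as in the previous sketches.
rng = np.random.default_rng(2)
eps, sigma, alpha = 0.1, 0.5, 1.0
dt, T = 1e-3, 200.0
n = int(T / dt)

X = np.empty(n + 1)
X[0] = 0.0
for i in range(n):
    drift = -alpha * X[i] - np.cos(X[i] / eps) / eps
    X[i + 1] = X[i] + drift * dt + np.sqrt(2 * sigma * dt) * rng.standard_normal()

def eigen_mef(obs, D):
    """Martingale estimating function for phi(x) = x with eigenvalue A:
    solve sum_i obs_i * (obs_{i+1} - exp(-A*D) * obs_i) = 0 for A."""
    r = np.sum(obs[:-1] * obs[1:]) / np.sum(obs[:-1] ** 2)
    return -np.log(r) / D

# The estimate drifts with the sampling rate of the discrete observations,
# which is what motivates the paper's second, filter-based estimator.
for k in (10, 100, 1000):
    print(f"spacing D = {k * dt:5.2f}: A_hat = {eigen_mef(X[::k], k * dt):.3f}")
```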
We consider the inference problem for parameters in stochastic differential equation models from discrete time observations (e.g. experimental or simulation data). Specifically, we study the case where one does not have access to observations of the model itself, but only to a perturbed version which converges weakly to the solution of the model. Motivated by this perturbation argument, we study the convergence of estimation procedures from a numerical analysis point of view. More precisely, we introduce appropriate consistency, stability, and convergence concepts and study their connection. It turns out that standard statistical techniques, such as the maximum likelihood estimator, are not convergent methodologies in this setting, since they fail to be stable. Due to this shortcoming, we introduce and analyse a novel inference procedure for parameters in stochastic differential equation models which turns out to be convergent. As such, the method is particularly suited for the estimation of parameters in effective (i.e. coarse-grained) models from observations of the corresponding multiscale process. We illustrate these theoretical findings via several numerical examples.
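A toy demonstration of the stability failure (our own construction, meant only to illustrate the phenomenon): perturb a Brownian path by a small but fast Ornstein-Uhlenbeck component. The perturbed path converges uniformly to the original as eps shrinks, yet the quadratic-variation estimator of the diffusion coefficient, which is the MLE in this Gaussian model, stays bounded away from the true value.

```python
import numpy as np

# True model: dX = sqrt(2*Sigma) dW. We observe Y = X + eps * Z, where Z is a
# fast OU process with dZ = -(Z / eps^2) dt + (sqrt(2) / eps) dB, so that
# eps * Z is uniformly small but contributes O(1) quadratic variation.
rng = np.random.default_rng(3)
Sigma, T, dt = 0.5, 5.0, 2e-5
n = int(T / dt)
X = np.concatenate(([0.0],
                    np.cumsum(np.sqrt(2 * Sigma * dt) * rng.standard_normal(n))))

for eps in (0.1, 0.05, 0.02):
    Z = np.empty(n + 1)
    Z[0] = 0.0
    for i in range(n):
        Z[i + 1] = (Z[i] - (Z[i] / eps**2) * dt
                    + (np.sqrt(2 * dt) / eps) * rng.standard_normal())
    Y = X + eps * Z
    Sigma_hat = np.sum(np.diff(Y) ** 2) / (2 * T)   # QV-based MLE of Sigma
    print(f"eps = {eps:4.2f}: max|Y - X| = {np.max(np.abs(eps * Z)):.3f}, "
          f"Sigma_hat = {Sigma_hat:.3f}  (true Sigma = {Sigma})")
```

The data converge to the model's solution, but the estimator does not converge to the model's parameter: exactly the lack of stability that the consistency-stability-convergence framework is designed to expose.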
We present a multiscale continuous Galerkin (MSCG) method for the fast and accurate stochastic simulation and optimization of time-harmonic wave propagation through photonic crystals. The MSCG method exploits repeated patterns in the geometry to drastically decrease computational cost and incorporates the following ingredients: (1) a reference domain formulation that allows us to treat geometric variability resulting from manufacturing uncertainties; (2) a reduced basis approximation to solve the parametrized local subproblems; (3) a gradient computation of the objective function; and (4) a model and variance reduction technique that enables the accelerated computation of statistical outputs by exploiting the statistical correlation between the MSCG solution and the reduced basis approximation. The proposed method is thus well suited for both deterministic and stochastic simulations, as well as for robust design of photonic crystals. We provide a convergence and cost analysis of the MSCG method, as well as simulation results for a waveguide T-splitter and a Z-bend, to illustrate its advantages for stochastic simulation and robust design.
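Of the four ingredients, (4) is the easiest to isolate. The sketch below shows a generic control-variate estimator of that flavor, with synthetic stand-ins for the expensive output and its cheap correlated surrogate (nothing here is the MSCG solver or the reduced basis code):

```python
import numpy as np

# Ingredient (4), generically: estimate E[f] for an expensive output f from a
# few samples, using many samples of a cheap, correlated surrogate g.
rng = np.random.default_rng(4)
f = lambda t: np.sin(3 * t) + 0.1 * t**2        # stand-in for the full output
g = lambda t: np.sin(3 * t)                     # stand-in for the surrogate

theta_few = rng.normal(0.0, 0.3, size=50)       # few expensive evaluations
theta_many = rng.normal(0.0, 0.3, size=50_000)  # many cheap evaluations

F, G = f(theta_few), g(theta_few)
c = np.cov(F, G, ddof=1)[0, 1] / np.var(G, ddof=1)   # control-variate weight
mu_g = g(theta_many).mean()                          # accurate surrogate mean
estimate = F.mean() - c * (G.mean() - mu_g)
print(f"plain MC: {F.mean():.4f}   with control variate: {estimate:.4f}")
```

The stronger the correlation between the two models, the larger the variance reduction, which is why pairing the MSCG solution with its reduced basis approximation is attractive.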
Yifan Chen, Thomas Y. Hou (2020)
There is an intimate connection between numerical upscaling of multiscale PDEs and scattered data approximation of heterogeneous functions: the coarse variables selected for deriving an upscaled equation (in the former) correspond to the sampled information used for approximation (in the latter). As such, both problems can be thought of as recovering a target function based on some coarse data that are either artificially chosen by an upscaling algorithm or determined by some physical measurement process. The purpose of this paper is to study, in such a setup and for a specific elliptic problem, how the lengthscale of the coarse data, which we refer to as the subsampled lengthscale, influences the accuracy of recovery given a limited computational budget. Our analysis and experiments show that reducing the subsampled lengthscale may improve the accuracy, which yields a guiding criterion for coarse-graining or data acquisition in this computationally constrained scenario and, in particular, direct insights for the implementation of the Gamblets method in the numerical homogenization literature. Moreover, reducing the lengthscale to zero may lead to a blow-up of the approximation error if the target function does not have enough regularity, suggesting the need for a stronger prior assumption on the target function to be approximated. We introduce a singular weight function to address this issue, both theoretically and numerically. This work sheds light on the interplay between the lengthscale of coarse data, the computational cost, the regularity of the target function, and the accuracy of approximations and numerical simulations.
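A toy version of the recovery question (the elliptic problem, the measurement functionals, and the Gamblets construction are not reproduced here; the kernel and target are illustrative choices): sample a target function at spacing h, reconstruct it by kernel interpolation, and watch how the error responds to the subsampled lengthscale.

```python
import numpy as np

# Recover a target from coarse samples at spacing h via kernel interpolation.
def kernel(x, y, ell=0.1):
    # Exponential (Matern-1/2) kernel, chosen for well-conditioned Gram matrices.
    return np.exp(-np.abs(x[:, None] - y[None, :]) / ell)

target = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.sin(9 * np.pi * x)
x_fine = np.linspace(0.0, 1.0, 2001)

for h in (0.2, 0.1, 0.05, 0.025):
    x_c = np.arange(0.0, 1.0 + 1e-12, h)              # coarse data sites
    K = kernel(x_c, x_c) + 1e-10 * np.eye(len(x_c))   # jitter for stability
    coef = np.linalg.solve(K, target(x_c))
    u = kernel(x_fine, x_c) @ coef                    # kernel interpolant
    err = np.max(np.abs(u - target(x_fine)))
    print(f"h = {h:5.3f}: sup error = {err:.2e}")
```

For this smooth target the error decays as h shrinks; the regime the paper warns about, where shrinking the lengthscale hurts, requires rough targets and the weighted formulation introduced there.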
