
Evaluating strong measurement noise in data series with simulated annealing method

Published by Pedro Lind
Publication date: 2012
Research field: Physics
Paper language: English





Many stochastic time series can be described by a Langevin equation composed of a deterministic and a stochastic dynamical part. Such a stochastic process can be reconstructed by means of a recently introduced nonparametric method, thus increasing the predictability, i.e. knowledge of the macroscopic drift and the microscopic diffusion functions. If the measurement of a stochastic process is affected by additional strong measurement noise, this reconstruction procedure cannot be applied. Here, we present a method for the reconstruction of stochastic processes in the presence of strong measurement noise, based on a suitably parametrized ansatz. At the core of the method is the minimization of the functional distance between terms containing the conditional moments taken from measurement data and the corresponding ansatz functions. It is shown that minimizing this distance by means of a simulated annealing procedure yields better results than the previously used Levenberg-Marquardt algorithm and permits a rapid and reliable reconstruction of the stochastic process.
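As a rough illustration of the optimization step described above, the sketch below (not the authors' code) fits a parametrized drift/diffusion ansatz to binned conditional moments of a synthetic Ornstein-Uhlenbeck series by simulated annealing; the linear drift a*x, the constant diffusion b, the bin layout, and the cooling schedule are all illustrative assumptions.

```python
# Minimal sketch: fit a Langevin drift/diffusion ansatz to conditional moments
# by simulated annealing. Ansatz D1(x) = a*x, D2(x) = b is an assumption here.
import numpy as np

rng = np.random.default_rng(0)

# synthetic Ornstein-Uhlenbeck data: dx = -x dt + sqrt(2*0.5) dW
dt, n = 1e-3, 100_000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(2 * 0.5 * dt) * rng.standard_normal()

# binned conditional moments: M1(x) ~ D1(x)*dt, M2(x) ~ 2*D2(x)*dt
bins = np.linspace(-1.5, 1.5, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
dx = np.diff(x)
idx = np.digitize(x[:-1], bins) - 1
m1 = np.array([dx[idx == k].mean() if np.any(idx == k) else np.nan
               for k in range(len(centers))])
m2 = np.array([(dx[idx == k] ** 2).mean() if np.any(idx == k) else np.nan
               for k in range(len(centers))])
ok = ~np.isnan(m1)

def cost(p):
    """Squared distance between measured moments and the ansatz predictions."""
    a, b = p
    r1 = m1[ok] - a * centers[ok] * dt
    r2 = m2[ok] - 2 * b * dt
    return np.sum(r1 ** 2) + np.sum(r2 ** 2)

# simulated annealing: Gaussian proposals, Metropolis acceptance, geometric cooling
p = np.array([0.0, 0.1])                 # initial guess for (drift slope, diffusion)
c = cost(p)
best_p, best_c = p.copy(), c
T = c                                    # start temperature at the initial cost scale
for _ in range(20_000):
    q = p + 0.05 * rng.standard_normal(2)
    cq = cost(q)
    if cq < c or rng.random() < np.exp(-(cq - c) / T):
        p, c = q, cq
    if c < best_c:
        best_p, best_c = p.copy(), c
    T *= 0.999                           # cooling schedule

print("estimated (drift slope, diffusion):", best_p)   # expected near (-1.0, 0.5)
```

For comparison with the gradient-based alternative mentioned in the abstract, the same residuals could be handed to a Levenberg-Marquardt routine, e.g. scipy.optimize.least_squares with method="lm".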


Read also

K. J. H. Law, A. M. Stuart (2011)
Data assimilation leads naturally to a Bayesian formulation in which the posterior probability distribution of the system state, given the observations, plays a central conceptual role. The aim of this paper is to use this Bayesian posterior probability distribution as a gold standard against which to evaluate various commonly used data assimilation algorithms. A key aspect of geophysical data assimilation is the high dimensionality and low predictability of the computational model. With this in mind, yet with the goal of allowing an explicit and accurate computation of the posterior distribution, we study the 2D Navier-Stokes equations in a periodic geometry. We compute the posterior probability distribution by state-of-the-art statistical sampling techniques. The commonly used algorithms that we evaluate against this accurate gold standard, as quantified by comparing the relative error in reproducing its moments, are 4DVAR and a variety of sequential filtering approximations based on 3DVAR and on extended and ensemble Kalman filters. The primary conclusions are that: (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution; (ii) however they typically perform poorly when attempting to reproduce the covariance; (iii) this poor performance is compounded by the need to modify the covariance, in order to induce stability. Thus, whilst filters can be a useful tool in predicting mean behavior, they should be viewed with caution as predictors of uncertainty. These conclusions are intrinsic to the algorithms and will not change if the model complexity is increased, for example by employing a smaller viscosity, or by using a detailed NWP model.
In statistical data assimilation (SDA) and supervised machine learning (ML), we wish to transfer information from observations to a model of the processes underlying those observations. For SDA, the model consists of a set of differential equations that describe the dynamics of a physical system. For ML, the model is usually constructed using other strategies. In this paper, we develop a systematic formulation based on Monte Carlo sampling to achieve such information transfer. Following the derivation of an appropriate target distribution, we present the formulation based on the standard Metropolis-Hastings (MH) procedure and the Hamiltonian Monte Carlo (HMC) method for performing the high dimensional integrals that appear. To the extensive literature on MH and HMC, we add (1) an annealing method using a hyperparameter that governs the precision of the model to identify and explore the highest probability regions of phase space dominating those integrals, and (2) a strategy for initializing the state space search. The efficacy of the proposed formulation is demonstrated using a nonlinear dynamical model with chaotic solutions widely used in geophysics.
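The following toy sketch (an illustrative reimplementation on a different, much simpler model, not the paper's code) shows the flavor of Metropolis-Hastings sampling with an annealed model-precision hyperparameter Rf: a state path is estimated from noisy observations of a logistic map, Rf is increased stage by stage, and the search is initialized at the data. The map, noise levels, and schedule are assumptions chosen only for the example.

```python
# Toy precision-annealing Metropolis-Hastings: estimate a state path from noisy
# observations, raising the model-error precision Rf geometrically per stage.
import numpy as np

rng = np.random.default_rng(1)
a, T = 3.8, 50                                     # logistic-map parameter, path length
x_true = np.empty(T)
x_true[0] = 0.3
for t in range(T - 1):
    x_true[t + 1] = a * x_true[t] * (1 - x_true[t])
y = x_true + 0.05 * rng.standard_normal(T)         # noisy observations

Rm = 1.0 / 0.05 ** 2                               # measurement precision (assumed known)

def action(X, Rf):
    """Negative log target: measurement misfit plus model-error penalty."""
    model_err = X[1:] - a * X[:-1] * (1 - X[:-1])
    return 0.5 * Rm * np.sum((X - y) ** 2) + 0.5 * Rf * np.sum(model_err ** 2)

X = y.copy()                                       # initialize the search at the data
for beta in range(20):                             # annealing stages
    Rf = 0.1 * 2.0 ** beta                         # model precision grows each stage
    for _ in range(5_000):                         # MH updates at fixed Rf
        i = rng.integers(T)
        Xp = X.copy()
        Xp[i] += 0.02 * rng.standard_normal()
        dA = action(Xp, Rf) - action(X, Rf)
        if dA < 0 or rng.random() < np.exp(-dA):
            X = Xp

print("RMS error of estimated path:", np.sqrt(np.mean((X - x_true) ** 2)))
```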
James P. Bagrow (2007)
We present a new benchmarking procedure that is unambiguous and specific to local community-finding methods, allowing one to compare the accuracy of various methods. We apply this to new and existing algorithms. A simple class of synthetic benchmark networks is also developed, capable of testing properties specific to these local methods.
Approaches for mapping time series to networks have become essential tools for dealing with the increasing challenges of characterizing data from complex systems. Among the different algorithms, the recently proposed ordinal networks stand out due to their simplicity and computational efficiency. However, applications of ordinal networks have been mainly focused on time series arising from nonlinear dynamical systems, while basic properties of ordinal networks related to simple stochastic processes remain poorly understood. Here, we investigate several properties of ordinal networks emerging from random time series, noisy periodic signals, fractional Brownian motion, and earthquake magnitude series. For ordinal networks of random series, we present an approach for building the exact form of the adjacency matrix, which in turn is useful for detecting non-random behavior in time series and the existence of missing transitions among ordinal patterns. We find that the average value of a local entropy, estimated from transition probabilities among neighboring nodes of ordinal networks, is more robust against noise addition than the standard permutation entropy. We show that ordinal networks can be used for estimating the Hurst exponent of time series with accuracy comparable with state-of-the-art methods. Finally, we argue that ordinal networks can detect sudden changes in Earth's seismic activity caused by large earthquakes.
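To make the construction concrete, here is a minimal self-contained sketch (not the authors' software) that maps a series to an ordinal network of transitions between successive ordinal patterns and computes the average local entropy of the outgoing transition probabilities; the embedding dimension d = 3 and the white-noise test series are assumptions for the example.

```python
# Build an ordinal network from a time series and compute the average local
# (node-wise) entropy of outgoing transition probabilities.
import numpy as np
from collections import Counter, defaultdict

def ordinal_patterns(x, d=3):
    """Map each length-d window to the permutation that sorts it."""
    return [tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)]

def ordinal_network(x, d=3):
    """Weighted directed network of transitions between successive patterns."""
    pats = ordinal_patterns(x, d)
    counts = Counter(zip(pats[:-1], pats[1:]))
    out = defaultdict(dict)
    for (u, v), c in counts.items():
        out[u][v] = c
    # normalize counts into transition probabilities per node
    return {u: {v: c / sum(nb.values()) for v, c in nb.items()}
            for u, nb in out.items()}

def average_local_entropy(net):
    """Mean Shannon entropy of each node's outgoing transition distribution."""
    ents = [-sum(p * np.log(p) for p in nb.values()) for nb in net.values()]
    return float(np.mean(ents))

rng = np.random.default_rng(2)
x = rng.standard_normal(10_000)        # white-noise test series
net = ordinal_network(x, d=3)
print("nodes:", len(net), " average local entropy:", average_local_entropy(net))
```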
In this work we investigate the origin of the parabolic relation between skewness and kurtosis often encountered in the analysis of experimental time-series. We argue that the numerical values of the coefficients of the curve may provide information about the specific physics of the system studied, whereas the analytical curve per se is a fairly general consequence of a few constraints expected to hold for most systems.
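A quick way to see such a parabolic relation empirically is sketched below (illustrative only, with an assumed synthetic signal and window length): skewness S and kurtosis K are computed in non-overlapping windows and K = a*S^2 + b is fitted by least squares.

```python
# Compute windowed skewness/kurtosis of a bursty signal and fit K = a*S^2 + b.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(3)
# toy intermittent signal: Gaussian noise with occasional exponential bursts
x = rng.standard_normal(100_000) + (rng.random(100_000) < 0.01) * rng.exponential(5, 100_000)

w = 500
S, K = [], []
for i in range(0, len(x) - w, w):
    seg = x[i:i + w]
    S.append(skew(seg))
    K.append(kurtosis(seg, fisher=False))          # non-excess kurtosis
S, K = np.array(S), np.array(K)

# least-squares fit of the parabola K = a*S^2 + b
A = np.vstack([S ** 2, np.ones_like(S)]).T
a_fit, b_fit = np.linalg.lstsq(A, K, rcond=None)[0]
print(f"K = {a_fit:.2f} * S^2 + {b_fit:.2f}")
```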