
Surrogates with random Fourier Phases

Posted by Christoph Raeth
Publication date: 2008
Research field: Physics
Paper language: English





The method of surrogates is widely used in the field of nonlinear data analysis for testing for weak nonlinearities. The two most commonly used algorithms for generating surrogates are the amplitude adjusted Fourier transform (AAFT) and the iterated amplitude adjusted Fourier transform (IAAFT) algorithm. Both the AAFT and the IAAFT algorithm conserve the amplitude distribution in real space and reproduce the power spectrum (PS) of the original data set very accurately. The basic assumption in both algorithms is that higher-order correlations can be wiped out using a Fourier phase randomization procedure. In both cases, however, the randomness of the Fourier phases is only imposed before the (first) Fourier back transformation. Until now, it has not been studied how the subsequent remapping and iteration steps may affect the randomness of the phases. Using the Lorenz system as an example, we show that both algorithms may create surrogate realizations containing Fourier phase correlations. We present two new iterative surrogate data generating methods that can control the randomization of the Fourier phases at every iteration step. The resulting surrogate realizations, which are truly linear by construction, display all properties required of surrogate data.
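For context, the standard IAAFT loop that the abstract refers to can be sketched as follows. This is a minimal illustration of the published algorithm, not the authors' new phase-controlled variants; function and variable names are ours:

```python
import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=0):
    """Generate one IAAFT surrogate of the 1-D series x.

    Each pass first imposes the original power spectrum in Fourier
    space, then restores the original amplitude distribution in real
    space by rank-order remapping.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    amp_target = np.abs(np.fft.rfft(x))   # target Fourier amplitudes
    sorted_x = np.sort(x)                 # target amplitude distribution
    s = rng.permutation(x)                # random initial shuffle
    for _ in range(n_iter):
        # enforce the power spectrum: keep current phases, impose target amplitudes
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(amp_target * np.exp(1j * phases), n=len(x))
        # enforce the amplitude distribution: rank-order remap onto sorted_x
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s
```

Note that, as the abstract points out, nothing inside this loop re-randomizes the Fourier phases after the initial shuffle; the remapping step is free to reintroduce phase correlations.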


Read also

This work presents a method of computing Voigt functions and their derivatives, to high accuracy, on a uniform grid. It is based on an adaptation of Fourier-transform based convolution. The relative error of the result decreases as the fourth power of the computational effort. Because its core uses highly vectorizable operations, it can be implemented very efficiently in scripting-language environments which provide fast vector libraries. The availability of the derivatives makes it suitable as a function generator for non-linear fitting procedures.
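The underlying idea, FFT-based convolution of a Gaussian with a Lorentzian on a uniform grid, can be sketched as follows. This is a plain illustration of the convolution trick under the assumption of a wide, symmetric grid containing zero as its central sample, not the paper's high-accuracy scheme:

```python
import numpy as np

def voigt_fft(x, sigma, gamma):
    """Voigt profile on the uniform, symmetric grid x via FFT
    convolution of a normalized Gaussian (std sigma) with a
    normalized Lorentzian (HWHM gamma). Accuracy relies on the
    grid being wide enough for both kernels to decay at the edges."""
    dx = x[1] - x[0]
    gauss = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    lorentz = gamma / (np.pi * (x**2 + gamma**2))
    # move the grid center to index 0 so the circular convolution stays centered
    G = np.fft.rfft(np.fft.ifftshift(gauss))
    Lz = np.fft.rfft(np.fft.ifftshift(lorentz))
    # dx converts the discrete convolution sum into an integral approximation
    return np.fft.fftshift(np.fft.irfft(G * Lz, n=len(x))) * dx
```

The truncation of the Lorentzian's heavy tails at the grid edges is what limits the accuracy of this naive version; handling those tails carefully is part of what the paper's method addresses.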
P. Reegen (2007)
Identifying frequencies with low signal-to-noise ratios in time series of stellar photometry and spectroscopy, and measuring their amplitude ratios and peak widths accurately, are critical goals for asteroseismology. These are also challenges for time series with gaps or whose data are not sampled at a constant rate, even with modern Discrete Fourier Transform (DFT) software. Moreover, the False-Alarm Probability introduced by Lomb and Scargle is an approximation which becomes less reliable in time series with longer data gaps. A rigorous statistical treatment of how to determine the significance of a peak in a DFT, called SigSpec, is presented here. SigSpec is based on an analytical solution of the probability that a DFT peak of a given amplitude does not arise from white noise in a non-equally spaced data set. The underlying Probability Density Function (PDF) of the amplitude spectrum generated by white noise can be derived explicitly if both frequency and phase are incorporated into the solution. In this paper, I define and evaluate an unbiased statistical estimator, the spectral significance, which depends on frequency, amplitude, and phase in the DFT, and which takes into account the time-domain sampling. I also compare this estimator to results from other well-established techniques and demonstrate the effectiveness of SigSpec with a few examples of ground- and space-based photometric data, illustrating how SigSpec deals with the effects of noise and time-domain sampling in determining significant frequencies.
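As a rough illustration of the setting (not SigSpec's analytical estimator), the amplitude spectrum of an unevenly sampled series can be computed directly from the DFT definition; the function name and parameters below are ours:

```python
import numpy as np

def dft_amplitude(t, y, freqs):
    """Amplitude spectrum of an arbitrarily sampled series (t, y),
    evaluated at the given trial frequencies straight from the DFT
    definition (no constant sampling rate assumed)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                     # remove the zero-frequency term
    n = len(y)
    amps = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        z = np.sum(y * np.exp(-2j * np.pi * f * t))
        amps[k] = 2.0 * np.abs(z) / n    # amplitude of a sinusoid at f
    return amps
```

SigSpec's contribution is an analytical probability that a peak of given amplitude arises from white noise on such a time grid; a brute-force stand-in would be to recompute this spectrum for many white-noise realizations on the same grid and count how often the noise peak exceeds the observed one.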
A novel version of the Continuous-Time Random Walk (CTRW) model with memory is developed. The memory takes the form of a dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are treated as i.i.d. random variables. The dependence was found by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale, and is justified theoretically by the bid-ask bounce mechanism, which introduces a delay characteristic of any double-auction market. Our model turns out to be exactly analytically solvable, which enables a direct comparison of its predictions with their empirical counterparts, for instance with the empirical velocity autocorrelation function. This paper thus significantly extends the capabilities of the CTRW formalism.
Exponential Random Graph Models (ERGMs) have gained increasing popularity over the years. Rooted in statistical physics, the ERGM framework has been successfully employed for reconstructing networks, detecting statistically significant patterns in graphs, and counting networked configurations with given properties. From a technical point of view, the ERGM workflow is defined by two subsequent optimization steps: the first concerns the maximization of Shannon entropy and identifies the functional form of the ensemble probability distribution that is maximally non-committal with respect to the missing information; the second concerns the maximization of the likelihood function induced by this probability distribution and leads to its numerical determination. This second step translates into the resolution of a system of $O(N)$ non-linear, coupled equations (with $N$ being the total number of nodes of the network under analysis), a problem affected by three main issues: accuracy, speed, and scalability. The present paper addresses these problems by comparing the performance of three algorithms (Newton's method, a quasi-Newton method, and a recently proposed fixed-point recipe) in solving several ERGMs, defined by binary and weighted constraints in both a directed and an undirected fashion. While Newton's method performs best for relatively small networks, the fixed-point recipe is to be preferred when large configurations are considered, as it ensures convergence to the solution within seconds for networks with hundreds of thousands of nodes (e.g. the Internet, Bitcoin). We attach to the paper a Python code implementing the three aforementioned algorithms on all the ERGMs considered in the present work.
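A minimal illustration of a fixed-point recipe of this kind, shown here for the undirected binary configuration model (the ERGM whose likelihood equations fix the expected degree sequence); the damping and the names are our choices, not necessarily those of the paper's implementation:

```python
import numpy as np

def ubcm_fixed_point(degrees, n_iter=5000, tol=1e-12):
    """Solve the undirected binary configuration model (UBCM)
    likelihood equations  k_i = sum_{j != i} x_i x_j / (1 + x_i x_j)
    via the fixed-point map  x_i <- k_i / sum_{j != i} x_j / (1 + x_i x_j),
    damped by averaging to stabilize the iteration. Returns the
    fitness vector x; connection probabilities are
    p_ij = x_i x_j / (1 + x_i x_j)."""
    k = np.asarray(degrees, dtype=float)
    x = k / np.sqrt(k.sum() + 1.0)                     # common starting point
    for _ in range(n_iter):
        m = x[np.newaxis, :] / (1.0 + np.outer(x, x))  # m[i, j] = x_j / (1 + x_i x_j)
        s = m.sum(axis=1) - np.diagonal(m)             # drop the j == i term
        x_new = 0.5 * (x + k / s)                      # damped fixed-point update
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x
```

Each sweep costs one $N \times N$ matrix evaluation and needs no Jacobian, which is why such recipes scale to networks far larger than plain Newton iterations can handle.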
The usual development of the continuous time random walk (CTRW) assumes that jumps and time intervals form a two-dimensional set of independent and identically distributed random variables. In this paper we address the theoretical setting of non-independent CTRWs, where consecutive jumps and/or time intervals are correlated. An exact solution to the problem is obtained for the special but relevant case in which the correlation depends solely on the signs of consecutive jumps. Even in this simple case some interesting features arise, such as transitions from unimodal to bimodal distributions due to correlation. We also develop the analytical techniques and approximations needed to handle more general situations that can appear in practice.
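A toy simulation of such a sign-correlated CTRW can be written in a few lines; the parameter names and the specific waiting-time and jump-size distributions below are illustrative choices, not the paper's analytical model:

```python
import numpy as np

def ctrw_sign_correlated(n_steps, eps=0.3, seed=0):
    """Simulate a CTRW whose jump *signs* are correlated: each jump
    repeats the sign of the previous one with probability (1+eps)/2,
    so that E[s_i s_{i+1}] = eps (eps = 0 recovers the ordinary,
    uncorrelated walk). Waiting times are i.i.d. exponential and
    jump magnitudes are i.i.d. |N(0, 1)|."""
    rng = np.random.default_rng(seed)
    waits = rng.exponential(1.0, n_steps)
    sizes = np.abs(rng.standard_normal(n_steps))
    # +1 keeps the previous sign, -1 flips it; cumprod chains the decisions
    keep = np.where(rng.random(n_steps) < (1.0 + eps) / 2.0, 1.0, -1.0)
    keep[0] = 1.0                                   # the first sign is free
    signs = rng.choice([-1.0, 1.0]) * np.cumprod(keep)
    return np.cumsum(waits), np.cumsum(signs * sizes)
```

Because only the sign chain is correlated, the waiting times stay i.i.d., matching the setting the abstract describes.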