
Accuracy analysis of the box-counting algorithm

Added by Andrzej Z. Gorski
Publication date: 2011
Field: Physics
Language: English





The accuracy of the box-counting algorithm for numerical computation of fractal exponents is investigated. To this end, several sample mathematical fractal sets are analyzed. It is shown that the standard deviation obtained from the fit of the fractal scaling in the log-log plot strongly underestimates the actual error. The real computational error is found to scale as a power of the number of data points in the sample ($n_{tot}$). For fractals embedded in two-dimensional space the error is larger than for those embedded in one-dimensional space, and for fractal functions it is larger still. The obtained formula can give more realistic accuracy estimates for computed generalized fractal exponents.




In this work we consider information-theoretical observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a $2D$ Ising ferromagnet on a square lattice of size $L^2=128^2$, for different system temperatures $T$. The latter were chosen from an interval enclosing the critical point $T_{\rm c}$ of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here, we implement estimators for the entropy rate, excess entropy (i.e. complexity) and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data compression utilities, providing approximate estimators only. For comparison, and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two algorithmic entropy estimation procedures. To test how well one can exploit the potential of such data compression techniques, we aim at detecting the critical point of the $2D$ Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how well the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
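As a rough illustration of the Lempel-Ziv parsing idea mentioned above (a simplified LZ76 variant of our own, not the paper's implementation), the sketch below estimates an entropy rate for binary sequences and distinguishes a regular from a random one:

```python
import math
import random

def lz76_phrases(s):
    """Count phrases of the Lempel-Ziv (1976) parsing: each new phrase is
    the shortest prefix of the remaining text that has not yet occurred
    as a substring of the previously scanned part."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the candidate phrase while it already occurs earlier
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def lz_entropy_rate(s):
    """Entropy-rate estimate h ~ c(n) * log2(n) / n, in bits per symbol."""
    n = len(s)
    return lz76_phrases(s) * math.log2(n) / n

periodic = "01" * 500                                      # highly regular
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(1000))  # maximally random
print(lz_entropy_rate(periodic) < lz_entropy_rate(noisy))  # True
```

For the periodic sequence the parsing yields only a handful of phrases, so the estimate is close to zero; for the random sequence it approaches one bit per symbol, mirroring the low- and high-temperature limits described in the abstract.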
Ultracold neutrons (UCN) with kinetic energies up to 300 neV can be stored in material or magnetic confinements for hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature, by searching for charge-parity violation by a neutron electric dipole moment, and for yielding important parameters for Big Bang nucleosynthesis, e.g. in neutron-lifetime measurements. Further increasing the intensity of UCN sources is crucial for next-generation experiments. Advanced Monte Carlo (MC) simulation codes are important in the optimization of the neutron optics of UCN sources and of experiments, but also in the estimation of systematic effects and in benchmarking of analysis codes. Here we give a short overview of recent MC simulation activities in this field.
Alberto Ramos (2018)
Automatic Differentiation (AD) allows one to determine exactly the Taylor series of any function, truncated at any order. Here we propose to use AD techniques for Monte Carlo data analysis. We discuss how to estimate errors of a general function of measured observables in different Monte Carlo simulations. Our proposal combines the $\Gamma$-method with automatic differentiation, allowing exact error propagation in arbitrary observables, even those defined via iterative algorithms. The case of special interest where we estimate the error in fit parameters is discussed in detail. We also present a freely available Fortran reference implementation of the ideas discussed in this work.
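A toy version of the combination described above can be sketched with forward-mode AD via dual numbers and naive quadrature error propagation. The full $\Gamma$-method also accounts for autocorrelations in the Monte Carlo data, which this uncorrelated sketch omits, and the class and function names are our own:

```python
import math

class Dual:
    """Minimal forward-mode AD number: a value plus its gradient
    with respect to all inputs."""
    def __init__(self, val, grad):
        self.val, self.grad = val, grad
    def __add__(self, o):
        return Dual(self.val + o.val,
                    [a + b for a, b in zip(self.grad, o.grad)])
    def __mul__(self, o):
        return Dual(self.val * o.val,
                    [a * o.val + self.val * b
                     for a, b in zip(self.grad, o.grad)])
    def __truediv__(self, o):
        inv = 1.0 / o.val
        # d(u/v) = du/v - u*dv/v^2
        return Dual(self.val * inv,
                    [(a - self.val * inv * b) * inv
                     for a, b in zip(self.grad, o.grad)])

def propagate_error(f, values, sigmas):
    """Evaluate f on Dual inputs and combine the per-observable errors
    in quadrature (uncorrelated linear error propagation)."""
    n = len(values)
    duals = [Dual(v, [1.0 if i == j else 0.0 for j in range(n)])
             for i, v in enumerate(values)]
    out = f(*duals)
    err = math.sqrt(sum((g * s) ** 2 for g, s in zip(out.grad, sigmas)))
    return out.val, err

# error of a derived observable: the ratio of two measured quantities
val, err = propagate_error(lambda a, b: a / b, [2.0, 4.0], [0.1, 0.2])
print(val, round(err, 4))  # 0.5 0.0354
```

Because the derivatives are exact rather than obtained by finite differences, the same mechanism extends to observables defined implicitly, e.g. through an iterative fit, which is the case the paper treats in detail.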
We use a machine learning approach to identify the importance of microstructure characteristics in causing magnetization reversal in ideally structured large-grained Nd$_2$Fe$_{14}$B permanent magnets. The embedded Stoner-Wohlfarth method is used as a reduced-order model for determining local switching field maps, which guide the data-driven learning procedure. The predictor model is a random forest classifier, which we validate by comparing with full micromagnetic simulations in the case of small granular test structures. In the course of the machine learning microstructure analysis, the most important features explaining magnetization reversal were found to be the misorientation and the position of the grain within the magnet. The lowest switching fields occur near the top and bottom edges of the magnet. While the dependence of the local switching field on the grain orientation is known from theory, the influence of the position of the grain on the local coercive field strength is less obvious. As a direct result of the machine learning analysis, we show that edge hardening via Dy diffusion leads to higher coercive fields.
The GooFit Framework is designed to perform maximum-likelihood fits for arbitrary functions on various parallel back ends, for example a GPU. We present an extension to GooFit which adds the functionality to perform time-dependent amplitude analyses of pseudoscalar mesons decaying into four pseudoscalar final states. Benchmarks of this functionality show a significant performance increase when utilizing a GPU compared to a CPU. Furthermore, this extension is employed to study the sensitivity to the $D^0-\bar{D}^0$ mixing parameters $x$ and $y$ in a time-dependent amplitude analysis of the decay $D^0 \rightarrow K^+\pi^-\pi^+\pi^-$. Studying a sample of 50 000 events and setting the central values to the world average of $x = (0.49 \pm 0.15)\%$ and $y = (0.61 \pm 0.08)\%$, the statistical sensitivities of $x$ and $y$ are determined to be $\sigma(x) = 0.019\%$ and $\sigma(y) = 0.019\%$.
