
On the Correct Use of Statistical Tests: Reply to "Lies, damned lies and statistics (in Geology)"

Added by Didier Sornette
Publication date: 2010
Fields: Physics
Language: English
Authors: D. Sornette





In a Forum published in EOS Transactions AGU (2009) entitled "Lies, damned lies and statistics (in Geology)", Vermeesch (2009) claims that statistical significance is not the same as geological significance; in other words, statistical tests may be misleading. In complete contradiction, we affirm that statistical tests are always informative. We detail the several mistakes made by Vermeesch in his initial paper and in his comments on our reply. The present text is developed in the hope that it can serve as an illuminating pedagogical exercise for students and lecturers to learn more about the subtleties, richness and power of the science of statistics.
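The distinction at issue in the abstract, between a statistically significant result and a practically (here, geologically) significant one, can be illustrated with a minimal sketch. The numbers below are illustrative assumptions, not data from the paper: a fixed tiny effect of 0.01 standard deviations passes or fails a z-test purely as a function of sample size.

```python
import math

def z_statistic(sample_mean, sigma, n):
    """z for H0: population mean == 0, with known sigma."""
    return sample_mean / (sigma / math.sqrt(n))

# The same tiny observed shift of 0.01 sigma, at two sample sizes:
small = z_statistic(0.01, 1.0, 100)        # far below the 1.96 threshold
large = z_statistic(0.01, 1.0, 1_000_000)  # far above it
print(f"n = 100:       z = {small:.2f}")
print(f"n = 1,000,000: z = {large:.2f}")
```

With enough data, any nonzero effect becomes statistically significant; whether a 0.01-sigma shift matters is a separate, domain-level judgment. That is the tension between Vermeesch's claim and Sornette's reply that the test itself remains informative.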




Read More

It is shown that price changes of the US dollar - German mark exchange rate over different delay times can be regarded as a Markovian stochastic process. Furthermore, we show that the Kramers-Moyal coefficients can be estimated from the empirical data. Finally, we present an explicit Fokker-Planck equation which models the empirical probability distributions very precisely.
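The estimation procedure this abstract describes can be sketched on synthetic data rather than the FX series: the first Kramers-Moyal (drift) coefficient D1(x) is the conditional average of increments, D1(x) ≈ <x(t+tau) - x(t) | x(t) = x> / tau. The test process, parameters, and bin width below are illustrative assumptions; an Ornstein-Uhlenbeck path is used because its drift D1(x) = -gamma*x is known exactly.

```python
import random

# Synthetic Ornstein-Uhlenbeck path: dx = -gamma*x dt + sqrt(2D) dW,
# for which the first Kramers-Moyal coefficient is D1(x) = -gamma*x.
random.seed(1)
gamma, D, dt, n = 1.0, 0.5, 0.01, 500_000
x, path = 0.0, []
for _ in range(n):
    path.append(x)
    x += -gamma * x * dt + (2 * D * dt) ** 0.5 * random.gauss(0, 1)

# Bin the state, then average the increments within each bin:
# the bin-conditioned mean increment divided by dt estimates D1(x).
bins = {}
for i in range(n - 1):
    bins.setdefault(round(path[i], 1), []).append(path[i + 1] - path[i])

for b in sorted(k for k, v in bins.items() if len(v) > 5000):
    d1 = sum(bins[b]) / len(bins[b]) / dt
    print(f"x = {b:+.1f}: estimated D1 = {d1:+.2f}, theory {-gamma * b:+.2f}")
```

Higher coefficients (D2, the diffusion term) follow the same pattern with squared increments; together they specify the Fokker-Planck equation the abstract refers to.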
63 - Matthew Burke 2016
We formulate and prove a twofold generalisation of Lie's second theorem that integrates homomorphisms between formal group laws to homomorphisms between Lie groups. Firstly, we generalise classical Lie theory by replacing groups with categories. Secondly, we include categories whose underlying spaces are not smooth manifolds. The main intended application is when we replace the category of smooth manifolds with a well-adapted model of synthetic differential geometry. In addition, we provide an axiomatic system that specifies the abstract structures required to prove Lie's second theorem. As part of this abstract structure we define the notion of an enriched mono-coreflective subcategory, which makes precise the notion of a subcategory of local models.
After carefully studying the comment by Wang et al. (arXiv:1408.6420), we found that it includes several mistakes and unjustified statements, and that Wang et al. lack very basic knowledge of dislocations. Moreover, there is clear evidence indicating that Wang et al. significantly misrepresented our method and claimed something that they did not actually implement.
182 - I. Grabec 2007
Statistical modeling of experimental physical laws is based on the probability density function of measured variables. It is expressed from experimental data via a kernel estimator. The kernel is determined objectively by the scattering of data during calibration of the experimental setup. A physical law, which relates measured variables, is optimally extracted from experimental data by the conditional average estimator. It is derived directly from the kernel estimator and corresponds to a general nonparametric regression. The proposed method is demonstrated by the modeling of a return map of noisy chaotic data. In this example, the nonparametric regression is used to predict a future value of a chaotic time series from the present one. The mean predictor error is used in the definition of predictor quality, while the redundancy is expressed by the mean square distance between data points. Both statistics are used in a new definition of the predictor cost function. From the minimum of the predictor cost function, the proper number of data points in the model is estimated.
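The conditional average estimator the abstract derives reduces, for a Gaussian kernel, to Nadaraya-Watson kernel regression. A minimal sketch on the abstract's own example, a return map of noisy chaotic data, follows; the logistic map, noise level, and kernel width h are illustrative assumptions, not the paper's setup.

```python
import math
import random

# Noisy chaotic series from the logistic map x -> 3.9 x (1 - x) + noise.
random.seed(2)
x, series = 0.3, []
for _ in range(3000):
    series.append(x)
    x = min(max(3.9 * x * (1 - x) + random.gauss(0, 0.01), 0.0), 1.0)

pairs = list(zip(series[:-1], series[1:]))   # return map (x_t, x_{t+1})
train, test = pairs[:2500], pairs[2500:]

def predict(x0, data, h=0.02):
    """Conditional average: Gaussian-kernel-weighted mean of successors."""
    w = [math.exp(-0.5 * ((x0 - xi) / h) ** 2) for xi, _ in data]
    return sum(wi * yi for wi, (_, yi) in zip(w, data)) / max(sum(w), 1e-300)

# Mean squared prediction error on held-out points.
err = sum((predict(x0, train) - y) ** 2 for x0, y in test) / len(test)
print(f"mean squared prediction error: {err:.5f}")
```

The mean predictor error computed here is the quantity the abstract feeds, together with the redundancy, into its predictor cost function; minimising that cost over the training-set size would give the "proper number of data" in the model.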
374 - I. Grabec 2007
Redundancy of experimental data is the basic statistic from which the complexity of a natural phenomenon and the proper number of experiments needed for its exploration can be estimated. The redundancy is expressed by the entropy of information pertaining to the probability density function of experimental variables. Since the calculation of entropy is inconvenient due to integration over a range of variables, an approximate expression for redundancy is derived that includes only a sum over the set of experimental data about these variables. The approximation makes feasible an efficient estimation of the redundancy of data along with the related experimental information and information cost function. From the experimental information the complexity of the phenomenon can be simply estimated, while the proper number of experiments needed for its exploration can be determined from the minimum of the cost function. The performance of the approximate estimation of these statistics is demonstrated on two-dimensional normally distributed random data.
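The abstract's key move, replacing the entropy integral with a sum over the data points themselves, can be sketched as a resubstitution estimate on a kernel density: H ≈ -(1/n) Σ_i ln f̂(x_i). The sample size and kernel width sigma below are illustrative assumptions; two-dimensional normal data is used because, as in the abstract's demonstration, its exact entropy is known.

```python
import math
import random

random.seed(3)
n, sigma = 400, 0.3
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

def kde(p, pts, s):
    """2-D Gaussian kernel density estimate at point p."""
    norm = 1.0 / (2 * math.pi * s * s * len(pts))
    return norm * sum(
        math.exp(-((p[0] - x) ** 2 + (p[1] - y) ** 2) / (2 * s * s))
        for x, y in pts)

# One sum over the data replaces the integral over the variables.
H = -sum(math.log(kde(p, data, sigma)) for p in data) / n
exact = math.log(2 * math.pi * math.e)   # entropy of a unit 2-D normal
print(f"entropy estimate: {H:.3f}  (exact: {exact:.3f})")
```

The estimate carries the biases the abstract's approximation must also manage (kernel smoothing inflates it, self-contribution deflates it), but it needs only O(n^2) kernel evaluations instead of a multidimensional integral, which is the efficiency gain claimed.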