
On statistical deficiency: Why the test statistic of the matching method is hopelessly underpowered and uniquely informative

Added by Michael Nelson
Publication date: 2020
Language: English





The random variate m is, in combinatorics, a basis for comparing permutations, as well as the solution to a centuries-old riddle involving the mishandling of hats. In statistics, m is the test statistic for a disused null hypothesis statistical test (NHST) of association, the matching method. In this paper, I show that the matching method has an absolute, and relatively low, limit on its statistical power. I do so first by reinterpreting Rae's theorem, which describes the joint distributions of m with several rank correlation statistics under a true null. I then derive this property solely from m's unconditional sampling distribution, on which basis I develop the concept of a deficient statistic: a statistic that is insufficient, inconsistent, and inefficient with respect to its parameter. Finally, I demonstrate an application for m that exploits its deficiency to qualify the sampling error in a jointly estimated sample correlation.
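The power ceiling follows from a classical combinatorial fact: under the null of no association, one ranking is a uniformly random permutation of the other, so m is the number of fixed points of a random permutation, whose mean and variance are both 1 for every n ≥ 2. Below is a minimal simulation sketch of that fact; it is our own illustration, not the paper's code, and the function names are ours.

```python
# A minimal sketch (not the paper's code): simulating the null
# distribution of m, the number of matches between a ranking and a
# uniformly random permutation of it. The key fact is that E[m] = 1
# and Var[m] = 1 for every n >= 2, so the distribution never
# concentrates as n grows -- the intuition behind the power ceiling.
import numpy as np

rng = np.random.default_rng(0)

def simulate_m(n, reps=20_000):
    """Count fixed points of uniform random permutations of size n."""
    perms = np.argsort(rng.random((reps, n)), axis=1)
    return (perms == np.arange(n)).sum(axis=1)

for n in (5, 50, 500):
    m = simulate_m(n)
    print(f"n={n:4d}  mean={m.mean():.3f}  var={m.var():.3f}")
# For every n the mean and variance stay near 1: m's sampling
# distribution converges to Poisson(1) rather than concentrating,
# so a test based on m cannot gain power from a larger sample.
```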



Related research

The role of probability appears unchallenged as the key measure of uncertainty, used among other things for practical induction in the empirical sciences. Yet Popper was emphatic in his rejection of inductive probability and of the logical probability of hypotheses; furthermore, for him, the degree of corroboration cannot be a probability. Instead he proposed a deductive method of testing. In many ways this dialectic tension has parallels in statistics, with the Bayesians on the logico-inductive side versus the non-Bayesians, or frequentists, on the other. Simplistically, Popper seems to be on the frequentist side, but recent syntheses on the non-Bayesian side might direct the Popperian view to a more nuanced destination. Logical probability seems perfectly suited to measuring partial evidence or support, so what can we use if we are to reject it? Over the past 100 years, statisticians have developed a related concept called likelihood, which has played a central role in statistical modelling and inference. Remarkably, this Fisherian concept of uncertainty is largely unknown, or at least severely under-appreciated, in the non-statistical literature. As a measure of corroboration, the likelihood satisfies the Popperian requirement that it not be a probability. Our aim is to introduce the likelihood and its recent extension via a discussion of two well-known logical fallacies, in order to highlight that its lack of recognition may have led to unnecessary confusion in our discourse about falsification and corroboration of hypotheses. We highlight the 100 years of development of likelihood concepts. The year 2021 will mark the 100-year anniversary of the likelihood, so with this paper we wish it a long life and increased appreciation in the non-statistical literature.
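To make the distinction concrete, here is a small illustration of our own (not from the paper): a binomial likelihood, read as a function of the parameter, need not integrate to 1, which is the formal sense in which likelihood is not a probability.

```python
# A hedged illustration (ours, not the authors'): the binomial
# likelihood viewed as a function of the parameter theta is not a
# probability density over theta -- it does not integrate to 1.
import numpy as np
from scipy.integrate import quad
from scipy.stats import binom

n, k = 10, 7  # hypothetical data: 7 successes in 10 trials

def likelihood(theta):
    """L(theta) = P(k successes | theta), read as a function of theta."""
    return binom.pmf(k, n, theta)

area, _ = quad(likelihood, 0.0, 1.0)
print(f"Integral of L(theta) over [0, 1] = {area:.4f}")  # 1/(n+1) ~ 0.0909
# At any fixed theta the pmf sums to 1 over outcomes k, but over
# theta the likelihood integrates to 1/(n+1): it measures relative
# support, not probability -- the Popperian requirement noted above.
```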
Spike proteins, roughly 1200 amino acids long, are divided into two nearly equal parts, S1 and S2. We review here phase transition theory, implemented quantitatively by thermodynamic scaling. The theory explains the evolution of the coronavirus's extremely high contagiousness, caused by a few mutations, identified among hundreds in S1, from CoV2003 to CoV2019. The theory previously predicted the unprecedented success of spike-based vaccines. Here we analyze the impressive successes of McLellan et al. (2020) in stabilizing their original S2P vaccine as Hexapro. Hexapro expands the two proline mutations of S2P (2017) to six combined proline mutations in S2. Their four new mutations are the result of surveying 100 possibilities in their detailed structure-based context. Our analysis, based on only sparse publicly available data, suggests new proline mutations could improve the Hexapro combination to Octapro or beyond.
We provide accessible insight into the current replication crisis in statistical science, by revisiting the old metaphor of court trial as hypothesis test. Inter alia, we define and diagnose harmful statistical witch-hunting both in justice and science, which extends to the replication crisis itself, where a hunt on p-values is currently underway.
In this paper we propose a new pattern-matching algorithm, based on a statistical method, for matching CCD images to a stellar catalogue. Constructing star pairs greatly reduces the computational complexity compared with the triangle method. We use a subsample of the brightest objects from the image and the reference catalogue, and then find a coordinate transformation between the two based on the statistical information of the star pairs. All remaining objects are then matched using this initial plate solution. The matching process can be accomplished in several milliseconds for images taken by the Yunnan Observatory 1-m telescope.
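As a rough illustration of the pair-based idea, here is a sketch of our own, not the authors' algorithm: it assumes a pure translation between image and catalogue (no rotation or scale) and that both coordinate lists are sorted by brightness; the function name, bin count, and tolerance are ours.

```python
# A minimal sketch of the star-pair idea under a simplifying
# assumption (pure translation only). The most frequent image-to-
# catalogue offset among the brightest objects is taken as the
# plate shift, after which all objects are matched.
import numpy as np

def match_by_offset(image_xy, catalog_xy, n_bright=30, tol=1.0):
    """Estimate a translation from the modal offset between the
    brightest image and catalogue stars, then match all objects."""
    # Candidate offsets between every bright image/catalogue pair.
    offsets = (catalog_xy[:n_bright, None, :] -
               image_xy[None, :n_bright, :]).reshape(-1, 2)
    # The true shift recurs once per genuine counterpart, so it
    # dominates a coarse 2-D histogram of the candidate offsets.
    hist, xe, ye = np.histogram2d(offsets[:, 0], offsets[:, 1], bins=50)
    ix, iy = np.unravel_index(hist.argmax(), hist.shape)
    shift = np.array([(xe[ix] + xe[ix + 1]) / 2,
                      (ye[iy] + ye[iy + 1]) / 2])
    # Match every image object to its nearest catalogue object
    # after applying the estimated shift.
    d = np.linalg.norm(image_xy[:, None, :] + shift -
                       catalog_xy[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    ok = d[np.arange(len(image_xy)), nearest] < tol
    return shift, nearest, ok
```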
Wei Xu, Wen Chen, Yingjie Liang (2017)
This study investigates the feasibility of the least-squares method for fitting non-Gaussian noisy data. We add different levels of two typical non-Gaussian noises, Levy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial, and exponential equations, and calculate the maximum absolute and mean square errors for each case. Levy and stretched Gaussian distributions have many applications in fractional and fractal calculus. We observe that non-Gaussian noise is fitted less accurately than Gaussian noise, though the stretched Gaussian cases perform better than the Levy cases. We stress that the least-squares method becomes inapplicable in the non-Gaussian noise cases when the noise level exceeds 5%.
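A hedged sketch of the experiment's flavour (our own code, not the authors'): fit a line by least squares under Gaussian versus heavy-tailed noise, using the standard Cauchy as a stand-in Levy-stable draw (alpha = 1), and compare the maximum absolute and mean square errors of the fit.

```python
# Our own sketch, not the paper's code: least-squares line fits
# under Gaussian vs heavy-tailed (Cauchy, a Levy-stable special
# case) noise, compared by max-absolute and mean square error.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
truth = 2.0 * x + 1.0  # exact values of a linear test function

def fit_errors(noise):
    """Least-squares line fit; errors measured against the exact line."""
    coeffs = np.polyfit(x, truth + noise, deg=1)
    resid = np.polyval(coeffs, x) - truth
    return np.abs(resid).max(), (resid ** 2).mean()

level = 0.05 * truth.std()  # roughly a "5% noise level"
for name, noise in [
    ("Gaussian", level * rng.standard_normal(x.size)),
    ("Cauchy (heavy-tailed)", level * rng.standard_cauchy(x.size)),
]:
    mae, mse = fit_errors(noise)
    print(f"{name:22s} max-abs error={mae:.3f}  MSE={mse:.4f}")
# The heavy-tailed draws produce occasional huge outliers that pull
# the least-squares line far from the truth, matching the paper's
# observation that non-Gaussian noise is fitted less accurately.
```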
