
The piranha problem: Large effects swimming in a small pond

Published by: Christopher Tosh
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





In some scientific fields, it is common to have certain variables of interest that are of particular importance and for which there are many studies indicating a relationship with a different explanatory variable. In such cases, particularly those where no relationships are known among explanatory variables, it is worth asking under what conditions it is possible for all such claimed effects to exist simultaneously. This paper addresses this question by reviewing some theorems from multivariate analysis that show, unless the explanatory variables also have sizable effects on each other, it is impossible to have many such large effects. We also discuss implications for the replication crisis in social science.
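One of the piranha theorems the paper reviews has a simple special case that is easy to check numerically: if the outcome and all explanatory variables are standardized and the explanatory variables are mutually uncorrelated, their squared correlations with the outcome must sum to at most one, so only a few of them can be individually large. Below is a minimal Python sketch of that constraint (an illustration of the phenomenon, not the paper's general theorems; the setup and all names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 10

# p mutually independent, standardized explanatory variables.
X = rng.standard_normal((n, p))

# An outcome influenced equally by every explanatory variable.
y = X @ np.ones(p) + rng.standard_normal(n)

# Sample correlation of each explanatory variable with the outcome.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
print("individual correlations:", np.round(r, 3))             # each ~ 0.30
print("sum of squared correlations:", np.round(np.sum(r**2), 3))  # <= 1
```

With ten independent predictors the individual correlations settle near 0.30 and the squared correlations sum to just under one: packing in more "large effects" is only possible if the explanatory variables start having sizable effects on each other.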




Read also

We consider a linear regression model, with the parameter of interest a specified linear combination of the regression parameter vector. We suppose that, as a first step, a data-based model selection (e.g. by preliminary hypothesis tests or minimizing AIC) is used to select a model. It is common statistical practice to then construct a confidence interval for the parameter of interest based on the assumption that the selected model had been given to us a priori. This assumption is false and it can lead to a confidence interval with poor coverage properties. We provide an easily-computed finite sample upper bound (calculated by repeated numerical evaluation of a double integral) to the minimum coverage probability of this confidence interval. This bound applies for model selection by any of the following methods: minimum AIC, minimum BIC, maximum adjusted R-squared, minimum Mallows Cp and t-tests. The importance of this upper bound is that it delineates general categories of design matrices and model selection procedures for which this confidence interval has poor coverage properties. This upper bound is shown to be a finite sample analogue of an earlier large sample upper bound due to Kabaila and Leeb.
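The bound itself requires the paper's double-integral computation, but the undercoverage it quantifies is easy to reproduce by Monte Carlo. The sketch below (our own toy setup, not the authors' bound) selects between a full and a reduced model by a preliminary t-test and then constructs the naive 95% interval as if the selected model were fixed in advance; with a correlated design and a small secondary coefficient, the empirical coverage falls well below the nominal level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 50, 5000

# A fixed, highly correlated design: this is where naive post-selection
# intervals are most damaged.
rho = 0.9
X = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
beta1, beta2 = 1.0, 0.3           # beta2 is small, so selection is unstable

hits = 0
for _ in range(reps):
    y = X @ np.array([beta1, beta2]) + rng.standard_normal(n)

    # Preliminary t-test of beta2 = 0 in the full model.
    XtX_inv = np.linalg.inv(X.T @ X)
    b_full = XtX_inv @ X.T @ y
    s2 = np.sum((y - X @ b_full) ** 2) / (n - 2)
    t2 = b_full[1] / np.sqrt(s2 * XtX_inv[1, 1])

    if abs(t2) > stats.t.ppf(0.975, n - 2):      # keep the full model
        b, var_b, df = b_full[0], s2 * XtX_inv[0, 0], n - 2
    else:                                        # drop x2 and (naively) refit
        x1 = X[:, 0]
        b = x1 @ y / (x1 @ x1)
        s2_r = np.sum((y - b * x1) ** 2) / (n - 1)
        var_b, df = s2_r / (x1 @ x1), n - 1

    half = stats.t.ppf(0.975, df) * np.sqrt(var_b)
    hits += (b - half <= beta1 <= b + half)

print(f"empirical coverage of the nominal 95% interval: {hits / reps:.3f}")
```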
The use of entropy-related concepts ranges from physics, such as statistical mechanics, to evolutionary biology. The Shannon entropy is a measure used to quantify the amount of information in a system, and its estimation is usually made under the frequentist approach. In the present paper, we introduce a fully objective Bayesian analysis to obtain this measure's posterior distribution. Notably, we consider the Gamma distribution, which describes many natural phenomena in physics, engineering, and biology. We reparametrize the model in terms of entropy, and different objective priors are derived, such as the Jeffreys prior, reference prior, and matching priors. Since the obtained priors are improper, we prove that the resulting posterior distributions are proper and their respective posterior means are finite. An intensive simulation study is conducted to select the prior that returns the best results in terms of bias, mean squared error, and coverage probabilities. The proposed approach is illustrated on two datasets: the first relates to the reign periods of the Achaemenid dynasty, and the second describes the time to failure of an electronic component in a sugarcane harvester.
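The reparametrization at the heart of this paper relies on the closed-form Shannon entropy of a Gamma(shape k, scale theta) distribution, H = k + ln(theta) + ln Gamma(k) + (1 - k) * digamma(k). Below is a minimal sketch of that formula together with the usual frequentist plug-in estimate it is meant to improve on (not the paper's objective-prior posterior; the simulated data are our own).

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln, digamma

def gamma_entropy(k, theta):
    """Shannon entropy of Gamma(shape k, scale theta):
    H = k + ln(theta) + ln Gamma(k) + (1 - k) * digamma(k)."""
    return k + np.log(theta) + gammaln(k) + (1 - k) * digamma(k)

rng = np.random.default_rng(2)
data = rng.gamma(shape=2.5, scale=1.8, size=200)

# Frequentist plug-in estimate: evaluate H at the maximum likelihood estimates.
# (The paper instead places objective priors on the reparametrized model and
# reports the posterior distribution of H itself.)
k_hat, _, theta_hat = stats.gamma.fit(data, floc=0)

print("true entropy:    ", gamma_entropy(2.5, 1.8))
print("plug-in estimate:", gamma_entropy(k_hat, theta_hat))
```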
Denis Chetverikov (2012)
Monotonicity is a key qualitative prediction of a wide array of economic models derived via robust comparative statics. It is therefore important to design effective and practical econometric methods for testing this prediction in empirical analysis. This paper develops a general nonparametric framework for testing monotonicity of a regression function. Using this framework, a broad class of new tests is introduced, which gives an empirical researcher a lot of flexibility to incorporate ex ante information she might have. The paper also develops new methods for simulating critical values, which are based on the combination of a bootstrap procedure and new selection algorithms. These methods yield tests that have correct asymptotic size and are asymptotically nonconservative. It is also shown how to obtain an adaptive rate-optimal test that has the best attainable rate of uniform consistency against models whose regression function has Lipschitz-continuous first-order derivatives and that automatically adapts to the unknown smoothness of the regression function. Simulations show that the power of the new tests in many cases significantly exceeds that of some prior tests, e.g. that of Ghosal, Sen, and Van der Vaart (2000). An application of the developed procedures to the dataset of Ellison and Ellison (2011) shows some evidence of strategic entry deterrence in the pharmaceutical industry, where incumbents may use strategic investment to prevent generic entry when their patents expire.
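A toy version of the testing problem helps fix ideas. The sketch below is not Chetverikov's kernel-weighted sign statistic or his selection algorithms; it simply compares consecutive local means of the x-sorted data and bootstraps a critical value under the least-favorable flat regression, which is already enough to separate a monotone from a non-monotone truth. All choices (window size, bump shape) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def drop_stat(x, y, window=20):
    """Largest drop between consecutive window means of the x-sorted data;
    large values are evidence against a nondecreasing regression function."""
    ys = y[np.argsort(x)]
    means = np.array([ys[i:i + window].mean()
                      for i in range(0, len(ys) - window + 1, window)])
    return np.max(-np.diff(means))

def monotonicity_pvalue(x, y, B=999, window=20):
    """Bootstrap p-value under the least-favorable flat (constant) regression:
    resample centered residuals and recompute the statistic."""
    t_obs = drop_stat(x, y, window)
    resid = y - y.mean()
    t_null = np.array([drop_stat(x, rng.choice(resid, len(y), replace=True),
                                 window) for _ in range(B)])
    return (1 + np.sum(t_null >= t_obs)) / (B + 1)

n = 200
x = rng.uniform(0, 1, n)
y_mono = x + 0.3 * rng.standard_normal(n)                      # monotone truth
y_bump = x - 2.0 * np.exp(-200 * (x - 0.5) ** 2) \
           + 0.3 * rng.standard_normal(n)                      # a sharp dip

print("p-value, monotone truth:    ", monotonicity_pvalue(x, y_mono))
print("p-value, non-monotone truth:", monotonicity_pvalue(x, y_bump))
```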
Kai Ni, Shanshan Cao (2020)
Given an inhomogeneous chain embedded in a noisy image, we consider the conditions under which such an embedded chain is detectable. Many applications, such as detecting moving objects or ship wakes, can be abstracted as detecting the existence of chains. In this work, we provide a detection algorithm with low computational complexity and establish the optimal theoretical detectability in terms of SNR (signal-to-noise ratio) under the normal distribution model. Specifically, we derive an analytical threshold that specifies what is detectable. We design a longest significant chain detection algorithm with computational complexity of order $O(n \log n)$. We also prove that our proposed algorithm is asymptotically powerful, meaning that, as the dimension $n \rightarrow \infty$, the probability of false detection vanishes. We further provide simulated examples and a real data example, which validate our theory.
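A one-dimensional toy analogue illustrates the logic (the paper itself works with chains embedded in 2D images and derives its threshold analytically): threshold each observation, scan for the longest run of exceedances, and compare it with the run length expected under pure noise, which grows like log(n)/log(1/p). The threshold and the safety factor below are our illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 10_000

noise = rng.standard_normal(n)           # pure-noise "image row"
signal = noise.copy()
signal[4000:4100] += 2.0                 # embedded chain: length 100, SNR 2

def longest_run(z, tau):
    """Length of the longest consecutive run with z > tau (one linear scan)."""
    best = cur = 0
    for v in z:
        cur = cur + 1 if v > tau else 0
        best = max(best, cur)
    return best

tau = 1.0                                # per-point threshold
p = norm.sf(tau)                         # exceedance probability under noise
# Under pure noise the longest run grows like log(n)/log(1/p);
# double it as a crude critical length.
crit = int(np.ceil(2 * np.log(n) / np.log(1 / p)))

for name, z in [("noise only", noise), ("with chain", signal)]:
    print(f"{name}: longest run = {longest_run(z, tau)} (critical length {crit})")
```

Only the chain-bearing sequence produces a run exceeding the critical length; as n grows, the noise-only run stays near log(n)/log(1/p), which is the sense in which false detection vanishes asymptotically.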
For in vivo research experiments with small sample sizes and available historical data, we propose a sequential Bayesian method for the Behrens-Fisher problem. We consider it as a model choice question with two models in competition: one for which the two expectations are equal and one for which they are different. The choice between the two models is performed through a Bayesian analysis, based on a robust choice of combined objective and subjective priors, set on the parameter space and on the model space. Three steps are necessary to evaluate the posterior probability of each model, using two historical datasets similar to the one of interest. Starting from the Jeffreys prior, a posterior based on the first historical dataset is deduced and allows us to calibrate the Normal-Gamma informative priors for the second historical dataset analysis, in addition to a uniform prior on the model space. From this second step, a new posterior on the parameter space and the model space can be used as the objective informative prior for the final Bayesian analysis. Bayesian and frequentist methods have been compared on simulated and real data. In accordance with FDA recommendations, control of type I and type II error rates has been evaluated. The proposed method controls them even when the historical experiments are not completely similar to the one of interest.
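The Normal-Gamma machinery behind such a method can be sketched compactly: conjugate updating gives closed-form posteriors and marginal likelihoods, so a historical dataset can calibrate an informative prior and the two models can then be compared by marginal likelihood under a uniform model prior. The sketch below compresses the paper's three-step procedure into a single calibration step and applies an ad hoc down-weighting of the historical precision; it illustrates the mechanics only, not the authors' prior construction.

```python
import numpy as np
from scipy.special import gammaln

def ng_update(y, mu0, k0, a0, b0):
    """Normal-Gamma posterior hyperparameters after observing sample y."""
    n, ybar = len(y), np.mean(y)
    kn = k0 + n
    mun = (k0 * mu0 + n * ybar) / kn
    an = a0 + n / 2
    bn = b0 + 0.5 * np.sum((y - ybar) ** 2) \
            + k0 * n * (ybar - mu0) ** 2 / (2 * kn)
    return mun, kn, an, bn

def log_marginal(y, mu0, k0, a0, b0):
    """Closed-form log marginal likelihood of y under a Normal-Gamma prior."""
    n = len(y)
    _, kn, an, bn = ng_update(y, mu0, k0, a0, b0)
    return (gammaln(an) - gammaln(a0) + a0 * np.log(b0) - an * np.log(bn)
            + 0.5 * (np.log(k0) - np.log(kn)) - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(5)

# Calibrate an informative prior from one historical dataset (the paper uses
# two historical datasets and a more careful objective/subjective mix).
hist = rng.normal(0.0, 1.0, 30)
mu0, k0, a0, b0 = ng_update(hist, 0.0, 0.01, 0.01, 0.01)
k0 = 1.0   # ad hoc down-weighting so history informs but does not dominate

# Current small two-group experiment with unequal variances (Behrens-Fisher).
g1 = rng.normal(0.0, 1.0, 8)
g2 = rng.normal(1.2, 2.0, 8)

# M0: one common mean/precision for both groups.
# M1: separate mean/precision per group.
lm0 = log_marginal(np.concatenate([g1, g2]), mu0, k0, a0, b0)
lm1 = log_marginal(g1, mu0, k0, a0, b0) + log_marginal(g2, mu0, k0, a0, b0)

# Uniform prior over the two models.
post_m1 = 1.0 / (1.0 + np.exp(lm0 - lm1))
print(f"posterior probability that the means differ: {post_m1:.3f}")
```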