
How to make any method fail: BAMM at the kangaroo court of false equivalency

Published by Daniel Rabosky
Publication date: 2017
Research field: Biology
Paper language: English
Author: Daniel L Rabosky





The software program BAMM has been widely used to study the dynamics of speciation, extinction, and phenotypic evolution on phylogenetic trees. The program implements a model-based clustering algorithm to identify clades that share common macroevolutionary rate dynamics and to estimate rate parameters. A recent simulation study published in Evolution (2017) by Meyer and Wiens (M&W) claimed that (i) simple (MS) estimators of diversification rates perform much better than BAMM, and (ii) evolutionary rates inferred with BAMM are weakly correlated with the true rates in the generating model. I demonstrate that their assessment suffers from two major conceptual errors that invalidate both primary conclusions. These statistical considerations are not specific to BAMM and apply to all methods for estimating parameters from empirical data where the true grouping structure of the data is unknown. First, M&W's comparisons between BAMM and MS estimators suffer from false equivalency because the MS estimators are given perfect prior knowledge of the locations of rate shifts on the simulated phylogenies. BAMM is given no such information and must simultaneously estimate the number and location of rate shifts from the data, thus resulting in a massive degrees-of-freedom advantage for the MS estimators. When both methods are given equivalent information, BAMM dramatically outperforms the MS estimators. Second, M&W's experimental design is unable to assess parameter reliability because their analyses conflate small effect sizes across treatment groups with error in parameter estimates. Nearly all model-based frameworks for partitioning data are susceptible to the statistical mistakes in M&W, including popular clustering algorithms in population genetics, phylogenetics, and comparative methods.
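Neither statistical point is specific to BAMM, so a bare-bones numerical sketch may help make them concrete. The Python snippet below is only an illustration, not BAMM and not the M&W simulations: richness is drawn from a deterministic exponential-growth expectation with lognormal noise, and rates are recovered with a simple stem-age estimator r = ln(n)/t with extinction set to zero, both of which are assumptions made purely for the demonstration. Part (i) shows the degrees-of-freedom advantage enjoyed by an estimator that is handed the true partition of clades into rate regimes; part (ii) shows how compressing the spread of true rates across clades destroys the true-versus-estimated correlation even though the absolute error of each estimate is essentially unchanged.

import numpy as np

rng = np.random.default_rng(1)
AGE = 30.0  # clade age used throughout (arbitrary illustration value)

def simulate_richness(true_rates, noise_sd=0.3):
    # Expected richness exp(r * t) under exponential growth, with lognormal noise.
    expected = np.exp(true_rates * AGE)
    noise = rng.lognormal(0.0, noise_sd, true_rates.size)
    return np.maximum(2, np.round(expected * noise))

def stem_rate_estimator(richness):
    # Simple method-of-moments style estimator r = ln(n) / t, extinction set to 0.
    return np.log(richness) / AGE

# (i) False equivalency: the "informed" analysis gets one estimate per true rate
# regime; the "uninformed" analysis must treat the whole tree as a single regime.
true_rates = rng.uniform(0.05, 0.25, size=50)
richness = simulate_richness(true_rates)
informed = stem_rate_estimator(richness)                 # handed the true partition
uninformed = np.full_like(true_rates, informed.mean())   # one pooled regime

print("mean abs error, true partition known:", np.mean(np.abs(informed - true_rates)))
print("mean abs error, no partition info:   ", np.mean(np.abs(uninformed - true_rates)))

# (ii) Effect size vs. estimation error: when true rates barely vary across clades,
# the correlation with the truth collapses even though per-clade error is unchanged.
for lo, hi in [(0.05, 0.25), (0.14, 0.16)]:
    r_true = rng.uniform(lo, hi, size=500)
    r_hat = stem_rate_estimator(simulate_richness(r_true))
    corr = np.corrcoef(r_true, r_hat)[0, 1]
    print(f"true rates in [{lo}, {hi}]: corr = {corr:.2f}, "
          f"mean abs error = {np.mean(np.abs(r_hat - r_true)):.3f}")

The point of the sketch is only the asymmetry of information and the role of effect size, not the behaviour of any particular estimator.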


Read also

1. Joint Species Distribution models (JSDMs) explain spatial variation in community composition by contributions of the environment, biotic associations, and possibly spatially structured residual covariance. They show great promise as a general analytical framework for community ecology and macroecology, but current JSDMs, even when approximated by latent variables, scale poorly on large datasets, limiting their usefulness for currently emerging big (e.g., metabarcoding and metagenomics) community datasets. 2. Here, we present a novel, more scalable JSDM (sjSDM) that circumvents the need to use latent variables by using a Monte-Carlo integration of the joint JSDM likelihood and allows flexible elastic net regularization on all model components. We implemented sjSDM in PyTorch, a modern machine learning framework that can make use of CPU and GPU calculations. Using simulated communities with known species-species associations and different numbers of species and sites, we compare sjSDM with state-of-the-art JSDM implementations to determine computational runtimes and the accuracy of the inferred species-species and species-environment associations. 3. We find that sjSDM is orders of magnitude faster than existing JSDM algorithms (even when run on the CPU) and can be scaled to very large datasets. Despite the dramatically improved speed, sjSDM produces more accurate estimates of species association structures than alternative JSDM implementations. We demonstrate the applicability of sjSDM to big community data using an eDNA case study with thousands of fungal operational taxonomic units (OTUs). 4. Our sjSDM approach makes the application of JSDMs to large community datasets with hundreds or thousands of species possible, substantially extending the applicability of JSDMs in ecology. We provide our method in an R package to facilitate its use in practical data analysis.
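The central computational idea, approximating the joint JSDM likelihood by Monte-Carlo integration over correlated residuals instead of fitting explicit latent variables, can be sketched in a few lines. The toy Python/numpy code below does this for a single site; the logistic link, the parameter values, and the function name are illustrative assumptions and do not reproduce the actual sjSDM implementation.

import numpy as np

rng = np.random.default_rng(0)
n_species, n_env, n_mc = 5, 3, 20_000

# Toy parameters: environmental coefficients and a species-species covariance.
beta = rng.normal(0.0, 0.5, size=(n_env, n_species))
A = rng.normal(0.0, 0.4, size=(n_species, n_species))
sigma = A @ A.T + np.eye(n_species)              # positive-definite residual covariance

x = rng.normal(0.0, 1.0, size=n_env)             # environment at one site
y = rng.binomial(1, 0.5, size=n_species)         # observed presence/absence at that site

def mc_site_loglik(y, x, beta, sigma, n_mc):
    # log P(y | x): average over Monte-Carlo draws of correlated residuals
    # rather than optimising site-level latent variables.
    mu = x @ beta                                                # linear predictor per species
    eps = rng.multivariate_normal(np.zeros(y.size), sigma, size=n_mc)
    p = 1.0 / (1.0 + np.exp(-(mu + eps)))                        # occurrence probabilities
    site_prob = np.mean(np.prod(np.where(y == 1, p, 1.0 - p), axis=1))
    return np.log(site_prob + 1e-300)

print("Monte-Carlo log-likelihood for one site:", mc_site_loglik(y, x, beta, sigma, n_mc))

In a full analysis this quantity would be summed over sites and maximised over beta and sigma, with elastic-net penalties on the model components, which is where an autodifferentiation framework such as PyTorch and GPU batching become valuable.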
Francesca Bassi, 2020
During the current Covid-19 pandemic in Italy, official data are collected with medical swabs following a pure convenience criterion which, at least in an early phase, has privileged the examination of patients showing evident symptoms. However, there is evidence of a very high proportion of asymptomatic patients (e.g. Aguilar et al., 2020; Chugthai et al., 2020; Li et al., 2020; Mizumoto et al., 2020a, 2020b and Yelin et al., 2020). In this situation, in order to estimate the real number of infected (and to estimate the lethality rate), it would be necessary to run a properly designed sample survey through which it would be possible to calculate the probability of inclusion and hence draw sound probabilistic inference. Some researchers have proposed estimates of the total prevalence based on various approaches, including epidemiologic models, time series and the analysis of data collected in countries that faced the epidemic earlier (Brogi et al., 2020). In this paper, we propose to estimate the prevalence of Covid-19 in Italy by reweighting the available official data published by the Istituto Superiore di Sanità so as to obtain a more representative sample of the Italian population. Reweighting is a procedure commonly used to artificially modify the sample composition so as to obtain a distribution which is more similar to the population (Valliant et al., 2018). In this paper, we use post-stratification of the official data, with age and gender as post-stratification variables, to derive the weights necessary for reweighting, thus obtaining more reliable estimates of prevalence and lethality.
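The reweighting step itself is short. The toy example below (all counts are invented and are not the Istituto Superiore di Sanità data) post-stratifies by age group and gender: each unit in stratum h receives the weight (N_h/N)/(n_h/n), which pulls the prevalence estimate back toward the population composition when, for instance, older patients are over-represented among those tested.

import numpy as np

# Hypothetical post-strata (age group x gender) with made-up counts.
strata        = ["F<60", "F60+", "M<60", "M60+"]
pop_share     = np.array([0.38, 0.12, 0.38, 0.12])    # N_h / N in the population
sample_counts = np.array([150, 300, 200, 350])        # n_h (older people over-tested)
positives     = np.array([15, 60, 24, 84])            # confirmed cases per stratum

n = sample_counts.sum()
weights = pop_share / (sample_counts / n)             # (N_h/N) / (n_h/n)

naive_prevalence    = positives.sum() / n
weighted_prevalence = np.sum(weights * positives) / np.sum(weights * sample_counts)

print("naive prevalence:   ", round(naive_prevalence, 4))
print("weighted prevalence:", round(weighted_prevalence, 4))

The weighted estimate reduces to averaging the stratum-specific prevalences with population shares, so the over-tested (and higher-prevalence) older strata no longer dominate the estimate.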
Empirical studies show that epidemiological models based on an epidemic's initial spread rate often fail to predict the true scale of that epidemic. Most epidemics with a rapid early rise die out before affecting a significant fraction of the population, whereas the early pace of some pandemics is rather modest. Recent models suggest that this could be due to the heterogeneity of the target population's susceptibility. We study a computer malware ecosystem exhibiting spread mechanisms resembling those of biological systems while offering details unavailable for human epidemics. Rather than comparing models, we directly estimate reach from new and vastly more complete data from a parallel domain, which offers detail and insight unavailable for biological outbreaks. We find a highly heterogeneous distribution of computer susceptibilities, with nearly all outbreaks initially over-affecting the tail of the distribution, then collapsing quickly once this tail is depleted. This mechanism restricts the correlation between an epidemic's initial growth rate and its total reach, thus preventing the majority of epidemics, including initially fast-growing outbreaks, from reaching a macroscopic fraction of the population. The few pervasive malware strains distinguish themselves early on via the following key trait: they avoid infecting the tail, while preferentially targeting computers unaffected by typical malware.
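The mechanism can be reproduced qualitatively with a schematic simulation. The toy model below (Python/numpy; the parameter values and the one-step infectious period are assumptions, and none of the malware data is used) compares two host populations with the same mean susceptibility, one homogeneous and one heavy-tailed: the heavy-tailed outbreak takes off at a similar pace but stalls once its most susceptible hosts are depleted, so its final reach is far smaller.

import numpy as np

rng = np.random.default_rng(2)
n_hosts, beta, seeds = 200_000, 2.0, 50

populations = {
    "homogeneous ": np.ones(n_hosts),
    "heavy-tailed": rng.pareto(2.0, size=n_hosts),    # Lomax(2): mean 1, infinite variance
}

def run_outbreak(susceptibility, steps=80):
    state = np.zeros(n_hosts, dtype=np.int8)          # 0 = susceptible, 1 = infectious, 2 = removed
    state[rng.choice(n_hosts, size=seeds, replace=False)] = 1
    cumulative = []
    for _ in range(steps):
        frac_inf = np.mean(state == 1)
        if frac_inf == 0:
            break
        p = 1.0 - np.exp(-beta * susceptibility * frac_inf)   # per-host infection probability
        newly = (state == 0) & (rng.random(n_hosts) < p)
        state[state == 1] = 2                                  # infectious for one step only
        state[newly] = 1
        cumulative.append(np.mean(state > 0))
    return cumulative

for name, s in populations.items():
    reach = run_outbreak(s)
    early = reach[min(4, len(reach) - 1)] * n_hosts / seeds    # fold growth over the seeds
    print(f"{name}: early growth x{early:6.1f}, final reach {reach[-1]:.2f}")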
In this paper, decision theory was used to derive Bayes and minimax decision rules to estimate allelic frequencies and to explore their admissibility. Decision rules with uniformly smallest risk usually do not exist, and one approach to solving this problem is to use the Bayes principle and the minimax principle to find decision rules satisfying some general optimality criterion based on their risk functions. Two cases were considered: the simpler case of biallelic loci and the more complex case of multiallelic loci. For each locus, the sampling model was a multinomial distribution and the prior was a Beta (biallelic case) or a Dirichlet (multiallelic case) distribution. Three loss functions were considered: squared error loss (SEL), Kullback-Leibler loss (KLL) and quadratic error loss (QEL). Bayes estimators were derived under these three loss functions and were subsequently used to find minimax estimators using results from decision theory. The Bayes estimators obtained from SEL and KLL turned out to be the same. Under certain conditions, the Bayes estimator derived from QEL led to an admissible minimax estimator (which was also equal to the maximum likelihood estimator). The SEL also allowed finding admissible minimax estimators. Some estimators had uniformly smaller variance than the MLE, and under suitable conditions the remaining estimators also satisfied this property. In addition to their statistical properties, the estimators derived here allow for variation in allelic frequencies, which is closer to the reality of finite populations exposed to evolutionary forces.
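For the biallelic case, the standard pieces referenced here are easy to spell out: with X copies of the reference allele out of n sampled copies and a Beta(a, b) prior, the Bayes estimator under squared error loss is the posterior mean (X + a)/(n + a + b), and the choice a = b = sqrt(n)/2 yields constant risk n/(4(n + sqrt(n))^2), hence a minimax estimator under SEL. The short Python check below illustrates those textbook facts only; the Dirichlet (multiallelic) case and the KLL/QEL derivations of the paper are not reproduced.

import numpy as np

def bayes_estimate(x, n, a, b):
    # Posterior mean of the allele frequency under a Beta(a, b) prior.
    return (x + a) / (n + a + b)

def minimax_estimate(x, n):
    # Bayes estimator under Beta(sqrt(n)/2, sqrt(n)/2): constant squared-error
    # risk n / (4 * (n + sqrt(n))**2), hence minimax under SEL.
    return (x + np.sqrt(n) / 2.0) / (n + np.sqrt(n))

n, x = 100, 12                        # sampled allele copies, reference-allele count
print("MLE:                  ", x / n)
print("Bayes, uniform prior: ", bayes_estimate(x, n, 1.0, 1.0))
print("Minimax under SEL:    ", minimax_estimate(x, n))

# Monte-Carlo check that the minimax estimator's risk is flat in the true frequency p.
rng = np.random.default_rng(3)
theory = n / (4.0 * (n + np.sqrt(n)) ** 2)
for p in (0.05, 0.3, 0.5, 0.8):
    draws = rng.binomial(n, p, size=200_000)
    risk = np.mean((minimax_estimate(draws, n) - p) ** 2)
    print(f"p = {p}: Monte-Carlo risk {risk:.5f}  (theory {theory:.5f})")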
The availability of a large number of assembled genomes opens the way to studying the evolution of syntenic characters within a phylogenetic context. The DeCo algorithm, recently introduced by Bérard et al., allows the computation of parsimonious evolutionary scenarios for gene adjacencies from pairs of reconciled gene trees. Following the approach pioneered by Sturmfels and Pachter, we describe how to modify the DeCo dynamic programming algorithm to identify classes of cost schemes that generate similar parsimonious evolutionary scenarios for gene adjacencies, as well as the robustness of the presence or absence of specific ancestral gene adjacencies to changes in the cost scheme of evolutionary events. We apply our method to six thousand mammalian gene families, and show that computing the robustness to changes in cost schemes provides new and interesting insights into the evolution of gene adjacencies and the DeCo model.
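As a toy illustration of the kind of question such a parametric analysis answers (this is not the DeCo algorithm, and the scenarios are invented rather than derived from reconciled gene trees), the sketch below scores a handful of hypothetical adjacency-evolution scenarios whose total costs are linear in the adjacency gain and break costs, then scans the cost plane to see for which cost schemes the parsimonious scenario keeps a given ancestral adjacency.

import numpy as np

# Hypothetical scenarios for one ancestral adjacency:
# (number of gains, number of breaks, scenario includes the ancestral adjacency?)
scenarios = {
    "present, 1 gain / 2 breaks":  (1, 2, True),
    "present, 2 gains / 1 break":  (2, 1, True),
    "absent,  3 gains / 0 breaks": (3, 0, False),
}

gain_costs = np.linspace(0.1, 3.0, 30)
break_costs = np.linspace(0.1, 3.0, 30)

def parsimonious(c_gain, c_break):
    # Scenario with minimum total cost under this cost scheme.
    costs = {name: g * c_gain + b * c_break for name, (g, b, _) in scenarios.items()}
    return min(costs, key=costs.get)

# Fraction of cost schemes under which the optimal scenario keeps the adjacency.
kept = sum(scenarios[parsimonious(cg, cb)][2] for cg in gain_costs for cb in break_costs)
total = gain_costs.size * break_costs.size
print(f"ancestral adjacency retained under {kept} of {total} cost schemes")

The method described in the abstract extracts this kind of robustness information from the DeCo recursions themselves, over entire gene families, rather than by a naive grid scan over a few hand-written scenarios.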