
More testing or more disease? A counterfactual approach to explaining observed increases in positive tests over time

Published by: Jessica Young PhD
Publication date: 2019
Language: English





Observed gonorrhea case rates (number of positive tests per 100,000 individuals) increased by 75 percent in the United States between 2009 and 2017, predominantly among men. However, testing recommendations by the Centers for Disease Control and Prevention (CDC) have also changed over this period, with more frequent screening for sexually transmitted infections (STIs) recommended among men who have sex with men (MSM) who are sexually active. In this and similar disease surveillance settings, a common question is whether observed increases in the overall proportion of positive tests over time are due only to increased testing of diseased individuals, to increased underlying disease, or to both. By placing this problem within a counterfactual framework, we can carefully consider the untestable assumptions under which this question may be answered and, in turn, derive a principled approach to statistical analysis. This report outlines this thought process.
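To make the identification problem concrete, here is a minimal simulation sketch (our own illustration, not code or parameter values from the report): the observed case rate per 100,000 factors into the probability of being tested and the probability of a positive result among those tested, so increased targeting of testing and increased underlying prevalence can both inflate it. The prevalence and testing probabilities below, and the assumption of a perfect test, are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def observed_case_rate(prevalence, p_test_diseased, p_test_healthy, n=1_000_000):
    # Simulate one population: disease status, then testing that may target the diseased.
    diseased = rng.random(n) < prevalence
    p_test = np.where(diseased, p_test_diseased, p_test_healthy)
    tested = rng.random(n) < p_test
    positives = tested & diseased          # assumes a perfect test, for simplicity
    return positives.sum() / n * 100_000

baseline     = observed_case_rate(prevalence=0.010, p_test_diseased=0.30, p_test_healthy=0.10)
more_testing = observed_case_rate(prevalence=0.010, p_test_diseased=0.55, p_test_healthy=0.10)
more_disease = observed_case_rate(prevalence=0.018, p_test_diseased=0.30, p_test_healthy=0.10)

# Both scenarios raise the observed rate relative to baseline, which is why
# untestable assumptions are needed to attribute the increase to testing or disease.
print(baseline, more_testing, more_disease)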




Read also

Alzheimer's disease (AD) and Parkinson's disease (PD) are the two most common neurodegenerative disorders in humans. Because a significant percentage of patients have clinical and pathological features of both diseases, it has been hypothesized that the patho-cascades of the two diseases overlap. Despite this evidence, these two diseases are rarely studied in a joint manner. In this paper, we utilize clinical, imaging, genetic, and biospecimen features to cluster AD and PD patients into the same feature space. By training a machine learning classifier on the combined feature space, we predict the disease stage of patients two years after their baseline visits. We observed a considerable improvement in the prediction accuracy for Parkinson's dementia patients due to combined training on Alzheimer's and Parkinson's patients, thereby affirming the claim that these two diseases can be jointly studied.
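A rough sketch of the joint-training comparison described above (our own illustration with assumed details; the function names, the RandomForestClassifier choice, and the split sizes are not taken from the paper): hold out PD patients for evaluation, then train once on PD data alone and once on PD plus AD data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def stage_accuracy(X_train, y_train, X_test, y_test):
    # Train on baseline features and score against held-out follow-up stage labels.
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

def compare_joint_vs_pd_only(X_ad, y_ad, X_pd, y_pd):
    # Evaluate on PD patients only; compare PD-only training with combined AD + PD training.
    X_tr, X_te, y_tr, y_te = train_test_split(X_pd, y_pd, test_size=0.3, random_state=0)
    acc_pd_only = stage_accuracy(X_tr, y_tr, X_te, y_te)
    acc_joint = stage_accuracy(np.vstack([X_tr, X_ad]),
                               np.concatenate([y_tr, y_ad]), X_te, y_te)
    return acc_pd_only, acc_joint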
Increasing accessibility of data to researchers makes it possible to conduct massive amounts of statistical testing. Rather than following a carefully crafted set of scientific hypotheses with statistical analysis, researchers can now test many possible relations and let P-values or other statistical summaries generate hypotheses for them. The field of genetic epidemiology is an illustrative case of this paradigm shift. Driven by technological advances, testing a handful of genetic variants in relation to a health outcome has been abandoned in favor of agnostic screening of the entire genome, followed by selection of top hits, e.g., selection of the genetic variants with the smallest association P-values. At the same time, the nearly total lack of replication of claimed associations that had been shaming the field has turned into a flow of reports whose findings replicate robustly. Researchers may have adopted better statistical practices by learning from past failures, but we suggest that a steep increase in the amount of statistical testing itself is an important factor. Regardless of whether statistical significance has been reached, an increased number of tested hypotheses leads to enrichment of the smallest P-values with genuine associations. In this study, we quantify how the expected proportion of genuine signals (EPGS) among top hits changes with an increasing number of tests. When the rate of occurrence of genuine signals does not decrease too sharply to zero as more tests are performed, the smallest P-values are increasingly more likely to represent genuine associations in studies with more tests.
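A minimal simulation sketch of this enrichment effect (our illustration; the per-test probability of a genuine signal, its effect size, and the top-hit cutoff are assumed values, not the paper's model):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def fraction_genuine_in_top_hits(n_tests, pi_genuine=0.001, effect=3.0, k=20):
    # Each test is a genuine signal with probability pi_genuine; genuine tests get a
    # shifted z-statistic, nulls are standard normal. Return the share of genuine
    # signals among the k smallest P-values.
    genuine = rng.random(n_tests) < pi_genuine
    z = rng.normal(loc=np.where(genuine, effect, 0.0), scale=1.0)
    pvals = 2 * stats.norm.sf(np.abs(z))
    top = np.argsort(pvals)[:k]
    return genuine[top].mean()

for m in (1_000, 10_000, 100_000, 1_000_000):
    # The fraction of genuine associations among the top hits grows with the number of tests.
    print(m, fraction_genuine_in_top_hits(m))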
We consider multivariate two-sample tests of means, where the location shift between the two populations is expected to be related to a known graph structure. An important application of such tests is the detection of differentially expressed genes between two patient populations, as shifts in expression levels are expected to be coherent with the structure of graphs reflecting gene properties such as biological process, molecular function, regulation, or metabolism. For a fixed graph of interest, we demonstrate that accounting for graph structure can yield more powerful tests under the assumption of smooth distribution shift on the graph. We also investigate the identification of non-homogeneous subgraphs of a given large graph, which poses both computational and multiple-testing problems. The relevance and benefits of the proposed approach are illustrated on synthetic data and on breast cancer gene expression data analyzed in the context of KEGG pathways.
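One way to see how graph structure can be exploited (a sketch under assumed details, not necessarily the exact statistic used in the paper): project each expression profile onto the smoothest eigenvectors of the graph Laplacian and run a Hotelling-type two-sample test in that low-dimensional space.

import numpy as np
from scipy.linalg import eigh
from scipy import stats

def graph_smooth_two_sample_test(X, Y, laplacian, n_components=5):
    # X, Y: (samples x genes) matrices for the two groups; laplacian: graph Laplacian.
    # Eigenvectors with the smallest eigenvalues are the smoothest functions on the graph.
    _, vecs = eigh(laplacian, subset_by_index=[0, n_components - 1])
    Xp, Yp = X @ vecs, Y @ vecs
    n1, n2, p = len(Xp), len(Yp), n_components
    diff = Xp.mean(axis=0) - Yp.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(Xp, rowvar=False)
              + (n2 - 1) * np.cov(Yp, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2 / (n1 + n2)) * diff @ np.linalg.solve(pooled, diff)
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2   # Hotelling T^2 to F
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, p_value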
According to this principle, the relativistic changes occurring to bodies after velocity changes cannot be detected by observers moving with them, because bodies and stationary radiations change in identical proportion after identical circumstances, i.e., because bodies and stationary radiations have identical relativistic laws with respect to any fixed observer. Effectively, the theoretical properties of particle models made up of stationary radiations agree with special relativity, quantum mechanics and the gravitational (G) tests. They fix lineal properties for all of them: the G fields, the black holes (BHs) and the universe. The BHs, after absorbing radiation, must return to the gas state. An eventual universe expansion cannot change any relative distance because the G expansion of matter occurs in identical proportion. This fixes a new kind of universe. In it, matter evolves in closed cycles, between gas and BH states and vice versa, indefinitely. Galaxies and clusters must evolve rather cyclically between luminous and black states. Most of the G potential energy of a matter cycle must be released around neutron star and black hole boundaries. Nuclear stripping reactions would transform G energy into nuclear and kinetic energies. This accounts for many poorly explained phenomena in astrophysics. This work has been published, in more detail, in a book.
Most studies indicate that intelligence (g) is positively correlated with cortical thickness. However, the interindividual variability of cortical thickness has not been taken into account. In this study, we aimed to identify the association between intelligence and cortical thickness in adolescents from both the group mean and dispersion points of view, utilizing structural brain imaging from the Adolescent Brain and Cognitive Development (ABCD) Consortium, the largest cohort of early adolescents around 10 years old. The mean and dispersion parameters of cortical thickness and their association with intelligence were estimated using double generalized linear models (DGLM). We found that, for the mean model part, the thickness of frontal lobe regions such as the superior frontal gyrus was negatively related to intelligence, while surface area in the frontal lobe was most positively associated with intelligence. For the dispersion part, intelligence was negatively correlated with the dispersion of cortical thickness in widespread areas, but not with the dispersion of surface area. These results suggest that people with higher IQ are more similar in cortical thickness, which may be related to less differentiation or heterogeneity in cortical columns.
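A minimal sketch of a double generalized linear model fit (our assumption-laden illustration, not the study's code): alternate between a weighted Gaussian GLM for the mean, with weights equal to the inverse fitted dispersion, and a Gamma GLM with log link for the dispersion, fitted to squared residuals.

import numpy as np
import statsmodels.api as sm

def fit_dglm(y, X_mean, X_disp, n_iter=10):
    # Iterate: (1) weighted Gaussian GLM for the mean, (2) Gamma GLM (log link)
    # for the dispersion, fitted to the squared residuals of the mean model.
    disp = np.ones_like(y, dtype=float)
    for _ in range(n_iter):
        mean_fit = sm.GLM(y, X_mean, family=sm.families.Gaussian(),
                          var_weights=1.0 / disp).fit()
        sq_resid = (y - mean_fit.fittedvalues) ** 2
        disp_fit = sm.GLM(sq_resid, X_disp,
                          family=sm.families.Gamma(link=sm.families.links.Log())).fit()
        disp = disp_fit.fittedvalues
    return mean_fit, disp_fit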