
The Memory of Science: Inflation, Myopia, and the Knowledge Network

Added by Alexander Petersen
Publication date: 2016
Language: English





Science is a growing system, exhibiting ~4% annual growth in publications and ~1.8% annual growth in the number of references per publication. Combined, these trends correspond to a 12-year doubling period in the total supply of references, thereby challenging traditional methods of evaluating scientific production, from researchers to institutions. Against this background, we analyzed a citation network comprising 837 million references produced by 32.6 million publications over the period 1965-2012, allowing for a temporal analysis of the 'attention economy' in science. Unlike previous studies, we analyzed the entire probability distribution of reference ages - the time difference between a citing and a cited paper - thereby capturing previously overlooked trends. Over this half-century period we observe a narrowing range of attention - both classic and recent literature are being cited increasingly less, pointing to the important role of socio-technical processes. To better understand the impact of exponential growth on the underlying knowledge network, we develop a network-based model featuring the redirection of scientific attention via publications' reference lists, and validate the model against several empirical benchmarks. We then use the model to test the causal impact of real paradigm shifts, thereby providing guidance for science policy analysis. In particular, we show how perturbations to the growth rate of scientific output affect the reference age distribution and the functionality of the vast science citation network as an aid for the search and retrieval of knowledge. In order to account for the inflation of science, our study points to the need for a systemic overhaul of the counting methods used to evaluate citation impact - especially in the case of evaluating science careers, which can span several decades and thus several doubling periods.
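As a quick check on the arithmetic above, the two quoted growth rates compound into roughly a 12-year doubling period for the total supply of references. A minimal sketch (the rates are the approximate values stated in the abstract):

```python
import math

# Approximate annual growth rates quoted in the abstract
pub_growth = 0.04    # ~4% growth in the number of publications per year
ref_growth = 0.018   # ~1.8% growth in references per publication per year

# Total reference supply grows at roughly the compounded rate
combined = (1 + pub_growth) * (1 + ref_growth) - 1   # about 5.9% per year

# Doubling period T solves (1 + combined)**T = 2
doubling_years = math.log(2) / math.log(1 + combined)
print(f"combined growth = {combined:.3%}, doubling period = {doubling_years:.1f} years")
```

Running this gives a doubling period of about 12 years, consistent with the figure in the abstract.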



Related research

The ability to confront new questions, opportunities, and challenges is of fundamental importance to human progress and the resilience of human societies, yet the capacity of science to meet new demands remains poorly understood. Here we deploy a new measurement framework to investigate the scientific response to the COVID-19 pandemic and the adaptability of science as a whole. We find that science rapidly shifted to engage COVID-19 following the advent of the virus, with scientists across all fields making large jumps from their prior research streams. However, this adaptive response reveals a pervasive pivot penalty, where the impact of the new research steeply declines the further the scientists move from their prior work. The pivot penalty is severe amidst COVID-19 research, but it is not unique to COVID-19. Rather it applies nearly universally across the sciences, and has been growing in magnitude over the past five decades. While further features condition pivoting, including a scientist's career stage, prior expertise and impact, collaborative scale, the use of new coauthors, and funding, we find that the pivot penalty persists and remains substantial regardless of these features, suggesting the pivot penalty acts as a fundamental friction that governs science's ability to adapt. The pivot penalty not only holds key implications for the design of the scientific system and human capacity to confront emergent challenges through scientific advance, but may also be relevant to other social and economic systems, where shifting to meet new demands is central to survival and success.
Knowledge of how science is consumed in public domains is essential for a deeper understanding of the role of science in human society. While science is heavily supported by public funding, common depictions suggest that scientific research remains an isolated or ivory tower activity, with weak connectivity to public use, little relationship between the quality of research and its public use, and little correspondence between the funding of science and its public use. This paper introduces a measurement framework to examine public good features of science, allowing us to study public uses of science, the public funding of science, and how use and funding relate. Specifically, we integrate five large-scale datasets that link scientific publications from all scientific fields to their upstream funding support and downstream public uses across three public domains - government documents, the news media, and marketplace invention. We find that the public uses of science are extremely diverse, with different public domains drawing distinctively across scientific fields. Yet amidst these differences, we find key forms of alignment in the interface between science and society. First, despite concerns that the public does not engage high-quality science, we find universal alignment, in each scientific field and public domain, between what the public consumes and what is highly impactful within science. Second, despite myriad factors underpinning the public funding of science, the resulting allocation across fields presents a striking alignment with the fields' collective public use. Overall, public uses of science present a rich landscape of specialized consumption, yet collectively science and society interface with remarkable, quantifiable alignment between scientific use, public use, and funding.
We look at the network of mathematicians defined by the hyperlinks between their biographies on Wikipedia. We show how to extract this information using three snapshots of the Wikipedia data, taken in 2013, 2017 and 2018. We illustrate how such Wikipedia data can be used by performing a centrality analysis. These measures show that Hilbert and Newton are the most important mathematicians. We use our example to illustrate the strengths and weaknesses of centrality measures and to show how to provide estimates of the robustness of centrality measurements. In part, we do this by comparison to results from two other sources: an earlier study of biographies on the MacTutor website and a small informal survey of the opinion of mathematics and physics students at Imperial College London.
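The centrality analysis described above can be illustrated on a toy directed graph. The sketch below (a hypothetical four-node graph, not the real Wikipedia network, and not necessarily the measures that study used) computes PageRank by power iteration in pure Python, treating each hyperlink as a directed edge between biography pages:

```python
# Toy "who links to whom" graph of biography pages (hypothetical edges)
links = {
    "Hilbert": ["Newton", "Noether"],
    "Newton":  ["Leibniz"],
    "Noether": ["Hilbert"],
    "Leibniz": ["Newton"],
}

def pagerank(links, damping=0.85, iters=100):
    """PageRank via power iteration; assumes every node has out-links."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for src, outs in links.items():
            # Each page distributes its rank equally across its out-links
            share = damping * rank[src] / len(outs)
            for dst in outs:
                new[dst] += share
        rank = new
    return rank

ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # the most central node in this toy graph
```

Robustness of such rankings can then be probed, as the abstract suggests, by recomputing centrality across different snapshots or after perturbing edges.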
We analyzed the longitudinal activity of nearly 7,000 editors at the mega-journal PLOS ONE over the 10-year period 2006-2015. Using the article-editor associations, we develop editor-specific measures of power, activity, article acceptance time, citation impact, and editorial remuneration (an analogue to self-citation). We observe remarkably high levels of power inequality among the PLOS ONE editors, with the top-10 editors responsible for 3,366 articles -- corresponding to 2.4% of the 141,986 articles we analyzed. Such high inequality levels suggest the presence of unintended incentives, which may reinforce unethical behavior in the form of decision-level biases at the editorial level. Our results indicate that editors may become apathetic in judging the quality of articles and susceptible to modes of power-driven misconduct. We used the longitudinal dimension of editor activity to develop two panel regression models which test and verify the presence of editor-level bias. In the first model we analyzed the citation impact of articles, and in the second model we modeled the decision time between an article being submitted and ultimately accepted by the editor. We focused on two variables that represent social factors that capture potential conflicts-of-interest: (i) we accounted for the social ties between editors and authors by developing a measure of repeat authorship among an editor's article set, and (ii) we accounted for the rate of citations directed towards the editor's own publications in the reference list of each article he/she oversaw. Our results indicate that these two factors play a significant role in the editorial decision process. Moreover, these two effects appear to increase with editor age, which is consistent with behavioral studies concerning the evolution of misbehavior and response to temptation in power-driven environments.
Citation prediction of scholarly papers is of great significance in guiding funding allocations, recruitment decisions, and rewards. However, little is known about how citation patterns evolve over time. By exploring the inherent involution property in scholarly paper citation, we introduce the Paper Potential Index (PPI) model based on four factors: the inherent quality of a scholarly paper, the decay of a paper's impact over time, early citations, and early citers' impact. In addition, by analyzing factors that drive citation growth, we propose a multi-feature model for impact prediction. Experimental results demonstrate that the two models improve the accuracy in predicting scholarly paper citations. Compared to the multi-feature model, the PPI model yields superior predictive performance in terms of range-normalized RMSE. The PPI model better interprets the changes in citation, without the need to adjust parameters. Compared to the PPI model, the multi-feature model performs better prediction in terms of Mean Absolute Percentage Error and Accuracy; however, its predictive performance is more dependent on parameter adjustment.
