Validated research assessment based on highly cited researchers

Posted by Ricardo Brito
Publication date: 2021
Research field: Informatics Engineering
Research language: English

Bibliometrics provides accurate, cheap, and simple descriptions of research systems and should lay the foundations for research policy. However, disconnections between bibliometric knowledge and research policy frequently misguide research policy in many countries. One way of correcting these disconnections might be to use simple indicators of research performance. One such indicator is the number of highly cited researchers, under the assumption that a research system that produces and employs many highly cited researchers will be more successful than one with fewer of them. Here, we validate the use of the number of highly cited researchers (Ioannidis et al. 2020; PLoS Biol 18(10): e3000918) for research assessment at the country level and derive a country ranking of research success. We also demonstrate that the number of highly cited researchers reported by Clarivate Analytics is an indicator of the research success of countries. The formal difference between the numbers of highly cited researchers according to Ioannidis et al. and Clarivate Analytics is that evaluations based on these two lists are approximately equivalent to evaluations based on the top 5% and top 0.05% of highly cited papers, respectively. Moreover, the Clarivate Analytics indicator is flawed in some countries.
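As an illustration, the indicator itself is straightforward to compute once a list of highly cited researchers with country affiliations is available. The sketch below is a minimal, hypothetical example; the file name and column layout are assumptions, not part of the published lists.

```python
# Minimal sketch of the country-level indicator described above: rank
# countries by how many highly cited researchers they host. The input
# file name and column layout are hypothetical; the Ioannidis et al.
# and Clarivate lists would need to be mapped into this shape first.
import csv
from collections import Counter

def rank_countries(path: str) -> list[tuple[str, int]]:
    """Count highly cited researchers per country, ranked descending."""
    counts: Counter[str] = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):          # expects a 'country' column
            counts[row["country"]] += 1
    return counts.most_common()                # [(country, n_researchers), ...]

if __name__ == "__main__":
    for country, n in rank_countries("highly_cited_researchers.csv")[:10]:
        print(f"{country}: {n}")
```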

Read also

The web application presented in this paper allows for analyses that reveal centres of excellence in different fields worldwide using publication and citation data. Only specific aspects of institutional performance are taken into account; other aspects, such as teaching performance or the societal impact of research, are not considered. Based on data gathered from Scopus, field-specific excellence can be identified in institutions where highly cited papers have been published frequently. The web application combines a list of institutions ordered by different indicator values with a map on which circles visualize the indicator values of geocoded institutions. Compared with previously introduced mapping and ranking approaches, our underlying statistics (multi-level models) are analytically oriented, allowing (1) the estimation of values for the number of excellent papers of an institution that are statistically more appropriate than the observed values; (2) the calculation of confidence intervals as measures of accuracy for institutional citation impact; (3) the comparison of a single institution with an average institution in a subject area; and (4) the direct comparison of two or more institutions.
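The paper's statistics rest on multi-level models, which are beyond a short sketch. As a simpler, self-contained stand-in for point (2), the following puts a Wilson confidence interval around a single institution's share of highly cited papers; the counts are invented for illustration and this is not the application's actual model.

```python
# Not the paper's multi-level model: a simple stand-in that puts a
# Wilson confidence interval around one institution's share of highly
# cited ("excellent") papers. The counts below are made up.
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson interval for a proportion of k successes in n trials."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# e.g. 42 of an institution's 300 papers fall in the global top 10%
lo, hi = wilson_ci(42, 300)
print(f"excellence share: 14.0%, 95% CI [{lo:.1%}, {hi:.1%}]")
```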
Many altmetric studies analyze which papers were mentioned, and how often, in specific altmetric sources. To study the potential policy relevance of tweets from another perspective, we investigate which tweets were cited in papers. If many tweets were cited in publications, this might demonstrate that tweets have substantial and useful content. Overall, a rather low number of tweets (n=5,506) were cited, by fewer than 3,000 papers. Most tweets do not seem to be cited because of any cognitive influence they had on the studies; rather, they were study objects. Most of the papers citing tweets are from the subject areas Social Sciences, Arts and Humanities, and Computer Sciences. Most papers cited only one tweet; at most, 55 tweets were cited in a single paper. This research-in-progress does not support a high policy relevance of tweets. However, a content analysis of the tweets and/or papers might lead to a more detailed conclusion.
In research policy, effective measures that improve the generation of knowledge must be based on reliable methods of research assessment, but for many countries and institutions this is not the case. Publication and citation analyses can be used to estimate the part played by countries and institutions in the global progress of knowledge, but a concrete method of estimation is far from evident. The challenge arises because publications that report real progress of knowledge form an extremely low proportion of all publications; in most countries and institutions such contributions appear less than once per year. One way to overcome this difficulty is to calculate probabilities instead of counting the rare events on which scientific progress is based. This study reviews and summarizes several recent publications, and adds new results demonstrating that the citation distribution of normal publications allows the probability of the infrequent events that support the progress of knowledge to be calculated.
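A hedged illustration of the probability argument: rather than counting breakthrough papers directly, one can fit a parametric model to the bulk of the citation distribution (here a lognormal, a common choice for citation data) and read off the tail probability of a very high citation count. The data and threshold below are synthetic; the study's actual model may differ.

```python
# Fit a lognormal to the bulk of a (synthetic) citation distribution,
# then compute the tail probability of exceeding a high "breakthrough"
# threshold instead of counting such rare papers directly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
citations = rng.lognormal(mean=2.0, sigma=1.2, size=5000)  # synthetic counts

shape, loc, scale = stats.lognorm.fit(citations, floc=0)   # fit the bulk
threshold = 500                                            # assumed cutoff
p_tail = stats.lognorm.sf(threshold, shape, loc=loc, scale=scale)
expected = p_tail * len(citations)

print(f"P(citations > {threshold}) = {p_tail:.2e}")
print(f"expected such papers in this set: {expected:.2f}")
```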
Many studies in information science have looked at the growth of science. In this study, we re-examine the question of the growth of science. To do this we (i) use current data up to publication year 2012 and (ii) analyse them across all disciplines and also separately for the natural sciences and for the medical and health sciences. Furthermore, the data are analysed with an advanced statistical technique, segmented regression analysis, which can identify specific segments with similar growth rates in the history of science. The study is based on two different sets of bibliometric data: (1) the number of publications held as source items in the Web of Science (WoS, Thomson Reuters) per publication year and (2) the number of cited references in the publications of the source items per cited-reference year. We have looked at the rate at which science has grown since the mid-1600s. In our analysis of cited references we identified three growth phases in the development of science, each with a growth rate roughly triple that of the previous phase: less than 1% per year up to the middle of the 18th century, 2 to 3% per year up to the period between the two world wars, and 8 to 9% per year up to 2012.
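To make the technique concrete, the sketch below implements a minimal single-breakpoint segmented regression on log publication counts; the study itself fits several segments to WoS data. All numbers here are invented to mirror the first two growth phases described above.

```python
# Brute-force single-breakpoint segmented regression on synthetic log
# publication counts: ~1%/yr growth before 1750, ~3%/yr after.
import numpy as np

years = np.arange(1700, 1800)
log_n = np.where(years < 1750,
                 np.log(100) + 0.01 * (years - 1700),
                 np.log(100) + 0.01 * 50 + 0.03 * (years - 1750))

def best_breakpoint(x, y):
    """Fit two straight lines around every candidate break; keep the best."""
    best = (None, np.inf, None, None)
    for i in range(5, len(x) - 5):             # keep both segments non-trivial
        f1 = np.polyfit(x[:i], y[:i], 1)
        f2 = np.polyfit(x[i:], y[i:], 1)
        sse = (np.sum((y[:i] - np.polyval(f1, x[:i])) ** 2)
               + np.sum((y[i:] - np.polyval(f2, x[i:])) ** 2))
        if sse < best[1]:
            best = (x[i], sse, f1[0], f2[0])   # slopes of log counts = growth rates
    return best

bp, _, g1, g2 = best_breakpoint(years, log_n)
print(f"break at {bp}, growth {g1:.1%}/yr then {g2:.1%}/yr")
```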
Ye Sun, Giacomo Livan, Athen Ma (2021)
Interdisciplinary research is fundamental when it comes to tackling complex problems in our highly interlinked world, and is on the rise globally. Yet, it is unclear why, in an increasingly competitive academic environment, one should pursue an interdisciplinary career given its recent negative press. Several studies have indeed shown that interdisciplinary research often achieves lower impact compared to more specialized work, and is less likely to attract funding. We seek to reconcile such evidence by analyzing a dataset of 44,419 research grants awarded between 2006 and 2018 by the seven national research councils in the UK. We compared the research performance of researchers with an interdisciplinary funding track record with that of researchers who have a specialized profile. We found that the former dominate the network of academic collaborations, both in terms of centrality and knowledge brokerage, but such a competitive advantage does not immediately translate into impact. Indeed, by means of a matched-pair experimental design, we found that researchers who move between disciplines on average achieve lower impact in their publications than subject specialists in the short run, but eventually outperform them in funding performance, both in terms of volume and value. Our results suggest that launching an interdisciplinary career may require more time and persistence to overcome extra challenges, but can pave the way for a more successful endeavour.
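A minimal sketch of the matched-pair logic described above: each interdisciplinary researcher is paired with the specialist nearest in career covariates, and impact is compared within pairs. All covariates and numbers below are synthetic stand-ins, not the paper's grant data.

```python
# Nearest-neighbour matched pairs: pair each interdisciplinary
# researcher with the closest specialist in covariate space, then
# compare impact within pairs (matching is with replacement here).
import numpy as np

rng = np.random.default_rng(1)
n = 200
# covariates: e.g. (career age, prior funding volume), standardized
inter_X = rng.normal(size=(n, 2)); inter_impact = rng.normal(1.0, 0.5, n)
spec_X  = rng.normal(size=(n, 2)); spec_impact  = rng.normal(1.1, 0.5, n)

diffs = []
for x, y in zip(inter_X, inter_impact):
    j = np.argmin(np.sum((spec_X - x) ** 2, axis=1))   # nearest specialist
    diffs.append(y - spec_impact[j])

diffs = np.array(diffs)
print(f"mean impact difference (inter - specialist): {diffs.mean():+.3f}")
print(f"paired t ~ {diffs.mean() / (diffs.std(ddof=1) / np.sqrt(n)):.2f}")
```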