
Probability and expected frequency of breakthroughs - a robust method of research assessment based on the double rank property of citation distributions

Published by: Ricardo Brito
Publication date: 2018
Research field: Informatics engineering
Language: English





In research policy, effective measures that lead to improvements in the generation of knowledge must be based on reliable methods of research assessment, but for many countries and institutions this is not the case. Publication and citation analyses can be used to estimate the part played by countries and institutions in the global progress of knowledge, but a concrete method of estimation is far from evident. The challenge arises because publications that report real progress of knowledge form an extremely low proportion of all publications; in most countries and institutions such contributions appear less than once per year. One way to overcome this difficulty is to calculate probabilities instead of counting the rare events on which scientific progress is based. This study reviews and summarizes several recent publications, and adds new results that demonstrate that the citation distribution of normal publications allows the probability of the infrequent events that support the progress of knowledge to be calculated.
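The abstract describes the approach only in words. As a rough illustration of the underlying idea (and not the authors' double-rank formulation), the sketch below fits an assumed lognormal model to the citation counts of ordinary publications and turns the tail probability above a hypothetical "breakthrough" citation threshold into an expected number of breakthrough-level papers per year; the synthetic data, the distributional choice, and the threshold are all assumptions of this sketch.

```python
import numpy as np
from scipy import stats

# Hypothetical citation counts for an institution's "normal" publications.
rng = np.random.default_rng(0)
citations = rng.lognormal(mean=2.0, sigma=1.2, size=5000).round()

# Fit a lognormal model to the non-zero counts (an illustrative choice;
# the paper itself relies on the double-rank property, not this fit).
shape, loc, scale = stats.lognorm.fit(citations[citations > 0], floc=0)

# Probability that a single publication reaches a "breakthrough" level,
# with the threshold set arbitrarily for illustration.
breakthrough_threshold = 500.0
p_breakthrough = stats.lognorm.sf(breakthrough_threshold, shape, loc, scale)

# Expected frequency: with N publications per year, the expected number of
# breakthrough-level papers is N * p, even when that number is far below 1.
papers_per_year = 2000
print(f"P(breakthrough per paper) ~ {p_breakthrough:.2e}")
print(f"Expected breakthroughs per year ~ {papers_per_year * p_breakthrough:.3f}")
```

This is the shift the paper argues for: rather than counting events that occur less than once per year, one works with a probability estimated from the full citation distribution, which remains a stable and comparable quantity.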




Read also

Bibliometrics provides accurate, cheap and simple descriptions of research systems and should lay the foundations for research policy. However, disconnections between bibliometric knowledge and research policy frequently misguide research policy in many countries. A way of correcting these disconnections might come from the use of simple indicators of research performance. One such simple indicator is the number of highly cited researchers, which can be used under the assumption that a research system that produces and employs many highly cited researchers will be more successful than others with fewer of them. Here, we validate the use of the number of highly cited researchers (Ioannidis et al. 2020; PLoS Biol 18(10): e3000918) for research assessment at the country level and determine a country ranking of research success. We also demonstrate that the number of highly cited researchers reported by Clarivate Analytics is an indicator of the research success of countries. The formal difference between the numbers of highly cited researchers according to Ioannidis et al. and Clarivate Analytics is that evaluations based on these two lists of highly cited researchers are approximately equivalent to evaluations based on the top 5% and 0.05% of highly cited papers, respectively. Moreover, the Clarivate Analytics indicator is flawed in some countries.
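To make concrete what an evaluation "based on the top 5% and 0.05% of highly cited papers" involves, here is a minimal counting sketch on synthetic data; the column names, the data, and the use of a single global citation percentile are assumptions of this illustration, not the procedure of the cited study.

```python
import numpy as np
import pandas as pd

# Hypothetical publication table: one row per paper, with assumed columns.
rng = np.random.default_rng(1)
papers = pd.DataFrame({
    "country": rng.choice(["US", "CN", "DE", "ES"], size=10_000),
    "citations": rng.lognormal(2.0, 1.3, size=10_000).round(),
})

def top_fraction_counts(df: pd.DataFrame, top_fraction: float) -> pd.Series:
    """Count papers per country at or above the global (1 - top_fraction) citation percentile."""
    threshold = df["citations"].quantile(1.0 - top_fraction)
    in_top = df[df["citations"] >= threshold]
    return in_top.groupby("country").size().sort_values(ascending=False)

print(top_fraction_counts(papers, 0.05))    # analogue of a top-5% evaluation
print(top_fraction_counts(papers, 0.0005))  # analogue of a top-0.05% evaluation
```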
Accessibility research sits at the junction of several disciplines, drawing influence from HCI, disability studies, psychology, education, and more. To characterize the influences and extensions of accessibility research, we undertake a study of citation trends for accessibility and related HCI communities. We assess the diversity of venues and fields of study represented among the referenced and citing papers of 836 accessibility research papers from ASSETS and CHI, finding that though publications in computer science dominate these citation relationships, the relative proportion of citations from papers on psychology and medicine has grown over time. Though ASSETS is a more niche venue than CHI in terms of citational diversity, both conferences display standard levels of diversity among their incoming and outgoing citations when analyzed in the context of 53K papers from 13 accessibility and HCI conference venues.
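"Citational diversity" can be operationalized in several ways; the sketch below uses the Shannon entropy of the venue distribution among a paper's citations purely as an illustration, which is not necessarily the diversity measure used in this study.

```python
from collections import Counter
import math

def venue_entropy(venues: list[str]) -> float:
    """Shannon entropy (in bits) of the venue distribution of a citation list."""
    counts = Counter(venues)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical outgoing-citation venues for a single accessibility paper.
cited_venues = ["CHI", "ASSETS", "CHI", "J. Rehabil. Med.", "Psychol. Rev.", "CHI"]
print(f"Venue diversity: {venue_entropy(cited_venues):.2f} bits")
```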
168 - Jian Du, Xiaoli Tang, 2015
F1000 recommendations have been validated as a potential data source for research evaluation, but reasons for differences between F1000 Article Factor (FFa scores) and citations remain to be explored. By linking 28254 publications in F1000 to citations in Scopus, we investigated the effect of research level and article type on the internal consistency of assessments based on citations and FFa scores. It turns out that research level has little impact, while article type has a big effect on the differences. These two measures are significantly different for two groups: non-primary research or evidence-based research publications are more highly cited than highly recommended, whereas translational research or transformative research publications are more highly recommended by faculty members but gather relatively fewer citations. This can be expected because citation activities are usually practiced by academic authors, while the potential for scientific revolutions and the suitability for clinical practice of an article should be investigated from the practitioners' point of view. We conclude with a policy-relevant recommendation that the application of bibliometric approaches in research evaluation procedures should include the proportion of three types of publications: evidence-based research, transformative research, and translational research. The latter two types are more suitable for assessment through peer review.
160 - Zewen Hu, Yishan Wu, 2015
Empirical results on the possible causes of non-citation may help increase the potential of researchers' work to be cited and help editorial staffs of journals identify contributions of potentially high quality. In this study, we conduct a survey on the possible causes leading to citation or non-citation based on a questionnaire. We then perform a statistical analysis to identify the major causes leading to non-citation, in combination with the analysis of the data collected through the survey. Most respondents to our questionnaire identified eight major causes that facilitate citation of one's papers, such as research hotspots and novel topics of content, longer intervals after publication, research topics similar to my work, high quality of content, reasonable self-citation, a highlighted title, prestigious authors, and academic tastes and interests similar to mine. They also identified the vast difference between their current and former research directions as the primary reason for their previously uncited papers. They feel that texts such as notes, comments, and letters to editors are rarely cited, and the same is true for too short or too lengthy papers. In comparison, it is easier for reviews, articles, or papers of intermediate length to be cited.
Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, in press). In this paper, we conceptualize P100 and propose an improvement which we call P100′. Advantages and disadvantages of citation-rank indicators are noted.
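For readers unfamiliar with citation-rank indicators, the sketch below computes a plain percentile rank of a paper's citation count within a reference set, scaled to 0-100; the published P100 and P100′ definitions handle tied citation values differently, so treat this as a generic illustration rather than either published indicator.

```python
import numpy as np

def percentile_rank(paper_citations: int, reference_citations: list[int]) -> float:
    """Share of the reference set with fewer citations than the paper, scaled to 0-100."""
    ref = np.asarray(reference_citations)
    return 100.0 * np.mean(ref < paper_citations)

# Hypothetical reference set: citation counts of same-field, same-year papers.
reference = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(percentile_rank(13, reference))  # 70.0
```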