
Cui Prodest? Reciprocity of collaboration measured by Russian Index of Science Citation

Posted by: Vladimir Pislyakov
Publication date: 2019
Research field: Informatics
Language: English





Scientific collaboration is often not perfectly reciprocal. Scientifically strong countries, institutions, or laboratories may support their less prominent partners with leading scholars, funding, or other resources. What is interesting about this type of collaboration is that (1) it can be measured by bibliometrics and (2) it can shed light on the scholarly level of both collaborating organizations themselves. In this sense, measuring institutions in collaboration may sometimes tell more than attempts to assess them as stand-alone organizations. The evaluation of collaborative patterns was explained in detail, for example, by Glänzel (2001; 2003). Here we combine these methods with a new one, made possible by separating the best journals from the others on the same platform, the Russian Index of Science Citation (RISC). Such sub-universes of journals from different leagues provide additional ways to study how collaboration influences the quality of papers published by organizations.




Read also

The paper citation network is a traditional social medium for the exchange of ideas and knowledge. In this paper we view citation networks from the perspective of information diffusion. We study the structural features of the information paths through the citation networks of publications in computer science, and analyze the impact of various citation choices on the subsequent impact of the article. We find that citing recent papers and papers within the same scholarly community garners a slightly larger number of citations on average. However, this correlation is weaker among well-cited papers, implying that for high-impact work citing within one's field is of lesser importance. We also study differences in information flow for specific subsets of citation networks: books versus conference and journal articles, different areas of computer science, and different time periods.
Our current knowledge of scholarly plagiarism is largely based on the similarity between full-text research articles. In this paper, we propose a novel conceptualization of scholarly plagiarism in the form of reuse of explicit citation sentences in scientific research articles. Note that while full-text plagiarism is an indicator of gross-level behavior, copying of citation sentences is a more nuanced micro-scale phenomenon observed even among well-known researchers. The current work poses several interesting questions and attempts to answer them by empirically investigating a large bibliographic text dataset from computer science containing millions of lines of citation sentences. In particular, we report evidence of massive copying behavior. We also present several striking real examples throughout the paper to showcase widespread adoption of this undesirable practice. In contrast to the popular perception, we find that copying tendency increases as an author matures. The copying behavior is reported to exist in all fields of computer science; however, the theoretical fields indicate more copying than the applied fields.
Responsible indicators are crucial for research assessment and monitoring. Transparency and accuracy of indicators are required to make research assessment fair and ensure reproducibility. However, it is sometimes difficult to conduct or replicate studies based on indicators due to the lack of transparency in conceptualization and operationalization. In this paper, we review the different variants of the Probabilistic Affinity Index (PAI), considering both the conceptual and empirical underpinnings. We begin with a review of the historical development of the indicator and the different alternatives proposed. To demonstrate the utility of the indicator, we apply PAI to identifying preferred partners in scientific collaboration. A streamlined procedure is provided to demonstrate the variations and appropriate calculations. We then compare the results of implementation for five specific countries involved in international scientific collaboration. Despite the different proposals on its calculation, we do not observe large differences between the PAI variants, particularly with respect to country size. As with any indicator, the selection of a particular variant depends on the research question. To facilitate appropriate use, we provide recommendations for the use of the indicator in specific contexts.
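The abstract above does not spell out the calculation, but a commonly used formulation of the Probabilistic Affinity Index compares the observed share of co-publications between two countries with the share expected if partners were chosen independently. The sketch below illustrates that formulation only; the function name and the toy counts are hypothetical, not taken from the paper.

```python
def pai(c_ij, c_i, c_j, c_total):
    """Probabilistic Affinity Index for countries i and j (common formulation).

    c_ij    -- co-publications between i and j
    c_i     -- collaborative publications involving i
    c_j     -- collaborative publications involving j
    c_total -- all collaborative publications in the dataset
    """
    observed = c_ij / c_total                      # observed collaboration share
    expected = (c_i / c_total) * (c_j / c_total)   # share expected under independence
    return observed / expected

# Toy, hypothetical counts: a value above 1 signals a preferred partner.
print(pai(c_ij=40, c_i=200, c_j=400, c_total=10_000))  # prints 5.0
```

Because both observed and expected shares are normalized by total output, the index is less sensitive to country size than raw co-publication counts, which is why the abstract singles out country size when comparing variants.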
The past year has seen movement on several fronts for improving software citation, including the Center for Open Science's Transparency and Openness Promotion (TOP) Guidelines, the Software Publishing Special Interest Group that was started at January's AAS meeting in Seattle at the request of that organization's Working Group on Astronomical Software, a Sloan-sponsored meeting at GitHub in San Francisco to begin work on a cohesive research software citation-enabling platform, the work of Force11 to transform and improve research communication, and WSSSPE's ongoing efforts that include software publication, citation, credit, and sustainability. Brief reports on these efforts were shared at the BoF, after which participants discussed ideas for improving software citation, generating a list of recommendations to the community of software authors, journal publishers, ADS, and research authors. The discussion, recommendations, and feedback will help form recommendations for software citation to those publishers represented in the Software Publishing Special Interest Group and the broader community.
Inspired by the social and economic benefits of diversity, we analyze over 9 million papers and 6 million scientists to study the relationship between research impact and five classes of diversity: ethnicity, discipline, gender, affiliation, and academic age. Using randomized baseline models, we establish the presence of homophily in ethnicity, gender and affiliation. We then study the effect of diversity on scientific impact, as reflected in citations. Remarkably, of the classes considered, ethnic diversity had the strongest correlation with scientific impact. To further isolate the effects of ethnic diversity, we used randomized baseline models and again found a clear link between diversity and impact. To further support these findings, we use coarsened exact matching to compare the scientific impact of ethnically diverse papers and scientists with closely-matched control groups. Here, we find that ethnic diversity resulted in an impact gain of 10.63% for papers, and 47.67% for scientists.