
Citation sentence reuse behavior of scientists: A case study on massive bibliographic text dataset of computer science

Posted by Mayank Singh
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





Our current knowledge of scholarly plagiarism is largely based on the similarity between full-text research articles. In this paper, we propose a novel conceptualization of scholarly plagiarism in the form of reuse of explicit citation sentences in scientific research articles. Note that while full-text plagiarism is an indicator of gross-level behavior, copying of citation sentences is a more nuanced micro-scale phenomenon observed even for well-known researchers. The current work poses several interesting questions and attempts to answer them by empirically investigating a large bibliographic text dataset from computer science containing millions of lines of citation sentences. In particular, we report evidence of massive copying behavior. We also present several striking real examples throughout the paper to showcase widespread adoption of this undesirable practice. In contrast to popular perception, we find that the tendency to copy increases as an author matures. Copying behavior is found in all fields of computer science; however, theoretical fields exhibit more copying than applied fields.
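To make the notion of citation-sentence reuse concrete, the minimal sketch below flags near-verbatim citation sentences shared across different papers using word-shingle overlap and a Jaccard-similarity threshold. This is only an illustration under stated assumptions, not the pipeline used in the paper: the function names, the 5-word shingle size, the 0.8 threshold, and the toy corpus are all hypothetical.

# Illustrative sketch (not the paper's method): flag near-verbatim citation
# sentences shared across different papers via word-shingle Jaccard similarity.
from itertools import combinations

def shingles(sentence, n=5):
    # Set of n-word shingles of a lower-cased sentence.
    words = sentence.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    # Jaccard similarity of two shingle sets (0.0 when both are empty).
    return len(a & b) / len(a | b) if a | b else 0.0

def find_reused_sentences(citing, threshold=0.8):
    # citing: list of (paper_id, citation_sentence) pairs.
    # Returns (paper, paper, sentence) triples whose citation sentences nearly coincide.
    items = [(pid, s, shingles(s)) for pid, s in citing]
    return [
        (p1, p2, s1)
        for (p1, s1, sh1), (p2, s2, sh2) in combinations(items, 2)
        if jaccard(sh1, sh2) >= threshold
    ]

# Hypothetical toy corpus: paperB copies paperA's citation sentence verbatim.
corpus = [
    ("paperA", "Smith et al. proposed a scalable partitioning heuristic for large sparse graphs."),
    ("paperB", "Smith et al. proposed a scalable partitioning heuristic for large sparse graphs."),
    ("paperC", "We build on the partitioning framework introduced by Smith et al."),
]
print(find_reused_sentences(corpus))  # -> [('paperA', 'paperB', 'Smith et al. proposed ...')]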




Read also

The paper citation network is a traditional social medium for the exchange of ideas and knowledge. In this paper we view citation networks from the perspective of information diffusion. We study the structural features of the information paths through the citation networks of publications in computer science, and analyze the impact of various citation choices on the subsequent impact of the article. We find that citing recent papers and papers within the same scholarly community garners a slightly larger number of citations on average. However, this correlation is weaker among well-cited papers, implying that for high-impact work, citing within one's own field is of lesser importance. We also study differences in information flow for specific subsets of citation networks: books versus conference and journal articles, different areas of computer science, and different time periods.
This paper presents a study that analyzes and gives quantitative means for measuring the gender gap in computing research publications. The dataset built for this study is a geo-gender tagged authorship database named authorships that integrates data from computing journals indexed in the Journal Citation Reports (JCR) and the Microsoft Academic Graph (MAG). We propose a gender gap index to analyze the participation gap between female and male authors in JCR publications in Computer Science. Tagging publications with this index, we can classify papers according to the degree of participation of both women and men in different domains. Given that working contexts vary for female scientists depending on the country, our study groups analytics results according to the country of the authors' affiliation institutions. The paper details the method used to obtain, clean, and validate the data, and then states the hypotheses adopted for defining our index and classifications. Our study results have led to enlightening conclusions concerning various aspects of the geographical distribution of female authorship in computing JCR publications.
Massimo Franceschet (2009)
Computer science is a relatively young discipline combining science, engineering, and mathematics. The main flavors of computer science research involve the theoretical development of conceptual models for the different aspects of computing and the more applicative building of software artifacts and assessment of their properties. In the computer science publication culture, conferences are an important vehicle to quickly move ideas, and journals often publish deep
Without sufficient information about researchers' data sharing, there is a risk of mismatching FAIR data service efforts with the needs of researchers. This study describes a methodology where departmental publications are used to analyse the ways in which computer scientists share research data. All journal articles published by researchers in the computer science department of the case study's university during 2019 were extracted for scrutiny from the current research information system. For these 193 articles, a coding framework was developed to capture the key elements of acquiring and sharing research data. Furthermore, a rudimentary classification of the main study types exhibited in the investigated articles was developed to accommodate the multidisciplinary nature of the case department's research agenda. Human interaction and intervention studies often collected original data, whereas research on novel computational methods and life sciences more frequently used openly available data. Articles that made data available for reuse were most often in life science studies, whereas data sharing was least frequent in human interaction studies. The use of open code was most frequent in life science studies and novel computational methods. The findings highlight that multidisciplinary research organisations may include diverse subfields that have their own cultures of data sharing, and suggest that research information system-based methods may be valuable additions to the questionnaire and interview methodologies eliciting insight into researchers' data sharing. The collected data and coding framework are provided as open data to facilitate future research.
Scientific collaboration is often not perfectly reciprocal. Scientifically strong countries, institutions, or laboratories may help their less prominent partners with leading scholars, finance, or other resources. What is interesting about this type of collaboration is that (1) it may be measured by bibliometrics and (2) it may shed more light on the scholarly level of both collaborating organizations themselves. In this sense, measuring institutions in collaboration may sometimes tell more than attempts to assess them as stand-alone organizations. Evaluation of collaborative patterns was explained in detail, for example, by Glanzel (2001; 2003). Here we combine these methods with a new one, made available by separating the best journals from others on the same platform of the Russian Index of Science Citation (RISC). Such sub-universes of journals from different leagues provide additional methods to study how collaboration influences the quality of papers published by organizations.
