
Retracted papers by Iranian authors: Causes, journals, time lags, affiliations, collaborations

Posted by Marcel Ausloos
Publication date: 2021
Research language: English





This study analyzes 343 retraction notices indexed in the Scopus database, published in 2001-2019, related to scientific articles (co-)written by at least one author affiliated with an Iranian institution. In order to determine the reasons for retraction, we merged this database with the database from Retraction Watch. The data were analyzed using Excel 2016 and IBM-SPSS version 24.0, and visualized using VOSviewer software. Most of the retractions were due to fake peer review (95 retractions) and plagiarism (90). The average time between a publication and its retraction was 591 days. The maximum time lag (about 3,000 days) occurred for papers retracted due to duplicate publication; the minimum time lag (fewer than 100 days) was for papers retracted for an unspecified cause (most of these were conference papers). As many as 48 (14%) of the retracted papers were published in two medical journals: Tumor Biology (25 papers) and Diagnostic Pathology (23 papers). From the institutional point of view, Islamic Azad University was the inglorious leader, contributing to over one-half (53.1%) of the retracted papers. Among the 343 retraction notices, 64 papers stemmed from international collaborations with researchers from mainly Asian and European countries, with Malaysia having the most retractions (22 papers). Since most retractions were due to fake peer review and plagiarism, the peer review system appears to be a weak point of the submission/publication process; if it were improved, the number of retractions would likely drop because of increased editorial control.
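As a rough illustration of the analysis summarized above (the authors worked in Excel 2016 and IBM-SPSS 24.0, not in code), the sketch below shows how Scopus retraction notices could be merged with Retraction Watch records to tally retraction reasons and publication-to-retraction time lags. The file names and column names ('doi', 'publication_date', 'retraction_date', 'reason') are hypothetical.

```python
# Minimal sketch, not the authors' actual pipeline: merge Scopus retraction
# notices with Retraction Watch records, then compute reason counts and time lags.
import pandas as pd

scopus = pd.read_csv("scopus_retraction_notices.csv")   # hypothetical file
watch = pd.read_csv("retraction_watch_records.csv")     # hypothetical file

# Attach a retraction reason to each notice by joining the two sources on DOI.
merged = scopus.merge(watch[["doi", "reason"]], on="doi", how="left")

# Time lag in days between the original publication and its retraction.
merged["publication_date"] = pd.to_datetime(merged["publication_date"])
merged["retraction_date"] = pd.to_datetime(merged["retraction_date"])
merged["lag_days"] = (merged["retraction_date"] - merged["publication_date"]).dt.days

print(merged["reason"].value_counts())                   # e.g. fake peer review, plagiarism, ...
print(round(merged["lag_days"].mean()))                  # average lag (591 days in the study)
print(merged.groupby("reason")["lag_days"].mean().sort_values())
```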


Read also

A fair assignment of credit for multi-authored publications is a long-standing issue in scientometrics. In the calculation of the $h$-index, for instance, all co-authors receive equal credit for a given publication, independent of a given author's contribution to the work or of the total number of co-authors. Several attempts have been made to distribute the credit in a more appropriate manner. In a recent paper, Hirsch suggested a way of credit assignment that is fundamentally different from the previous ones: all credit for a multi-author paper goes to a single author, the so-called "$\alpha$-author", defined as the person with the highest current $h$-index (not the highest $h$-index at the time of the paper's publication) (J. E. Hirsch, Scientometrics 118, 673 (2019)). The collection of papers this author has received credit for as $\alpha$-author is then used to calculate a new index, $h_{\alpha}$, following the same recipe as for the usual $h$-index. The objective of this new assignment is not a fairer distribution of credit, but rather the determination of an altogether different property, the degree of a person's scientific leadership. We show that, given the complex time dependence of $h$ for individual scientists, the approach of using the current $h$ value instead of the historic one is problematic, and we argue that it would be feasible to determine the $\alpha$-author at the time of the paper's publication instead. On the other hand, other practical considerations make the calculation of the proposed $h_{\alpha}$ very difficult. As an alternative, we explore other ways of crediting papers to a single author in order to test early career achievement or scientific leadership.
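For concreteness, the following is a minimal sketch of the credit-assignment rule summarized above: every paper is credited entirely to the co-author with the highest current $h$-index (the $\alpha$-author), and $h_{\alpha}$ is then the ordinary $h$-index computed over the papers a person has been credited with. The input format (author lists with citation counts) is an assumption for illustration, not taken from the paper.

```python
# Sketch of Hirsch's alpha-author rule under simplified assumptions.
from collections import defaultdict

def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def h_alpha(papers):
    """papers: list of (author_list, citations). Returns {author: h_alpha}."""
    # Current h-index of every author, over all of their papers.
    per_author = defaultdict(list)
    for authors, cites in papers:
        for a in authors:
            per_author[a].append(cites)
    current_h = {a: h_index(c) for a, c in per_author.items()}

    # Credit each paper to the co-author with the highest *current* h-index.
    credited = defaultdict(list)
    for authors, cites in papers:
        alpha = max(authors, key=lambda a: current_h[a])
        credited[alpha].append(cites)

    # h_alpha: the usual h-index recipe applied to each author's credited papers.
    return {a: h_index(c) for a, c in credited.items()}

papers = [(["A", "B"], 30), (["A", "C"], 12), (["B", "C"], 5), (["A"], 8)]
print(h_alpha(papers))   # authors never chosen as alpha-author receive no credit
```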
Researchers affiliated with multiple institutions are increasingly common in the current scientific environment. In this paper we systematically analyze multi-affiliated authorship and its effect on citation impact, with a focus on the scientific output of research collaboration. By considering the nationality of each institution, we further differentiate national and international multi-affiliated authorship and reveal their different patterns across disciplines and countries. We observe a large share of publications with multi-affiliated authorship (45.6%) in research collaboration, with a larger share of publications containing national multi-affiliated authorship in medicine-related and biology-related disciplines, and a larger share of publications containing the international type in Space Science, Physics and Geosciences. From a country-based view, we distinguish between domestic and foreign multi-affiliated authorship with respect to a specific country. Taking the G7 and BRICS countries as samples from different S&T levels, we find that domestic national multi-affiliated authorship is more strongly related to citation impact in most disciplines for the G7 countries, while domestic international multi-affiliated authorship is more positively influential for most BRICS countries.
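A small sketch of the classification described above, under the stated definitions: an author listing more than one institution is multi-affiliated, and the multi-affiliation is international when those institutions span more than one country, national otherwise. The (institution, country) tuple format is a hypothetical input representation.

```python
# Classify one author's affiliation pattern (illustrative only).
def classify_author(affiliations):
    """affiliations: list of (institution, country) tuples for one author."""
    if len(affiliations) < 2:
        return "single-affiliated"
    countries = {country for _, country in affiliations}
    return ("international multi-affiliated" if len(countries) > 1
            else "national multi-affiliated")

print(classify_author([("MIT", "US")]))                     # single-affiliated
print(classify_author([("MIT", "US"), ("Harvard", "US")]))  # national multi-affiliated
print(classify_author([("MIT", "US"), ("Oxford", "GB")]))   # international multi-affiliated
```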
There is demand from science funders, industry, and the public that science should become more risk-taking, more out-of-the-box, and more interdisciplinary. Is it possible to tell how interdisciplinary and out-of-the-box scientific papers are, or which papers are mainstream? Here we use the bibliographic coupling network, derived from all physics papers that were published in the Physical Review journals in the past century, to try to classify them as mainstream, out-of-the-box, or interdisciplinary. We show that the network clusters into scientific fields. The position of individual papers with respect to these clusters allows us to estimate their degree of mainstreamness or interdisciplinarity. We show that over the past decades the fraction of mainstream papers has increased, the fraction of out-of-the-box papers has decreased, and the fraction of interdisciplinary papers has remained constant. Studying the rewards of papers, we find that in terms of absolute citations, both mainstream and interdisciplinary papers are rewarded. In the long run, mainstream papers perform worse than interdisciplinary ones in terms of citation rates. We conclude that to avoid a trend towards mainstreamness a new incentive scheme is necessary.
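As an illustration of the bibliographic coupling network mentioned above: two papers are linked when they cite common references, and the edge weight is the number of shared references; clustering this weighted network is what yields the field structure. The sketch below builds such edges from a hypothetical reference list and is not the authors' code.

```python
# Build bibliographic coupling edges from per-paper reference sets (toy example).
from itertools import combinations

def bibliographic_coupling(references_by_paper):
    """references_by_paper: {paper_id: set of cited reference ids}.
    Returns {(paper_a, paper_b): number of shared references}."""
    edges = {}
    for a, b in combinations(sorted(references_by_paper), 2):
        shared = len(references_by_paper[a] & references_by_paper[b])
        if shared:
            edges[(a, b)] = shared
    return edges

refs = {"p1": {"r1", "r2", "r3"}, "p2": {"r2", "r3", "r4"}, "p3": {"r5"}}
print(bibliographic_coupling(refs))   # {('p1', 'p2'): 2}
```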
Over more than five years, Bornmann, Stefaner, de Moya Anegon, and Mutz (2014, 2015) have published several releases of the www.excellencemapping.net tool revealing (clusters of) excellent institutions worldwide based on citation data. With the new release, a completely revised tool has been published. It is based not only on citation data (bibliometrics), but also on Mendeley data (altmetrics). Thus, the institutional impact measurement of the tool has been expanded by focusing on additional status groups besides researchers, such as students and librarians. Furthermore, the visualization of the data has been completely updated by improving the operability for the user and including new features such as institutional profile pages. In this paper, we describe the datasets for the current excellencemapping.net tool and the indicators applied. Furthermore, the underlying statistics for the tool and the use of the web application are explained.
The web application presented in this paper allows for an analysis revealing centres of excellence in different fields worldwide, using publication and citation data. Only specific aspects of institutional performance are taken into account; other aspects, such as teaching performance or the societal impact of research, are not considered. Based on data gathered from Scopus, field-specific excellence can be identified in institutions where highly-cited papers have been published frequently. The web application combines a list of institutions ordered by different indicator values with a map whose circles visualize the indicator values for geocoded institutions. Compared to the mapping and ranking approaches introduced hitherto, our underlying statistics (multi-level models) are analytically oriented, allowing (1) the estimation of values for the number of excellent papers of an institution which are statistically more appropriate than the observed values; (2) the calculation of confidence intervals as measures of accuracy for the institutional citation impact; (3) the comparison of a single institution with an average institution in a subject area; and (4) the direct comparison of at least two institutions.
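Point (1) above, that model-based estimates of the number of excellent papers can be statistically more appropriate than the observed counts, can be illustrated with a much simpler shrinkage estimator than the multi-level models actually used; the toy sketch below only conveys the intuition, and the smoothing strength is a hypothetical constant.

```python
# Toy shrinkage estimate (not the tool's multi-level model): pull an
# institution's observed share of highly-cited papers toward the field average,
# more strongly when the institution has published few papers.
def shrunken_share(excellent, total, field_share, strength=50):
    """'strength' acts like a prior sample size for the field-wide share."""
    return (excellent + strength * field_share) / (total + strength)

field_share = 0.10   # assume 10% of papers in the field are highly cited
print(shrunken_share(3, 10, field_share))       # small institution: pulled toward 0.10
print(shrunken_share(300, 1000, field_share))   # large institution: stays near 0.30
```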