
Crediting multi-authored papers to single authors

Added by Philip Hofmann
Publication date: 2019
Language: English





A fair assignment of credit for multi-authored publications is a long-standing issue in scientometrics. In the calculation of the $h$-index, for instance, all co-authors receive equal credit for a given publication, independent of a given author's contribution to the work or of the total number of co-authors. Several attempts have been made to distribute the credit in a more appropriate manner. In a recent paper, Hirsch has suggested a new way of credit assignment that is fundamentally different from the previous ones: all credit for a multi-author paper goes to a single author, the so-called $\alpha$-author, defined as the person with the highest current $h$-index (not the highest $h$-index at the time of the paper's publication) (J. E. Hirsch, Scientometrics 118, 673 (2019)). The collection of papers this author has received credit for as $\alpha$-author is then used to calculate a new index, $h_{\alpha}$, following the same recipe as for the usual $h$-index. The objective of this new assignment is not a fairer distribution of credit, but rather the determination of an altogether different property, the degree of a person's scientific leadership. We show that, given the complex time dependence of $h$ for individual scientists, the approach of using the current $h$ value instead of the historic one is problematic, and we argue that it would be feasible to determine the $\alpha$-author at the time of the paper's publication instead. On the other hand, there are other practical considerations that make the calculation of the proposed $h_{\alpha}$ very difficult. As an alternative, we explore other ways of crediting papers to a single author in order to test early career achievement or scientific leadership.
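
As a concrete illustration of the credit-assignment rule described above, here is a minimal Python sketch that computes the ordinary $h$-index and an $h_{\alpha}$ in which each paper is credited solely to the co-author with the highest current $h$-index. The paper records and $h$ values are hypothetical placeholders, not data from the study.

```python
def h_index(citation_counts):
    """Standard h-index: the largest h such that h papers have >= h citations."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def h_alpha(papers, current_h, author):
    """h_alpha for `author`: the h-index over the papers for which `author`
    has the highest *current* h-index among all co-authors (the alpha-author)."""
    credited = [
        p["citations"]
        for p in papers
        if max(p["authors"], key=lambda a: current_h[a]) == author
    ]
    return h_index(credited)

# Hypothetical example: B is the alpha-author of both papers because B
# currently has the highest h-index among the co-authors.
papers = [
    {"citations": 50, "authors": ["A", "B"]},
    {"citations": 12, "authors": ["B", "C"]},
]
current_h = {"A": 10, "B": 25, "C": 7}
print(h_alpha(papers, current_h, "B"))  # -> 2
```

Note that `current_h` would have to be re-evaluated every time the index is computed; replacing it with the co-authors' $h$-indices at the time of publication is precisely the modification discussed in the abstract.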




This study analyzes 343 retraction notices indexed in the Scopus database, published in 2001-2019, related to scientific articles (co-)written by at least one author affiliated with an Iranian institution. In order to determine the reasons for retraction, we merged this database with the database from Retraction Watch. The data were analyzed using Excel 2016 and IBM-SPSS version 24.0, and visualized using VOSviewer software. Most of the retractions were due to fake peer review (95 retractions) and plagiarism (90). The average time between a publication and its retraction was 591 days. The maximum time-lag (about 3,000 days) occurred for papers retracted due to duplicate publication; the minimum time-lag (fewer than 100 days) was for papers retracted for unspecified reasons (most of these were conference papers). As many as 48 (14%) of the retracted papers were published in two medical journals: Tumor Biology (25 papers) and Diagnostic Pathology (23 papers). From the institutional point of view, Islamic Azad University was the inglorious leader, contributing to over one-half (53.1%) of the retracted papers. Among the 343 retraction notices, 64 papers pertained to international collaborations with researchers mainly from Asian and European countries, with Malaysia accounting for the most retractions (22 papers). Since most retractions were due to fake peer review and plagiarism, the peer review system appears to be a weak point of the submission/publication process; if it were improved, the number of retractions would likely drop because of increased editorial control.
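
As a small illustration of the time-lag statistic reported above, the following sketch (with made-up dates, not the actual Scopus/Retraction Watch records) computes the number of days between publication and retraction and their average.

```python
from datetime import date

# Hypothetical (publication_date, retraction_date) pairs in ISO format.
records = [
    ("2015-03-01", "2016-10-12"),
    ("2012-06-15", "2019-01-03"),
]

lags = [(date.fromisoformat(r) - date.fromisoformat(p)).days for p, r in records]
print(sum(lags) / len(lags))  # average number of days between publication and retraction
```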
The web application presented in this paper allows for an analysis to reveal centres of excellence in different fields worldwide using publication and citation data. Only specific aspects of institutional performance are taken into account and other aspects such as teaching performance or societal impact of research are not considered. Based on data gathered from Scopus, field-specific excellence can be identified in institutions where highly-cited papers have been frequently published. The web application combines both a list of institutions ordered by different indicator values and a map with circles visualizing indicator values for geocoded institutions. Compared to the mapping and ranking approaches introduced hitherto, our underlying statistics (multi-level models) are analytically oriented by allowing (1) the estimation of values for the number of excellent papers for an institution which are statistically more appropriate than the observed values; (2) the calculation of confidence intervals as measures of accuracy for the institutional citation impact; (3) the comparison of a single institution with an average institution in a subject area, and (4) the direct comparison of at least two institutions.
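
The tool itself relies on multi-level models; as a deliberately simplified stand-in, the sketch below computes a Wilson score confidence interval for a single institution's share of highly cited ("excellent") papers from hypothetical counts, illustrating the kind of accuracy measure mentioned in point (2).

```python
import math

def wilson_interval(excellent, total, z=1.96):
    """Approximate 95% Wilson score interval for the proportion excellent/total."""
    p = excellent / total
    denom = 1 + z**2 / total
    centre = p + z**2 / (2 * total)
    margin = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return ((centre - margin) / denom, (centre + margin) / denom)

# Hypothetical institution: 42 highly cited papers out of 300 publications.
print(wilson_interval(excellent=42, total=300))
```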
There is demand from science funders, industry, and the public that science should become more risk-taking, more out-of-the-box, and more interdisciplinary. Is it possible to tell how interdisciplinary and out-of-the-box scientific papers are, or which papers are mainstream? Here we use the bibliographic coupling network, derived from all physics papers published in the Physical Review journals in the past century, to identify papers as mainstream, out-of-the-box, or interdisciplinary. We show that the network clusters into scientific fields. The position of individual papers with respect to these clusters allows us to estimate their degree of mainstreamness or interdisciplinarity. We show that over the past decades the fraction of mainstream papers has increased, the fraction of out-of-the-box papers has decreased, and the fraction of interdisciplinary papers has remained constant. Studying the rewards of papers, we find that in terms of absolute citations, both mainstream and interdisciplinary papers are rewarded. In the long run, however, mainstream papers perform worse than interdisciplinary ones in terms of citation rates. We conclude that a new incentive scheme is necessary to avoid a trend towards mainstreamness.
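
For clarity, a bibliographic coupling network links two papers when their reference lists overlap, with the size of the overlap as the edge weight. The minimal sketch below (hypothetical paper IDs and reference sets, not the Physical Review corpus) builds such a weighted edge list; clustering the resulting graph into fields would be a separate step, for example with a community-detection library.

```python
from itertools import combinations

# Hypothetical mapping from each paper to the set of works it cites.
references = {
    "paper1": {"refA", "refB", "refC"},
    "paper2": {"refB", "refC", "refD"},
    "paper3": {"refE"},
}

# Bibliographic coupling: edge weight = number of shared references.
edges = {
    (p, q): len(references[p] & references[q])
    for p, q in combinations(references, 2)
    if references[p] & references[q]
}
print(edges)  # {('paper1', 'paper2'): 2}
```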
Over the past five years, Bornmann, Stefaner, de Moya Anegon, and Mutz (2014, 2015) have published several releases of the www.excellencemapping.net tool revealing (clusters of) excellent institutions worldwide based on citation data. With the new release, a completely revised tool has been published. It is based not only on citation data (bibliometrics) but also on Mendeley data (altmetrics). Thus, the institutional impact measurement of the tool has been expanded to cover additional status groups besides researchers, such as students and librarians. Furthermore, the visualization of the data has been completely updated, improving operability for the user and including new features such as institutional profile pages. In this paper, we describe the datasets for the current excellencemapping.net tool and the indicators applied. Furthermore, the underlying statistics for the tool and the use of the web application are explained.
Naman Jain, Mayank Singh (2021)
Nowadays, researchers have moved to platforms like Twitter to spread information about their ideas and empirical evidence. Recent studies have shown that social media affects the scientific impact of a paper. However, these studies only utilize tweet counts to represent Twitter activity. In this paper, we propose TweetPap, a large-scale dataset that introduces the temporal information of citations/tweets and the metadata of the tweets to quantify and understand the discourse of scientific papers on social media. The dataset is publicly available at https://github.com/lingo-iitgn/TweetPap.
