In a recent article, Janosov, Battiston, and Sinatra report that they have separated the contributions of talent and luck in creative careers. They build on the earlier work of Sinatra et al., which introduced the Q-model. Under this model, the popularity of different elements of culture is a product of two factors: a random factor and a Q-factor, or talent. The latter is fixed for an individual but randomly distributed among different people. This explains how some individuals can consistently produce high-impact work. They extract the Q-factors for different scientists, writers, and movie makers from statistical data on the popularity of their work. However, in their article they reluctantly state that there is little correlation between popularity and quality ratings of books and movies (correlation coefficients 0.022 and 0.15). I analyzed the data of the original Q-factor article and obtained a correlation between the citation-based Q-factor and Nobel Prize winning of merely 0.19. I also briefly review a few other experiments that found a meager, sometimes even negative, correlation between the popularity and quality of cultural products. I conclude that, if there is an ability associated with a high Q-factor, it is more of a marketing ability than an ability to produce a higher-quality product.
Quantifying success in science plays a key role in guiding funding allocations, recruitment decisions, and rewards. Recently, significant progress has been made towards quantifying success in science, but the lack of a detailed analysis and summary of this work remains a practical issue. The literature reports the factors influencing scholarly impact as well as evaluation methods and indices aimed at overcoming this crucial weakness. We focus on categorizing and reviewing the current development of evaluation indices of scholarly impact, including paper impact, scholar impact, and journal impact. In addition, we summarize the issues of existing evaluation methods and indices, investigate the open issues and challenges, and provide possible solutions, including the pattern of collaboration impact, unified evaluation standards, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation. This paper should help researchers obtain a broader understanding of quantifying success in science and identify some potential research directions.
There is extensive, yet fragmented, evidence of gender differences in academia suggesting that women are under-represented in most scientific disciplines, publish fewer articles throughout a career, and that their work acquires fewer citations. Here, we offer a comprehensive picture of longitudinal gender discrepancies in performance through a bibliometric analysis of academic careers, reconstructing the complete publication history of over 1.5 million gender-identified authors whose publishing careers ended between 1955 and 2010, covering 83 countries and 13 disciplines. We find that, paradoxically, the increase in women's participation in science over the past 60 years was accompanied by an increase in gender differences in both productivity and impact. Most surprisingly, we uncover two gender invariants, finding that men and women publish at a comparable annual rate and have equivalent career-wise impact for the same size body of work. Finally, we demonstrate that differences in dropout rates and career length explain a large portion of the reported career-wise differences in productivity and impact. This comprehensive picture of gender inequality in academia can help rephrase the conversation around the sustainability of women's careers in academia, with important consequences for institutions and policy makers.
Academic papers have been the protagonists in disseminating expertise. Naturally, paper citation pattern analysis is an efficient and essential way of investigating the knowledge structure of science and technology. For decades, it has been observed that the citation of scientific literature follows a heterogeneous and heavy-tailed distribution, and many studies suggest a power-law distribution, a log-normal distribution, or related distributions. However, many studies are limited to small-scale approaches; therefore, their findings are hard to generalize. To overcome this problem, we investigate 21 years of citation evolution through a systematic analysis of the entire citation history of 42,423,644 scientific articles published from 1996 to 2016 and contained in SCOPUS. We tested six candidate distributions for the scientific literature at three distinct levels of the Scimago Journal & Country Rank (SJR) classification scheme. First, we observe that the raw number of annual citation acquisitions tends to follow the log-normal distribution for all disciplines, except for the first year after publication. We also find a significant disparity in the yearly acquired citation numbers among journals, which suggests that it is essential to remove the citation surplus inherited from the prestige of the journals. Our simple method for separating the citation preference of an individual article from the inherited citation of the journal reveals an unexpected regularity in the normalized annual acquisitions of citations across the entire field of science. Specifically, the normalized annual citation acquisitions have power-law probability distributions with an exponential cut-off, with exponents around 2.3, regardless of publication and citation year. Our results imply that journal reputation has a substantial long-term impact on citations.
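The normalization idea in the abstract above — removing the citation surplus inherited from journal prestige before comparing articles — can be illustrated with a minimal sketch on synthetic data (not the SCOPUS dataset; the journal names and parameters below are invented for illustration). Dividing each article's annual citation count by its journal's mean collapses distributions from journals of very different prestige onto a common scale:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual citation counts for articles in two journals whose
# prestige (mean inherited citation rate) differs by a factor of ten.
journal_means = {"J_high": 50.0, "J_low": 5.0}
raw = {j: rng.lognormal(mean=np.log(m), sigma=0.8, size=10_000)
       for j, m in journal_means.items()}

# Normalization in the spirit of the abstract: divide each article's annual
# citations by its journal's mean, removing the inherited journal surplus.
normalized = {j: c / c.mean() for j, c in raw.items()}

# After normalization, both journals sit on a comparable scale
# (mean 1.0 by construction, and nearly identical medians).
for j, c in normalized.items():
    print(j, round(float(c.mean()), 3), round(float(np.median(c)), 3))
```

This is only a toy version of the separation step; the paper's actual analysis fits six candidate distributions to the normalized counts and reports a power law with an exponential cut-off.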
Over the past five years, Bornmann, Stefaner, de Moya Anegon, and Mutz (2014, 2015) have published several releases of the www.excellencemapping.net tool revealing (clusters of) excellent institutions worldwide based on citation data. With the new release, a completely revised tool has been published. It is based not only on citation data (bibliometrics) but also on Mendeley data (altmetrics). Thus, the institutional impact measurement of the tool has been expanded by including additional status groups besides researchers, such as students and librarians. Furthermore, the visualization of the data has been completely updated, improving operability for the user and adding new features such as institutional profile pages. In this paper, we describe the datasets for the current excellencemapping.net tool and the indicators applied. Furthermore, the underlying statistics for the tool and the use of the web application are explained.
Researchers affiliated with multiple institutions are increasingly common in the current scientific environment. In this paper, we systematically analyze multi-affiliated authorship and its effect on citation impact, with a focus on the scientific output of research collaboration. By considering the country of each institution, we further differentiate national from international multi-affiliated authorship and reveal their different patterns across disciplines and countries. We observe a large share of publications with multi-affiliated authorship (45.6%) in research collaboration, with a larger share of publications containing national multi-affiliated authorship in medicine- and biology-related disciplines, and a larger share containing the international type in Space Science, Physics, and Geosciences. From a country-based view, we distinguish between domestic and foreign multi-affiliated authorship with respect to a specific country. Taking the G7 and BRICS countries as samples from different S&T levels, we find that domestic national multi-affiliated authorship is more positively associated with citation impact in most disciplines for the G7 countries, while domestic international multi-affiliated authorship is more positively influential for most BRICS countries.