This paper investigates the impact of institutions and papers over time based on a heterogeneous institution-citation network. A new model, IPRank, is introduced to measure the impact of institutions and papers simultaneously. The model utilises a heterogeneous structural measure to unveil the impact of institutions and papers, reflecting the combined effects of citations, institutions, and network structure. To evaluate its performance, the model first constructs a heterogeneous institution-citation network from the American Physical Society (APS) dataset. Subsequently, PageRank is used to quantify the impact of each institution and paper. Finally, the impacts belonging to the same institution are merged, and the rankings of institutions and papers are calculated. Experimental results show that the IPRank model better identifies universities that host Nobel Prize laureates, demonstrating that the proposed technique reflects impactful research well.
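The authors' implementation is not given here; the following is a minimal sketch of such a pipeline under stated assumptions: a hypothetical citation edge list and a paper-to-institution affiliation table (all node names and data are illustrative, not the APS dataset or the IPRank code).

```python
# Sketch of an IPRank-style pipeline: build a heterogeneous paper/institution
# graph, run PageRank, then merge paper scores into institution scores.
import networkx as nx

# Hypothetical input: paper citation edges and paper -> institution affiliations.
citations = [("p1", "p2"), ("p3", "p2"), ("p3", "p1")]            # p_i cites p_j
affiliations = {"p1": ["MIT"], "p2": ["Stanford", "MIT"], "p3": ["Caltech"]}

G = nx.DiGraph()
G.add_edges_from(citations)                                       # citation layer
for paper, insts in affiliations.items():
    for inst in insts:
        # Affiliation edges couple the two node types in both directions so that
        # impact can flow between papers and institutions.
        G.add_edge(paper, inst)
        G.add_edge(inst, paper)

scores = nx.pagerank(G, alpha=0.85)

# Merge: each institution collects its own score plus a share of the scores of
# the papers affiliated with it.
inst_rank = {i: scores.get(i, 0.0) for insts in affiliations.values() for i in insts}
for paper, insts in affiliations.items():
    for inst in insts:
        inst_rank[inst] += scores[paper] / len(insts)

print(sorted(inst_rank.items(), key=lambda kv: kv[1], reverse=True))
```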
Researchers affiliated with multiple institutions are increasingly common in the current scientific environment. In this paper we systematically analyze multi-affiliated authorship and its effect on citation impact, with a focus on the scientific output of research collaboration. By considering the nationality of each institution, we further differentiate national from international multi-affiliated authorship and reveal their different patterns across disciplines and countries. We observe a large share of publications with multi-affiliated authorship (45.6%) in research collaboration, with a larger share of publications containing national multi-affiliated authorship in medicine- and biology-related disciplines, and a larger share containing the international type in Space Science, Physics, and Geosciences. From a country-based view, we distinguish between domestic and foreign multi-affiliated authorship with respect to a specific country. Taking G7 and BRICS countries as samples of different S&T levels, we find that domestic national multi-affiliated authorship is more strongly related to citation impact in most disciplines for G7 countries, while domestic international multi-affiliated authorship is more positively influential for most BRICS countries.
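As a rough illustration of the national/international distinction (not the paper's actual classification code), the sketch below labels an author's affiliation set from a hypothetical list of affiliation countries.

```python
# Classify an author's multi-affiliation as "national" if all affiliations share
# one country, "international" if they span countries; single affiliations are
# excluded from the multi-affiliated categories.
def classify_author(affiliation_countries):
    if len(affiliation_countries) < 2:
        return "single"
    return "national" if len(set(affiliation_countries)) == 1 else "international"

# Hypothetical author records: lists of affiliation countries.
authors = [["US", "US"], ["CN", "US"], ["DE"], ["BR", "BR", "FR"]]
print([classify_author(a) for a in authors])
# ['national', 'international', 'single', 'international']
```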
Over more than five years, Bornmann, Stefaner, de Moya Anegon, and Mutz (2014, 2015) have published several releases of the www.excellencemapping.net tool revealing (clusters of) excellent institutions worldwide based on citation data. With the new release, a completely revised tool has been published. It is based not only on citation data (bibliometrics) but also on Mendeley data (altmetrics). Thus, the institutional impact measurement of the tool has been expanded to cover additional status groups besides researchers, such as students and librarians. Furthermore, the visualization of the data has been completely updated by improving operability for the user and including new features such as institutional profile pages. In this paper, we describe the datasets for the current excellencemapping.net tool and the indicators applied. Furthermore, the underlying statistics for the tool and the use of the web application are explained.
We have studied the impact of incoming preparation and demographic variables on student performance on the final exam in physics 1, the standard introductory, calculus-based mechanics course. This was done at three different institutions using multivariable regression analysis to determine the extent to which exam scores can be predicted by a variety of variables that are available to most faculty and departments. We have found that the results are surprisingly consistent across the institutions, with the only two variables that have predictive power being math SAT/ACT scores and concept inventory pre-scores. The importance of both variables is comparable and fairly similar across the institutions; together they explain 20-30 percent of the variation in students' performance on the final exam. Most notably, the demographic variables (gender, under-represented minority, first generation to attend college) are not significant. In all cases, although there appear to be gaps in exam performance if one considers only the demographic variable, once these two proxies of incoming preparation are included in the model, there is no longer a demographic gap. There is only a preparation gap that applies equally across the entire student population. This work shows that to properly understand differences in student performance across a diverse population, and hence to design more effective instruction, it is important to do statistical analyses that take multiple variables into account. It also illustrates the importance of having measures that are sensitive to both subject-specific and more general preparation. The results suggest that better matching of the course design and teaching to the incoming student preparation will likely be the most effective way to eliminate observed performance gaps across demographic groups while also improving the success of all students.
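A minimal sketch of this kind of multivariable regression is shown below, using synthetic stand-in data and assumed column names (math_sat, ci_pre, gender, urm, first_gen, final_exam); it is not the study's analysis code, only an illustration of comparing a demographics-only model with one that adds the two preparation proxies.

```python
# Compare a demographics-only regression with a model that adds preparation
# proxies, as in the analysis described above (data are synthetic).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "math_sat":  rng.normal(650, 60, n),        # math SAT/ACT (rescaled) proxy
    "ci_pre":    rng.normal(45, 15, n),         # concept inventory pre-score
    "gender":    rng.choice(["F", "M"], n),
    "urm":       rng.choice([0, 1], n, p=[0.8, 0.2]),
    "first_gen": rng.choice([0, 1], n, p=[0.85, 0.15]),
})
df["final_exam"] = 0.05 * df.math_sat + 0.4 * df.ci_pre + rng.normal(0, 10, n)

# Demographics alone may show apparent gaps ...
demo = smf.ols("final_exam ~ C(gender) + C(urm) + C(first_gen)", data=df).fit()

# ... but with the two preparation proxies included, the demographic terms should
# lose significance while the proxies carry the predictive power (the study
# reports R^2 of roughly 0.2-0.3; the synthetic data here will differ).
full = smf.ols("final_exam ~ math_sat + ci_pre + C(gender) + C(urm) + C(first_gen)",
               data=df).fit()

print(demo.rsquared, full.rsquared)
print(full.summary())
```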
Computing such a correlation coefficient would be straightforward if the rankings given by the prize committee to all scientists in the pool were available. In reality we only have citation rankings for all scientists. This means, however, that we do have the ordinal rankings of the prize winners with regard to citation metrics. I use a maximum likelihood method to infer the correlation coefficient most likely to produce the observed pattern of ordinal ranks of the prize winners. I obtain correlation coefficients of 0.47 and 0.59 between the composite citation indicator and receiving the Abel Prize and the Fields Medal, respectively. The correlation coefficient between receiving a Nobel Prize and the Q-factor is 0.65. These coefficients are of the same magnitude as the correlation coefficient between the Elo ratings of chess players and their popularity, measured as the number of webpages mentioning the players.
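The paper's actual likelihood is not reproduced here; the toy sketch below only illustrates the idea, assuming a latent bivariate-normal model linking the citation indicator and prize-worthiness, and using a Monte Carlo goodness-of-fit score on log ranks as a crude stand-in for the likelihood. The pool size, winner ranks, and grid values are all made up.

```python
# Grid search for the correlation rho that best reproduces the observed ordinal
# citation ranks of prize winners under a latent bivariate-normal model.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                                    # hypothetical pool of scientists
winners_obs = np.array([3, 17, 120, 450])     # hypothetical citation ranks of winners

def fit_score(rho, n_sim=100):
    """Monte Carlo score (higher = better fit) for a candidate correlation rho."""
    total = 0.0
    for _ in range(n_sim):
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=N)
        # Winners are the top-k scientists on the latent prize dimension.
        winner_idx = np.argsort(-z[:, 1])[: len(winners_obs)]
        # Their 1-based ranks on the citation dimension.
        sim_ranks = np.argsort(np.argsort(-z[:, 0]))[winner_idx] + 1
        # Crude fit measure: squared distance between sorted log ranks.
        total -= np.sum((np.log(np.sort(sim_ranks)) - np.log(np.sort(winners_obs))) ** 2)
    return total / n_sim

grid = np.linspace(0.1, 0.9, 9)
best = max(grid, key=fit_score)
print("most probable correlation (toy model):", best)
```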
Academic papers have long been the main vehicle for disseminating expertise. Naturally, analysis of paper citation patterns is an efficient and essential way of investigating the knowledge structure of science and technology. For decades, it has been observed that citations to scientific literature follow a heterogeneous, heavy-tailed distribution, and many studies suggest a power-law, log-normal, or related distribution. However, many studies are limited to small-scale approaches, so their findings are hard to generalize. To overcome this problem, we investigate 21 years of citation evolution through a systematic analysis of the entire citation history of 42,423,644 scientific publications published from 1996 to 2016 and contained in SCOPUS. We tested six candidate distributions for the scientific literature at three distinct levels of the Scimago Journal & Country Rank (SJR) classification scheme. First, we observe that the raw number of annual citation acquisitions tends to follow a log-normal distribution for all disciplines, except in the first year after publication. We also find a significant disparity in yearly acquired citations among journals, which suggests that it is essential to remove the citation surplus inherited from journal prestige. Our simple method for separating the citation preference of an individual article from the citations inherited from its journal reveals an unexpected regularity in the normalized annual acquisitions of citations across the entire field of science. Specifically, the normalized annual citation acquisitions follow power-law probability distributions with an exponential cut-off, with exponents around 2.3, regardless of publication and citation year. Our results imply that journal reputation has a substantial long-term impact on citations.
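The two fits described above can be sketched as follows, assuming synthetic stand-in citation counts and a single hypothetical journal used for normalization; the paper's actual fitting procedure and normalization across many journals may differ.

```python
# (1) Log-normal fit to raw annual citation counts, and (2) a power law with an
# exponential cut-off, p(x) ~ x^(-gamma) * exp(-x / x_c), fitted by maximum
# likelihood to journal-normalized citations.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
cites = rng.lognormal(mean=1.5, sigma=1.0, size=5_000)   # stand-in citation counts
journal_mean = cites.mean()                              # single hypothetical journal

# (1) Log-normal fit to the raw counts.
shape, loc, scale = stats.lognorm.fit(cites, floc=0)

# (2) Power law with exponential cut-off on normalized counts.
x = cites / journal_mean

def neg_loglik(params):
    gamma, xc = params
    if gamma <= 0 or xc <= 0:
        return np.inf
    # Normalize the unnormalized density numerically over the observed support.
    grid = np.linspace(x.min(), x.max(), 10_000)
    dens = grid ** -gamma * np.exp(-grid / xc)
    Z = np.sum(dens) * (grid[1] - grid[0])
    return -np.sum(-gamma * np.log(x) - x / xc - np.log(Z))

res = optimize.minimize(neg_loglik, x0=[2.3, 5.0], method="Nelder-Mead")
print("log-normal (shape, scale):", shape, scale)
print("power-law exponent, cut-off:", res.x)
```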