We present the results of a large-scale study of potentially predatory journals (PPJ) represented in the Scopus database, which is widely used for research evaluation. Both journal metrics and country and disciplinary data have been evaluated for different groups of PPJ: those listed by Jeffrey Beall and those delisted by Scopus because of publication concerns. Our results show that even years after delisting, PPJ remain highly visible in the Scopus database, with hundreds of potentially predatory journals still active. PPJ papers are continuously produced by all major countries, but in different shares. All major subject areas are affected; the largest numbers of PPJ papers are in engineering and medicine. On average, PPJ have much lower citation metrics than other Scopus-indexed journals. We conclude with a brief survey of the case of Kazakhstan, where the share of PPJ papers at one time amounted to almost half of all Kazakhstan papers in Scopus, and propose a link between PPJ share and national research evaluation policies (in particular, rules for awarding academic degrees). Research on potentially predatory journals will become increasingly important as such evaluation methods grow more widespread in times of the Metric Tide.
This paper presents and describes the methodological opportunities offered by bibliometric data to produce indicators of scientific mobility. Large bibliographic datasets of disambiguated authors and their affiliations make it possible to track the affiliation changes of scientists. Using the Web of Science as the data source, we analyze the distribution of types of mobile scientists for a selection of countries. We explore the possibility of creating profiles of international mobility at the country level, and discuss potential interpretations and caveats. Five countries (Canada, the Netherlands, South Africa, Spain, and the United States) are used as examples. These profiles enable us to characterize these countries in terms of their strongest links with other countries. This type of analysis reveals scientific circulation within and between countries and has strong policy implications.
The purpose of this study is to explore the relationship between the first affiliation and the corresponding affiliation at different levels via scientometric analysis. We select over 18 million papers in the core collection database of Web of Science (WoS) published from 2000 to 2015, and measure the percentage of match between the first and the corresponding affiliation at the country and institution levels. We find that a paper's first affiliation and corresponding affiliation are highly consistent at the country level, with over 98% match on average. However, the match at the institution level is much lower and varies significantly with time and country. Hence, for studies at the country level, using the first or the corresponding affiliation yields almost the same results, but more caution is needed in selecting the affiliation when the institution is the focus of the investigation. Meanwhile, we find some evidence that the corresponding-author information recorded in the WoS database has undergone changes since 2013, which sheds light on future studies comparing different databases or assessing the affiliation accuracy of WoS. Our findings rely on the records of WoS, which may not be entirely accurate; given the scale of the analysis, they can nonetheless serve as a useful reference for further studies in which country or institution allocation is needed. Existing comparisons of straight counting methods usually cover a limited number of papers, a particular research field, or a limited range of time. More importantly, raw counts alone cannot sufficiently tell whether the corresponding and first affiliations are similar. This paper uses a metric similar to the Jaccard similarity to measure the percentage of match and performs a comprehensive analysis based on a large-scale bibliometric database.
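The "metric similar to Jaccard similarity" mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual implementation: the function name `match_share` and the toy country codes are assumptions, and the abstract does not specify how multi-affiliation papers are handled.

```python
def match_share(first_affils, corresponding_affils):
    """Jaccard-style overlap between the first author's and the
    corresponding author's affiliations for one paper, computed at
    either the country or the institution level."""
    a, b = set(first_affils), set(corresponding_affils)
    if not a and not b:
        return 1.0  # no affiliations recorded on either side
    return len(a & b) / len(a | b)

# Hypothetical country-level examples:
print(match_share({"CN"}, {"CN"}))        # identical -> 1.0
print(match_share({"CN", "US"}, {"CN"}))  # partial overlap -> 0.5
```

Averaging this score over millions of papers would yield the kind of aggregate match percentage (e.g., over 98% at the country level) reported in the study.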
Digital advancement in scholarly repositories has led to the emergence of a large number of open access predatory publishers that charge high article processing fees from authors but fail to provide the necessary editorial and publishing services. Identifying and blacklisting such publishers has remained a research challenge due to the highly volatile scholarly publishing ecosystem. This paper presents a data-driven approach to study how potential predatory publishers are evolving and bypassing several regulatory constraints. We empirically show the close resemblance of predatory publishers to reputed publishing groups. In addition to verifying standard constraints, we also propose distinctive signals gathered from network-centric properties to better understand this evolving ecosystem.
Questionable publications have been accused of greedy practices; however, their influence on academia has not been gauged. Here, we probe the impact of questionable publications through a systematic and comprehensive analysis of various participants from academia, and compare the results with those of their unaccused counterparts using billions of citation records, including liaisons (e.g., journals and publishers) and prosumers (e.g., authors). The analysis reveals that questionable publications embellished their citation scores by attributing publisher-level self-citations to their journals while controlling journal-level self-citations to circumvent the evaluation of journal-indexing services. This approach makes the malpractice difficult to detect with conventional journal-level metrics. We propose a journal-publisher hybrid metric that helps detect such malpractice. We also demonstrate that the questionable publications had weaker disruptiveness and influence than their counterparts, indicating the negative effect of suspicious publishers on academia. The findings provide a basis for actionable policy making against questionable publications.
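The distinction drawn above, between journal-level self-citations (which indexing services monitor) and publisher-level self-citations (citations from sibling journals of the same publisher), can be illustrated with a minimal sketch. The function name and the toy data are assumptions for illustration; the actual hybrid metric in the paper is not specified in the abstract.

```python
from collections import defaultdict

def self_citation_shares(citations, journal_publisher):
    """For each cited journal, compute the share of incoming citations
    that are (a) journal self-citations (citing journal == cited journal)
    and (b) publisher self-citations (different journal, same publisher).

    citations: iterable of (citing_journal, cited_journal) pairs.
    journal_publisher: dict mapping journal -> publisher.
    Returns {journal: (journal_self_share, publisher_self_share)}.
    """
    totals = defaultdict(int)
    j_self = defaultdict(int)
    p_self = defaultdict(int)
    for citing, cited in citations:
        totals[cited] += 1
        if citing == cited:
            j_self[cited] += 1
        elif journal_publisher.get(citing) == journal_publisher.get(cited):
            p_self[cited] += 1
    return {j: (j_self[j] / totals[j], p_self[j] / totals[j])
            for j in totals}

# Hypothetical example: journals A and B share publisher P1; C belongs to P2.
cites = [("A", "A"), ("B", "A"), ("C", "A"), ("B", "B")]
pubs = {"A": "P1", "B": "P1", "C": "P2"}
print(self_citation_shares(cites, pubs))
```

A journal with a modest journal-self-citation share but a high publisher-self-citation share would look clean to journal-level metrics yet stand out under a metric that accounts for both levels, which is the pattern the study describes.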
Many altmetric studies analyze which papers were mentioned, and how often, in specific altmetrics sources. To study the potential policy relevance of tweets from another perspective, we investigate which tweets were cited in papers. If many tweets were cited in publications, this might demonstrate that tweets have substantial and useful content. Overall, a rather low number of tweets (n=5,506) were cited, by fewer than 3,000 papers. Most tweets do not seem to have been cited because of any cognitive influence on the studies; rather, they were study objects. Most of the papers citing tweets are from the subject areas Social Sciences, Arts and Humanities, and Computer Sciences. Most of the papers cited only one tweet, though a single paper was found that cited up to 55 tweets. This research-in-progress does not support a high policy relevance of tweets. However, a content analysis of the tweets and/or papers might lead to a more detailed conclusion.