
The Effect of Use and Access on Citations

Added by Michael J. Kurtz
Publication date: 2005
Language: English





It has been shown (S. Lawrence, 2001, Nature, 411, 521) that journal articles which have been posted without charge on the internet are more heavily cited than those which have not been. Using data from the NASA Astrophysics Data System (ads.harvard.edu) and from the ArXiv e-print archive at Cornell University (arXiv.org), we examine the causes of this effect.



Related research

With over 20 million records, the ADS citation database is regularly used by researchers and librarians to measure the scientific impact of individuals, groups, and institutions. In addition to the traditional sources of citations, the ADS has recently added references extracted from the arXiv e-prints on a nightly basis. We review the procedures used to harvest and identify the reference data used in the creation of citations, as well as the policies and procedures that we follow to avoid double-counting and to eliminate contributions which may not be scholarly in nature. Finally, we describe how users and institutions can easily obtain quantitative citation data from the ADS, both interactively and via web-based programming tools. The ADS is available at http://ads.harvard.edu.
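The abstract notes that citation data can be retrieved from the ADS both interactively and through web-based programming tools. As a rough illustration only, the sketch below queries the present-day ADS search API at api.adsabs.harvard.edu for citation counts; the endpoint, the field names, and the ADS_DEV_KEY token variable are assumptions drawn from the current public API documentation, not from the text above.

```python
# Minimal sketch: query citation counts from the ADS search API.
# Assumes the modern endpoint at api.adsabs.harvard.edu and a personal
# API token (ADS_DEV_KEY); neither is described in the abstract itself.
import os
import requests

ADS_API = "https://api.adsabs.harvard.edu/v1/search/query"
TOKEN = os.environ["ADS_DEV_KEY"]  # personal ADS API token

def citation_counts(query, rows=20):
    """Return (bibcode, title, citation_count) tuples for a search query."""
    resp = requests.get(
        ADS_API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"q": query, "fl": "bibcode,title,citation_count", "rows": rows},
    )
    resp.raise_for_status()
    docs = resp.json()["response"]["docs"]
    return [(d["bibcode"], d.get("title", [""])[0], d.get("citation_count", 0))
            for d in docs]

if __name__ == "__main__":
    for bibcode, title, cites in citation_counts('author:"Kurtz, M." year:2005'):
        print(f"{cites:6d}  {bibcode}  {title}")
```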
Academic papers have long been the main vehicle for disseminating expertise. Naturally, paper citation pattern analysis is an efficient and essential way of investigating the knowledge structure of science and technology. For decades, it has been observed that the citation of scientific literature follows a heterogeneous and heavy-tailed distribution, and many studies suggest a power-law, log-normal, or related distribution. However, many studies are limited to small-scale approaches, so their findings are hard to generalize. To overcome this problem, we investigate 21 years of citation evolution through a systematic analysis of the entire citation history of 42,423,644 scientific articles published from 1996 to 2016 and contained in SCOPUS. We tested six candidate distributions for the scientific literature at three distinct levels of the Scimago Journal & Country Rank (SJR) classification scheme. First, we observe that the raw number of annual citation acquisitions tends to follow the log-normal distribution for all disciplines, except for the first year after publication. We also find a significant disparity in the yearly acquired citation numbers among journals, which suggests that it is essential to remove the citation surplus inherited from the prestige of the journal. Our simple method for separating the citation preference of an individual article from the inherited citation of the journal reveals an unexpected regularity in the normalized annual acquisition of citations across the entire field of science. Specifically, the normalized annual citation acquisitions follow power-law probability distributions with an exponential cut-off and exponents around 2.3, regardless of publication and citation year. Our results imply that journal reputation has a substantial long-term impact on citations.
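As an illustration of the distribution fitting described above, the following sketch fits a log-normal distribution to a set of annual citation counts and checks the fit with a Kolmogorov-Smirnov test. The synthetic data and the scipy-based procedure are assumptions for demonstration; they are not the authors' pipeline.

```python
# Illustrative sketch: fit a log-normal distribution to annual citation
# counts, as in the analysis described above. The synthetic data and the
# scipy-based fit are assumptions for demonstration, not the authors' code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "annual citation acquisitions" for a batch of papers.
citations = rng.lognormal(mean=1.5, sigma=1.0, size=10_000)

# Fit a log-normal distribution (location fixed at 0 for count-like data).
shape, loc, scale = stats.lognorm.fit(citations, floc=0)
print(f"log-normal fit: sigma={shape:.3f}, median={scale:.3f}")

# Goodness of fit via a Kolmogorov-Smirnov test against the fitted model.
ks_stat, p_value = stats.kstest(citations, "lognorm", args=(shape, loc, scale))
print(f"KS statistic={ks_stat:.4f}, p-value={p_value:.3f}")
```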
Judit Bar-Ilan (2012)
Traditionally, scholarly impact and visibility have been measured by counting publications and citations in the scholarly literature. However, scholars are increasingly visible on the Web as well, establishing presences in a growing variety of social ecosystems. But how wide and established is this presence, and how do measures of social Web impact relate to their more traditional counterparts? To answer this, we sampled 57 presenters from the 2010 Leiden STI Conference, gathering publication and citation counts as well as data from the presenters' Web footprints. We found Web presence to be widespread and diverse: 84% of scholars had homepages, 70% were on LinkedIn, 23% had public Google Scholar profiles, and 16% were on Twitter. For the sampled scholars' publications, social reference manager bookmarks were compared with Scopus and Web of Science citations; we found that Mendeley covers more than 80% of the sampled articles, and that Mendeley bookmarks are significantly correlated (r=.45) with Scopus citation counts.
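The reported r=.45 between Mendeley bookmarks and Scopus citations is an ordinary correlation between two per-article counts. A minimal sketch of that comparison is shown below; the bookmark and citation counts are placeholders, not data from the study.

```python
# Sketch: correlate per-article Mendeley bookmark counts with Scopus
# citation counts, as in the comparison reported above. The counts below
# are placeholders, not data from the study.
from scipy.stats import pearsonr

mendeley_bookmarks = [12, 5, 33, 8, 0, 21, 7, 15, 2, 40]
scopus_citations   = [30, 4, 55, 10, 1, 25, 12, 18, 3, 61]

r, p = pearsonr(mendeley_bookmarks, scopus_citations)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```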
Citation prediction for scholarly papers is of great significance in guiding funding allocation, recruitment decisions, and rewards. However, little is known about how citation patterns evolve over time. By exploring the inherent involution property of scholarly paper citation, we introduce the Paper Potential Index (PPI) model based on four factors: the inherent quality of the scholarly paper, the decay of its impact over time, early citations, and the impact of early citers. In addition, by analyzing the factors that drive citation growth, we propose a multi-feature model for impact prediction. Experimental results demonstrate that both models improve the accuracy of predicting scholarly paper citations. Compared to the multi-feature model, the PPI model yields superior predictive performance in terms of range-normalized RMSE, and it better interprets changes in citation without the need to adjust parameters. Compared to the PPI model, the multi-feature model predicts better in terms of Mean Absolute Percentage Error and Accuracy; however, its predictive performance is more dependent on parameter adjustment.
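The models above are compared using range-normalized RMSE, Mean Absolute Percentage Error, and Accuracy. The PPI formula itself is not reproduced here, but the first two metrics are standard and can be sketched as follows; the toy arrays are placeholders, not results from the paper.

```python
# Sketch of the two standard error metrics used to compare the models:
# range-normalized RMSE and Mean Absolute Percentage Error (MAPE).
# The toy arrays are placeholders, not results from the paper.
import numpy as np

def range_normalized_rmse(y_true, y_pred):
    """RMSE divided by the range of the true values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent (true values must be nonzero)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

true_citations = [10, 25, 3, 40, 18]
predicted      = [12, 22, 5, 35, 20]
print(f"range-normalized RMSE: {range_normalized_rmse(true_citations, predicted):.3f}")
print(f"MAPE: {mape(true_citations, predicted):.1f}%")
```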
Wikipedia's content is based on reliable, published sources. To date, relatively little is known about which sources Wikipedia relies on, in part because extracting citations and identifying cited sources is challenging. To close this gap, we release Wikipedia Citations, a comprehensive dataset of citations extracted from Wikipedia. A total of 29.3M citations were extracted from 6.1M English Wikipedia articles as of May 2020, and classified as referring to books, journal articles, or Web content. We were thus able to extract 4.0M citations to scholarly publications with known identifiers -- including DOI, PMC, PMID, and ISBN -- and further equip an extra 261K citations with DOIs from Crossref. As a result, we find that 6.7% of Wikipedia articles cite at least one journal article with an associated DOI, and that Wikipedia cites just 2% of all articles with a DOI currently indexed in the Web of Science. We release our code to allow the community to build upon our work and to update the dataset in the future.
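Identifying citations with known identifiers is largely a matter of pattern matching over citation markup. The sketch below pulls DOIs out of a raw citation string using a Crossref-style regular expression; it is an illustration only, and the pipeline released with the Wikipedia Citations dataset is considerably more involved.

```python
# Sketch: extract DOI identifiers from raw Wikipedia citation strings
# using a Crossref-style regular expression. This is an illustration only;
# the released Wikipedia Citations pipeline does considerably more work.
import re

# Pattern matching the vast majority of modern DOIs (per Crossref guidance).
DOI_RE = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(citation_text):
    """Return DOIs found in a citation string, with trailing punctuation trimmed."""
    return [m.group(0).rstrip(".,;") for m in DOI_RE.finditer(citation_text)]

example = ("{{cite journal |title=Example |journal=Nature "
           "|doi=10.1038/35079151 |year=2001}}")
print(extract_dois(example))  # ['10.1038/35079151']
```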