
Creation and use of Citations in the ADS

Added by Alberto Accomazzi
Publication date: 2006
Language: English





With over 20 million records, the ADS citation database is regularly used by researchers and librarians to measure the scientific impact of individuals, groups, and institutions. In addition to the traditional sources of citations, the ADS has recently added references extracted from the arXiv e-prints on a nightly basis. We review the procedures used to harvest and identify the reference data used in the creation of citations, and the policies and procedures that we follow to avoid double-counting and to eliminate contributions which may not be scholarly in nature. Finally, we describe how users and institutions can easily obtain quantitative citation data from the ADS, both interactively and via web-based programming tools. The ADS is available at http://ads.harvard.edu.
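The abstract notes that citation data can be retrieved programmatically. As a hedged illustration, here is a minimal sketch against the current public ADS search API (api.adsabs.harvard.edu), which postdates this 2006 paper and requires a personal API token; the bibcode and token below are placeholders, not values from the paper.

```python
# Minimal sketch: fetching a record's citation count from the modern ADS
# search API. The endpoint and field names reflect the current public API;
# ADS_TOKEN and the example bibcode are placeholders.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.adsabs.harvard.edu/v1/search/query"

def build_citation_query(bibcode: str) -> str:
    """Build a search URL asking for a record's citation count."""
    params = {"q": f"bibcode:{bibcode}", "fl": "bibcode,citation_count"}
    return API_URL + "?" + urllib.parse.urlencode(params)

def fetch_citation_count(bibcode: str, token: str) -> int:
    """Issue the request (requires a personal ADS API token)."""
    req = urllib.request.Request(
        build_citation_query(bibcode),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        doc = json.load(resp)["response"]["docs"][0]
        return doc["citation_count"]
```

Interactively, the same numbers are shown on each record's abstract page; the sketch above is the "web-based programming tools" route.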

Related research

It has been shown (S. Lawrence, 2001, Nature, 411, 521) that journal articles which have been posted without charge on the internet are more heavily cited than those which have not been. Using data from the NASA Astrophysics Data System (ads.harvard.edu) and from the ArXiv e-print archive at Cornell University (arXiv.org) we examine the causes of this effect.
Quantifying the impact of scientific papers objectively is crucial for research output assessment, which in turn affects institution and country rankings, research funding allocation, academic recruitment, and national and international scientific priorities. Because most assessment schemes based on publication citations can potentially be manipulated through negative citations, in this study we explore Conflict of Interest (COI) relationships to discover negative citations and weaken the associated citation strength. We develop PANDORA (Positive And Negative COI-Distinguished Objective Rank Algorithm), which captures positive and negative COI relationships, together with positive and negative suspected COI relationships. To alleviate the influence of negative COI relationships, the number of collaborations, the collaboration time span, the number of citations, and the citation time span are used to determine citing strength; positive COI relationships are treated as normal citation relationships. We then calculate the impact of scholarly papers with the PageRank and HITS algorithms, on top of a credit allocation algorithm used to assess the impact of institutions fairly and objectively. Experiments conducted on the American Physical Society (APS) publication dataset demonstrate that our method significantly outperforms current solutions in Recommendation Intensity of list R at top-K and Spearman's rank correlation coefficient at top-K.
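The COI weighting is specific to PANDORA and not reproduced in the abstract, but the base ranking step it builds on is standard PageRank over the citation graph. A minimal sketch of that step on a toy graph, assuming a plain dict-of-lists representation, might look like this:

```python
# Minimal sketch: plain PageRank over a citation graph, the base ranking
# step that methods like PANDORA reweight. Edge weights (COI strength)
# are omitted; every citation counts equally here.
def pagerank(citations, damping=0.85, iters=100):
    """citations: dict mapping each paper to the list of papers it cites."""
    nodes = set(citations) | {p for refs in citations.values() for p in refs}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v in nodes:
            cited = citations.get(v, [])
            if cited:
                # distribute this paper's rank over its references
                share = damping * rank[v] / len(cited)
                for c in cited:
                    new[c] += share
            else:
                # dangling paper (cites nothing): spread its rank uniformly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank
```

A COI-aware variant would replace the uniform `share` with per-edge weights derived from the collaboration and citation statistics the abstract lists.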
Citation prediction of scholarly papers is of great significance in guiding funding allocations, recruitment decisions, and rewards. However, little is known about how citation patterns evolve over time. By exploring the inherent involution property in scholarly paper citation, we introduce the Paper Potential Index (PPI) model, based on four factors: the inherent quality of a scholarly paper, the decay of its impact over time, early citations, and early citers' impact. In addition, by analyzing the factors that drive citation growth, we propose a multi-feature model for impact prediction. Experimental results demonstrate that both models improve the accuracy of scholarly paper citation prediction. Compared to the multi-feature model, the PPI model yields superior predictive performance in terms of range-normalized RMSE, and it better interprets changes in citation without the need for parameter adjustment. The multi-feature model, in turn, yields better predictions in terms of Mean Absolute Percentage Error and Accuracy, but its predictive performance depends more on parameter tuning.
Academic papers have been the protagonists in disseminating expertise. Naturally, citation pattern analysis is an efficient and essential way of investigating the knowledge structure of science and technology. For decades, it has been observed that the citation of scientific literature follows a heterogeneous and heavy-tailed distribution, and many studies suggest a power-law distribution, a log-normal distribution, or related distributions. However, many of these studies are limited to small-scale approaches, so it is hard to generalize from them. To overcome this problem, we investigate 21 years of citation evolution through a systematic analysis of the entire citation history of 42,423,644 scientific publications appearing from 1996 to 2016 and contained in SCOPUS. We tested six candidate distributions at three distinct levels of the Scimago Journal & Country Rank (SJR) classification scheme. First, we observe that the raw number of annual citation acquisitions tends to follow a log-normal distribution in all disciplines, except for the first year after publication. We also find significant disparity in the yearly acquired citation numbers among journals, which suggests that it is essential to remove the citation surplus inherited from the prestige of the journal. Our simple method for separating the citation preference of an individual article from the inherited citation of the journal reveals an unexpected regularity in the normalized annual acquisition of citations across the entire field of science. Specifically, the normalized annual citation acquisitions follow power-law probability distributions with an exponential cut-off, with exponents around 2.3, regardless of publication and citation year. Our results imply that journal reputation has a substantial long-term impact on citations.
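The log-normal fit described above can be sketched as the maximum-likelihood estimate on log-transformed counts: a log-normal's parameters are just the mean and standard deviation of the logs. The data below are synthetic, standing in for SCOPUS citation counts; a real analysis would add goodness-of-fit tests and compare the six candidate distributions.

```python
# Minimal sketch: MLE fit of a log-normal to (synthetic) annual citation
# counts. For a log-normal, the MLE of (mu, sigma) is the sample mean and
# standard deviation of the log-transformed positive counts.
import math
import random
import statistics

def fit_lognormal(counts):
    """Return (mu, sigma) fitted to the positive entries of counts."""
    logs = [math.log(c) for c in counts if c > 0]
    return statistics.fmean(logs), statistics.pstdev(logs)

# Synthetic stand-in data drawn from a known log-normal(2.0, 0.5),
# so the fit can be checked against the true parameters.
random.seed(0)
sample = [random.lognormvariate(2.0, 0.5) for _ in range(50_000)]
mu, sigma = fit_lognormal(sample)
```

Normalizing each article's counts by its journal's yearly citation level, as the abstract describes, would then be applied before testing the power-law-with-cutoff form.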
Wikipedia's content is based on reliable, published sources. To date, relatively little is known about which sources Wikipedia relies on, in part because extracting citations and identifying cited sources is challenging. To close this gap, we release Wikipedia Citations, a comprehensive dataset of citations extracted from Wikipedia. A total of 29.3M citations were extracted from 6.1M English Wikipedia articles as of May 2020, and classified as referring to books, journal articles, or Web content. We were thus able to extract 4.0M citations to scholarly publications with known identifiers -- including DOI, PMC, PMID, and ISBN -- and to equip a further 261K citations with DOIs from Crossref. As a result, we find that 6.7% of Wikipedia articles cite at least one journal article with an associated DOI, and that Wikipedia cites just 2% of all articles with a DOI currently indexed in the Web of Science. We release our code to allow the community to build upon our work and update the dataset in the future.
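As an illustration of the identifier matching such a dataset relies on, here is a toy DOI-matching sketch. The actual pipeline parses structured citation templates and consults Crossref, so a bare regex like this is only the simplest piece; the pattern is a slightly simplified form of the match Crossref recommends for modern DOIs.

```python
# Minimal sketch: spotting DOIs in free citation text. Modern Crossref
# DOIs have the form 10.NNNN/suffix; this simplified pattern will also
# greedily absorb trailing punctuation like a sentence-ending period,
# which a production extractor would trim.
import re

DOI_RE = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(text: str) -> list[str]:
    """Return all DOI-shaped substrings found in text."""
    return DOI_RE.findall(text)
```

Classifying a citation as a book, journal article, or Web content then hinges on which identifier (ISBN, DOI/PMID/PMC, or URL) the template carries.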