
An evaluation of Bradfordizing effects

 Added by Philipp Mayr
 Publication date 2008
 Language English
 Authors Philipp Mayr





The purpose of this paper is to apply and evaluate the bibliometric method Bradfordizing for information retrieval (IR) experiments. Bradfordizing is used for generating core document sets for subject-specific questions and for reordering result sets from distributed searches. The method is applied and tested in a controlled scenario of scientific literature databases from the social and political sciences, economics, psychology and medical science (SOLIS, SoLit, USB Koeln Opac, CSA Sociological Abstracts, World Affairs Online, Psyndex and Medline) and 164 standardized topics. An evaluation of the method and its effects is carried out in two laboratory-based information retrieval experiments (CLEF and KoMoHe) using a controlled document corpus and human relevance assessments. The results show that Bradfordizing is a very robust method for re-ranking the main document types (journal articles and monographs) in today's digital libraries (DL). The IR tests show that relevance distributions after re-ranking improve at a significant level if articles in the core are compared with articles in the succeeding zones. The items in the core are significantly more often assessed as relevant than items in zone 2 (z2) or zone 3 (z3). The improvements between the zones are statistically significant based on the Wilcoxon signed-rank test and the paired t-test.
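The core of Bradfordizing is a simple zone partition: journals are ranked by how many documents they contribute to a result set, the ranked list is cut into three zones holding roughly equal numbers of documents, and documents from core-zone journals are moved to the top of the ranking. The sketch below is a minimal illustration of that idea, assuming each record carries a journal identifier; the paper's actual implementation details (handling of monographs, tie-breaking, exact zone boundaries) are not specified in the abstract.

```python
from collections import Counter

def bradfordize(results):
    """Re-rank a result set by Bradfordizing.

    `results` is a list of records, each with a 'journal' key.
    Journals are sorted by how many hits they contribute; the ranked
    journal list is cut into three zones holding roughly equal numbers
    of documents, and documents from core-zone journals are moved to
    the top, followed by zone 2 and zone 3.
    """
    counts = Counter(r["journal"] for r in results)
    # Journals ordered by productivity (most hits first).
    ranked_journals = [j for j, _ in counts.most_common()]

    zone_of = {}
    third = len(results) / 3
    cumulative, zone = 0, 1
    for journal in ranked_journals:
        zone_of[journal] = zone
        cumulative += counts[journal]
        if cumulative >= zone * third and zone < 3:
            zone += 1

    # Stable sort: the original order is kept inside each zone.
    return sorted(results, key=lambda r: zone_of[r["journal"]])

docs = [{"id": i, "journal": j} for i, j in enumerate(
    ["A", "A", "A", "A", "B", "B", "C", "D", "E", "F"])]
for d in bradfordize(docs):
    print(d["id"], d["journal"], sep="\t")
```

Per-topic relevance counts for the three zones could then be compared with scipy.stats.wilcoxon and scipy.stats.ttest_rel, mirroring the significance tests named in the abstract.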




Read More

Cultural-scale models of full text documents are prone to over-interpretation by researchers making unintentionally strong socio-linguistic claims (Pechenick et al., 2015) without recognizing that even large digital libraries are merely samples of all the books ever produced. In this study, we test the sensitivity of the topic models to the sampling process by taking random samples of books in the Hathi Trust Digital Library from different areas of the Library of Congress Classification Outline. For each classification area, we train several topic models over the entire class with different random seeds, generating a set of spanning models. Then, we train topic models on random samples of books from the classification area, generating a set of sample models. Finally, we perform a topic alignment between each pair of models by computing the Jensen-Shannon distance (JSD) between the word probability distributions for each topic. We take two measures on each model alignment: alignment distance and topic overlap. We find that sample models with a large sample size typically have an alignment distance that falls in the range of the alignment distance between spanning models. Unsurprisingly, as sample size increases, alignment distance decreases. We also find that the topic overlap increases as sample size increases. However, the decomposition of these measures by sample size differs by number of topics and by classification area. We speculate that these measures could be used to find classes which have a common canon discussed among all books in the area, as shown by high topic overlap and low alignment distance even in small sample sizes.
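The alignment step described above pairs topics from two models by the Jensen-Shannon distance between their word distributions. The sketch below is a hypothetical illustration only: it builds the full topic-to-topic distance matrix, finds a minimum-cost one-to-one matching, and reports the mean matched distance (alignment distance) together with a thresholded notion of topic overlap. The overlap threshold is an assumption; the paper's exact overlap definition is not given in the abstract.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.optimize import linear_sum_assignment

def align_topics(model_a, model_b, overlap_threshold=0.5):
    """Align two topic models given as (n_topics, vocab_size) arrays of
    word probabilities over a shared vocabulary.

    Returns the mean Jensen-Shannon distance over an optimal one-to-one
    topic matching, and the fraction of matched pairs whose distance
    falls below `overlap_threshold` (a stand-in for "topic overlap").
    """
    n_a, n_b = model_a.shape[0], model_b.shape[0]
    dist = np.zeros((n_a, n_b))
    for i in range(n_a):
        for j in range(n_b):
            dist[i, j] = jensenshannon(model_a[i], model_b[j], base=2)

    rows, cols = linear_sum_assignment(dist)   # minimum-cost matching
    matched = dist[rows, cols]
    alignment_distance = matched.mean()
    topic_overlap = (matched < overlap_threshold).mean()
    return alignment_distance, topic_overlap

# Toy example: two 3-topic models over a 5-word vocabulary.
rng = np.random.default_rng(0)
a = rng.dirichlet(np.ones(5), size=3)
b = rng.dirichlet(np.ones(5), size=3)
print(align_topics(a, b))
```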
Quantifying the impact of scientific papers objectively is crucial for research output assessment, which in turn affects institution and country rankings, research funding allocation, academic recruitment and national/international scientific priorities. Since most assessment schemes based on publication citations can potentially be manipulated through negative citations, in this study we explore Conflict of Interest (COI) relationships to discover negative citations and weaken the associated citation strength. We develop PANDORA (Positive And Negative COI-Distinguished Objective Rank Algorithm), which captures positive and negative COI relationships as well as positive and negative suspected COI relationships. To alleviate the influence of negative COI relationships, collaboration counts, collaboration time spans, citation counts and citation time spans are employed to determine the citing strength, while positive COI relationships are treated as normal citation relationships. We then calculate the impact of scholarly papers with the PageRank and HITS algorithms, combined with a credit allocation algorithm that assesses the impact of institutions fairly and objectively. Experiments are conducted on the American Physical Society (APS) publication dataset, and the results demonstrate that our method significantly outperforms current solutions in Recommendation Intensity of list R at top-K and Spearman's rank correlation coefficient at top-K.
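The ranking step described above runs PageRank (and HITS) over a citation graph whose edges are weakened when the citing and cited papers' authors stand in a negative COI relationship. The sketch below shows only that skeleton, assuming a precomputed penalty per citation pair; PANDORA derives the penalty from collaboration counts, collaboration time spans and citation time spans, which are not reproduced here, and the institutional credit allocation is omitted.

```python
import networkx as nx

def coi_weighted_pagerank(citations, coi_penalty):
    """Rank papers with PageRank over a citation graph whose edge weights
    are reduced for citations flagged as negative conflict-of-interest.

    `citations` is an iterable of (citing, cited) paper ids;
    `coi_penalty` maps a (citing, cited) pair to a weight in (0, 1]
    (1.0 = normal citation, smaller = suspected negative COI).
    """
    g = nx.DiGraph()
    for citing, cited in citations:
        w = coi_penalty.get((citing, cited), 1.0)
        g.add_edge(citing, cited, weight=w)
    # Weighted PageRank: citation credit flows along down-weighted edges.
    return nx.pagerank(g, alpha=0.85, weight="weight")

edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p4", "p3")]
penalties = {("p2", "p3"): 0.2}   # e.g. frequent co-authors citing each other
print(coi_weighted_pagerank(edges, penalties))
```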
CAS Journal Ranking, a ranking system of journals based on the bibliometric indicator of citation impact, has been widely used in meso- and macro-scale research evaluation in China since its first release in 2004. The ranking covers journals contained in Clarivate's Journal Citation Reports (JCR). This paper introduces the upgraded 2019 version of the CAS journal ranking. Aiming at the limitations of the indicator and classification system utilized in earlier editions, as well as the problem of journals' interdisciplinarity or multidisciplinarity, we discuss the improvements in the 2019 upgraded version of the CAS journal ranking: (1) the CWTS paper-level classification system, a more fine-grained system, has been utilized, (2) a new indicator, the Field Normalized Citation Success Index (FNCSI), which is robust not only against extremely highly cited publications but also against wrongly assigned document types, has been used, and (3) the indicator is calculated at the paper level. In addition, this paper presents a small part of the ranking results and an interpretation of the robustness of the new FNCSI indicator. By exploring more sophisticated methods and indicators, like the CWTS paper-level classification system and the new FNCSI indicator, CAS Journal Ranking will continue its original purpose of responsible research evaluation.
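The FNCSI formula is not given in the abstract; citation-success-style indicators are usually described as the probability that a randomly drawn paper from one journal outperforms a randomly drawn paper from another in citations. The sketch below illustrates only that generic pairwise idea, with ties counted as half a win; the field normalization and document-type correction that distinguish FNCSI are omitted because they are not specified here.

```python
import numpy as np

def citation_success_index(cites_a, cites_b):
    """Probability that a random paper from journal A collects more
    citations than a random paper from journal B (ties count as 0.5).

    Generic citation-success comparison only; the FNCSI used by the
    CAS ranking adds field normalization not reproduced here.
    """
    a = np.asarray(cites_a)[:, None]   # shape (n_a, 1)
    b = np.asarray(cites_b)[None, :]   # shape (1, n_b)
    return (a > b).mean() + 0.5 * (a == b).mean()

print(citation_success_index([10, 3, 0, 25], [2, 2, 1, 8]))
```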
Author Name Disambiguation (AND) is the task of resolving which author mentions in a bibliographic database refer to the same real-world person, and is a critical ingredient of digital library applications such as search and citation analysis. While many AND algorithms have been proposed, comparing them is difficult because they often employ distinct features and are evaluated on different datasets. In response to this challenge, we present S2AND, a unified benchmark dataset for AND on scholarly papers, as well as an open-source reference model implementation. Our dataset harmonizes eight disparate AND datasets into a uniform format, with a single rich feature set drawn from the Semantic Scholar (S2) database. Our evaluation suite for S2AND reports performance split by facets like publication year and number of papers, allowing researchers to track both global performance and measures of fairness across facet values. Our experiments show that because previous datasets tend to cover idiosyncratic and biased slices of the literature, algorithms trained to perform well on one of them may generalize poorly to others. By contrast, we show how training on a union of datasets in S2AND results in more robust models that perform well even on datasets unseen in training. The resulting AND model also substantially improves over the production algorithm in S2, reducing error by over 50% in terms of $B^3$ F1. We release our unified dataset, model code, trained models, and evaluation suite to the research community. https://github.com/allenai/S2AND/
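S2AND reports clustering quality as $B^3$ F1. The sketch below is a minimal implementation of that standard metric, not the S2AND reference code: per-mention precision and recall are computed from the overlap between a mention's predicted and gold clusters and then averaged.

```python
from collections import defaultdict

def b_cubed(pred_clusters, gold_clusters):
    """B^3 precision, recall and F1 for a clustering of author mentions.

    `pred_clusters` and `gold_clusters` map mention id -> cluster label.
    Per mention: precision is the share of its predicted cluster that
    shares its gold cluster; recall is the share of its gold cluster
    recovered in its predicted cluster. Both are averaged over mentions.
    """
    by_pred, by_gold = defaultdict(set), defaultdict(set)
    for m, c in pred_clusters.items():
        by_pred[c].add(m)
    for m, c in gold_clusters.items():
        by_gold[c].add(m)

    precisions, recalls = [], []
    for m in pred_clusters:
        pred = by_pred[pred_clusters[m]]
        gold = by_gold[gold_clusters[m]]
        overlap = len(pred & gold)
        precisions.append(overlap / len(pred))
        recalls.append(overlap / len(gold))

    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    return p, r, 2 * p * r / (p + r)

pred = {"m1": "a", "m2": "a", "m3": "b"}
gold = {"m1": "x", "m2": "y", "m3": "y"}
print(b_cubed(pred, gold))
```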
The surge in the number of books published makes manual evaluation methods inefficient for evaluating books. The use of book citations and alternative evaluation metrics can assist manual evaluation and reduce its cost. However, most existing evaluation research is based on a single evaluation source with coarse-grained analysis, which may produce incomprehensive or one-sided evaluation results of book impact. Meanwhile, relying on a single resource for book assessment carries the risk that evaluation results cannot be obtained at all due to a lack of evaluation data, especially for newly published books. Hence, this paper measures book impact based on an evaluation system constructed by integrating multiple evaluation sources. Specifically, we conducted finer-grained mining on multiple evaluation sources, including books' internal and external evaluation resources. Various technologies (e.g. topic extraction, sentiment analysis, text classification) were used to extract the corresponding evaluation metrics from the internal and external evaluation resources. Then, expert evaluation combined with the analytic hierarchy process was used to integrate the evaluation metrics and construct a book impact evaluation system. Finally, the reliability of the evaluation system was verified by comparison with the results of expert evaluation, and detailed, diversified evaluation results were obtained. The experimental results reveal that different evaluation resources measure books' impact from different dimensions, and that the integration of multiple evaluation sources can assess books more comprehensively. Meanwhile, the book impact evaluation system can provide personalized evaluation results according to users' evaluation purposes. In addition, disciplinary differences should be considered when assessing books' impact.
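The evaluation system combines metrics from internal and external resources using expert judgment and the analytic hierarchy process (AHP). The sketch below shows the standard AHP weighting step under assumed, hypothetical pairwise judgments: weights come from the principal eigenvector of the comparison matrix, and a consistency ratio flags incoherent judgments. The actual metrics and expert comparisons used in the paper are not reproduced.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive metric weights from an AHP pairwise comparison matrix.

    `pairwise[i][j]` says how much more important metric i is than
    metric j (Saaty's 1-9 scale, with pairwise[j][i] = 1/pairwise[i][j]).
    Weights are the normalized principal eigenvector; the consistency
    ratio flags incoherent judgments (commonly required to be < 0.1).
    """
    a = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    n = a.shape[0]
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's RI values
    cr = ci / random_index.get(n, 1.0)
    return weights, cr

# Hypothetical metrics: citations, review sentiment, library holdings.
m = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(m)
print(w, cr)
```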