
Qualitative Judgement of Research Impact: Domain Taxonomy as a Fundamental Framework for Judgement of the Quality of Research

Added by Fionn Murtagh
Publication date: 2016
Language: English





Metric-based evaluation of research impact has attracted considerable interest in recent times. Although the public at large and administrative bodies are much taken with the idea, scientists and other researchers are far more cautious, insisting that metrics are only an auxiliary instrument to qualitative peer-based judgement. The goal of this article is to propose using the well-established construct of a domain taxonomy as a tool for directly assessing the scope and quality of research. We first show how taxonomies can be used to analyse the scope and perspectives of a set of research projects or papers. We then define the rank of a research team or individual researcher by those nodes in the hierarchy that have been created or significantly transformed by the researcher's results. An experimental test of the approach in the data analysis domain is described. Although the concept of a taxonomy may seem too simplistic to capture all the richness of a research domain, its changes and use can be made transparent and subject to open discussion.
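
To make the ranking idea concrete, here is a minimal sketch in Python of representing a domain taxonomy as a tree and ranking a researcher by the taxonomy nodes their results created or significantly transformed; the class names and the depth-based scoring rule are illustrative assumptions, not definitions from the paper.

```python
# Minimal sketch: rank a researcher by the taxonomy nodes their work
# created or significantly transformed. The names and the depth-based
# scoring rule are illustrative assumptions, not the paper's definitions.

class TaxonomyNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def depth(self):
        # Root has depth 0; shallower nodes represent broader topics.
        return 0 if self.parent is None else 1 + self.parent.depth()

def researcher_rank(affected_nodes):
    """Rank by the broadest (shallowest) node the researcher changed:
    transforming a high-level node outranks adding a deep leaf."""
    return min(node.depth() for node in affected_nodes)

# Toy taxonomy for the data analysis domain.
root = TaxonomyNode("Data Analysis")
clustering = TaxonomyNode("Clustering", root)
kmeans = TaxonomyNode("K-Means Variants", clustering)

# A researcher whose results significantly transformed "Clustering"
# ranks higher (smaller number) than one who added a k-means leaf.
print(researcher_rank([clustering]))  # 1
print(researcher_rank([kmeans]))      # 2
```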



Related research

The Open Research Knowledge Graph (ORKG) provides machine-actionable access to scholarly literature that is habitually written in prose. Following the FAIR principles, the ORKG makes traditional, human-coded knowledge findable, accessible, interoperable, and reusable in a structured manner, in accordance with the Linked Open Data paradigm. At the moment, papers in the ORKG are described manually, but in the long run capturing the semantic depth of the literature at scale will need automation. Operational Research is a suitable test case for this vision because the mathematical field and, hence, its publication habits are highly structured: a mundane problem is formulated as a mathematical model, solved or approximated numerically, and evaluated systematically. We study the existing literature on the Assembly Line Balancing Problem and derive a semantic description in accordance with the ORKG. Finally, selected papers are ingested to test the semantic description and refine it further.
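
To illustrate what such a machine-actionable description could look like, the sketch below captures an Assembly Line Balancing Problem paper as structured data; the field names are assumptions for exposition, not the ORKG's actual schema.

```python
# Hypothetical structured description of an Assembly Line Balancing
# Problem paper. Field names are illustrative, not the ORKG schema.
from dataclasses import dataclass, field

@dataclass
class ALBPPaperDescription:
    title: str
    problem_variant: str           # e.g. "SALBP-1" (fixed cycle time)
    objective: str                 # what the model minimizes or maximizes
    solution_method: str           # exact, heuristic, metaheuristic...
    evaluation: str                # how the method was tested
    constraints: list = field(default_factory=list)

paper = ALBPPaperDescription(
    title="A branch-and-bound method for SALBP-1",
    problem_variant="SALBP-1",
    objective="minimize number of stations",
    solution_method="branch and bound (exact)",
    evaluation="standard SALBP benchmark instances",
    constraints=["precedence relations", "fixed cycle time"],
)
print(paper.problem_variant, "-", paper.objective)
```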
Understanding emerging areas of a multidisciplinary research field is crucial for researchers, policymakers, and other stakeholders. For them, a knowledge structure based on longitudinal bibliographic data can be an effective instrument. But with the vast amount of information available online, it is often hard to derive such a knowledge structure from the data. In this paper, we present a novel approach for retrieving online bibliographic data and propose a framework for exploring knowledge structure. We also present several longitudinal analyses to interpret and visualize the last 20 years of published obesity research data.
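
As a minimal sketch of retrieving longitudinal bibliographic data, the example below queries the public Crossref REST API and counts publications per year; the query term and the fields read here are illustrative, and the paper's own retrieval approach may differ.

```python
# Minimal sketch of retrieving longitudinal bibliographic data via the
# public Crossref REST API. The query term is illustrative; the paper's
# own retrieval approach may differ.
from collections import Counter
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={"query": "obesity", "rows": 100},
    timeout=30,
)
items = resp.json()["message"]["items"]

# Count publications per year as a basic longitudinal view.
years = Counter()
for item in items:
    date_parts = item.get("issued", {}).get("date-parts", [[None]])
    year = date_parts[0][0]
    if year is not None:
        years[year] += 1

for year in sorted(years):
    print(year, years[year])
```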
Nicholas Botzer, Shawn Gu (2021)
Moral outrage has become synonymous with social media in recent years. However, the preponderance of academic analysis of social media websites has focused on hate speech and misinformation. This paper focuses on moral judgements rendered on social media, captured from the subreddit /r/AmITheAsshole on Reddit. Using the labels associated with each judgement, we train a classifier that takes a comment and determines whether it judges the user who made the original post to have positive or negative moral valence. We then use this classifier to investigate an assortment of website traits surrounding moral judgements in ten other subreddits, including where users with negative moral valence like to post and their posting patterns. Our findings also indicate that posts judged in a positive manner receive higher scores.
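
The sketch below shows the kind of judgement classifier described above, mapping a comment to positive or negative moral valence with a TF-IDF and logistic-regression pipeline; the toy data and model choice are assumptions, and the paper's classifier may differ.

```python
# Minimal sketch of a moral-valence judgement classifier. The toy data
# and model choice are assumptions, not the paper's actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy judgement comments labeled with the valence they assign to the
# original poster (1 = positive, 0 = negative).
comments = [
    "NTA, you handled that really well",
    "YTA, that was a selfish thing to do",
    "Not the asshole, they crossed the line first",
    "You're the asshole here, apologize to her",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(comments, labels)

print(clf.predict(["NTA, you did nothing wrong"]))  # expected: [1]
```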
Researchers are often evaluated by citation-based metrics. Such metrics can inform hiring, promotion, and funding decisions. Concerns have been expressed that popular citation-based metrics incentivize researchers to maximize the production of publications. Such incentives may not be optimal for scientific progress. Here we present a citation-based measure that rewards both productivity and taste: the researcher's ability to focus on impactful contributions. The presented measure, CAP, balances the impact of publications and their quantity, thus incentivizing researchers to consider whether a publication is a useful addition to the literature. CAP is simple, interpretable, and parameter-free. We analyze the characteristics of CAP for highly cited researchers in biology, computer science, economics, and physics, using a corpus of millions of publications and hundreds of millions of citations with yearly temporal granularity. CAP produces qualitatively plausible outcomes and has a number of advantages over prior metrics. Results can be explored at https://cap-measure.org/
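
The abstract does not spell out the formula, but one parameter-free way to balance quantity against per-paper impact, shown purely as an illustration and not necessarily the paper's actual CAP definition, is to count the publications whose citation count reaches the author's total number of publications:

```python
# Illustrative, parameter-free productivity/impact balance. This is an
# assumption for exposition, not necessarily the paper's CAP formula:
# count papers whose citations reach the author's publication count,
# so each extra publication raises the bar every paper must clear.
def cap_like_score(citation_counts):
    n = len(citation_counts)  # total publications
    return sum(1 for c in citation_counts if c >= n)

focused = [120, 95, 40]             # few, impactful papers
prolific = [10, 6, 5, 4, 3, 2, 2]   # many, lightly cited papers
print(cap_like_score(focused))   # 3
print(cap_like_score(prolific))  # 1
```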
We revisit our recent study [Predicting results of the Research Excellence Framework using departmental h-index, Scientometrics, 2014, 1-16; arXiv:1411.1996] in which we attempted to predict outcomes of the UK's Research Excellence Framework (REF 2014) using the so-called departmental h-index. Here we report that our predictions failed to anticipate with any accuracy either overall REF outcomes or the movements of individual institutions in the rankings relative to their positions in the previous Research Assessment Exercise (RAE 2008).
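
For reference, the departmental h-index is the standard h-index computed over a department's pooled publications: the largest h such that h papers each have at least h citations. A minimal sketch (the function name is ours):

```python
# Standard h-index over a department's pooled citation counts: the
# largest h such that h papers each have at least h citations.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Pooled citation counts for all papers submitted by a department.
print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers with >= 4 citations
```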
