
Peer Reviewing Revisited: Assessing Research with Interlinked Semantic Comments

Added by Tobias Kuhn
Publication date: 2019
Language: English





Scientific publishing seems to be at a turning point. Its paradigm has stayed basically the same for 300 years but is now challenged by the increasing volume of articles, which makes it very hard for scientists to stay up to date in their respective fields. In fact, many have pointed out serious flaws in current scientific publishing practices, including the lack of accuracy and efficiency of the reviewing process. To address some of these problems, we apply here the general principles of the Web and the Semantic Web to scientific publishing, focusing on the reviewing process. We want to determine whether a fine-grained model of the scientific publishing workflow can help us make the reviewing process better organized and more accurate, by ensuring that review comments are created with formal links and semantics from the start. Our contributions include a novel model called Linkflows that allows for such detailed and semantically rich representations of reviews and the reviewing processes. We evaluate our approach on a manually curated dataset from several recent Computer Science journals and conferences that come with open peer reviews. We gathered ground-truth data by contacting the original reviewers and asking them to categorize their own review comments according to our model. Comparing this ground truth to answers provided by model experts, peers, and automated techniques confirms that our approach of formally capturing the reviewers' intentions from the start prevents substantial discrepancies compared to when this information is later extracted from the plain-text comments. In general, our analysis shows that our model is well understood and easy to apply, and it reveals the semantic properties of such review comments.
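The abstract does not spell out the Linkflows vocabulary, so the Python sketch below is only illustrative: the field names (target, aspect, positivity, action_needed) are assumed stand-ins for whatever dimensions the actual model defines. It conveys the core idea of a review comment that carries formal links and semantics from the moment it is written, rather than being reconstructed later from plain text.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical dimensions of a review comment; the real Linkflows
    # model may define different categories -- these are illustrative only.
    @dataclass
    class ReviewComment:
        text: str                      # the reviewer's comment itself
        target: str                    # identifier of the paragraph/section commented on
        aspect: str                    # e.g. "content" or "presentation" (assumed labels)
        positivity: str                # e.g. "negative", "neutral", "positive" (assumed)
        action_needed: bool            # does the comment require a change?
        links: List[str] = field(default_factory=list)  # formal links to other comments or versions

    # A comment created with links and semantics from the start.
    comment = ReviewComment(
        text="The evaluation section omits the inter-annotator agreement.",
        target="paper/v1#section-5-para-2",
        aspect="content",
        positivity="negative",
        action_needed=True,
        links=["review/r2#comment-7"],
    )
    print(comment)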


Related research

The vast and growing number of publications in all disciplines of science cannot be comprehended by a single human researcher. As a consequence, researchers have to specialize in narrow sub-disciplines, which makes it challenging to uncover scientific connections beyond one's own field of research. Access to structured knowledge from a large corpus of publications could therefore help push the frontiers of science. Here we demonstrate a method to build a semantic network from published scientific literature, which we call SemNet. We use SemNet to predict future trends in research and to inspire new, personalized and surprising seeds of ideas in science. We apply it in the discipline of quantum physics, which has seen an unprecedented growth of activity in recent years. In SemNet, scientific knowledge is represented as an evolving network built from the content of 750,000 scientific papers published since 1919. The nodes of the network correspond to physical concepts, and links between two nodes are drawn when the two concepts are concurrently studied in research articles. We identify influential and prize-winning research topics from the past inside SemNet, thus confirming that it stores useful semantic knowledge. We train a deep neural network on past states of SemNet to predict future developments in quantum physics research, and confirm high-quality predictions using historic data. With the neural network and theoretical network tools we are able to suggest new, personalized, out-of-the-box ideas by identifying pairs of concepts that have unique and extremal semantic network properties. Finally, we consider possible future developments and implications of our findings.
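As a rough illustration of the co-occurrence construction described above, the sketch below builds a tiny concept network with the networkx library. The concept lists are made up, and the real SemNet pipeline over 750,000 papers involves concept extraction and far more machinery; this only shows how co-studied concepts become weighted edges.

    import itertools
    import networkx as nx

    # Toy stand-in for concepts extracted from individual articles.
    papers = [
        {"entanglement", "quantum teleportation"},
        {"entanglement", "bell inequality"},
        {"quantum teleportation", "bell inequality", "photon"},
    ]

    G = nx.Graph()
    for concepts in papers:
        # Draw (or strengthen) an edge whenever two concepts appear in the same paper.
        for a, b in itertools.combinations(sorted(concepts), 2):
            w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)

    print(G.edges(data=True))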
We revisit our recent study [Predicting results of the Research Excellence Framework using departmental h-index, Scientometrics, 2014, 1-16; arXiv:1411.1996], in which we attempted to predict outcomes of the UK's Research Excellence Framework (REF 2014) using the so-called departmental h-index. Here we report that our predictions failed to anticipate with any accuracy either the overall REF outcomes or the movements of individual institutions in the rankings relative to their positions in the previous Research Assessment Exercise (RAE 2008).
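For readers unfamiliar with the metric: the h-index of a set of papers is the largest h such that h of them have at least h citations each, and the departmental variant applies this, roughly, to a department's pooled publication list. A minimal computation with made-up citation counts:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        citations = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(citations, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Example: five papers with these citation counts give an h-index of 3.
    print(h_index([10, 6, 3, 1, 0]))  # -> 3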
Robotic-assisted surgery is now well-established in clinical practice and has become the gold standard clinical treatment option for several clinical indications. The field of robotic-assisted surgery is expected to grow substantially in the next decade with a range of new robotic devices emerging to address unmet clinical needs across different specialities. A vibrant surgical robotics research community is pivotal for conceptualizing such new systems as well as for developing and training the engineers and scientists to translate them into practice. The da Vinci Research Kit (dVRK), an academic and industry collaborative effort to re-purpose decommissioned da Vinci surgical systems (Intuitive Surgical Inc, CA, USA) as a research platform for surgical robotics research, has been a key initiative for addressing a barrier to entry for new research groups in surgical robotics. In this paper, we present an extensive review of the publications that have been facilitated by the dVRK over the past decade. We classify research efforts into different categories and outline some of the major challenges and needs for the robotics community to maintain this initiative and build upon it.
Attendance rate is an important indicator of students' study motivation, behavior, and psychological status. However, the heterogeneous nature of student attendance rates, due to differences in course registration or the online/offline split in a blended learning environment, makes it challenging to compare attendance rates. In this paper, we propose a novel method called the Relative Attendance Index (RAI) to measure attendance rates, which reflects students' efforts in attending courses. While traditional attendance focuses on the record of a single person or course, relative attendance emphasizes the peer attendance information of relevant individuals or courses, making comparisons of attendance better justified. Experimental results on real-life data show that RAI can indeed better reflect student engagement.
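The abstract does not give the RAI formula, so the sketch below is an assumption rather than the authors' definition: it merely illustrates the stated idea of judging a student's attendance against peer attendance in the same courses, here by normalizing each rate by the course average before aggregating.

    from statistics import mean

    # attendance[student][course] = fraction of sessions attended (toy data).
    attendance = {
        "alice": {"calc": 0.9, "stats": 0.5},
        "bob":   {"calc": 0.6, "stats": 0.7},
        "carol": {"calc": 0.8},              # registered for fewer courses
    }

    def course_mean(course):
        vals = [a[course] for a in attendance.values() if course in a]
        return mean(vals)

    def relative_attendance(student):
        # Compare each rate to the peer average of the SAME course, so students
        # with different course loads become comparable (illustrative only).
        ratios = [rate / course_mean(course)
                  for course, rate in attendance[student].items()]
        return mean(ratios)

    for s in attendance:
        print(s, round(relative_attendance(s), 2))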
An increasing number of researchers support reproducibility by including pointers to and descriptions of datasets, software and methods in their publications. However, scientific articles may be ambiguous, incomplete and difficult to process by automated systems. In this paper we introduce RO-Crate, an open, community-driven, and lightweight approach to packaging research artefacts along with their metadata in a machine-readable manner. RO-Crate is based on Schema.org annotations in JSON-LD, aiming to establish best practices to formally describe metadata in an accessible and practical way for use in a wide variety of situations. An RO-Crate is a structured archive of all the items that contributed to a research outcome, including their identifiers, provenance, relations and annotations. As a general-purpose packaging approach for data and their metadata, RO-Crate is used across multiple areas, including bioinformatics, digital humanities and regulatory sciences. By applying just enough Linked Data standards, RO-Crate simplifies the process of making research outputs FAIR while also enhancing research reproducibility. An RO-Crate for this article is available at https://www.researchobject.org/2021-packaging-research-artefacts-with-ro-crate/
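To make the packaging idea concrete, here is a minimal Python sketch that writes an ro-crate-metadata.json descriptor for a single data file. It follows the general shape of RO-Crate's JSON-LD (a metadata descriptor entity plus a root Dataset entity), but it is a bare-bones illustration, not a complete or validated crate; the file name results.csv and the entity names are made up.

    import json

    # A bare-bones RO-Crate metadata descriptor for one data file
    # (illustrative; real crates typically carry much richer metadata).
    crate = {
        "@context": "https://w3id.org/ro/crate/1.1/context",
        "@graph": [
            {
                "@id": "ro-crate-metadata.json",
                "@type": "CreativeWork",
                "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
                "about": {"@id": "./"},
            },
            {
                "@id": "./",
                "@type": "Dataset",
                "name": "Example research outcome",
                "hasPart": [{"@id": "results.csv"}],
            },
            {
                "@id": "results.csv",
                "@type": "File",
                "name": "Tabular results underlying the figures",
            },
        ],
    }

    with open("ro-crate-metadata.json", "w") as f:
        json.dump(crate, f, indent=2)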
