
The Open Annotation Collaboration (OAC) Model

Posted by Bernhard Haslhofer
Publication date: 2011
Research field: Informatics Engineering
Language: English





Annotations allow users to associate additional information with existing resources. Using proprietary and closed systems on the Web, users are already able to annotate multimedia resources such as images, audio, and video. So far, however, this information is almost always kept locked up and inaccessible to the Web of Data. We believe that an important next step is the integration of multimedia annotations with the Linked Data principles, which would allow clients to easily publish, consume, and thus exchange annotations about resources via common Web standards. We first present the current status of the Open Annotation Collaboration, an international initiative working on annotation interoperability specifications based on best practices from the Linked Data effort. We then present two use cases and early prototypes that make use of the proposed annotation model, report lessons learned, and discuss open technical issues.
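
The core of the OAC model is an RDF annotation resource that links a body (the commenting content) to a target (the annotated resource), so that annotations can be published and consumed with ordinary Linked Data tooling. As a rough illustration only, the following Python sketch builds such a graph with rdflib and serialises it as Turtle; the oac: namespace URI, the hasBody/hasTarget property names, and all example resource URIs are assumptions for illustration, not a normative rendition of the specification.

    # Illustrative sketch of the OAC body/target pattern as Linked Data.
    # Namespace URI, property names, and resource URIs are assumptions.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, DCTERMS

    OAC = Namespace("http://www.openannotation.org/ns/")    # assumed namespace
    annotation = URIRef("http://example.org/annotations/1")
    body = URIRef("http://example.org/bodies/1")             # e.g. a textual note
    target = URIRef("http://example.org/videos/lecture42")   # annotated resource

    g = Graph()
    g.bind("oac", OAC)
    g.bind("dcterms", DCTERMS)

    # Core pattern: an Annotation linking a Body to a Target.
    g.add((annotation, RDF.type, OAC.Annotation))
    g.add((annotation, OAC.hasBody, body))
    g.add((annotation, OAC.hasTarget, target))
    g.add((annotation, DCTERMS.creator, URIRef("http://example.org/people/alice")))
    g.add((body, DCTERMS.description, Literal("Interesting segment on Linked Data")))

    # Serialising as Turtle yields a document any Linked Data client can consume.
    print(g.serialize(format="turtle"))

Dereferenceable HTTP URIs for the annotation, its body, and its target are what lets independent clients exchange annotations without relying on a shared proprietary system.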


Read also

Inspired by the social and economic benefits of diversity, we analyze over 9 million papers and 6 million scientists to study the relationship between research impact and five classes of diversity: ethnicity, discipline, gender, affiliation, and academic age. Using randomized baseline models, we establish the presence of homophily in ethnicity, gender and affiliation. We then study the effect of diversity on scientific impact, as reflected in citations. Remarkably, of the classes considered, ethnic diversity had the strongest correlation with scientific impact. To further isolate the effects of ethnic diversity, we used randomized baseline models and again found a clear link between diversity and impact. To further support these findings, we use coarsened exact matching to compare the scientific impact of ethnically diverse papers and scientists with closely-matched control groups. Here, we find that ethnic diversity resulted in an impact gain of 10.63% for papers, and 47.67% for scientists.
We introduce Fluid Annotation, an intuitive human-machine collaboration interface for annotating the class label and outline of every object and background region in an image. Fluid Annotation is based on three principles: (I) Strong machine-learning aid. We start from the output of a strong neural network model, which the annotator can edit by correcting the labels of existing regions, adding new regions to cover missing objects, and removing incorrect regions. The edit operations are also assisted by the model. (II) Full image annotation in a single pass. As opposed to performing a series of small annotation tasks in isolation, we propose a unified interface for full image annotation in a single pass. (III) Empower the annotator. We empower the annotator to choose what to annotate and in which order. This enables concentrating on what the machine does not already know, i.e. putting human effort only on the errors it made, and helps use the annotation budget effectively. Through extensive experiments on the COCO+Stuff dataset, we demonstrate that Fluid Annotation leads to accurate annotations very efficiently, taking three times less annotation time than the popular LabelMe interface.
Throughout history, a relatively small number of individuals have made a profound and lasting impact on science and society. Despite long-standing, multi-disciplinary interests in understanding careers of elite scientists, there have been limited attempts for a quantitative, career-level analysis. Here, we leverage a comprehensive dataset we assembled, allowing us to trace the entire career histories of nearly all Nobel laureates in physics, chemistry, and physiology or medicine over the past century. We find that, although Nobel laureates were energetic producers from the outset, producing works that garner unusually high impact, their careers before winning the prize follow relatively similar patterns as ordinary scientists, being characterized by hot streaks and increasing reliance on collaborations. We also uncovered notable variations along their careers, often associated with the Nobel prize, including shifting coauthorship structure in the prize-winning work, and a significant but temporary dip in the impact of work they produce after winning the Nobel. Together, these results document quantitative patterns governing the careers of scientific elites, offering an empirical basis for a deeper understanding of the hallmarks of exceptional careers in science.
Responsible indicators are crucial for research assessment and monitoring. Transparency and accuracy of indicators are required to make research assessment fair and ensure reproducibility. However, sometimes it is difficult to conduct or replicate studies based on indicators due to the lack of transparency in conceptualization and operationalization. In this paper, we review the different variants of the Probabilistic Affinity Index (PAI), considering both the conceptual and empirical underpinnings. We begin with a review of the historical development of the indicator and the different alternatives proposed. To demonstrate the utility of the indicator, we show the application of PAI to identifying preferred partners in scientific collaboration. A streamlined procedure is provided, to demonstrate the variations and appropriate calculations (an illustrative sketch of the shared observed/expected structure follows this list). We then compare the results of implementation for five specific countries involved in international scientific collaboration. Despite the different proposals on its calculation, we do not observe large differences between the PAI variants, particularly with respect to country size. As with any indicator, the selection of a particular variant is dependent on the research question. To facilitate appropriate use, we provide recommendations for the use of the indicator given specific contexts.
Scientific collaboration is often not perfectly reciprocal. Scientifically strong countries/institutions/laboratories may help their less prominent partners with leading scholars, or finance, or other resources. What is interesting about this type of collaboration is that (1) it may be measured by bibliometrics and (2) it may shed more light on the scholarly level of both collaborating organizations themselves. In this sense, measuring institutions in collaboration sometimes may tell more than attempts to assess them as stand-alone organizations. Evaluation of collaborative patterns was explained in detail, for example, by Glanzel (2001; 2003). Here we combine these methods with a new one, made available by separating the best journals from others on the same platform of Russian Index of Science Citation (RISC). Such sub-universes of journals from different leagues provide additional methods to study how collaboration influences the quality of papers published by organizations.
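
The common thread across the PAI variants reviewed in the fourth item above is an observed-over-expected ratio: co-publications between two countries compared with what would be expected if collaboration partners were chosen independently. The sketch below is a minimal illustration of one such formulation under that assumption; it is not the paper's streamlined procedure, and the country codes and counts are invented.

    # Minimal illustration of one observed/expected PAI formulation.
    # Counts and country codes are invented; real variants differ in
    # normalisation details, which is what the reviewed paper compares.
    from itertools import combinations

    copubs = {  # symmetric co-publication counts between country pairs
        ("AR", "BR"): 120,
        ("AR", "CL"): 80,
        ("BR", "CL"): 150,
    }

    def totals(copubs):
        """Collaborative links per country and the grand total."""
        per_country = {}
        for (a, b), n in copubs.items():
            per_country[a] = per_country.get(a, 0) + n
            per_country[b] = per_country.get(b, 0) + n
        return per_country, sum(copubs.values())

    def pai(copubs, a, b):
        """PAI_ab = (C_ab * C_total) / (C_a * C_b); >1 means stronger-than-expected affinity."""
        per_country, grand_total = totals(copubs)
        observed = copubs.get((a, b), copubs.get((b, a), 0))
        return observed * grand_total / (per_country[a] * per_country[b])

    for a, b in combinations(["AR", "BR", "CL"], 2):
        print(a, b, round(pai(copubs, a, b), 2))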