
On Optimizing Human-Machine Task Assignments

Posted by: Andreas Veit
Publication date: 2015
Research field: Informatics engineering
Paper language: English





When crowdsourcing systems are used in combination with machine inference systems in the real world, they benefit the most when the machine system is deeply integrated with the crowd workers. However, if researchers wish to integrate the crowd with off-the-shelf machine classifiers, this deep integration is not always possible. This work explores two strategies to increase accuracy and decrease cost under this setting. First, we show that reordering tasks presented to the human can create a significant accuracy improvement. Further, we show that greedily choosing parameters to maximize machine accuracy is sub-optimal, and joint optimization of the combined system improves performance.
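The contrast between greedy and joint parameter choice can be illustrated with a toy sketch (this is not the paper's actual model; every accuracy and cost number below is invented): a machine classifier answers items whose confidence clears a threshold, crowd workers handle the rest, and the threshold that maximizes machine accuracy alone is not the one that maximizes the combined system's utility.

```python
def machine_accuracy(conf):
    # hypothetical calibration: the machine is right more often when confident
    return 0.5 + 0.45 * conf

def combined_utility(threshold, confidences, human_acc=0.95, human_cost=0.10):
    """Expected accuracy of the whole pipeline, minus the cost of crowd labels."""
    total = 0.0
    for c in confidences:
        if c >= threshold:
            total += machine_accuracy(c)      # machine answers this item
        else:
            total += human_acc - human_cost   # routed to a crowd worker
    return total / len(confidences)

confidences = [i / 100 for i in range(100)]   # synthetic confidence scores

# Greedy: maximize machine accuracy alone -> only answer near-certain items
greedy_threshold = 1.0
# Joint: sweep the threshold, scoring the combined human+machine system
joint_threshold = max((t / 20 for t in range(21)),
                      key=lambda t: combined_utility(t, confidences))

print(f"greedy utility: {combined_utility(greedy_threshold, confidences):.3f}")
print(f"joint  utility: {combined_utility(joint_threshold, confidences):.3f}")
```

Under these made-up numbers the jointly tuned threshold routes moderately confident items to the machine and does at least as well as the greedy choice, which is the qualitative point the abstract makes.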




Read also

Over the past decade, Citizen Science has become a proven method of distributed data analysis, enabling research teams from diverse domains to solve problems involving large quantities of data with complexity levels that require human pattern recognition capabilities. With over 120 projects built reaching nearly 1.7 million volunteers, the Zooniverse.org platform has led the way in the application of Citizen Science as a method for closing the Big Data analysis gap. Since the launch in 2007 of the Galaxy Zoo project, the Zooniverse platform has enabled significant contributions across many disciplines; e.g., in ecology, humanities, and astronomy. Citizen science as an approach to Big Data combines the twin advantages of the ability to scale analysis to the size of modern datasets with the ability of humans to make serendipitous discoveries. To cope with the larger datasets looming on the horizon, such as astronomy's Large Synoptic Survey Telescope (LSST) or the hundreds of terabytes produced annually by ecology projects, Zooniverse has been researching a system design that is optimized for efficiency in task assignment and that incorporates human and machine classifiers into the classification engine. By making efficient use of smart task assignment and the combination of human and machine classifiers, we can achieve greater accuracy and flexibility than has been possible to date. We note that creating the most efficient system must consider how best to engage and retain volunteers as well as make the most efficient use of their classifications. Our work thus focuses on understanding the factors that optimize efficiency of the combined human-machine system. This paper summarizes some of our research to date on the integration of machine learning with Zooniverse, while also describing new infrastructure developed on the Zooniverse platform to carry out this research.
We propose incorporating human labelers in a model fine-tuning system that provides immediate user feedback. In our framework, human labelers can interactively query model predictions on unlabeled data, choose which data to label, and see the resulting effect on the model's predictions. This bi-directional feedback loop allows humans to learn how the model responds to new data. Our hypothesis is that this rich feedback allows human labelers to create mental models that enable them to better choose which biases to introduce to the model. We compare human-selected points to points selected using standard active learning methods. We further investigate how the fine-tuning methodology impacts the human labelers' performance. We implement this framework for fine-tuning high-resolution land cover segmentation models. Specifically, we fine-tune a deep neural network -- trained to segment high-resolution aerial imagery into different land cover classes in Maryland, USA -- to a new spatial area in New York, USA. The tight loop turns the algorithm and the human operator into a hybrid system that can produce land cover maps of a large area much more efficiently than traditional workflows. Our framework has applications in geospatial machine learning settings where there is a practically limitless supply of unlabeled data, of which only a small fraction can feasibly be labeled through human effort.
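The passage compares human-selected points against standard active learning baselines. One common such baseline is margin-based uncertainty sampling; a minimal sketch might look like this (the point IDs and class probabilities are invented for illustration, not taken from the paper):

```python
def uncertainty(probs):
    """Margin-based uncertainty: a small gap between the top two class
    probabilities means the model is unsure about this point."""
    top = sorted(probs, reverse=True)
    return 1.0 - (top[0] - top[1])

def select_batch(predictions, batch_size):
    """Pick the unlabeled points the model is least sure about."""
    ranked = sorted(predictions.items(),
                    key=lambda kv: uncertainty(kv[1]),
                    reverse=True)
    return [point_id for point_id, _ in ranked[:batch_size]]

# hypothetical per-point class probabilities from a segmentation model
predictions = {
    "px_a": [0.90, 0.05, 0.05],   # confident prediction
    "px_b": [0.40, 0.38, 0.22],   # near-tie between top classes
    "px_c": [0.70, 0.20, 0.10],
}
print(select_batch(predictions, 2))  # → ['px_b', 'px_c']
```

The human labelers in the study play the role of `select_batch` themselves, guided by the feedback loop rather than by a fixed uncertainty score.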
Pulmonary hemorrhage (P-Hem) occurs among multiple species and can have various causes. Cytology of bronchoalveolar lavage fluid (BALF) using a 5-tier scoring system of alveolar macrophages based on their hemosiderin content is considered the most sensitive diagnostic method. We introduce a novel, fully annotated multi-species P-Hem dataset which consists of 74 cytology whole slide images (WSIs) with equine, feline and human samples. To create this high-quality and high-quantity dataset, we developed an annotation pipeline combining human expertise with deep learning and data visualisation techniques. We applied a deep learning-based object detection approach, trained on 17 expertly annotated equine WSIs, to the remaining 39 equine, 12 human and 7 feline WSIs. The resulting annotations were semi-automatically screened for errors on multiple types of specialised annotation maps and finally reviewed by trained pathologists. Our dataset contains a total of 297,383 hemosiderophages classified into five grades. It is one of the largest publicly available WSI datasets with respect to the number of annotations, the scanned area and the number of species covered.
We argue that a key challenge in enabling usable and useful interactive task learning for intelligent agents is to facilitate effective human-AI collaboration. We reflect on our past five years of efforts designing, developing and studying the SUGILITE system, discuss the issues in incorporating recent advances in AI with HCI principles in mixed-initiative interactions and multi-modal interactions, and summarize the lessons we learned. Lastly, we identify several challenges and opportunities, and describe our ongoing work.
Surface electromyography (sEMG) is a non-invasive method of measuring neuromuscular potentials generated when the brain instructs the body to perform both fine and coarse locomotion. This technique has seen extensive investigation over the last two decades, with significant advances in both the hardware and the signal processing methods used to collect and analyze sEMG signals. While early work focused mainly on medical applications, there has been growing interest in utilizing sEMG as a sensing modality to enable next-generation, high-bandwidth, and natural human-machine interfaces. In the first part of this review, we briefly survey the human skeletomuscular physiology that gives rise to sEMG signals, followed by a review of developments in sEMG acquisition hardware. Special attention is paid to the fidelity of these devices as well as to form factor, as recent advances have pushed the limits of user comfort and high-bandwidth acquisition. In the second half of the article, we explore work quantifying the information content of natural human gestures and then review the various signal processing and machine learning methods developed to extract information from sEMG signals. Finally, we discuss the future outlook in this field, highlighting the key gaps in current methods that must be closed to enable seamless natural interaction between humans and machines.