We investigated the feasibility of crowdsourcing full-fledged tutorial videos, on how to solve math problems related to logarithms, from ordinary people on the Web. This kind of approach (a form of learnersourcing) to efficiently collecting tutorial videos and other learning resources could be useful for realizing personalized learning-at-scale, whereby students receive specific learning resources -- drawn from a large and diverse set -- that are tailored to their individual and time-varying needs. Results of our study, in which we collected 399 videos from 66 unique teachers on Mechanical Turk, suggest that (1) approximately 100 videos -- over $80\%$ of which are mathematically fully correct -- can be crowdsourced per week at \$5/video; (2) the crowdsourced videos exhibit significant diversity in terms of language style, presentation media, and pedagogical approach; (3) the average learning gain (posttest minus pretest score) associated with watching the videos was statistically significantly higher than for a control video ($0.105$ versus $0.045$); and (4) the average learning gain ($0.1416$) from watching the best tested crowdsourced videos was comparable to the learning gain ($0.1506$) from watching a popular Khan Academy video on logarithms.
In this paper we propose applying the crowdsourcing approach to a software platform built on a state-of-the-art 3D game engine. This platform could enable a community of users to generate and manipulate interactive 3D environments, producing diverse content such as cultural-heritage reconstructions, scientific virtual labs, games, novel art forms, and virtual museums.
Modern machine learning is entering the era of complex models, which require a plethora of well-annotated data. While crowdsourcing is a promising tool for collecting such data, existing crowdsourcing approaches rarely acquire a sufficient amount of high-quality labels. In this paper, motivated by the Guess-with-Hints answer strategy from the Millionaire game show, we introduce a hint-guided approach to crowdsourcing to address this challenge. Our approach encourages workers to seek help from hints when they are unsure of a question. Specifically, we propose a hybrid-stage setting consisting of a main stage and a hint stage. When workers face an uncertain question in the main stage, they are allowed to enter the hint stage and look up hints before answering. We develop a payment mechanism that satisfies two important design principles for crowdsourcing. Moreover, the proposed mechanism encourages high-quality workers to use hints less often, which helps identify them and assign them larger possible payments. Experiments performed on Amazon Mechanical Turk show that our approach secures a sufficient number of high-quality labels at low expenditure while detecting high-quality workers.
The main goal of this paper is to discuss how to integrate crowdsourcing platforms with workflow-support systems so that a wider group of people can engage and interact with business tasks. This work is thus an attempt to expand the functional capabilities of typical business systems by allowing selected process tasks to be performed by an essentially unlimited pool of human resources. Opening business tasks to crowdsourcing within established Business Process Management Systems (BPMS) will improve the flexibility of company processes and allow for a lower workload and greater specialization among the staff employed on-site. The presented conceptual work is based on the current international standards in this field, promoted by the Workflow Management Coalition. To this end, the functioning of business platforms was analysed and their functionality presented visually, followed by a proposal and discussion of how to implement crowdsourcing in workflow systems.
We consider the problem of learning user preferences over robot trajectories in environments rich in objects and humans. This is challenging because the criteria defining a good trajectory vary with users, tasks, and interactions in the environment. We represent trajectory preferences using a cost function that the robot learns and then uses to generate good trajectories in new environments. We design a crowdsourcing system, PlanIt, in which non-expert users label segments of the robot's trajectory. PlanIt allows us to collect a large amount of user feedback, and we use the weak and noisy labels it yields to learn the parameters of our model. We test our approach on 122 different environments for robotic navigation and manipulation tasks. Our extensive experiments show that the learned cost function generates preferred trajectories in human environments. Our crowdsourcing system is publicly available for visualizing the learned costs and for providing preference feedback: http://planit.cs.cornell.edu
In this paper we report the results of a pilot study comparing older and younger adults' interaction with an Android TV application that enables users to detect errors in video subtitles. Overall, interaction with the TV-mediated crowdsourcing system, which relies on language proficiency, was seen as intuitive, fun, and accessible, but also cognitively demanding; more so for younger adults, who focused on the task of detecting errors, than for older adults, who concentrated more on the meaning and edutainment aspects of the videos. We also discuss participants' motivations and offer preliminary recommendations for the design of TV-enabled crowdsourcing tasks and subtitle QA systems.