
The Hidden Cost of Using Amazon Mechanical Turk for Research

Added by Antonios Saravanos
Publication date: 2021
Language: English





In this study, we investigate the attentiveness exhibited by participants sourced through Amazon Mechanical Turk (MTurk) and find a significant level of inattentiveness among the platform's top crowd workers (those classified as Master, with an Approval Rate of 98% or more and 1,000 or more approved HITs). A total of 564 individuals from the United States participated in our experiment. Each was asked to read a vignette outlining one of four hypothetical technology products and then complete a related survey. Three forms of attention check (logic, honesty, and time) were used to assess attentiveness. Through this experiment we determined that 126 participants (22.3%) failed at least one of the three attention checks: most (94) failed the honesty check, followed by the logic check (31) and the time check (27). We thus established that significant levels of inattentiveness exist even among the most elite MTurk workers. The study concludes by reaffirming the need for multiple forms of carefully crafted attention checks, irrespective of whether participant quality is presumed to be high according to MTurk criteria such as Master status, Approval Rate, and number of approved HITs. Furthermore, we propose that researchers adjust their proposals to account for the effort and cost required to address participant inattentiveness.
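As a concrete illustration of the screening step described above, the following minimal Python sketch flags participants who fail any of the three attention checks. The column names, pass conditions, and the 30-second minimum completion time are hypothetical placeholders for illustration, not the study's actual instrument or thresholds.

import pandas as pd

# Hypothetical survey export; column names are illustrative, not the study's.
df = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "logic_item":     ["agree", "disagree", "agree"],  # instructed-response item
    "honesty_item":   ["yes", "no", "yes"],            # self-reported attentiveness
    "seconds_taken":  [412, 95, 18],                   # total completion time
})

MIN_SECONDS = 30  # assumed lower bound for a plausible completion time

failed_logic   = df["logic_item"] != "agree"      # missed the instructed response
failed_honesty = df["honesty_item"] != "yes"      # admitted answering carelessly
failed_time    = df["seconds_taken"] < MIN_SECONDS  # implausibly fast completion

df["inattentive"] = failed_logic | failed_honesty | failed_time
print(df.loc[df["inattentive"], "participant_id"].tolist())  # IDs to exclude -> [2, 3]

Applied to real survey data, the same disjunction of checks yields the kind of exclusion count the abstract reports (126 of 564 participants failing at least one check).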




Related Research

Readability is on the cusp of a revolution. Fixed text is becoming fluid as a proliferation of digital reading devices rewrites what a document can do. As past constraints make way for more flexible opportunities, there is a great need to understand how reading formats can be tuned to the situation and the individual. We aim to provide a firm foundation for readability research: a comprehensive framework for modern, multi-disciplinary work. Readability refers to the aspects of visual information design that impact information flow from the page to the reader. Readability can be enhanced by changes to the typographical characteristics of a text, and these characteristics can be modified on demand, instantly improving the ease with which a reader can process and derive meaning from text. We call on a multi-disciplinary research community to take up these challenges to elevate reading outcomes, and we provide the tools to do so effectively.
A significant problem with immersive virtual reality (IVR) experiments is the ability to compare research conditions. VR kits and IVR environments are complex and diverse, but researchers from different fields, e.g. ICT, psychology, or marketing, often neglect to describe them with a level of detail sufficient to situate their research on the IVR landscape. Careful reporting of these conditions may increase the applicability of research results and their impact on the shared body of knowledge on HCI and IVR. Based on a literature review, our experience and practice, and a synthesis of key IVR factors, in this article we present a reference checklist for describing the research conditions of IVR experiments. Including these in publications will contribute to the comparability of IVR research and help other researchers decide to what extent reported results are relevant to their own research goals. The compiled checklist is a ready-to-use reference tool and takes into account key hardware, software, and human factors, as well as diverse factors connected to visual, audio, tactile, and other aspects of interaction.
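To make the idea of such a reporting checklist concrete, here is a minimal sketch of a structured reporting template in Python. The fields below are our own assumed subset of the kinds of conditions such a checklist covers (hardware, software, interaction, human factors), not the authors' published instrument.

from dataclasses import dataclass, asdict

# Illustrative IVR-conditions reporting template; fields are assumptions.
@dataclass
class IVRConditions:
    headset: str               # hardware: HMD model
    refresh_rate_hz: int       # hardware: display refresh rate
    engine: str                # software: rendering engine and version
    locomotion: str            # interaction: movement technique
    audio: str                 # audio: spatialization used
    haptics: str               # tactile: feedback devices, if any
    participants: int          # human factors: sample size
    prior_vr_experience: str   # human factors: novice/experienced mix

report = IVRConditions(
    headset="HTC Vive Pro", refresh_rate_hz=90,
    engine="Unity 2021.3", locomotion="teleportation",
    audio="binaural", haptics="controller vibration only",
    participants=24, prior_vr_experience="mixed",
)
print(asdict(report))  # include alongside a paper's method section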
The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, addressing development in robotic vision and manipulation. This paper presents the design of our custom-built, cost-effective Cartesian robot system, Cartman, which won first place in the competition finals by stowing 14 (out of 16) items and picking all 9 items in 27 minutes, scoring a total of 272 points. We highlight our experience-centred design methodology and the key aspects of our system that contributed to our competitiveness. We believe these aspects are crucial to building robust and effective robotic systems.
Data credibility is a crucial issue in mobile crowd sensing (MCS) and, more generally, the people-centric Internet of Things (IoT). Prior work takes approaches such as incentive mechanism design and data mining to address this issue, while overlooking the power of the crowd itself, which we exploit in this paper. In particular, we propose a cross validation approach which recruits a validating crowd to verify the data credibility of the original sensing crowd, and uses the verification result to reshape the original sensing dataset into a more credible posterior belief of the ground truth. Following this approach, we design a specific cross validation mechanism, which integrates four sampling techniques with a privacy-aware competency-adaptive push (PACAP) algorithm and is applicable to time-sensitive and quality-critical MCS applications. It does not require redesigning a new MCS system but rather functions as a lightweight plug-in, making it easier to adopt in practice. Our results demonstrate that the proposed mechanism substantially improves data credibility in terms of both reinforcing obscure truths and scavenging hidden truths.
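The sampling techniques and the PACAP algorithm are beyond the scope of an abstract, but the core idea of reshaping sensed data into a posterior belief can be sketched with a simple Beta-Binomial update, in which a validating crowd's spot checks adjust confidence in an original report. This is our illustrative simplification, not the paper's mechanism.

# Minimal sketch: crowd cross validation as a Beta-Binomial update.
# An illustrative simplification, not the paper's PACAP mechanism.

def posterior_credibility(prior_a, prior_b, confirmed, contradicted):
    """Update the belief that a sensed report is true, given spot checks
    by a validating crowd that confirm or contradict it."""
    a = prior_a + confirmed      # confirmations strengthen belief
    b = prior_b + contradicted   # contradictions weaken it
    return a / (a + b)           # posterior mean of Beta(a, b)

# Start from a neutral Beta(1, 1) prior; 8 validators confirm, 2 contradict.
print(posterior_credibility(1, 1, confirmed=8, contradicted=2))  # -> 0.75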
Human cognitive performance is critical to productivity, learning, and accident avoidance. Cognitive performance varies throughout each day and is in part driven by intrinsic, near 24-hour circadian rhythms. Prior research on the impact of sleep and circadian rhythms on cognitive performance has typically been restricted to small-scale laboratory-based studies that do not capture the variability of real-world conditions, such as environmental factors, motivation, and sleep patterns in real-world settings. Given these limitations, leading sleep researchers have called for larger in situ monitoring of sleep and performance. We present the largest study to date on the impact of objectively measured real-world sleep on performance, enabled through a reframing of everyday interactions with a web search engine as a series of performance tasks. Our analysis includes 3 million nights of sleep and 75 million interaction tasks. We measure cognitive performance through the speed of keystroke and click interactions on a web search engine and correlate them to wearable device-defined sleep measures over time. We demonstrate that real-world performance varies throughout the day and is influenced by circadian rhythms, chronotype (morning/evening preference), and prior sleep duration and timing. We develop a statistical model that operationalizes a large body of work on sleep and performance and demonstrates that our estimates of circadian rhythms, homeostatic sleep drive, and sleep inertia align with expectations from laboratory-based sleep studies. Further, we quantify the impact of insufficient sleep on real-world performance and show that two consecutive nights with less than six hours of sleep are associated with decreases in performance that last for a period of six days. This work demonstrates the feasibility of using online interactions for large-scale physiological sensing.
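The three components named above (circadian rhythm, homeostatic sleep drive, sleep inertia) correspond to the classic three-process family of alertness models, and a toy version can be sketched as below. The functional forms and parameter values are illustrative assumptions in the spirit of that literature, not the authors' fitted model or estimates.

import math

# Toy three-process alertness model: circadian component C, homeostatic
# pressure S, and sleep inertia W. Parameters are illustrative assumptions.

def predicted_performance(hours_awake, clock_hour, acrophase=16.0,
                          tau_s=18.0, tau_w=0.75):
    C = math.cos(2 * math.pi * (clock_hour - acrophase) / 24)  # peaks near acrophase
    S = -(1 - math.exp(-hours_awake / tau_s))                  # pressure builds while awake
    W = -math.exp(-hours_awake / tau_w)                        # inertia fades after waking
    return C + S + W  # higher = better predicted performance

for h, clock in [(0.5, 7.5), (9.0, 16.0), (18.0, 1.0)]:
    print(f"awake {h:4.1f} h at {clock:04.1f}h -> {predicted_performance(h, clock):+.2f}")

Even this toy model reproduces the qualitative pattern the abstract describes: performance is depressed shortly after waking (inertia), peaks in the afternoon near the circadian acrophase, and declines late at night under accumulated homeostatic pressure.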
