
Impact of delayed response on Wearable Cognitive Assistance

Publication date: 2020
Language: English





Wearable Cognitive Assistants (WCA) are anticipated to become a widely-used application class, in conjunction with emerging network infrastructures like 5G that incorporate edge computing capabilities. While prototypical studies of such applications exist today, the relationship between infrastructure service provisioning and its implication for WCA usability is largely unexplored despite the relevance that these applications have for future networks. This paper presents an experimental study assessing how WCA users react to varying end-to-end delays induced by the application pipeline or infrastructure. Participants interacted directly with an instrumented task-guidance WCA as delays were introduced into the system in a controllable fashion. System and task state were tracked in real time, and biometric data from wearable sensors on the participants were recorded. Our results show that periods of extended system delay cause users to correspondingly (and substantially) slow down in their guided task execution, an effect that persists for a time after the system returns to a more responsive state. Furthermore, the slow-down in task execution is correlated with a personality trait, neuroticism, associated with intolerance for time delays. We show that our results implicate impaired cognitive planning, as contrasted with resource depletion or emotional arousal, as the reason for slowed user task executions under system delay. The findings have several implications for the design and operation of WCA applications as well as computational and communication infrastructure, and additionally for the development of performance analysis tools for WCA.
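The controllable delay injection described above can be sketched as a minimal Python loop. The `process_frame` stage, the parameter names, and the chosen latencies are illustrative assumptions, not the paper's actual instrumentation:

```python
import random
import time

def process_frame(frame):
    """Hypothetical stand-in for the WCA pipeline stage (detection + guidance)."""
    return f"guidance-for-{frame}"

def run_session(frames, delay_s=0.0, jitter_s=0.0):
    """Run a guided session, injecting a controllable end-to-end delay before
    each guidance response, and record the per-step response time."""
    timings = []
    for frame in frames:
        start = time.monotonic()
        guidance = process_frame(frame)
        # Injected latency emulating the application pipeline / infrastructure.
        time.sleep(delay_s + random.uniform(0.0, jitter_s))
        timings.append((guidance, time.monotonic() - start))
    return timings

timings = run_session(range(3), delay_s=0.05)
```

Varying `delay_s` and `jitter_s` between task phases is one way to reproduce the "extended delay followed by a responsive period" pattern the study examines.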



Related research

Crowdsourcing can identify high-quality solutions to problems; however, individual decisions are constrained by cognitive biases. We investigate some of these biases in an experimental model of a question-answering system. In both natural and controlled experiments, we observe a strong position bias in favor of answers appearing earlier in a list of choices. This effect is enhanced by three cognitive factors: the attention an answer receives, its perceived popularity, and cognitive load, measured by the number of choices a user has to process. While separately weak, these effects synergistically amplify position bias and decouple user choices of best answers from their intrinsic quality. We end our paper by discussing the novel ways we can apply these findings to substantially improve how high-quality answers are found in question-answering systems.
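The position bias described above can be illustrated with a toy simulation in which all answers have identical intrinsic quality but receive a bonus that decays with list position; any skew in selection rates is then pure position bias. The weighting scheme and decay rate are illustrative assumptions, not the paper's model:

```python
import random
from collections import Counter

def choose(qualities, position_weight=0.6):
    """Pick an answer index with probability mixing intrinsic quality and a
    position bonus that decays as 1/(i+1) down the list."""
    weights = [q + position_weight / (i + 1) for i, q in enumerate(qualities)]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

random.seed(0)
qualities = [0.2, 0.2, 0.2, 0.2]  # identical quality: any skew is pure position bias
picks = Counter(choose(qualities) for _ in range(10_000))
rate_first = picks[0] / 10_000
rate_last = picks[3] / 10_000
```

With equal qualities, the first position is chosen far more often than the last, decoupling user choice from answer quality exactly as the abstract describes.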
Human-computer interaction (HCI) is crucial for the safety of lives as autonomous vehicles (AVs) become commonplace. Yet, little effort has been put toward ensuring that AVs understand humans on the road. In this paper, we present GLADAS, a simulator-based research platform designed to teach AVs to understand pedestrian hand gestures. GLADAS supports the training, testing, and validation of deep learning-based self-driving car gesture recognition systems. We focus on gestures because they are a primordial (i.e., natural and common) way to interact with cars. To the best of our knowledge, GLADAS is the first system of its kind designed to provide an infrastructure for further research into human-AV interaction. We also develop a hand gesture recognition algorithm for self-driving cars, using GLADAS to evaluate its performance. Our results show that an AV understands human gestures 85.91% of the time, reinforcing the need for further research into human-AV interaction.
The human body is punctuated with a wide array of sensory systems that provide a high evolutionary advantage by facilitating the formation of a detailed picture of the immediate surroundings. These sensors span a wide spectrum, acquiring input from non-contact audio-visual means to contact-based input via pressure and temperature. The ambit of sensing can be extended further by imparting the body with increased non-contact sensing capability through the phenomenon of electrostatics. Here we present a graphene-based tattoo sensor for proximity sensing that employs the principle of electrostatic gating. The sensor shows a remarkable change in resistance upon exposure to objects carrying static charge. Compared to prior work in this field, the sensor demonstrates the highest recorded proximity detection range of 20 cm. It is ultra-thin, highly skin-conformal, and comes with a facile transfer process so that it can be tattooed on highly curvilinear, rough substrates like human skin, unlike other graphene-based proximity sensors reported before. The present work details the operation of the wearable proximity sensor and explores the effect of body mounting on the working mechanism. A possible role of the sensor as an alerting system against unwarranted contact with objects in public places, especially during the current SARS-CoV-2 pandemic, has also been explored in the form of an LED bracelet whose color is controlled by the proximity sensor attached to it.
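The LED-bracelet alerting behavior could be sketched as a simple threshold mapping from relative resistance change to LED color. The thresholds, and the assumption that resistance rises as a charged object approaches, are illustrative, not taken from the paper:

```python
def led_color(baseline_r, measured_r, warn_ratio=1.2, alert_ratio=1.5):
    """Map the tattoo sensor's relative resistance change to an LED color:
    green (no object), yellow (approaching), red (too close).
    Assumes resistance increases as a charged object approaches (illustrative)."""
    ratio = measured_r / baseline_r
    if ratio >= alert_ratio:
        return "red"
    if ratio >= warn_ratio:
        return "yellow"
    return "green"
```

In a real deployment this mapping would run on the bracelet's microcontroller, with the ratios calibrated against the sensor's measured response over its 20 cm detection range.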
This paper presents a passive control method for multiple degrees of freedom in a soft pneumatic robot through the combination of flow-resistor tubes with series inflatable actuators. We designed and developed these 3D-printed resistors based on the pressure-drop principle of multiple capillary orifices, which allows passive control of the actuators' sequential activation from a single pressure source. Our design fits in standard tube connectors, making it easy to adopt on any other type of actuator with pneumatic inlets. We present its pressure-drop characterization and an evaluation of the activation sequence for series and parallel circuits of actuators. Moreover, we present an application for assisting the postural transition from lying to sitting. We embedded the system in a wearable garment robot-suit designed for infants with cerebral palsy and then performed tests with a dummy baby to emulate upper-body motion control. The results show sequential motion control of the sitting and lying transitions, validating the proposed system for flow control and its application on the robot-suit.
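The pressure-drop principle behind the sequential activation can be illustrated with the Hagen-Poiseuille relation for laminar flow through a capillary. The geometry, flow rate, and supply pressure below are illustrative assumptions, not the paper's measured values:

```python
import math

def poiseuille_dp(flow_m3_s, radius_m, length_m, viscosity_pa_s=1.81e-5):
    """Laminar pressure drop across one capillary orifice (Hagen-Poiseuille):
    dP = 8 * mu * L * Q / (pi * r**4). Default viscosity is air near room temperature."""
    return 8.0 * viscosity_pa_s * length_m * flow_m3_s / (math.pi * radius_m ** 4)

# In a series circuit the drops add up, so each downstream actuator sees a
# lower pressure and inflates later -- the basis of the sequential activation.
supply_pa = 50_000.0
drops = [poiseuille_dp(1e-6, 0.25e-3, 0.01) for _ in range(3)]
pressures = []
p = supply_pa
for dp in drops:
    p -= dp
    pressures.append(p)
```

The r**4 dependence is why small changes in orifice radius give large, tunable differences in activation order: halving the radius multiplies the drop sixteen-fold.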
With the development of advanced communication technology, connected vehicles are becoming increasingly popular in our transportation systems; they can conduct cooperative maneuvers with each other, as well as with road entities, through vehicle-to-everything communication. Much research interest has been drawn to other building blocks of a connected vehicle system, such as communication, planning, and control. However, fewer studies have focused on human-machine cooperation and interfaces, namely how to visualize guidance information to the driver as an advanced driver-assistance system (ADAS). In this study, we propose an augmented reality (AR)-based ADAS that visualizes guidance information calculated cooperatively by multiple connected vehicles. An unsignalized intersection scenario is adopted as the use case of this system, where the driver can drive the connected vehicle through the intersection under AR guidance without any full stop. A simulation environment is built in the Unity game engine based on the road network of San Francisco, and human-in-the-loop (HITL) simulation is conducted to validate the effectiveness of the proposed system in terms of travel time and energy consumption.