
A Deformable Interface for Human Touch Recognition using Stretchable Carbon Nanotube Dielectric Elastomer Sensors and Deep Neural Networks

 Added by Chris Larson
 Publication date 2017
Language: English





User interfaces provide an interactive window between physical and virtual environments. A new concept in the field of human-computer interaction is the soft user interface: a compliant surface that facilitates touch interaction through deformation. Despite the potential of these interfaces, they currently lack a signal processing framework that can efficiently extract information from their deformation. Here we present OrbTouch, a device that uses statistical learning algorithms based on convolutional neural networks to map deformations from human touch to categorical labels (i.e., gestures) and touch locations, using stretchable capacitor signals as inputs. We demonstrate this approach by using the device to control the popular game Tetris. OrbTouch provides a modular, robust framework for interpreting deformation in soft media, laying a foundation for new modes of human-computer interaction through shape-changing solids.
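The mapping the abstract describes can be illustrated with a minimal sketch: a single convolutional layer over a grid of capacitance readings, followed by two output heads, one producing a categorical gesture distribution and one regressing touch coordinates. All dimensions (an 8x8 sensor grid, 4 gesture classes) and the random weights are hypothetical placeholders, not OrbTouch's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid-mode 2D cross-correlation of a single-channel frame x with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical dimensions: an 8x8 grid of capacitance readings, 4 gesture classes.
frame = rng.normal(size=(8, 8))           # one deformation frame from the sensor grid
kernel = rng.normal(size=(3, 3)) * 0.1    # a single convolutional filter (random stand-in)
W_cls = rng.normal(size=(4, 36)) * 0.1    # gesture-classification head
W_loc = rng.normal(size=(2, 36)) * 0.1    # (x, y) touch-location head

feat = np.maximum(conv2d(frame, kernel), 0.0).ravel()  # conv + ReLU: 6x6 -> 36 features
gesture_probs = softmax(W_cls @ feat)     # categorical distribution over gestures
touch_xy = W_loc @ feat                   # regressed touch coordinates
```

In a real system the filter and head weights would be trained jointly on labeled deformation sequences; this sketch only shows the shape of the forward pass.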



Related research

This work describes a new human-in-the-loop (HitL) assistive grasping system for individuals with varying levels of physical capability. We investigated the feasibility of using four potential input devices with our assistive grasping system interface, using able-bodied individuals to define a set of quantitative metrics for assessing an assistive grasping system. We then used these measurements to create a generalized benchmark for evaluating the effectiveness of any arbitrary input device to a HitL grasping system. The four input devices were a mouse, a speech recognition device, an assistive switch, and a novel sEMG device developed by our group that was attached either to the forearm or behind the ear of the subject. These preliminary results provide insight into how different interface devices perform in generalized assistive grasping tasks and highlight the potential of sEMG-based control for severely disabled individuals.
Fatigue, sleepiness and disturbed sleep are important factors in health and safety in modern society, and there is considerable interest in developing technologies for routine monitoring of the associated physiological indicators. Electrophysiology, the measurement of electrical activity of biological origin, is a key technique for measuring physiological parameters in several applications, but it has traditionally been difficult to develop sensors with the required quality and robustness for measurements outside the laboratory or clinic. In this paper we report the results of the first human experiments using a new electrophysiology sensor called ENOBIO, which uses carbon nanotube arrays to penetrate the outer layers of the skin and improve electrical contact. These tests, which included traditional protocols for analyzing the electrical activity of the brain (spontaneous EEG and ERP), indicate performance on a par with state-of-the-art research-oriented wet electrodes, suggesting that the envisioned mechanism (skin penetration) is responsible. No ill side effects have been observed six months after the tests, and the subject did not report any pain or unusual sensations on application of the electrode.
Fanyi Xiao, Ling Pei, Lei Chu (2020)
Sensor-based human activity recognition (HAR) is now a research hotspot in multiple application areas. With the rise of smart wearable devices equipped with inertial measurement units (IMUs), researchers have begun to utilize IMU data for HAR. By employing machine learning algorithms, early IMU-based research on HAR achieved accurate classification results on classical HAR datasets containing only simple and repetitive daily activities. However, these datasets rarely display the rich diversity of information found in real scenes. In this paper, we propose a novel deep learning method for complex HAR in real scenes. Specifically, in the off-line training stage, the AMASS dataset, containing abundant human poses and virtual IMU data, is innovatively adopted to enhance variety and diversity. Moreover, a deep convolutional neural network with an unsupervised penalty is proposed to automatically extract features from AMASS and improve robustness. In the on-line testing stage, leveraging the advantages of transfer learning, we obtain the final result by fine-tuning part of the network (optimizing the parameters in the fully-connected layers) using real IMU data. The experimental results show that the proposed method converges in only a few iterations and achieves an accuracy of 91.15% on a real IMU dataset, demonstrating its efficiency and effectiveness.
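The fine-tuning step described above (updating only the fully-connected layers while the convolutional backbone stays frozen) can be sketched in miniature. Here the "backbone features" are random stand-ins rather than outputs of the paper's network, and the labels are synthetic, not real IMU data; only the structure of the update (gradient steps on the final layer alone) is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: features produced by a frozen convolutional backbone
# (random stand-ins here) and a small labeled set for fine-tuning.
n, d, classes = 64, 16, 6
feats = rng.normal(size=(n, d))            # frozen backbone outputs (never updated)
labels = rng.integers(0, classes, size=n)  # synthetic activity labels
W = np.zeros((d, classes))                 # fully-connected layer: the only trainable part

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def xent(W):
    """Mean cross-entropy of the fully-connected head on the fixed features."""
    p = softmax(feats @ W)
    return -np.log(p[np.arange(n), labels]).mean()

loss_before = xent(W)  # W = 0 gives uniform predictions, i.e. log(classes)

lr = 0.5
for _ in range(100):                       # fine-tune only W; feats stay fixed
    probs = softmax(feats @ W)
    onehot = np.eye(classes)[labels]
    grad = feats.T @ (probs - onehot) / n  # cross-entropy gradient w.r.t. W
    W -= lr * grad

loss_after = xent(W)
```

Restricting the gradient to the final layer is what makes this cheap: the expensive convolutional features are computed once and reused at every step, which is consistent with the fast convergence the abstract reports.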
We review the current technology underlying surface haptics that converts passive touch surfaces to active ones (machine haptics), our perception of tactile stimuli displayed through active touch surfaces (human haptics), their potential applications (human-machine interaction), and finally the challenges ahead of us in making them available through commercial systems. This review primarily covers the tactile interactions of human fingers or hands with surface-haptics displays by focusing on the three most popular actuation methods: vibrotactile, electrostatic, and ultrasonic.
The vast proliferation of sensor devices and Internet of Things enables the applications of sensor-based activity recognition. However, there exist substantial challenges that could influence the performance of the recognition system in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, plenty of deep methods have been investigated to address the challenges in activity recognition. In this study, we present a survey of the state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and provide information for public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy to structure the deep methods by challenges. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of the current research progress. At the end of this work, we discuss the open issues and provide some insights for future directions.
