
Semi-Supervised Intent Inferral Using Ipsilateral Biosignals on a Hand Orthosis for Stroke Subjects

Posted by Cassie Meeker
Publication date: 2020
Paper language: English





In order to provide therapy in a functional context, controls for wearable orthoses need to be robust and intuitive. We have previously introduced an intuitive, user-driven, EMG-based orthotic control, but the process of training a control that is robust to concept drift (changes in the input signal) places a substantial burden on the user. In this paper, we explore semi-supervised learning as a paradigm for wearable orthotic controls. We are the first to use semi-supervised learning for an orthotic application. We propose a K-means semi-supervision algorithm and a disagreement-based semi-supervision algorithm. This is an exploratory study designed to determine the feasibility of semi-supervised learning as a control paradigm for wearable orthotics. In offline experiments with stroke subjects, we show that these algorithms have the potential to reduce the training burden placed on the user, and that they merit further study.
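The abstract does not give implementation details, so the following is only a minimal sketch of how a K-means semi-supervision scheme for EMG intent inferral could look: a short labeled calibration set seeds cluster labels, K-means pseudo-labels the rest of the session, and the intent classifier is retrained on both. Feature extraction, the number of clusters, and the choice of logistic regression as the downstream classifier are assumptions for illustration, not the authors' implementation; the disagreement-based variant (updating only where multiple classifiers disagree) is not shown.

```python
# Minimal sketch of K-means semi-supervision for EMG intent inferral
# (illustrative only; not the authors' exact algorithm).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def kmeans_semi_supervised(X_labeled, y_labeled, X_unlabeled, n_clusters=4):
    """Pseudo-label unlabeled EMG feature windows via K-means, then retrain.

    X_labeled:   (n_l, d) features from a short supervised calibration session.
    y_labeled:   (n_l,)   integer intent labels (e.g., 0 = relax, 1 = open hand).
    X_unlabeled: (n_u, d) features recorded later in the session, without labels.
    """
    # 1. Cluster labeled and unlabeled windows together.
    X_all = np.vstack([X_labeled, X_unlabeled])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_all)

    # 2. Give each cluster the majority label of the labeled points it contains.
    labeled_clusters = km.labels_[: len(X_labeled)]
    cluster_to_label = {}
    for c in range(n_clusters):
        members = y_labeled[labeled_clusters == c]
        if len(members) > 0:
            cluster_to_label[c] = int(np.bincount(members).argmax())

    # 3. Pseudo-label the unlabeled windows (skip clusters with no labeled points).
    unlabeled_clusters = km.labels_[len(X_labeled):]
    keep = np.array([c in cluster_to_label for c in unlabeled_clusters])
    y_pseudo = np.array([cluster_to_label[c] for c in unlabeled_clusters[keep]])

    # 4. Retrain the intent classifier on labeled + pseudo-labeled data, which is
    #    how such a scheme could track concept drift without asking the user for
    #    another full supervised training session.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(np.vstack([X_labeled, X_unlabeled[keep]]),
            np.concatenate([y_labeled, y_pseudo]))
    return clf
```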


Read also

Wearable robotic hand rehabilitation devices can allow greater freedom and flexibility than their workstation-like counterparts. However, the field is generally lacking effective methods by which the user can operate the device: such controls must be effective, intuitive, and robust to the wide range of possible impairment patterns. Even when focusing on a specific condition, such as stroke, the variety of encountered upper limb impairment patterns means that a single sensing modality, such as electromyography (EMG), might not be sufficient to enable controls for a broad range of users. To address this significant gap, we introduce a multimodal sensing and interaction paradigm for an active hand orthosis. In our proof-of-concept implementation, EMG is complemented by other sensing modalities, such as finger bend and contact pressure sensors. We propose multimodal interaction methods that utilize this sensory data as input, and show they can enable tasks for stroke survivors who exhibit different impairment patterns. We believe that robotic hand orthoses developed as multimodal sensory platforms will help address some of the key challenges in physical interaction with the user.
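As a rough illustration of the multimodal idea described above, the sketch below concatenates simple features from EMG, finger-bend, and contact-pressure windows into one vector for an off-the-shelf intent classifier. The sensor layout, feature choices, and window sizes are assumptions, not the paper's implementation.

```python
# Hypothetical multimodal feature fusion for an intent classifier: EMG,
# finger-bend and contact-pressure windows are reduced to simple features and
# concatenated, so the classifier can lean on whichever modality is most
# informative for a given impairment pattern. All feature choices are
# illustrative assumptions.
import numpy as np

def fuse_features(emg_window, bend_window, pressure_window):
    """Build one feature vector from synchronized sensor windows (samples x channels)."""
    emg_feats = [np.mean(np.abs(emg_window), axis=0),   # mean absolute value per EMG channel
                 np.std(emg_window, axis=0)]            # per-channel variability
    bend_feats = [np.mean(bend_window, axis=0)]         # average finger flexion
    pressure_feats = [np.max(pressure_window, axis=0)]  # peak contact pressure
    return np.concatenate([f.ravel() for f in emg_feats + bend_feats + pressure_feats])

# Placeholder usage: one 100-sample window with 8 EMG channels, 5 bend sensors,
# and 5 pressure pads; stacking many such vectors yields the training matrix X
# for any off-the-shelf classifier (e.g., sklearn's LogisticRegression).
rng = np.random.default_rng(0)
x = fuse_features(rng.normal(size=(100, 8)), rng.normal(size=(100, 5)), rng.normal(size=(100, 5)))
```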
This paper presents a teleoperation system that includes robot perception and intent prediction from hand gestures. The perception module identifies the objects present in the robot workspace, and the intent prediction module infers which object the user likely wants to grasp. This architecture allows the approach to rely on traded control instead of direct control: we use hand gestures to specify the goal objects for a sequential manipulation task, and the robot then autonomously generates a grasping or retrieving motion using trajectory optimization. The perception module relies on a model-based tracker to precisely track the 6D pose of the objects and makes use of a state-of-the-art learning-based object detection and segmentation method to initialize the tracker by automatically detecting objects in the scene. Goal objects are identified from user hand gestures using a trained multi-layer perceptron classifier. After presenting all the components of the system and their empirical evaluation, we present experimental results comparing our pipeline to a direct traded control approach (i.e., one that does not use prediction), which show that using intent prediction reduces the overall task execution time.
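The gesture-to-goal-object step above mentions a trained multi-layer perceptron classifier; a minimal sketch of that step, assuming simple hand-pose features and placeholder data, might look as follows (the feature set and network size are illustrative assumptions, not the paper's setup):

```python
# Illustrative gesture-to-goal-object classifier (placeholder data and network
# size; the paper's actual features and training setup are not specified here).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_gestures = rng.normal(size=(200, 15))      # e.g., hand joint angles per gesture sample
y_objects = rng.integers(0, 4, size=200)     # index of the intended goal object

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_gestures, y_objects)

# At run time, the predicted object becomes the goal of the autonomously planned
# grasping or retrieving motion (traded control rather than direct control).
goal_object = clf.predict(X_gestures[:1])[0]
```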
We studied the performance of a robotic orthosis designed to assist the paretic hand after stroke. It is wearable and fully user-controlled, serving two possible roles: as a therapeutic tool that facilitates device-mediated hand exercises to recover neuromuscular function, or as an assistive device for use in everyday activities to aid functional use of the hand. We present the clinical outcomes of a pilot study designed as a feasibility test for these hypotheses. Eleven chronic stroke patients (>2 years post-stroke) with moderate muscle tone (Modified Ashworth Scale less than or equal to 2 in the upper extremity) engaged in a month-long training protocol using the orthosis. Individuals were evaluated using standardized outcome measures, both with and without orthosis assistance. Fugl-Meyer post-intervention scores without robotic assistance showed improvement focused specifically at the distal joints of the upper limb, suggesting the use of the orthosis as a rehabilitative device for the hand. Action Research Arm Test scores post-intervention with robotic assistance showed that the device may serve an assistive role in grasping tasks. These results highlight the potential of wearable and user-driven robotic hand orthoses to extend the use and training of the affected upper limb after stroke.
Datasets for biosignals, such as electroencephalogram (EEG) and electrocardiogram (ECG), often have noisy labels and a limited number of subjects (<100). To handle these challenges, we propose a self-supervised approach based on contrastive learning to model biosignals with a reduced reliance on labeled data and with fewer subjects. In this regime of limited labels and subjects, inter-subject variability negatively impacts model performance. Thus, we introduce subject-aware learning through (1) a subject-specific contrastive loss, and (2) adversarial training to promote subject-invariance during self-supervised learning. We also develop a number of time-series data augmentation techniques to be used with the contrastive loss for biosignals. Our method is evaluated on publicly available datasets of two different biosignals with different tasks: EEG decoding and ECG anomaly detection. The embeddings learned using self-supervision yield competitive classification results compared to entirely supervised methods. We show that subject-invariance improves representation quality for these tasks, and observe that the subject-specific loss increases performance when fine-tuning with supervised labels.
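One plausible reading of the subject-specific contrastive loss described above is an NT-Xent-style loss whose negatives are drawn only from the anchor's own subject, so the loss does not reward separating subjects rather than signal content. The sketch below illustrates that reading; the temperature, similarity measure, and negative-sampling rule are assumptions, not the paper's exact formulation, and the adversarial subject-invariance term is omitted.

```python
# NT-Xent-style contrastive loss with within-subject negatives only
# (one possible reading of a "subject-specific contrastive loss"; the paper's
# exact formulation may differ).
import numpy as np

def subject_contrastive_loss(z_anchor, z_positive, z_bank, subj_anchor, subj_bank, tau=0.1):
    """z_anchor, z_positive: (d,) embeddings of two augmented views of one biosignal window.
    z_bank: (n, d) embeddings of other windows; subj_bank: (n,) their subject ids.
    """
    def cossim(B, a):
        # cosine similarity between each row of B (n, d) and vector a (d,)
        return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-8)

    negatives = z_bank[subj_bank == subj_anchor]                   # same-subject negatives only
    pos = np.exp(cossim(z_positive[None, :], z_anchor)[0] / tau)   # positive-pair similarity
    neg = np.exp(cossim(negatives, z_anchor) / tau)                # negative similarities
    return -np.log(pos / (pos + neg.sum()))
```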
We present semi-supervised deep learning approaches for traversability estimation from fisheye images. Our method, GONet, and the proposed extensions leverage Generative Adversarial Networks (GANs) to effectively predict whether the area seen in the input image(s) is safe for a robot to traverse. These methods are trained with many positive images of traversable places, but just a small set of negative images depicting blocked and unsafe areas. This makes the proposed methods practical. Positive examples can be collected easily by simply operating a robot through traversable spaces, while obtaining negative examples is time consuming, costly, and potentially dangerous. Through extensive experiments and several demonstrations, we show that the proposed traversability estimation approaches are robust and can generalize to unseen scenarios. Further, we demonstrate that our methods are memory efficient and fast, allowing for real-time operation on a mobile robot with single or stereo fisheye cameras. As part of our contributions, we open-source two new datasets for traversability estimation. These datasets are composed of approximately 24h of videos from more than 25 indoor environments. Our methods outperform baseline approaches for traversability estimation on these new datasets.
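As a heavily simplified illustration of the semi-supervised setup above, the sketch below shows a decision rule in which a generative model trained only on traversable (positive) images reconstructs new views, a large reconstruction error flags an unsafe area, and the small negative set is used only to tune the threshold. The feature representation and threshold-selection rule are assumptions, not GONet's actual architecture.

```python
# Simplified reconstruction-error decision rule for traversability, in the
# spirit of training on mostly positive examples; features, generator and
# threshold selection are illustrative assumptions.
import numpy as np

def is_traversable(image_feat, reconstructed_feat, threshold):
    """Traversable if a generator trained on traversable scenes reconstructs the view well."""
    return np.linalg.norm(image_feat - reconstructed_feat) < threshold

def tune_threshold(pos_errors, neg_errors):
    """Pick a threshold from many positive-frame errors and the few available negatives."""
    candidates = np.concatenate([pos_errors, neg_errors])
    balanced_acc = [((pos_errors < t).mean() + (neg_errors >= t).mean()) / 2 for t in candidates]
    return candidates[int(np.argmax(balanced_acc))]
```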