
User-Driven Functional Movement Training with a Wearable Hand Robot after Stroke

Added by Sangwoo Park
Publication date: 2019
Language: English





We studied the performance of a robotic orthosis designed to assist the paretic hand after stroke. The device is wearable and fully user-controlled, and it can serve two possible roles: as a therapeutic tool that facilitates device-mediated hand exercises to recover neuromuscular function, or as an assistive device that aids functional use of the hand in everyday activities. We present the clinical outcomes of a pilot study designed as a feasibility test of these two roles. Eleven chronic stroke patients (> 2 years post-stroke) with moderate muscle tone (Modified Ashworth Scale ≤ 2 in the upper extremity) engaged in a month-long training protocol using the orthosis. Individuals were evaluated using standardized outcome measures, both with and without orthosis assistance. Post-intervention Fugl-Meyer scores without robotic assistance showed improvement focused specifically at the distal joints of the upper limb, supporting use of the orthosis as a rehabilitative device for the hand. Post-intervention Action Research Arm Test scores with robotic assistance showed that the device may serve an assistive role in grasping tasks. These results highlight the potential of wearable, user-driven robotic hand orthoses to extend the use and training of the affected upper limb after stroke.



Related research

Humans use tactile sensing when grasping to keep objects from being dropped. One key facet of tactile sensing is slip detection, which allows a gripper to know when a grasp is failing and to take action to prevent an object being dropped. This study demonstrates the slip detection capabilities of the recently developed Tactile Model O (T-MO), using support vector machines to detect slip and testing multiple slip scenarios, including responding to the onset of slip in real time, with eleven different objects in various grasps. We demonstrate the benefits of slip detection in grasping by testing two real-world scenarios: adding weight to destabilise a grasp, and using slip detection to lift objects on the first attempt. The T-MO is able to detect when an object is slipping, react to stabilise the grasp, and be deployed in real-world scenarios. This shows that the T-MO is a suitable platform for autonomous grasping, using reliable slip detection to ensure a stable grasp in unstructured environments. Supplementary video: https://youtu.be/wOwFHaiHuKY
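As a rough, hypothetical illustration of this kind of pipeline (not the T-MO authors' actual implementation), a binary SVM slip classifier over windowed tactile statistics might look like the sketch below; the taxel count, window length, and feature choice are all assumptions made for illustration.

```python
# Hypothetical sketch of SVM-based slip detection from windowed tactile data.
# Array shapes and features are illustrative assumptions, not the T-MO pipeline.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(signals, window=20):
    """Split a (T, n_taxels) tactile stream into fixed-length windows and
    summarize each window with simple statistics."""
    n = signals.shape[0] // window
    feats = []
    for i in range(n):
        w = signals[i * window:(i + 1) * window]
        # Mean activation plus mean temporal-derivative magnitude per taxel:
        # slip tends to show up as rapid changes in taxel readings.
        feats.append(np.hstack([w.mean(axis=0),
                                np.abs(np.diff(w, axis=0)).mean(axis=0)]))
    return np.array(feats)

rng = np.random.default_rng(0)
stream = rng.normal(size=(2000, 19))        # placeholder for recorded taxel data
X = window_features(stream)
y = rng.integers(0, 2, size=len(X))         # placeholder labels: 1 = slip, 0 = stable

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Online use: classify each incoming window and react to slip onset.
new_window = window_features(stream[:20])
if clf.predict(new_window)[0] == 1:
    print("slip detected -> increase grip force")
```

In a real deployment the labels would come from annotated slip events, and the final prediction step would run inside the gripper's control loop so the grasp can be stabilised as slip begins.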
Teaching an anthropomorphic robot from human example offers the opportunity to impart humanlike qualities to its movement. In this work we present a reinforcement-learning-based method for teaching a real-world bipedal robot to perform movements directly from human motion capture data. Our method transitions seamlessly from training in a simulation environment to execution on a physical robot, without requiring any real-world training iterations or offline steps. To overcome the disparity in joint configurations between the robot and the motion capture actor, our method incorporates motion retargeting into the training process. Domain randomization techniques are used to compensate for differences between the simulated and physical systems. We demonstrate our method on an internally developed humanoid robot with movements ranging from a dynamic walk cycle to complex balancing and waving. Our controller preserves the style imparted by the motion capture data and exhibits graceful failure modes, resulting in safe operation for the robot. This work was performed for research purposes only.
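The core idea of domain randomization can be sketched in a few lines: resample the simulator's physical parameters every training episode so the learned controller cannot overfit to any single configuration. The toy simulator, parameter names, ranges, and the crude finite-difference update below are illustrative assumptions, not the paper's method.

```python
# Minimal, hypothetical sketch of per-episode domain randomization for
# sim-to-real transfer. Everything here is a toy stand-in for illustration.
import random

class ToySim:
    """Stand-in for a physics simulator with tunable dynamics."""
    def __init__(self):
        self.params = {"joint_friction": 1.0, "actuation_delay_ms": 0.0}

    def tracking_error(self, gain):
        # Toy cost: how far a controller gain is from compensating friction,
        # plus a penalty that grows with actuation delay.
        return (abs(gain * self.params["joint_friction"] - 1.0)
                + 0.01 * self.params["actuation_delay_ms"])

def randomize_dynamics(sim):
    # Resample dynamics every episode so the controller cannot overfit
    # to one simulator configuration.
    sim.params["joint_friction"] = random.uniform(0.8, 1.2)
    sim.params["actuation_delay_ms"] = random.uniform(0.0, 20.0)

sim, gain, lr = ToySim(), 0.0, 0.05
for episode in range(500):
    randomize_dynamics(sim)
    # Crude finite-difference "policy update" on this episode's error,
    # standing in for the paper's reinforcement-learning update.
    g = (sim.tracking_error(gain + 1e-3) - sim.tracking_error(gain)) / 1e-3
    gain -= lr * g
# gain settles at a value that is robust across the randomized friction range.
```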
In order to provide therapy in a functional context, controls for wearable orthoses need to be robust and intuitive. We have previously introduced an intuitive, user-driven, EMG-based orthotic control, but the process of training a control that is robust to concept drift (changes in the input signal) places a substantial burden on the user. In this paper, we explore semi-supervised learning as a paradigm for wearable orthotic controls; we are the first to use semi-supervised learning for an orthotic application. We propose a K-means-based and a disagreement-based semi-supervision algorithm. This is an exploratory study designed to determine the feasibility of semi-supervised learning as a control paradigm for wearable orthotics. In offline experiments with stroke subjects, we show that these algorithms have the potential to reduce the training burden placed on the user and that they merit further study.
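A minimal sketch of the K-means flavor of semi-supervision, under the assumption that it propagates labels from a small labeled set to clusters of unlabeled data before retraining the classifier; the features, shapes, and label-propagation rule here are illustrative, not the authors' exact algorithm.

```python
# Hypothetical sketch of K-means-based semi-supervision for an EMG classifier.
# Feature dimensions, cluster count, and the propagation rule are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(40, 8))        # small labeled EMG feature set
y_labeled = rng.integers(0, 2, size=40)     # 0 = relax, 1 = hand-open intent
X_unlabeled = rng.normal(size=(400, 8))     # abundant unlabeled session data

# Cluster labeled + unlabeled data together, then give every unlabeled point
# the majority label of the labeled points that fall in its cluster.
X_all = np.vstack([X_labeled, X_unlabeled])
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_all)
labeled_clusters = km.labels_[:len(X_labeled)]
unlabeled_clusters = km.labels_[len(X_labeled):]
pseudo = np.empty(len(X_unlabeled), dtype=int)
for c in range(km.n_clusters):
    mask = labeled_clusters == c
    majority = np.bincount(y_labeled[mask]).argmax() if mask.any() else 0
    pseudo[unlabeled_clusters == c] = majority

# Retrain the control classifier on true + pseudo-labels, so new unlabeled
# session data can help the control adapt to concept drift.
clf = LogisticRegression().fit(X_all, np.hstack([y_labeled, pseudo]))
```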
Infants' spontaneous and voluntary movements mirror the developmental integrity of brain networks, since they require coordinated activation of multiple sites in the central nervous system. Accordingly, early detection of infants with atypical motor development holds promise for recognizing those infants who are at risk for a wide range of neurodevelopmental disorders (e.g., cerebral palsy, autism spectrum disorders). Wearable technology has previously shown promise for offering efficient, scalable, and automated methods for movement assessment in adults. Here, we describe the development of an infant wearable, a multi-sensor smart jumpsuit that allows mobile accelerometer and gyroscope data collection during movements. Using this suit, we first recorded play sessions of 22 typically developing infants of approximately 7 months of age. These data were manually annotated for infant posture and movement based on video recordings of the sessions, using a novel annotation scheme specifically designed to assess the overall movement pattern of infants in the given age group. A machine learning algorithm based on deep convolutional neural networks (CNNs) was then trained for automatic detection of posture and movement classes using the data and annotations. Our experiments show that the setup can be used for quantitative tracking of infant movement activities with human-equivalent accuracy, i.e., it meets human inter-rater agreement levels in infant posture and movement classification. We also quantify the ambiguity of human observers in analyzing infant movements, and propose a method for utilizing this uncertainty to improve performance when training the automated classifier. A comparison of different sensor configurations also shows that four-limb recording leads to the best performance in posture and movement classification.
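For illustration, a small 1-D CNN over multi-sensor accelerometer/gyroscope windows might be structured as below; the channel count, window length, layer sizes, and number of classes are assumptions made for the sketch, not the paper's architecture.

```python
# Hypothetical sketch of a 1-D CNN posture/movement classifier over IMU windows.
import torch
import torch.nn as nn

NUM_CHANNELS = 24   # assumed: 4 limb sensors x (3-axis accel + 3-axis gyro)
NUM_CLASSES = 7     # assumed number of posture/movement categories

class IMUConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> fixed-size embedding
        )
        self.head = nn.Linear(128, NUM_CLASSES)

    def forward(self, x):              # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = IMUConvNet()
window = torch.randn(8, NUM_CHANNELS, 100)   # 8 windows of 100 samples each
logits = model(window)                        # (8, NUM_CLASSES) class scores
```

Training such a model against soft labels that encode inter-rater disagreement (rather than hard one-hot targets) is one common way to exploit annotator uncertainty of the kind the authors describe.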
In this paper, we present a multimodal mobile teleoperation system that consists of a novel vision-based hand pose regression network (Transteleop) and an IMU-based arm tracking method. Transteleop observes the human hand through a low-cost depth camera and generates not only joint angles but also depth images of paired robot hand poses through an image-to-image translation process. A keypoint-based reconstruction loss explores the resemblance in appearance and anatomy between human and robotic hands and enriches the local features of reconstructed images. A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system. Network evaluation results on a test dataset and a variety of complex manipulation tasks that go beyond simple pick-and-place operations show the efficiency and stability of our multimodal teleoperation system.
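A keypoint-based reconstruction loss of the kind described can be sketched as a weighted sum of an image-reconstruction term and a landmark-consistency term; the weighting, loss choices, and tensor shapes below are illustrative assumptions rather than Transteleop's exact formulation.

```python
# Minimal, hypothetical sketch of a keypoint-based reconstruction loss:
# an image term keeps the generated depth image close to the target, and a
# keypoint term keeps hand landmarks anatomically consistent.
import torch
import torch.nn.functional as F

def keypoint_recon_loss(pred_depth, target_depth, pred_kp, target_kp,
                        kp_weight=10.0):
    """pred_depth/target_depth: (B, 1, H, W) depth images;
    pred_kp/target_kp: (B, K, 2) landmark pixel coordinates."""
    recon = F.l1_loss(pred_depth, target_depth)   # global appearance term
    landmarks = F.mse_loss(pred_kp, target_kp)    # local anatomical term
    return recon + kp_weight * landmarks

# Placeholder tensors standing in for network outputs and ground truth.
B, K = 4, 21
pred_depth = torch.rand(B, 1, 64, 64, requires_grad=True)
pred_kp = torch.rand(B, K, 2, requires_grad=True)
loss = keypoint_recon_loss(pred_depth, torch.rand(B, 1, 64, 64),
                           pred_kp, torch.rand(B, K, 2))
loss.backward()  # gradients flow to both the image and keypoint predictions
```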