
Multimodal Sensing and Interaction for a Robotic Hand Orthosis

Published by Sangwoo Park
Publication date: 2018
Research field: Informatics Engineering
Research language: English





Wearable robotic hand rehabilitation devices can allow greater freedom and flexibility than their workstation-like counterparts. However, the field generally lacks effective methods by which the user can operate the device: such controls must be effective, intuitive, and robust to the wide range of possible impairment patterns. Even when focusing on a specific condition, such as stroke, the variety of encountered upper limb impairment patterns means that a single sensing modality, such as electromyography (EMG), might not be sufficient to enable controls for a broad range of users. To address this significant gap, we introduce a multimodal sensing and interaction paradigm for an active hand orthosis. In our proof-of-concept implementation, EMG is complemented by other sensing modalities, such as finger bend and contact pressure sensors. We propose multimodal interaction methods that use this sensory data as input, and show that they can enable tasks for stroke survivors who exhibit different impairment patterns. We believe that robotic hand orthoses developed as multimodal sensory platforms will help address some of the key challenges in physical interaction with the user.
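The multimodal idea above can be illustrated with a minimal sketch: a control that fuses an EMG envelope, a finger-bend reading, and a fingertip-pressure reading into an open/close intent, so that users with weak or unreliable EMG can still trigger the orthosis through another modality. The function name and all threshold values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def detect_intent(emg_window, bend, pressure,
                  emg_thresh=0.3, bend_thresh=0.5, pressure_thresh=0.2):
    """Fuse three sensing modalities into a binary open/close intent.

    emg_window: recent rectified EMG samples (normalized 0..1)
    bend:       finger-bend sensor reading (0 = extended, 1 = fully flexed)
    pressure:   fingertip contact pressure (normalized 0..1)
    All thresholds are illustrative placeholders, not values from the paper.
    """
    emg_active = np.mean(np.abs(emg_window)) > emg_thresh
    # A user who cannot produce a reliable EMG signal can still trigger
    # the device through residual finger motion or contact pressure.
    motion_active = bend > bend_thresh
    contact_active = pressure > pressure_thresh
    return "close" if (emg_active or motion_active or contact_active) else "open"
```

In practice the per-user choice of which modalities to enable, and at what thresholds, is exactly where impairment-specific customization would enter.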




Read also

In order to provide therapy in a functional context, controls for wearable orthoses need to be robust and intuitive. We have previously introduced an intuitive, user-driven, EMG-based orthotic control, but the process of training a control that is robust to concept drift (changes in the input signal) places a substantial burden on the user. In this paper, we explore semi-supervised learning as a paradigm for wearable orthotic controls. We are the first to use semi-supervised learning for an orthotic application. We propose a K-means semi-supervision algorithm and a disagreement-based semi-supervision algorithm. This is an exploratory study designed to determine the feasibility of semi-supervised learning as a control paradigm for wearable orthotics. In offline experiments with stroke subjects, we show that these algorithms have the potential to reduce the training burden placed on the user, and that they merit further study.
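The K-means semi-supervision idea can be sketched as follows: cluster the pooled labeled and unlabeled feature vectors, then give each cluster the majority label of the labeled points it contains, so newly collected unlabeled EMG windows inherit labels without further user effort. This is a generic sketch of the technique, not the authors' exact algorithm or feature pipeline.

```python
import numpy as np

def kmeans_semisupervised(X_lab, y_lab, X_unlab, k=2, iters=20, seed=0):
    """Label unlabeled feature vectors via K-means semi-supervision.

    Clusters labeled + unlabeled data together, assigns each cluster the
    majority label among its labeled members, then labels the unlabeled
    points by their cluster. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    X = np.vstack([X_lab, X_unlab])
    # Initialize centroids from randomly chosen data points.
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Distance from every point to every centroid, then nearest-centroid
        # assignment and centroid update (standard Lloyd iteration).
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(axis=0)
    # Majority vote of labeled members decides each cluster's label.
    lab_assign = assign[:len(X_lab)]
    cluster_label = {}
    for j in range(k):
        members = y_lab[lab_assign == j]
        cluster_label[j] = np.bincount(members).argmax() if len(members) else -1
    return np.array([cluster_label[c] for c in assign[len(X_lab):]])
```

The appeal for concept drift is that the clusters track the current session's signal distribution, while the small labeled set only has to anchor the cluster-to-class mapping.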
This paper presents preliminary results of the design, development, and evaluation of a hand rehabilitation glove fabricated using a lobster-inspired hybrid design with rigid and soft components for actuation. Inspired by the bending abdomen of lobsters, hybrid actuators are built with serially jointed rigid shells actuated by pressurized soft chambers inside to generate bending motions. Such bio-inspiration absorbs features from classical rigid-bodied robotics, with precisely defined motion generation, as well as the emerging soft robotics, with light-weight, physically safe, and adaptive actuation. The fabrication procedure is described, followed by experiments to mechanically characterize these actuators. Finally, an open-palm glove design integrated with these hybrid actuators is presented for a qualitative case study. A hand rehabilitation system is developed by learning patterns of the sEMG signals from the user's forearm to train the assistive glove for hand rehabilitation exercises.
Li Tian, Hanhui Li, Qifa Wang (2020)
Most current anthropomorphic robotic hands can realize part of the human hand functions, particularly for object grasping. However, due to the complexity of the human hand, few current designs target daily object manipulations, even for simple actions like rotating a pen. To tackle this problem, we introduce a gesture-based framework, which adopts the widely used 33 grasping gestures of Feix as the bases for hand design and implementation of manipulation. In the proposed framework, we first measure the motion ranges of human fingers for each gesture, and based on the results, we propose a simple yet dexterous robotic hand design with 13 degrees of actuation. Furthermore, we adopt a frame-interpolation-based method, in which we consider the base gestures as the key frames to represent a manipulation task, and use a simple linear interpolation strategy to accomplish the manipulation. To demonstrate the effectiveness of our framework, we define a three-level benchmark, which includes not only 62 test gestures from previous research, but also multiple complex and continuous actions. Experimental results on this benchmark validate the dexterity of the proposed design, and our video is available at https://drive.google.com/file/d/1wPtkd2P0zolYSBW7_3tVMUHrZEeXLXgD/view?usp=sharing.
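The key-frame interpolation strategy described above can be sketched directly: given the joint-angle vectors of two base gestures, intermediate hand poses are generated by linear interpolation between them. The function and the joint vectors below are illustrative; the actual design uses 13 actuated degrees of freedom.

```python
import numpy as np

def interpolate_gestures(start_angles, end_angles, steps):
    """Generate a joint-angle trajectory between two base grasping gestures
    by linear interpolation, treating the gestures as key frames.

    Returns an array of shape (steps, n_joints); the first row is the start
    pose and the last row is the end pose.
    """
    start = np.asarray(start_angles, dtype=float)
    end = np.asarray(end_angles, dtype=float)
    # t sweeps 0..1 inclusive; each row is one intermediate hand pose.
    return np.array([(1 - t) * start + t * end
                     for t in np.linspace(0.0, 1.0, steps)])
```

A longer manipulation is then just a sequence of such segments, with each of Feix's base gestures acting as the key frame between consecutive segments.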
Soft robotic hands and grippers are increasingly attracting attention as robotic end-effectors. Compared with rigid counterparts, they are safer for human-robot and environment-robot interactions, easier to control, lower in cost and weight, and more compliant. Current soft robotic hands have mostly focused on soft fingers and bending actuators. However, the palm is also an essential part for grasping. In this work, we propose a novel design of a soft humanoid hand with pneumatic soft fingers and a soft palm. The hand is inexpensive to fabricate. The configuration of the soft palm is based on a modular design that can easily be adapted to actuate all kinds of previously proposed soft fingers. The splaying of the fingers, bending of the whole palm, and abduction and adduction of the thumb are implemented by the soft palm. Moreover, we present a new design of soft finger, called the hybrid bending soft finger (HBSF). It can both bend in the grasping axis and deflect in the side-to-side axis in a human-like motion. The functions of the HBSF and soft palm were simulated in the SOFA framework, and their performance was tested in experiments. Six fingers with 1 to 11 segments were tested and analyzed. The versatility of the soft hand is evaluated and verified by grasping experiments in a real scenario according to the Feix taxonomy. The results present the diversity of grasps and show promise for grasping a variety of objects with different shapes and weights.
In this work, we present a multimodal system for active robot-object interaction using laser-based SLAM, RGBD images, and contact sensors. In the object manipulation task, the robot adjusts its initial pose with respect to obstacles and target objects through RGBD data so it can perform object grasping in different configuration spaces while avoiding collisions, and updates the information related to the last steps of the manipulation process using the contact sensors in its hand. We perform a series of experiments to evaluate the performance of the proposed system following the RoboCup2018 international competition regulations. We compare our approach with a number of baselines, namely a no-feedback method and visual-only and tactile-only feedback methods, where our proposed visual-and-tactile feedback method performs best.