Assistive and wearable robotics have the potential to support humans with different types of motor impairments to live independently and fulfil their activities of daily living. The success of these robot systems, however, relies on the ability to meaningfully decode human action intentions and carry them out appropriately. Neural interfaces have been explored for such systems with some success; however, they tend to be invasive and require training periods on the order of months. We present a robotic system for human augmentation that can actuate the user's arm and fingers, effectively restoring the ability to reach, grasp, and manipulate objects, controlled solely through the user's eye movements. We combine wearable eye tracking, the visual context of the environment, and the structural grammar of human actions to create a cognitive-level assistive robotic setup that enables users to carry out activities of daily living while preserving interpretability and the user's agency. The interface is worn, calibrated, and ready to use within 5 minutes, and users learn to control and make successful use of the system with an additional 5 minutes of interaction. The system was tested with 5 healthy participants, showing an average first-attempt success rate of 96.6% across 6 tasks.
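The intention-decoding idea above can be sketched minimally: a gaze fixation on a recognized object, held long enough, is mapped through an action grammar to a robot action sequence. This is an illustrative sketch, not the authors' implementation; the object classes, grammar rules, and dwell threshold are all assumptions.

```python
# Illustrative action grammar: object class -> action sequence the
# robot executes on the user's behalf. Entries here are assumptions.
ACTION_GRAMMAR = {
    "cup": ["reach", "grasp", "lift"],
    "door_handle": ["reach", "grasp", "pull"],
}

def decode_intention(fixated_object, dwell_time_s, dwell_threshold_s=0.5):
    """Return the planned action sequence once the user's gaze dwells
    long enough on a known object; otherwise return None."""
    if dwell_time_s < dwell_threshold_s:
        return None  # fixation too short: treat as visual scanning
    return ACTION_GRAMMAR.get(fixated_object)

print(decode_intention("cup", 0.8))  # ['reach', 'grasp', 'lift']
print(decode_intention("cup", 0.2))  # None (still scanning)
```

Gating on dwell time is one common way to separate deliberate selection from natural visual scanning in gaze interfaces.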
The ability to adapt to uncertainties, recover from failures, and coordinate the hand and fingers are essential sensorimotor skills for fully autonomous robotic grasping. In this paper, we study a unified feedback control policy that generates both the finger actions and the motion of the hand to accomplish seamlessly coordinated reaching, grasping, and re-grasping. We propose a set of quantified metrics as task-oriented rewards to guide policy exploration, and we analyze and demonstrate the effectiveness of each reward term. To acquire a robust re-grasping motion, we use varied initial states during training so the policy experiences the failures a robot would encounter during grasping due to inaccurate perception or disturbances. The learned policy is evaluated on three tasks: grasping a static target, grasping a dynamic target, and re-grasping. Grasp quality is assessed through success rates in different scenarios and the recovery time from failures. The results indicate that the learned policy achieves stable grasps of static and moving objects. Moreover, the policy adapts to environmental changes on the fly and executes a collision-free re-grasp after a failed attempt within a short recovery time, even in difficult configurations.
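The task-oriented reward shaping described above can be sketched as a weighted sum of per-term metrics. This is a generic illustration, not the paper's exact reward; the specific terms (reaching distance, finger contacts, lift bonus) and weights are assumptions.

```python
# Hedged sketch of task-oriented reward shaping for a grasping policy.
# Terms and weights are illustrative assumptions, not the paper's metrics.

def grasp_reward(hand_to_obj_dist, finger_contacts, obj_lifted,
                 w_reach=1.0, w_contact=0.2, w_lift=5.0):
    """Sum of a dense reaching term (closer is better), a contact term
    (more finger contacts is better), and a sparse lift bonus."""
    r_reach = -w_reach * hand_to_obj_dist   # penalize distance to object
    r_contact = w_contact * finger_contacts  # reward finger-object contacts
    r_lift = w_lift if obj_lifted else 0.0   # sparse task-success bonus
    return r_reach + r_contact + r_lift
```

Separating the terms like this is what makes it possible to ablate each reward component and measure its individual contribution, as the abstract describes.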
We present the design, implementation, and evaluation of RF-Grasp, a robotic system that can grasp fully occluded objects in unknown and unstructured environments. Unlike prior systems constrained by the line-of-sight perception of vision and infrared sensors, RF-Grasp employs RF (Radio Frequency) perception to identify and locate target objects through occlusions, and to perform efficient exploration and complex manipulation tasks in non-line-of-sight settings. RF-Grasp relies on an eye-in-hand camera and batteryless RFID tags attached to objects of interest. It introduces two main innovations: (1) an RF-visual servoing controller that uses the RFID's location to selectively explore the environment and plan an efficient trajectory toward an occluded target, and (2) an RF-visual deep reinforcement learning network that can learn and execute efficient, complex policies for decluttering and grasping. We implemented and evaluated an end-to-end physical prototype of RF-Grasp and demonstrate that it improves success rate and efficiency by up to 40-50% over a state-of-the-art baseline. We also demonstrate RF-Grasp on novel tasks such as mechanical search for fully occluded objects behind obstacles, opening up new possibilities for robotic manipulation. Qualitative results (videos) are available at rfgrasp.media.mit.edu.
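The servoing component can be sketched as a simple proportional step toward the RFID-estimated target position. This is a minimal sketch of the general idea, not RF-Grasp's controller; the gain, step clipping, and positions are all illustrative assumptions.

```python
import math

# Hedged sketch: one step of servoing the eye-in-hand camera toward an
# RFID-estimated target position. Gain and clipping are assumptions.

def servo_step(cam_pos, tag_pos, gain=0.5, max_step=0.05):
    """Move a fraction of the way toward the tag, with the step length
    clipped to max_step (meters) for safety."""
    delta = [t - c for t, c in zip(tag_pos, cam_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist < 1e-9:
        return list(cam_pos)  # already at the target estimate
    step = min(gain * dist, max_step)
    return [c + step * d / dist for c, d in zip(cam_pos, delta)]
```

In the actual system this direction would be fused with visual obstacle information to route around occluders rather than drive straight through them.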
Conventional works that learn grasping affordances from demonstrations need to explicitly predict grasping configurations, such as gripper approach angles or grasp preshapes, from which classic motion planners can then sample trajectories. In this work, our goal is instead to bridge the gap between affordance discovery and affordance-based policy learning by integrating the two objectives in an end-to-end imitation learning framework based on deep neural networks. From a psychological perspective, there is a close association between attention and affordance. With an end-to-end neural network, we therefore propose to learn affordance cues as visual attention, which serves as a useful signal of how a demonstrator accomplishes tasks, instead of explicitly modeling affordances. To achieve this, we propose a contrastive learning framework consisting of a Siamese encoder and a trajectory decoder, and we further introduce a coupled triplet loss to encourage the discovered affordance cues to be more affordance-relevant. Our experimental results demonstrate that our model with the coupled triplet loss achieves the highest grasping success rate in a simulated robot environment. Our project website can be accessed at https://sites.google.com/asu.edu/affordance-aware-imitation/project.
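The building block of the loss above is the standard margin-based triplet loss over Siamese embeddings, sketched below. The margin and the toy embeddings are assumptions; the paper's "coupled" variant adds a second, coupled triplet term on top of this basic form.

```python
# Standard margin-based triplet loss over embedding vectors; the
# paper's coupled variant builds on this basic form.

def l2(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor toward the positive embedding and push it away
    from the negative one, until separated by at least the margin."""
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)

# Loss is zero once the negative is at least `margin` farther than the positive:
print(triplet_loss([0.0, 0.0], [0.0, 0.0], [3.0, 0.0]))  # 0.0
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [1.0, 0.0]))  # 1.0
```

In the affordance setting, triplets would pair attention features from demonstrations of the same task (positives) against features from different tasks (negatives), pushing the discovered cues to be task-discriminative.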
This work provides an architecture for robotic grasp planning via shape completion, accomplished through the use of a 3D convolutional neural network (CNN). The network is trained on our own new open-source dataset of over 440,000 3D exemplars captured from varying viewpoints. At runtime, a 2.5D point cloud captured from a single viewpoint is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object. Runtime shape completion is very rapid because most of its computational cost is borne during offline training. We explore how completion quality varies with several factors, including whether the object being completed existed in the training data and how many object models were used to train the network. We also examine the network's ability to generalize to novel objects, allowing the system to complete previously unseen objects at runtime. Finally, experiments both in simulation and on physical robotic hardware explore the relationship between completion quality and the utility of the completed mesh model for grasping.
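The first stage of such a runtime pipeline is turning the 2.5D point cloud into the occupancy volume a 3D CNN consumes. A minimal voxelization sketch is shown below; the grid resolution and workspace bounds are illustrative assumptions, and the trained completion network itself is omitted.

```python
# Minimal sketch: voxelize a 2.5D point cloud into the binary occupancy
# grid a 3D shape-completion CNN would take as input. Grid size and
# workspace bounds are illustrative assumptions.

def voxelize(points, grid=40, lo=-0.5, hi=0.5):
    """Map 3D points (meters) into a grid x grid x grid occupancy
    volume; points outside the [lo, hi] cube are discarded."""
    vox = [[[0] * grid for _ in range(grid)] for _ in range(grid)]
    scale = grid / (hi - lo)
    for x, y, z in points:
        i, j, k = (int((v - lo) * scale) for v in (x, y, z))
        if all(0 <= idx < grid for idx in (i, j, k)):
            vox[i][j][k] = 1
    return vox
```

The CNN then maps this partial occupancy grid to a completed one, from which a mesh can be extracted (e.g. via marching cubes) for grasp planning, matching the abstract's point that nearly all of the cost sits in offline training rather than this cheap runtime path.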
Soft robotic hands and grippers are attracting increasing attention as robotic end-effectors. Compared with rigid counterparts, they are safer for human-robot and environment-robot interactions, easier to control, lower in cost and weight, and more compliant. Current soft robotic hands have mostly focused on soft fingers and bending actuators; however, the palm is also an essential part of grasping. In this work, we propose a novel soft humanoid hand design with pneumatic soft fingers and a soft palm that is inexpensive to fabricate. The soft palm follows a modular design that can easily be adapted to actuate previously proposed soft finger types. Splaying of the fingers, bending of the whole palm, and abduction and adduction of the thumb are all implemented by the soft palm. Moreover, we present a new soft finger design, the hybrid bending soft finger (HBSF), which can both bend along the grasping axis and deflect along the side-to-side axis in a human-like motion. The functions of the HBSF and the soft palm were simulated in the SOFA framework, and their performance was tested in experiments. Six fingers with 1 to 11 segments were tested and analyzed. The versatility of the soft hand was evaluated through grasping experiments in real scenarios according to the Feix taxonomy. The results show diverse grasps and promise for grasping a variety of objects with different shapes and weights.