We present pneumatic shape-shifting fingers that enable a simple parallel-jaw gripper to perform different manipulation modalities. By changing the finger geometry, the gripper effectively changes the contact type between the fingers and an object to facilitate distinct manipulation primitives. In this paper, we demonstrate the development and application of shape-shifting fingers to reorient and grasp cylindrical objects. The shape of the fingers changes with the air pressure inside them, attaining two distinct geometric forms at high and low pressure. In our implementation, the finger shape switches between a wedge-shaped geometry and a V-shaped geometry at high and low pressure, respectively. Using the wedge-shaped geometry, the fingers provide a point contact on a cylindrical object to pivot it to a vertical pose under the effect of gravity. By changing to the V-shaped geometry, the fingers localize the object in the vertical pose and securely hold it. Experimental results show that the smooth transition between the two contact types allows a robot with a simple gripper to reorient a cylindrical object lying horizontally on the ground and to grasp it in a vertical pose.
Robotic grasping methods based on sparse partial point clouds have achieved strong grasping performance on various objects, but they often generate incorrect grasp candidates due to the lack of geometric information about the object. In this work, we propose a novel and robust shape completion model (TransSC), which takes a partial point cloud as input and uses a transformer-based encoder to extract richer point-wise features and a manifold-based decoder to recover finer object details. Quantitative experiments verify the effectiveness of the proposed shape completion network and demonstrate that it outperforms existing methods. In addition, TransSC is integrated into a grasp evaluation network to generate a set of grasp candidates. Simulation experiments show that TransSC improves grasp generation compared to existing shape completion baselines, and our robotic experiments show that with TransSC the robot is more successful at grasping objects randomly placed on a support surface.
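Completed point clouds are commonly scored against ground truth with the symmetric Chamfer distance; the abstract does not name its metric, so the sketch below uses Chamfer distance purely as an illustrative assumption, with random stand-in clouds:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

gt = np.random.default_rng(0).uniform(size=(128, 3))  # stand-in "ground truth" cloud
shifted = gt + 0.05                                   # an imperfect completion
```

A perfect completion scores zero, and the score grows as the completed cloud drifts from the ground-truth surface, which is what makes it usable as a training or evaluation signal for a shape completion network.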
Reliable robotic grasping in unstructured environments is a crucial but challenging task. The main problem is to generate the optimal grasp of novel objects from partial, noisy observations. This paper presents REGNet, an end-to-end grasp detection network that takes a single-view point cloud as input to tackle this problem. Our network comprises three stages: Score Network (SN), Grasp Region Network (GRN), and Refine Network (RN). Specifically, SN regresses per-point grasp confidence and selects positive points with high confidence. GRN then predicts grasp proposals on the selected positive points, and RN refines the proposals from GRN into more accurate grasps. To further improve performance, we propose a grasp anchor mechanism in which grasp anchors with assigned gripper orientations are introduced to generate grasp proposals. Experiments demonstrate that REGNet achieves a success rate of 79.34% and a completion rate of 96% in real-world clutter, significantly outperforming several state-of-the-art point-cloud-based methods, including GPD, PointNetGPD, and S4G. The code is available at https://github.com/zhaobinglei/REGNet_for_3D_Grasping.
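The three-stage cascade can be sketched as a filter-then-refine pipeline. The functions below are placeholders (random confidences, dummy proposal fields) that only illustrate the SN → GRN → RN data flow, not the actual REGNet layers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-view point cloud: 1024 points, xyz coordinates.
points = rng.uniform(-0.1, 0.1, size=(1024, 3))

def score_network(pts):
    # SN: per-point grasp confidence (placeholder: random scores).
    return rng.uniform(0.0, 1.0, size=len(pts))

def grasp_region_network(pts):
    # GRN: one proposal per positive point -- center (3) + approach
    # direction (3) + gripper width (1), filled with dummy values here.
    approach = np.tile([0.0, 0.0, -1.0], (len(pts), 1))
    width = np.full((len(pts), 1), 0.08)
    return np.hstack([pts, approach, width])

def refine_network(proposals):
    # RN: small corrective offset on each proposal (placeholder).
    refined = proposals.copy()
    refined[:, :3] += 0.001
    return refined

conf = score_network(points)
positive = points[conf > 0.8]            # keep only high-confidence points
proposals = grasp_region_network(positive)
grasps = refine_network(proposals)
```

The point of the cascade is that expensive proposal prediction and refinement run only on the small high-confidence subset, not on the full cloud.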
Pneumatic muscle actuators (PMAs) are easy to fabricate, lightweight, and compliant, and have a high power-to-weight ratio, making them an ideal actuation choice for many soft and continuum robots. So far, however, limited work has been carried out on the dynamic control of PMAs. One reason is that PMAs are highly hysteretic; coupled with their high compliance and response lag, this makes them challenging to control, particularly when subjected to external loads. The hysteresis models proposed to date rely on many physical and mechanical parameters that are difficult to measure reliably and are therefore of limited use for implementing dynamic control. In this work, we employ a Bouc-Wen hysteresis modeling approach to account for the hysteresis of PMAs and use the model to implement dynamic control. The controller is compared against PID feedback control on a number of dynamic position-tracking tests, and the dynamic control based on the Bouc-Wen hysteresis model shows significantly better tracking performance. This work lays the foundation for dynamic control of PMA-powered high-degree-of-freedom soft and continuum robots.
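For concreteness, a minimal discrete-time Bouc-Wen simulation is sketched below. The parameter values are illustrative defaults, not identified PMA constants, and the abstract does not specify the exact model variant used:

```python
import numpy as np

def bouc_wen_force(x, k=1.0, alpha=0.5, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Restoring force of the (rate-independent) Bouc-Wen hysteresis model.

    Hysteretic state:  dz = A*dx - beta*|dx|*|z|**(n-1)*z - gamma*dx*|z|**n
    Output force:      F  = alpha*k*x + (1 - alpha)*k*z
    All parameter values here are illustrative, not identified from a PMA.
    """
    z = 0.0
    F = np.empty_like(x)
    for i in range(len(x)):
        dx = x[i] - x[i - 1] if i > 0 else 0.0
        z += A * dx - beta * abs(dx) * abs(z) ** (n - 1) * z - gamma * dx * abs(z) ** n
        F[i] = alpha * k * x[i] + (1.0 - alpha) * k * z
    return F

t = np.linspace(0.0, 4.0 * np.pi, 2000)
x = np.sin(t)              # cyclic displacement input
F = bouc_wen_force(x)      # force lags x, tracing a hysteresis loop
```

Because the internal state `z` lags the input, the force at a given displacement differs on the loading and unloading branches, which is exactly the loop behavior a model-based controller can compensate for, e.g. as a feedforward term alongside PID feedback.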
After a grasp has been planned, if the object orientation changes, the initial grasp may, but does not always, need to be modified to accommodate the change. For example, rotating a cylinder by any amount around its centerline does not change its geometric shape relative to the grasper. Objects that can be approximated as solids of revolution, or that contain other geometric symmetries, are prevalent in everyday life, and this information can be employed to improve the efficiency of existing grasp planning models. This paper experimentally investigates how human-planned grasps change under varied object orientations. Across 13,440 recorded human grasps, our results indicate that in pick-and-place tasks with ordinary objects, stable grasps can be achieved with a small subset of grasp types, and the wrist-related parameters follow a normal distribution. Furthermore, we show that this knowledge allows faster convergence of a grasp planning algorithm.
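A sketch of how the normal-distribution finding could seed a planner is given below, using hypothetical wrist-angle data; the actual dataset, parameter names, and fitted values are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wrist-roll angles (radians) recorded from human grasps;
# the paper reports that such wrist-related parameters are normally
# distributed, so we synthesize data from an assumed Gaussian.
wrist_roll = rng.normal(loc=0.3, scale=0.15, size=500)

# Maximum-likelihood Gaussian fit: sample mean and standard deviation.
mu, sigma = wrist_roll.mean(), wrist_roll.std(ddof=1)

# A planner can seed candidate wrist orientations from the human prior
# instead of sampling uniformly, which speeds convergence.
candidates = rng.normal(mu, sigma, size=32)
```

Sampling from the fitted prior concentrates the search near orientations humans actually use, which is the mechanism behind the faster convergence claimed in the abstract.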
Robotic grasp detection is a fundamental capability for intelligent manipulation in unstructured environments. Previous work mainly employed visual and tactile fusion to achieve stable grasps, but the whole process depends heavily on regrasping, which wastes much time on regulation and evaluation. We propose a novel way to improve robotic grasping: using learned tactile knowledge, a robot can achieve a stable grasp from a single image. First, we construct a prior tactile knowledge learning framework with a novel grasp quality metric determined by measuring a grasp's resistance to external perturbations. Second, we propose a multi-phase Bayesian Grasp architecture that generates stable grasp configurations from a single RGB image based on the prior tactile knowledge. Results show that this framework classifies grasp outcomes with an average accuracy of 86% on known objects and 79% on novel objects. The prior tactile knowledge improves the success rate by 55% over traditional vision-based strategies.
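The Bayesian step can be illustrated with a toy posterior update: a tactile-derived prior on grasp stability is combined with the likelihood of an image cue. All probabilities below are made-up placeholders, not numbers from the paper:

```python
# Prior from tactile trials (hypothetical): fraction of grasps that held
# under external perturbation during prior tactile knowledge collection.
p_stable = 0.6

# Hypothetical likelihoods of observing an image cue (e.g. "antipodal
# contact visible") given a stable vs. unstable grasp.
p_cue_given_stable = 0.9
p_cue_given_unstable = 0.3

# Bayes' rule: P(stable | cue) = P(cue | stable) * P(stable) / P(cue).
p_cue = p_cue_given_stable * p_stable + p_cue_given_unstable * (1.0 - p_stable)
posterior = p_cue_given_stable * p_stable / p_cue
```

When the cue is more likely under stable grasps than unstable ones, the posterior rises above the tactile prior, letting the robot commit to a grasp from the image alone instead of regrasping to verify it.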