
Joint Inference of Kinematic and Force Trajectories with Visuo-Tactile Sensing

Published by: Alexander Lambert
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





To perform complex tasks, robots must be able to interact with and manipulate their surroundings. One of the key challenges in accomplishing this is robust state estimation during physical interactions, where the state involves not only the robot and the object being manipulated, but also the state of the contact itself. In this work, within the context of planar pushing, we extend previous inference-based approaches to state estimation in several ways. We estimate the robot, object, and the contact state on multiple manipulation platforms configured with a vision-based articulated model tracker, and either a biomimetic tactile sensor or a force-torque sensor. We show how to fuse raw measurements from the tracker and tactile sensors to jointly estimate the trajectory of the kinematic states and the forces in the system via probabilistic inference on factor graphs, in both batch and incremental settings. We perform several benchmarks with our framework and show how performance is affected by incorporating various geometric and physics-based constraints, occluding vision sensors, or injecting noise in tactile sensors. We also compare with prior work on multiple datasets and demonstrate that our approach can effectively optimize over multi-modal sensor data and reduce uncertainty to find better state estimates.
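
As a rough illustration of the batch setting described above, the sketch below jointly optimizes an object-pose trajectory and a contact-force trajectory against synthetic vision and force measurements, with a crude quasi-static motion factor tying the two together. The measurement arrays, the mobility constant MU, and all noise values are invented for the example, and a plain nonlinear least-squares solver stands in for the factor-graph machinery that the paper's batch and incremental formulations actually use.

```python
# A minimal sketch of batch joint estimation of object poses and contact forces
# for planar pushing, in the spirit of the factor-graph formulation described
# above. This is NOT the authors' implementation: the motion model is heavily
# simplified and the noise values are made up.
import numpy as np
from scipy.optimize import least_squares

T = 20                      # number of time steps
rng = np.random.default_rng(0)

# Hypothetical raw measurements (stand-ins for tracker and tactile data):
# vision_meas[t] = noisy object pose (x, y, theta) from the articulated tracker
# force_meas[t]  = noisy planar contact force (fx, fy) from the tactile sensor
vision_meas = np.cumsum(rng.normal(0.0, 0.01, (T, 3)), axis=0)
force_meas = np.tile([1.0, 0.0], (T, 1)) + rng.normal(0.0, 0.05, (T, 2))

MU = 0.1                                   # assumed quasi-static mobility: velocity ~ MU * force
SIG_V, SIG_F, SIG_M = 0.02, 0.05, 0.005    # assumed factor noise (vision, force, motion)

def residuals(z):
    """Stack all factor residuals for the whole trajectory (batch setting)."""
    poses = z[:3 * T].reshape(T, 3)        # object pose trajectory
    forces = z[3 * T:].reshape(T, 2)       # contact force trajectory
    r = []
    r.append(((poses - vision_meas) / SIG_V).ravel())     # vision factors
    r.append(((forces - force_meas) / SIG_F).ravel())     # force/tactile factors
    # Quasi-static motion factors: translational velocity follows the applied force.
    dxy = poses[1:, :2] - poses[:-1, :2]
    r.append(((dxy - MU * forces[:-1]) / SIG_M).ravel())
    return np.concatenate(r)

z0 = np.concatenate([vision_meas.ravel(), force_meas.ravel()])  # initialize at measurements
sol = least_squares(residuals, z0)
poses_hat = sol.x[:3 * T].reshape(T, 3)
forces_hat = sol.x[3 * T:].reshape(T, 2)
print("estimated final pose:", poses_hat[-1])
```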


Read also

Estimation of tactile properties from vision, such as slipperiness or roughness, is important to effectively interact with the environment. These tactile properties help us decide which actions we should choose and how to perform them. E.g., we can drive slower if we see that we have bad traction or grasp tighter if an item looks slippery. We believe that this ability also helps robots to enhance their understanding of the environment, and thus enables them to tailor their actions to the situation at hand. We therefore propose a model to estimate the degree of tactile properties from visual perception alone (e.g., the level of slipperiness or roughness). Our method extends an encoder-decoder network, in which the latent variables are visual and tactile features. In contrast to previous works, our method does not require manual labeling, but only RGB images and the corresponding tactile sensor data. All our data is collected with a webcam and a uSkin tactile sensor mounted on the end-effector of a Sawyer robot, which strokes the surfaces of 25 different materials. We show that our model generalizes to materials not included in the training data by evaluating the feature space, indicating that it has learned to associate important tactile properties with images.
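
For a concrete picture of the cross-modal idea, here is a minimal, hypothetical PyTorch sketch: an RGB image is encoded into a latent code that is decoded both back into the image and into the paired tactile reading, so tactile features can later be inferred from vision alone. The layer sizes, latent dimension, tactile dimension, and loss weighting are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical cross-modal encoder-decoder: RGB in, image + tactile signal out.
import torch
import torch.nn as nn

class VisuoTactileAE(nn.Module):
    def __init__(self, tactile_dim=16, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(          # RGB image -> latent code
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.image_decoder = nn.Sequential(    # latent -> reconstructed image
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.tactile_decoder = nn.Sequential(  # latent -> predicted tactile signal
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, tactile_dim),
        )

    def forward(self, rgb):
        z = self.encoder(rgb)
        return self.image_decoder(z), self.tactile_decoder(z), z

# Self-supervised training step: only paired RGB + tactile data, no manual labels.
model = VisuoTactileAE()
rgb = torch.rand(8, 3, 64, 64)        # dummy batch of surface images
tactile = torch.rand(8, 16)           # dummy paired uSkin-style readings
rgb_hat, tactile_hat, _ = model(rgb)
loss = nn.functional.mse_loss(rgb_hat, rgb) + nn.functional.mse_loss(tactile_hat, tactile)
loss.backward()
```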
There are a wide range of features that tactile contact provides, each with different aspects of information that can be used for object grasping, manipulation, and perception. In this paper, inference of some key tactile features (tip displacement, contact location, shear direction and magnitude) is demonstrated by introducing a novel method of transducing a third dimension to the sensor data via Voronoi tessellation. The inferred features are displayed throughout the work in a new visualisation mode derived from the Voronoi tessellation; these visualisations create easier interpretation of data from an optical tactile sensor that measures local shear from displacement of internal pins (the TacTip). The output values of tip displacement and shear magnitude are calibrated to appropriate mechanical units and validate the direction of shear inferred from the sensor. We show that these methods can infer the direction of shear to $\sim$2.3$^{\circ}$ without the need for training a classifier or regressor. The approach demonstrated here will increase the versatility and generality of the sensors and thus allow the sensor to be used in more unstructured and unknown environments, as well as improve the use of these tactile sensors in more complex systems such as robot hands.
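
One plausible reading of the "third dimension" idea is to use the area of each pin's Voronoi cell as a per-pin scalar, so local compression or expansion of the pin layout under contact and shear becomes directly visible. The sketch below illustrates that reading on a synthetic pin grid with scipy; the pin layout, deformation model, and use of cell areas are assumptions for illustration, not the TacTip processing pipeline itself.

```python
# Voronoi cell areas as a synthetic "third dimension" over a 2D pin layout.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_cell_areas(pins_xy):
    """Return the Voronoi cell area for each pin (NaN for unbounded border cells)."""
    vor = Voronoi(pins_xy)
    areas = np.full(len(pins_xy), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:   # unbounded border cell
            continue
        areas[i] = ConvexHull(vor.vertices[region]).volume  # 2D hull "volume" = area
    return areas

# Synthetic pin grid standing in for the sensor's internal pins
# (small jitter breaks the degeneracy of a perfectly regular grid).
rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
pins_rest = np.column_stack([xs.ravel(), ys.ravel()])
pins_rest = pins_rest + rng.normal(scale=1e-3, size=pins_rest.shape)

# Fake deformation: pins near the centre are pushed outward slightly (contact),
# which grows some cells and shrinks their neighbours.
r = np.linalg.norm(pins_rest, axis=1, keepdims=True)
pins_pressed = pins_rest * (1.0 + 0.1 * np.exp(-(r / 0.4) ** 2))

delta_area = voronoi_cell_areas(pins_pressed) - voronoi_cell_areas(pins_rest)
print("largest cell-area change:", np.nanmax(np.abs(delta_area)))
```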
Using simulation to train robot manipulation policies holds the promise of an almost unlimited amount of training data, generated safely out of harm's way. One of the key challenges of using simulation, to date, has been to bridge the reality gap, so that policies trained in simulation can be deployed in the real world. We explore the reality gap in the context of learning a contextual policy for multi-fingered robotic grasping. We propose a Grasping Objects Approach for Tactile (GOAT) robotic hands, learning to overcome the reality gap problem. In our approach we use human hand motion demonstration to initialize and reduce the search space for learning. We contextualize our policy with the bounding cuboid dimensions of the object of interest, which allows the policy to work on a more flexible representation than directly using an image or point cloud. Leveraging fingertip touch sensors in the hand allows the policy to overcome the reduction in geometric information introduced by the coarse bounding box, as well as pose estimation uncertainty. We show our learned policy successfully runs on a real robot without any fine-tuning, thus bridging the reality gap.
This work presents a new version of the tactile-sensing finger GelSlim 3.0, which integrates the ability to sense high-resolution shape, force, and slip in a compact form factor for use with small parallel jaw grippers in cluttered bin-picking scenarios. The novel design incorporates the capability to use real-time analytic methods to measure shape, estimate the contact 3D force distribution, and detect incipient slip. To achieve a compact integration, we optimize the optical path from illumination source to camera and other geometric variables in an optical simulation environment. In particular, we optimize the illumination sources and a light shaping lens around the constraints imposed by the photometric stereo algorithm used for depth reconstruction. The optimized optical configuration is integrated into a finger design composed of a robust and easily replaceable snap-to-fit fingertip module that allows for ease of manufacture, assembly, use, and repair. To stimulate future research in tactile sensing and provide the robotics community access to a reliable and easily reproducible tactile finger with a diversity of sensing modalities, we open-source the design and software at https://github.com/mcubelab/gelslim.
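
The depth reconstruction mentioned above relies on photometric stereo, whose requirements (multiple, sufficiently non-coplanar light directions) are what constrain the illumination design. Below is a generic, textbook least-squares photometric stereo step under a Lambertian model with known light directions, included purely to illustrate that constraint; it is not the GelSlim 3.0 implementation, and the light directions and test image are made up.

```python
# Textbook Lambertian photometric stereo: per-pixel normals from K images
# captured under K known light directions.
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel unit normals and albedo from K images under K lights.

    images:     (K, H, W) grayscale intensities
    light_dirs: (K, 3) unit light directions
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                            # (K, H*W)
    # Solve L @ G = I for G = albedo * normal at every pixel jointly.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, H, W), albedo.reshape(H, W)

# Dummy example: three non-coplanar lights over a flat, upward-facing patch.
L = np.array([[0.5, 0.0, 0.87], [-0.5, 0.0, 0.87], [0.0, 0.5, 0.87]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((8, 8), L[k] @ true_n) for k in range(3)])
n_hat, rho = photometric_stereo(imgs, L)
print("recovered normal at centre pixel:", n_hat[:, 4, 4])
```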
Enabling robots to work in close proximity with humans necessitates employing not only multi-sensory information for coordinated and autonomous interactions but also a control framework that ensures adaptive and flexible collaborative behavior. Such a control framework needs to integrate the accuracy and repeatability of robots with the cognitive ability and adaptability of humans for co-manipulation. In this regard, an intuitive stack of tasks (iSOT) formulation is proposed, which defines the robot's actions based on human ergonomics and task progress. The framework is augmented with visuo-tactile perception for flexible interaction and autonomous adaptation. The visual information, from depth cameras, monitors and estimates the object pose and human arm gestures, while the tactile feedback provides exploration skills for maintaining the desired contact and avoiding slippage. Experiments conducted on a robot system with a human partner for assembly and disassembly tasks confirm the effectiveness and usability of the proposed framework.