
Pose-Based Tactile Servoing: Controlled Soft Touch using Deep Learning

Published by: Nathan Lepora
Publication date: 2020
Research field: Informatics Engineering
Paper language: English

This article describes a new way of controlling robots using soft tactile sensors: pose-based tactile servo (PBTS) control. The basic idea is to embed a tactile perception model for estimating the sensor pose within a servo control loop that is applied to local object features such as edges and surfaces. PBTS control is implemented with a soft curved optical tactile sensor (the BRL TacTip) using a convolutional neural network trained to be insensitive to shear. In consequence, robust and accurate controlled motion over various complex 3D objects is attained. First, we review tactile servoing and its relation to visual servoing, before formalising PBTS control. Then, we assess tactile servoing over a range of regular and irregular objects. Finally, we reflect on the relation to visual servo control and discuss how controlled soft touch gives a route towards human-like dexterity in robots.
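
For concreteness, the control idea reduces to a simple loop: estimate the sensor pose relative to the contacted feature with a learned model, then command a motion that drives that pose towards a reference. Below is a minimal Python sketch of such a loop; the `read_pose` and `move_relative` callables and the toy simulated plant are illustrative assumptions, standing in for the shear-insensitive CNN on BRL TacTip images and the real robot arm described in the paper.

```python
import numpy as np

# Minimal pose-based tactile servoing (PBTS) loop. `read_pose` stands in
# for tactile image capture plus the CNN pose estimate; `move_relative`
# stands in for the robot arm. Both are hypothetical placeholders.
def pbts_servo(read_pose, move_relative, pose_ref,
               gain=0.5, tol=1e-3, max_steps=200):
    for step in range(max_steps):
        error = np.asarray(pose_ref, float) - np.asarray(read_pose(), float)
        if np.linalg.norm(error) < tol:
            return step                  # converged onto the reference pose
        move_relative(gain * error)      # proportional servo correction
    return max_steps

# Toy demonstration with a simulated 2-DoF pose (contact depth, surface angle).
pose = np.array([0.0, 10.0])             # current (depth mm, angle deg)

def move_relative(delta):
    pose[:] = pose + delta               # the "robot" applies the correction

steps = pbts_servo(lambda: pose.copy(), move_relative, pose_ref=[1.5, 0.0])
print(f"converged in {steps} steps, final pose {pose.round(3)}")
```

The proportional gain trades responsiveness against stability, mirroring classical visual servoing; the paper's contribution is making the feedback signal itself a learned tactile pose estimate.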


Read also

In this paper, we present an approach to tactile pose estimation from the first touch for known objects. First, we create an object-agnostic map from real tactile observations to contact shapes. Next, for a new object with known geometry, we learn a tailored perception model completely in simulation. To do so, we simulate the contact shapes that a dense set of object poses would produce on the sensor. Then, given a new contact shape obtained from the sensor output, we match it against the pre-computed set using an object-specific embedding learned purely in simulation with contrastive learning. This results in a perception model that can localize objects from a single tactile observation. It also allows reasoning over pose distributions and including additional pose constraints coming from other perception systems or multiple contacts. We provide quantitative results for four objects. Our approach yields high-accuracy pose estimates from distinctive tactile observations while regressing pose distributions to account for contact shapes that could result from different object poses. We further extend and test our approach in multi-contact scenarios where several tactile sensors are simultaneously in contact with the object. Website: http://mcube.mit.edu/research/tactile_loc_first_touch.html
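
The matching step can be sketched as nearest-neighbour search in the learned embedding space over a pre-computed bank of simulated contact shapes, softened into a distribution over poses. In the sketch below, the contrastive embedding is replaced by a fixed random projection so the example runs end to end; the pose grid, contact shapes, dimensions, and temperature are all hypothetical.

```python
import numpy as np

# First-touch localization sketch: match an observed contact shape against
# contact shapes pre-simulated for a dense set of candidate object poses.
# The contrastive embedding is replaced by a fixed random projection here
# so the example runs; all sizes and data are hypothetical.
rng = np.random.default_rng(0)
W = rng.standard_normal((32, 64))                 # stand-in embedding weights

def embed(contact_shape):
    z = W @ np.ravel(contact_shape)
    return z / np.linalg.norm(z)                  # unit norm for cosine matching

poses = rng.uniform(-1, 1, size=(500, 6))         # candidate 6-DoF object poses
sim_shapes = rng.standard_normal((500, 64))       # their simulated contact shapes
bank = np.stack([embed(s) for s in sim_shapes])   # pre-computed embedding bank

def localize(observed_shape, temperature=0.1):
    """Distribution over candidate poses from a single tactile observation."""
    sims = bank @ embed(observed_shape)           # cosine similarities
    probs = np.exp(sims / temperature)
    return probs / probs.sum()

probs = localize(sim_shapes[42])
print("best match index:", np.argmax(probs))      # recovers candidate 42
print("top pose candidate:", poses[np.argmax(probs)].round(2))
```
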
This work investigates uncertainty-aware deep learning (DL) in tactile robotics, based on a general framework introduced recently for robot vision. For a test scenario, we consider optical tactile sensing in combination with DL to estimate the edge pose as a feedback signal for servoing around various 2D test objects. We demonstrate that uncertainty-aware DL can improve the pose estimation over deterministic DL methods. The system estimates the uncertainty associated with each prediction, which is used along with temporal coherency to improve the predictions via a Kalman filter, and hence improve the tactile servo control. The robot is able to robustly follow all of the presented contour shapes, not only reducing the error by a factor of two but also smoothing away the undesired noisy behaviour in the trajectories produced by previous deterministic networks. In our view, as the field of tactile robotics matures in its use of DL, the estimation of uncertainty will become a key component in the control of physically interactive tasks in complex environments.
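
A minimal sketch of the fusion idea: treat each network prediction as a measurement whose noise variance is the network's own predicted uncertainty, and filter it with a near-constant-pose Kalman model so uncertain predictions are down-weighted. The synthetic data, process noise, and variable names below are assumptions, not the paper's implementation.

```python
import numpy as np

# Uncertainty-weighted fusion sketch: each network prediction is treated as
# a measurement whose noise variance R is the network's predicted
# uncertainty, filtered under a near-constant-pose model.
def kalman_fuse(pred_means, pred_vars, process_var=1e-3):
    x, P = pred_means[0], pred_vars[0]    # initialise from the first prediction
    filtered = [x]
    for z, R in zip(pred_means[1:], pred_vars[1:]):
        P = P + process_var               # predict: pose drifts slowly
        K = P / (P + R)                   # gain shrinks when the net is unsure
        x = x + K * (z - x)               # uncertainty-weighted update
        P = (1 - K) * P
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(1)
true_pose = np.linspace(0.0, 1.0, 100)            # edge pose along a contour
pred_vars = rng.uniform(0.01, 0.3, size=100)      # per-step predicted variance
pred_means = true_pose + rng.normal(0.0, np.sqrt(pred_vars))
smoothed = kalman_fuse(pred_means, pred_vars)
print("raw RMSE:     ", np.sqrt(np.mean((pred_means - true_pose) ** 2)).round(4))
print("filtered RMSE:", np.sqrt(np.mean((smoothed - true_pose) ** 2)).round(4))
```
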
This article illustrates the application of deep learning to robot touch by considering a basic yet fundamental capability: estimating the relative pose of part of an object in contact with a tactile sensor. We begin by surveying deep learning applied to tactile robotics, focussing on optical tactile sensors, which help bridge from deep learning for vision to touch. We then show how deep learning can be used to train accurate pose models of 3D surfaces and edges that are insensitive to nuisance variables such as motion-dependent shear. This involves including representative motions as unlabelled perturbations of the training data and using Bayesian optimization of the network and training hyperparameters to find the most accurate models. Accurate estimation of pose from touch will enable robots to safely and precisely control their physical interactions, underlying a wide range of object exploration and manipulation tasks.
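
The shear-insensitivity recipe can be sketched as data augmentation: each training image is perturbed by unlabelled motion-like transformations while its pose label is held fixed, so the network learns to ignore the perturbation. The in-plane pixel shift below is a crude stand-in for real motion-dependent shear, and all sizes and names are hypothetical.

```python
import numpy as np

# Shear-insensitivity sketch: augment each tactile image with unlabelled
# motion-like perturbations while keeping its pose label fixed, so the
# trained model treats the perturbation as a nuisance variable.
def shear_augment(images, labels, max_shift=3, copies=4, seed=0):
    rng = np.random.default_rng(seed)
    aug_x, aug_y = [], []
    for img, pose in zip(images, labels):
        aug_x.append(img)
        aug_y.append(pose)
        for _ in range(copies):
            dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
            shifted = np.roll(np.roll(img, dx, axis=0), dy, axis=1)
            aug_x.append(shifted)         # perturbed image ...
            aug_y.append(pose)            # ... identical pose label
    return np.stack(aug_x), np.stack(aug_y)

images = np.random.rand(10, 64, 64)       # toy tactile images
labels = np.random.rand(10, 2)            # toy pose labels (depth, angle)
X, y = shear_augment(images, labels)
print(X.shape, y.shape)                   # (50, 64, 64) (50, 2)
```
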
Estimation of tactile properties from vision, such as slipperiness or roughness, is important to effectively interact with the environment. These tactile properties help us decide which actions we should choose and how to perform them. E.g., we can drive slower if we see that we have bad traction or grasp tighter if an item looks slippery. We believe that this ability also helps robots to enhance their understanding of the environment, and thus enables them to tailor their actions to the situation at hand. We therefore propose a model to estimate the degree of tactile properties from visual perception alone (e.g., the level of slipperiness or roughness). Our method extends an encoder-decoder network, in which the latent variables are visual and tactile features. In contrast to previous works, our method does not require manual labeling, but only RGB images and the corresponding tactile sensor data. All our data is collected with a webcam and uSkin tactile sensor mounted on the end-effector of a Sawyer robot, which strokes the surfaces of 25 different materials. We show that our model generalizes to materials not included in the training data by evaluating the feature space, indicating that it has learned to associate important tactile properties with images.
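
A minimal PyTorch sketch of the cross-modal idea: an encoder compresses the RGB image to a latent code and a decoder reconstructs the paired tactile reading from it, so training needs only image-tactile pairs rather than manual labels. The layer sizes and the 48-dimensional tactile vector are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Cross-modal sketch: encode an RGB image to a latent code and decode the
# paired tactile reading from it. Sizes are assumptions, not the paper's.
class VisuoTactileAE(nn.Module):
    def __init__(self, latent_dim=32, tactile_dim=48):
        super().__init__()
        self.encoder = nn.Sequential(          # RGB image -> latent code
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(latent_dim),
        )
        self.decoder = nn.Sequential(          # latent code -> tactile signal
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, tactile_dim),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

model = VisuoTactileAE()
rgb = torch.rand(8, 3, 64, 64)                 # batch of webcam image crops
tactile = torch.rand(8, 48)                    # paired uSkin sensor readings
loss = nn.functional.mse_loss(model(rgb), tactile)
loss.backward()                                # no manual labels required
print(float(loss))
```
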
This paper describes an image-based visual servoing (IBVS) system for a nonholonomic robot to achieve good trajectory following without real-time robot pose information and without a known visual map of the environment. We call it trajectory servoing. The critical component is a feature-based, indirect SLAM method to provide a pool of available features with estimated depth, so that they may be propagated forward in time to generate image feature trajectories for visual servoing. Short and long distance experiments show the benefits of trajectory servoing for navigating unknown areas without absolute positioning. Trajectory servoing is shown to be more accurate than pose-based feedback when both rely on the same underlying SLAM system.
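
The core geometric step can be sketched as follows: a feature with SLAM-estimated depth is projected into each future camera pose along the desired trajectory, producing the image-feature trajectory that the IBVS controller then tracks. The pinhole intrinsics and camera poses below are illustrative assumptions.

```python
import numpy as np

# Feature-trajectory sketch: a mapped feature with SLAM-estimated depth is
# projected into each future camera pose along the desired trajectory,
# giving the pixel track that IBVS then follows.
K = np.array([[500.0, 0.0, 320.0],             # hypothetical pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

def feature_trajectory(point_world, cam_poses):
    """Pixel track of one feature over future camera poses (R, t)."""
    return np.array([project(R.T @ (point_world - t)) for R, t in cam_poses])

# Camera translating forward along z while observing one fixed feature.
feature = np.array([0.5, 0.2, 3.0])            # world point, depth from SLAM
cam_poses = [(np.eye(3), np.array([0.0, 0.0, 0.1 * k])) for k in range(10)]
track = feature_trajectory(feature, cam_poses)
print(track.round(1))                          # desired pixel path to servo on
```
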
