
Uncertainty-aware deep learning for robot touch: Application to Bayesian tactile servo control

Posted by: Nathan Lepora
Date published: 2021
Research field: Informatics Engineering
Paper language: English





This work investigates uncertainty-aware deep learning (DL) in tactile robotics, based on a general framework introduced recently for robot vision. As a test scenario, we consider optical tactile sensing combined with DL to estimate the edge pose as a feedback signal for servoing around various 2D test objects. We demonstrate that uncertainty-aware DL can improve the pose estimation over deterministic DL methods. The system estimates the uncertainty associated with each prediction, which is used together with temporal coherency in a Kalman filter to improve the predictions and hence the tactile servo control. The robot robustly follows all of the presented contour shapes, not only reducing the error by a factor of two but also smoothing out the noisy trajectories produced by previous deterministic networks. In our view, as the field of tactile robotics matures in its use of DL, the estimation of uncertainty will become a key component in the control of physically interactive tasks in complex environments.
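As a rough illustration of the filtering step described above, the sketch below folds a per-prediction variance from an uncertainty-aware network into a scalar Kalman update, so that high-uncertainty predictions are down-weighted against the temporally coherent estimate. The constant process noise, the scalar pose state, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kalman_update(x, P, z, R, Q=1e-3):
    """One scalar Kalman step: fuse the previous pose estimate (x, P)
    with a new network prediction z whose variance R comes from the
    uncertainty-aware model. Q is an assumed constant process noise."""
    # Predict: pose assumed locally constant between taps, so only
    # the process noise inflates the variance.
    P_pred = P + Q
    # Update: weight the measurement by its predicted uncertainty.
    K = P_pred / (P_pred + R)
    x_new = x + K * (z - x)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Illustrative use: noisy edge-angle predictions with per-sample variance.
x, P = 0.0, 1.0
for z, R in [(4.8, 0.2), (5.3, 0.05), (9.0, 2.0)]:  # large R -> prediction down-weighted
    x, P = kalman_update(x, P, z, R)
    print(f"fused angle {x:.2f} deg, variance {P:.3f}")
```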


Read also

This article illustrates the application of deep learning to robot touch by considering a basic yet fundamental capability: estimating the relative pose of part of an object in contact with a tactile sensor. We begin by surveying deep learning applied to tactile robotics, focussing on optical tactile sensors, which help bridge from deep learning for vision to touch. We then show how deep learning can be used to train accurate pose models of 3D surfaces and edges that are insensitive to nuisance variables such as motion-dependent shear. This involves including representative motions as unlabelled perturbations of the training data and using Bayesian optimization of the network and training hyperparameters to find the most accurate models. Accurate estimation of pose from touch will enable robots to safely and precisely control their physical interactions, underlying a wide range of object exploration and manipulation tasks.
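The hyperparameter search mentioned above could, for instance, be phrased as Gaussian-process Bayesian optimization over the learning rate and network size. The sketch below uses scikit-optimize with a stand-in objective in place of the actual pose-model training; the search space and the surrogate error function are purely illustrative assumptions.

```python
from skopt import gp_minimize
from skopt.space import Real, Integer

# Hypothetical search space over network and training hyperparameters.
search_space = [
    Real(1e-5, 1e-2, prior="log-uniform", name="learning_rate"),
    Integer(2, 6, name="num_conv_layers"),
    Integer(16, 256, name="conv_filters"),
]

def validation_error(params):
    """Placeholder for: train the pose model on shear-perturbed tactile
    images with these hyperparameters, then return validation pose error."""
    lr, layers, filters = params
    # Stand-in analytic surrogate so the sketch runs end to end.
    return abs(lr - 1e-3) * 100 + abs(layers - 4) * 0.1 + abs(filters - 64) / 256

result = gp_minimize(validation_error, search_space, n_calls=20, random_state=0)
print("best hyperparameters:", result.x, "surrogate error:", result.fun)
```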
This article describes a new way of controlling robots using soft tactile sensors: pose-based tactile servo (PBTS) control. The basic idea is to embed a tactile perception model for estimating the sensor pose within a servo control loop that is applied to local object features such as edges and surfaces. PBTS control is implemented with a soft curved optical tactile sensor (the BRL TacTip) using a convolutional neural network trained to be insensitive to shear. In consequence, robust and accurate controlled motion over various complex 3D objects is attained. First, we review tactile servoing and its relation to visual servoing, before formalising PBTS control. Then, we assess tactile servoing over a range of regular and irregular objects. Finally, we reflect on the relation to visual servo control and discuss how controlled soft touch gives a route towards human-like dexterity in robots.
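A minimal sketch of the PBTS idea, assuming a 2D edge feature: a learned model estimates the sensor pose relative to the edge, a proportional controller corrects towards a reference pose, and a fixed tangential step traces the contour. The function names, pose parameterisation, and gain are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def pbts_step(tactile_image, pose_model, reference_pose, gain=0.5):
    """One illustrative PBTS iteration: estimate the sensor pose relative
    to the local feature (e.g. an edge), command a proportional correction
    towards the reference pose, and move tangentially to trace the contour."""
    predicted_pose = pose_model(tactile_image)   # e.g. [normal_offset_mm, angle_deg]
    error = reference_pose - predicted_pose
    correction = gain * error                    # radial / angular correction
    tangential_step = np.array([1.0, 0.0])       # assumed fixed exploration step
    return correction, tangential_step

# Illustrative use with a stand-in "model" returning a fixed pose estimate.
dummy_model = lambda img: np.array([1.2, -4.0])
corr, step = pbts_step(None, dummy_model, reference_pose=np.zeros(2))
print("servo correction:", corr)
```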
Mobility in an effective and socially-compliant manner is an essential yet challenging task for robots operating in crowded spaces. Recent works have shown the power of deep reinforcement learning techniques to learn socially cooperative policies. However, their cooperation ability deteriorates as the crowd grows, since they typically relax the problem as a one-way Human-Robot interaction problem. In this work, we want to go beyond first-order Human-Robot interaction and more explicitly model Crowd-Robot Interaction (CRI). We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework. Our model captures the Human-Human interactions occurring in dense crowds that indirectly affect the robot's anticipation capability. Our proposed attentive pooling mechanism learns the collective importance of neighboring humans with respect to their future states. Various experiments demonstrate that our model can anticipate human dynamics and navigate in crowds with time efficiency, outperforming state-of-the-art methods.
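The attentive pooling described above can be sketched as dot-product attention over per-human interaction embeddings, producing a single crowd feature as their weighted sum. The shapes and the scoring rule below are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def attentive_pooling(human_features, query):
    """Illustrative attention pooling: score each neighbouring human's
    embedding against a robot-centric query, softmax-normalise the scores,
    and return a crowd feature as the weighted sum over humans."""
    scores = human_features @ query           # (N,) one score per human
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over humans
    crowd_feature = weights @ human_features  # weighted sum, shape (D,)
    return crowd_feature, weights

# Illustrative use: five humans, 8-dimensional pairwise interaction embeddings.
rng = np.random.default_rng(0)
features = rng.normal(size=(5, 8))
query = rng.normal(size=8)
crowd, w = attentive_pooling(features, query)
print("attention weights over humans:", np.round(w, 3))
```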
Artificial touch would seem well-suited for Reinforcement Learning (RL), since both paradigms rely on interaction with an environment. Here we propose a new environment and set of tasks to encourage development of tactile reinforcement learning: learning to type on a braille keyboard. Four tasks are proposed, progressing in difficulty from arrow to alphabet keys and from discrete to continuous actions. A simulated counterpart is also constructed by sampling tactile data from the physical environment. Using state-of-the-art deep RL algorithms, we show that all of these tasks can be successfully learnt in simulation, and 3 out of 4 tasks can be learned on the real robot. A lack of sample efficiency currently makes the continuous alphabet task impractical on the robot. To the best of our knowledge, this work presents the first demonstration of successfully training deep RL agents in the real world using observations that exclusively consist of tactile images. To aid future research utilising this environment, the code for this project has been released along with designs of the braille keycaps for 3D printing and a guide for recreating the experiments. A brief video summary is also available at https://youtu.be/eNylCA2uE_E.
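A toy sketch of the simulated counterpart mentioned above, assuming a bank of tactile images collected per key on the physical keyboard: each discrete key-press action returns a sampled image and a sparse reward. The class, dataset layout, and reward scheme are hypothetical and not the released code.

```python
import random

class SimulatedBrailleEnv:
    """Toy gym-style environment: discrete key-press actions, with
    observations drawn from a bank of tactile images previously
    collected on the physical braille keyboard."""

    def __init__(self, tactile_bank, target_key):
        self.tactile_bank = tactile_bank  # dict: key name -> list of tactile images
        self.target_key = target_key

    def reset(self):
        return random.choice(self.tactile_bank["rest"])

    def step(self, key_pressed):
        obs = random.choice(self.tactile_bank[key_pressed])
        reward = 1.0 if key_pressed == self.target_key else 0.0
        done = key_pressed == self.target_key
        return obs, reward, done, {}

# Illustrative use with placeholder image identifiers.
bank = {"rest": ["img_rest_0"], "UP": ["img_up_0"], "DOWN": ["img_down_0"]}
env = SimulatedBrailleEnv(bank, target_key="UP")
obs = env.reset()
obs, r, done, _ = env.step("UP")
print("reward:", r, "done:", done)
```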
Current methods for estimating force from tactile sensor signals are either inaccurate analytic models or task-specific learned models. In this paper, we explore learning a robust model that maps tactile sensor signals to force. We specifically explore learning a mapping for the SynTouch BioTac sensor via neural networks. We propose a voxelized input feature layer for spatial signals and leverage information about the sensor surface to regularize the loss function. To learn a robust tactile force model that transfers across tasks, we generate ground truth data from three different sources: (1) the BioTac rigidly mounted to a force-torque (FT) sensor, (2) a robot interacting with a ball rigidly attached to the same FT sensor, and (3) through force inference on a planar pushing task by formalizing the mechanics as a system of particles and optimizing over the object motion. A total of 140k samples were collected from the three sources. We achieve a median angular accuracy of 3.5 degrees in predicting force direction (66% improvement over the current state of the art) and a median magnitude accuracy of 0.06 N (93% improvement) on a test dataset. Additionally, we evaluate the learned force model in a force feedback grasp controller performing object lifting and gentle placement. Our results can be found on https://sites.google.com/view/tactile-force.
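The voxelized input feature layer might, for example, scatter the BioTac electrode readings into a coarse 3D grid according to their positions on the sensor core, as in the sketch below. The grid resolution, the averaging rule, and the random stand-in data are assumptions for illustration, not the paper's exact layer.

```python
import numpy as np

def voxelize_electrodes(electrode_values, electrode_xyz, grid_shape=(4, 4, 4)):
    """Illustrative voxelized input layer: scatter the 19 BioTac electrode
    readings into a coarse 3D grid based on their positions, averaging
    readings that fall in the same voxel."""
    grid = np.zeros(grid_shape)
    counts = np.zeros(grid_shape)
    # Normalise positions to [0, 1] then bin them into the grid.
    mins, maxs = electrode_xyz.min(0), electrode_xyz.max(0)
    idx = ((electrode_xyz - mins) / (maxs - mins + 1e-9)
           * (np.array(grid_shape) - 1)).astype(int)
    for (i, j, k), v in zip(idx, electrode_values):
        grid[i, j, k] += v
        counts[i, j, k] += 1
    return grid / np.maximum(counts, 1)

# Illustrative use with random electrode positions and readings.
rng = np.random.default_rng(1)
xyz = rng.uniform(size=(19, 3))
vals = rng.normal(size=19)
print(voxelize_electrodes(vals, xyz).shape)  # (4, 4, 4)
```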