In this paper, our aim is to highlight Tactile Perceptual Aliasing as a problem when using deep neural networks and other discriminative models. Perceptual aliasing arises wherever a physical variable extracted from tactile data is ambiguous between stimuli that are physically distinct. Here we address this problem using a probabilistic discriminative model implemented as a 5-component mixture density network: a deep neural network that predicts the parameters of a Gaussian mixture model. We show that discriminative regression models such as deep neural networks and Gaussian process regression perform poorly on aliased data, making accurate predictions only when the sources of aliasing are removed. In contrast, the mixture density network identifies aliased data while making more accurate predictions. The uncertain predictions of the model form patterns that are consistent with the various sources of perceptual ambiguity. In our view, perceptual aliasing will become an unavoidable issue for robot touch as the field progresses to training robots that act in uncertain and unstructured environments, such as with deep reinforcement learning.
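The mixture-density-network idea above can be sketched in a few lines: a backbone network's features are mapped to the parameters of a 5-component Gaussian mixture, and the network is trained by minimising the mixture's negative log-likelihood. The following is a minimal numpy sketch under assumed names and shapes (`mdn_head`, `mdn_nll`, a scalar target), not the authors' implementation.

```python
import numpy as np

K = 5  # number of mixture components, as in the paper

def mdn_head(features, W, b):
    """Map backbone features to GMM parameters (pi, mu, sigma).

    features: (N, D) array from the deep network's last hidden layer.
    W, b: linear-layer weights of shape (D, 3K) and (3K,).
    """
    z = features @ W + b                       # (N, 3K) raw outputs
    logits, mu, log_sigma = np.split(z, 3, axis=1)
    # Softmax yields valid mixing coefficients; exp keeps sigma positive.
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    sigma = np.exp(log_sigma)
    return pi, mu, sigma

def mdn_nll(y, pi, mu, sigma):
    """Negative log-likelihood of scalar targets y under the mixture."""
    y = y.reshape(-1, 1)                       # broadcast against K components
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log((pi * comp).sum(axis=1) + 1e-12).mean()
```

Aliased inputs would then show up as multimodal predictive mixtures (several components with non-negligible `pi`), whereas a plain regression network is forced to output a single point estimate that averages over the ambiguity.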