
A Multi-Chamber Smart Suction Cup for Adaptive Gripping and Haptic Exploration

Posted by Tae Myung Huh
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We present a novel robot end-effector for gripping and haptic exploration. Tactile sensing through suction flow monitoring is applied to a new suction cup design that contains multiple chambers for air flow. Each chamber connects to its own remote pressure transducer, which enables both absolute and differential pressure measurements between chambers. By changing the overall vacuum applied to this smart suction cup, it can perform different functions such as gentle haptic exploration (low pressure) and monitoring breaks in the seal during strong astrictive gripping (high pressure). Haptic exploration of surfaces through sliding and palpation can guide the selection of suction grasp locations and help to identify the local surface geometry. During suction gripping, this design localizes breaks in the suction seal between four quadrants with up to 97% accuracy and detects such breaks early enough to avoid total grasp failure.
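The quadrant-level break localization described above lends itself to a simple differential-pressure comparison. The sketch below is only an illustration of that idea; the chamber names, leak margin, and read_pressures() stub are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch: localizing a suction-seal break from four chamber
# pressures. Chamber names, the leak margin, and read_pressures() are
# hypothetical stand-ins, not the authors' implementation.
from typing import Dict, Optional

QUADRANTS = ("front", "back", "left", "right")

def read_pressures() -> Dict[str, float]:
    """Placeholder for sampling the four remote pressure transducers (kPa, gauge)."""
    return {"front": -40.0, "back": -39.5, "left": -12.0, "right": -38.8}

def localize_seal_break(pressures: Dict[str, float], leak_margin: float = 10.0) -> Optional[str]:
    """Return the quadrant whose vacuum deviates from the mean of the other
    three by more than leak_margin (kPa); return None if the seal looks intact."""
    for quadrant in QUADRANTS:
        others = [pressures[q] for q in QUADRANTS if q != quadrant]
        mean_others = sum(others) / len(others)
        # A leaking chamber loses vacuum, so its gauge pressure rises toward 0.
        if pressures[quadrant] - mean_others > leak_margin:
            return quadrant
    return None

if __name__ == "__main__":
    print(localize_seal_break(read_pressures()))  # -> "left" for the example reading
```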


Read also

Recently, suction-based robotic systems with microscopic features or active suction components have been proposed to grip rough and irregular surfaces. However, sophisticated fabrication methods or complex control systems are required for such systems, and robust attachment to rough real-world surfaces still remains a grand challenge. Here, we propose a fully soft robotic gripper in which a flat elastic membrane conforms to and contacts parts or surfaces, and an internal negative pressure exerted on the air-sealed membrane induces suction-based gripping. 3D printing in combination with soft molding techniques enables fabrication of the soft gripper. Robust attachment to complex 3D and rough surfaces is enabled by the surface-conformable soft flat membrane, which generates strong and robust suction at the contact interface. Such robust attachment to rough and irregular surfaces enables manipulation of a broad range of real-world objects, such as an egg, a lime, and a foiled package, without any physical damage. Compared to conventional suction cup designs, the proposed suction gripper design shows a four-fold increase in gripping performance on rough surfaces. Furthermore, the structural and material simplicity of the proposed gripper architecture facilitates its system-level integration with other soft robotic peripherals, which can enable broader impact in diverse fields such as digital manufacturing, robotic manipulation, and medical gripping applications.
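For a rough sense of scale, the holding force of such a sealed membrane follows from the pressure difference acting over the sealed contact area, F = ΔP·A. The numbers below are assumed example values for illustration, not measurements from the paper.

```python
# Back-of-the-envelope estimate of the holding force generated by a sealed
# membrane: F = delta_P * A. The 50 kPa vacuum level and 30 mm contact radius
# are assumed example values, not figures reported in the paper.
import math

vacuum_pa = 50e3             # pressure difference across the sealed membrane (Pa)
contact_radius_m = 0.030     # radius of the sealed contact patch (m)

contact_area = math.pi * contact_radius_m ** 2   # sealed contact area (m^2)
holding_force = vacuum_pa * contact_area         # normal holding force (N)

print(f"holding force ~ {holding_force:.0f} N")  # ~141 N for these example values
```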
Haptic exploration is a key skill for both robots and humans to discriminate and handle unknown objects or to recognize familiar objects. Its active nature is evident in humans, who from early on reliably acquire sophisticated sensory-motor capabilities for active exploratory touch and directed manual exploration that associates surfaces and object properties with their spatial locations. This is in stark contrast to robotics, where the relative lack of good real-world interaction models, along with very restricted sensors and a scarcity of suitable training data to leverage machine learning methods, has so far rendered haptic exploration a largely underdeveloped skill. In the present work, we connect recent advances in recurrent models of visual attention with previous insights about the organisation of human haptic search behavior, exploratory procedures, and haptic glances to obtain a novel architecture that learns a generative model of haptic exploration in a simulated three-dimensional environment. The proposed algorithm simultaneously optimizes the main perception-action loop components: feature extraction, integration of features over time, and the control strategy, while continuously acquiring data online. We perform multi-module neural network training, including a feature extractor and a recurrent neural network module that aids pose control by storing and combining sequential sensory data. The resulting haptic meta-controller for a rigid $16 \times 16$ tactile sensor array moving in a physics-driven simulation environment, called the Haptic Attention Model, performs a sequence of haptic glances and outputs corresponding force measurements. The method has been successfully tested with four different objects and achieved results close to $100\%$ while performing object contour exploration optimized for its own sensor morphology.
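A recurrent glance loop of this kind can be sketched compactly: a feature extractor encodes each tactile reading, a recurrent core integrates glances over time, and two heads propose the next probe pose and the object class. The layer sizes, (x, y, yaw) pose parameterization, and four-class head below are assumptions for illustration, not the published Haptic Attention Model.

```python
# Illustrative sketch of a recurrent haptic-glance loop in the spirit of the
# Haptic Attention Model summarized above. Layer sizes, the (x, y, yaw) pose
# parameterization, and the four-class head are assumptions for illustration.
import torch
import torch.nn as nn

class HapticGlanceController(nn.Module):
    def __init__(self, hidden_size: int = 256, num_classes: int = 4):
        super().__init__()
        # Feature extractor for one 16x16 tactile-array reading.
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 128), nn.ReLU())
        # Recurrent core integrates successive glances over time.
        self.rnn = nn.LSTMCell(128 + 3, hidden_size)   # tactile features + probe pose
        self.next_pose = nn.Linear(hidden_size, 3)     # where to place the next glance
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, tactile, pose, state=None):
        # tactile: (batch, 16, 16); pose: (batch, 3); state: optional (h, c).
        h, c = self.rnn(torch.cat([self.features(tactile), pose], dim=-1), state)
        return self.next_pose(h), self.classifier(h), (h, c)

# One glance step with placeholder inputs:
model = HapticGlanceController()
pose_cmd, logits, state = model(torch.rand(1, 16, 16), torch.zeros(1, 3))
```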
Drone teleoperation is usually accomplished using remote radio controllers, devices that can be hard to master for inexperienced users. Moreover, the limited amount of information fed back to the user about the robot's state, often restricted to vision, can represent a bottleneck for operation in several conditions. In this work, we present a wearable interface for drone teleoperation and its evaluation through a user study. The two main features of the proposed system are a data glove that allows the user to control the drone trajectory by hand motion and a haptic system used to augment their awareness of the environment surrounding the robot. This interface can be employed for the operation of robotic systems in line of sight (LoS) by inexperienced operators and allows them to safely perform tasks common in inspection and search-and-rescue missions, such as approaching walls and crossing narrow passages under limited visibility conditions. In addition to the design and implementation of the wearable interface, we performed a systematic study to assess the effectiveness of the system through three user studies (n = 36), evaluating the users' learning path and their ability to perform tasks with limited visibility. We validated our ideas in both a simulated and a real-world environment. Our results demonstrate that the proposed system can improve teleoperation performance in different cases compared to standard remote controllers, making it a viable alternative to standard Human-Robot Interfaces.
In this report for the NASA NIAC Phase I study, we present a mission architecture and a robotic platform, the Shapeshifter, that allow multi-domain and redundant mobility on Saturn's moon Titan, and potentially other bodies with atmospheres. The Shapeshifter is a collection of simple and affordable robotic units, called Cobots, comparable to personal palm-size quadcopters. By attaching to and detaching from each other, multiple Cobots can shape-shift into novel structures capable of (a) rolling on the surface to increase the traverse range, (b) flying in a flight-array formation, and (c) swimming on or under liquid. A ground station complements the robotic platform, hosting science instrumentation and providing power to recharge the batteries of the Cobots. Our Phase I study had the objective of providing an initial assessment of the feasibility of the proposed robotic platform architecture, and in particular (a) to characterize the expected science return of a mission to the Sotra-Patera region on Titan; (b) to verify the mechanical and algorithmic feasibility of building a multi-agent platform capable of flying, docking, rolling, and un-docking; (c) to evaluate the increased range and efficiency of rolling on Titan with respect to flying; and (d) to define a case study of a mission for the exploration of the cryovolcano Sotra-Patera on Titan, whose expected variety of geological features challenges conventional mobility platforms.
Robotic exploration in uncertain environments is challenging when optical information is not available. In this paper, we propose an autonomous solution for exploring an unknown task space based on tactile sensing alone. We first designed a whisker sensor based on MEMS barometer devices. This sensor can acquire contact information by interacting with the environment non-intrusively, and it is accompanied by a planning technique that generates exploration trajectories using tactile perception alone. This technique relies on a hybrid policy for tactile exploration, which combines a proactive informative path planner for object searching with a reactive Hopf oscillator for contour tracing. Results indicate that the hybrid exploration policy can increase the efficiency of object discovery. Finally, scene understanding was facilitated by object segmentation and classification: a classifier was developed to recognize object categories based on the geometric features collected by the whisker sensor. This approach demonstrates that the whisker sensor, together with tactile intelligence, can provide sufficiently discriminative features to distinguish objects.
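For the reactive contour-tracing component, a Hopf oscillator produces a stable limit cycle whose frequency and amplitude can be modulated by sensor feedback. The sketch below shows the standard oscillator equations with an assumed contact-error coupling; it is an illustration, not the controller described in the paper.

```python
# Minimal sketch of a Hopf oscillator that could drive a rhythmic sweep for
# contour tracing. The contact-error coupling on the frequency is an assumed
# illustration, not the controller described in the paper.
import numpy as np

def hopf_step(x, y, mu=1.0, omega=1.0, dt=0.01, contact_error=0.0, k=0.5):
    """One explicit-Euler step of a Hopf oscillator.

    mu sets the limit-cycle radius (sqrt(mu)); omega is the angular frequency.
    contact_error (e.g. deviation from a desired whisker deflection) slows or
    speeds the cycle through an assumed feedback gain k.
    """
    r2 = x * x + y * y
    w = omega * (1.0 - k * contact_error)   # feedback-modulated frequency (assumption)
    dx = (mu - r2) * x - w * y
    dy = (mu - r2) * y + w * x
    return x + dt * dx, y + dt * dy

# Example: starting off the limit cycle, the state converges to radius sqrt(mu).
x, y = 0.1, 0.0
for _ in range(1000):
    x, y = hopf_step(x, y)
print(round(float(np.hypot(x, y)), 2))  # ~1.0
```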