
An Integrated System for Perception-Driven Autonomy with Modular Robots

Published by Tarik Tosun
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





The theoretical ability of modular robots to reconfigure in response to complex tasks in a priori unknown environments has frequently been cited as an advantage and remains a major motivator for work in the field. We present a modular robot system capable of autonomously completing high-level tasks by reactively reconfiguring to meet the needs of a perceived, a priori unknown environment. The system integrates perception, high-level planning, and modular hardware, and is validated in three hardware demonstrations. Given a high-level task specification, a modular robot autonomously explores an unknown environment, decides when and how to reconfigure, and manipulates objects to complete its task. The system architecture balances distributed mechanical elements with centralized perception, planning, and control. By providing an example of how a modular robot system can be designed to leverage reactive reconfigurability in unknown environments, we have begun to lay the groundwork for modular self-reconfigurable robots to address tasks in the real world.
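The abstract above describes a closed loop in which centralized perception and planning decide when the distributed hardware modules should reconfigure. As a rough illustration only, the hedged sketch below shows what such a perceive-plan-reconfigure loop could look like; the configuration names, capabilities, and functions are hypothetical and do not come from the paper.

```python
"""Minimal, hypothetical sketch of a reactive reconfiguration loop.

None of these names are the authors' actual API; they only illustrate the idea
of centralized perception/planning selecting among a library of module
configurations as the environment is explored."""

from dataclasses import dataclass

# Assumed configuration library: each morphology supports a set of capabilities.
CONFIG_LIBRARY = {
    "car": {"drive"},
    "snake": {"drive", "squeeze_through_gap"},
    "arm": {"drive", "manipulate_object"},
}


@dataclass
class Feature:
    """A perceived environment feature and the capability it demands."""
    name: str
    required_capability: str


def choose_configuration(required: str) -> str:
    """Pick any configuration from the library that supports the capability."""
    for config, capabilities in CONFIG_LIBRARY.items():
        if required in capabilities:
            return config
    raise ValueError(f"no configuration supports {required!r}")


def run_task(route: list[Feature]) -> list[str]:
    """Traverse perceived features, reconfiguring reactively when needed."""
    config = "car"                      # start in a default driving morphology
    log = []
    for feature in route:               # stand-in for incremental exploration
        if feature.required_capability not in CONFIG_LIBRARY[config]:
            config = choose_configuration(feature.required_capability)
            log.append(f"reconfigure -> {config} (for {feature.name})")
        log.append(f"{config}: handle {feature.name}")
    return log


if __name__ == "__main__":
    demo_route = [Feature("open corridor", "drive"),
                  Feature("narrow tunnel", "squeeze_through_gap"),
                  Feature("target object", "manipulate_object")]
    for line in run_task(demo_route):
        print(line)
```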




Read also

We present a system enabling a modular robot to autonomously build structures in order to accomplish high-level tasks. Building structures allows the robot to surmount large obstacles, expanding the set of tasks it can perform. This addresses a common weakness of modular robot systems, which often struggle to traverse large obstacles. This paper presents the hardware, perception, and planning tools that comprise our system. An environment characterization algorithm identifies features in the environment that can be augmented to create a path between two disconnected regions of the environment. Specially-designed building blocks enable the robot to create structures that can augment the environment to make obstacles traversable. A high-level planner reasons about the task, robot locomotion capabilities, and environment to decide if and where to augment the environment in order to perform the desired task. We validate our system in hardware experiments.
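The environment-characterization and augmentation-planning steps described above can be pictured on a simple height profile: find a step the robot cannot traverse and check whether placing one building block makes it traversable. The sketch below is only a hedged illustration under assumed step and block heights; it is not the paper's algorithm.

```python
"""Hypothetical sketch of the 'decide if and where to augment the environment'
step. The grid, step-height threshold, and function names are assumptions for
illustration only."""

MAX_STEP = 1          # assumed maximum height change the robot can traverse
BLOCK_HEIGHT = 1      # assumed height added by placing one building block


def find_augmentation(profile: list[int]) -> int | None:
    """Return an index where placing one block makes an untraversable step
    traversable, or None if no step needs (or is helped by) a single block."""
    for i in range(len(profile) - 1):
        step = profile[i + 1] - profile[i]
        if abs(step) <= MAX_STEP:
            continue                               # already traversable
        # Placing a block on the lower side splits the step into two smaller ones.
        lower_side = i if step > 0 else i + 1
        remaining_step = abs(step) - BLOCK_HEIGHT
        if remaining_step <= MAX_STEP and BLOCK_HEIGHT <= MAX_STEP:
            return lower_side                      # augment here
    return None


if __name__ == "__main__":
    terrain = [0, 0, 2, 2, 2]          # a 2-unit ledge the robot cannot climb
    print(find_augmentation(terrain))  # -> 1: place a block just before the ledge
```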
An integrated software-based solution for a modular and self-independent networked robot is introduced. The wirelessly operable robot has been developed mainly for autonomous monitoring tasks with full control over the web. The integrated software solution covers three components: a) the digital signal processing unit for data retrieval and monitoring; b) the externally executable codes for the control system; and c) the web programming for interfacing the end-users with the robot. It is argued that this integrated software-based approach is crucial to realize a flexible, modular and low development cost mobile monitoring apparatus.
D. S. Drew, M. Devlin, E. Hawkes (2021)
Modular soft robots combine the strengths of two traditionally separate areas of robotics. As modular robots, they can show robustness to individual failure and reconfigurability; as soft robots, they can deform and undergo large shape changes in order to adapt to their environment, and have inherent human safety. However, for sensing and communication these robots also combine the challenges of both: they require solutions that are scalable (low cost and complexity) and efficient (low power) to enable collectives of large numbers of robots, and these solutions must also be able to interface with the high extension ratio elastic bodies of soft robots. In this work, we seek to address these challenges using acoustic signals produced by piezoelectric surface transducers that are cheap, simple, and low power, and that not only integrate with but also leverage the elastic robot skins for signal transmission. Importantly, to further increase scalability, the transducers exhibit multi-functionality made possible by a relatively flat frequency response across the audible and ultrasonic ranges. With minimal hardware, they enable directional contact-based communication, audible-range communication at a distance, and exteroceptive sensing. We demonstrate a subset of the decentralized collective behaviors these functions make possible with multi-robot hardware implementations. The use of acoustic waves in this domain is shown to provide distinct advantages over existing solutions.
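The multi-functionality described above relies on a single transducer covering both audible and ultrasonic bands. Purely as an illustration of how the three functions could be separated by frequency band on the receive side, the snippet below maps a received tone to a function; the band boundaries and labels are assumptions, not values from the paper.

```python
"""Hypothetical frequency-band multiplexing of the three transducer functions.
Band boundaries are illustrative assumptions only."""


def classify_signal(freq_hz: float) -> str:
    """Map a received tone to the function assumed to occupy that band."""
    if freq_hz < 20_000:
        return "audible-range communication at a distance"
    if freq_hz < 40_000:
        return "directional contact-based communication (through the elastic skin)"
    return "exteroceptive sensing (echo ranging)"


if __name__ == "__main__":
    for f in (5_000, 30_000, 60_000):
        print(f"{f} Hz -> {classify_signal(f)}")
```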
Project AutoVision aims to develop localization and 3D scene perception capabilities for a self-driving vehicle. Such capabilities will enable autonomous navigation in urban and rural environments, in day and night, and with cameras as the only exteroceptive sensors. The sensor suite employs many cameras for both 360-degree coverage and accurate multi-view stereo; the use of low-cost cameras keeps the cost of this sensor suite to a minimum. In addition, the project seeks to extend the operating envelope to include GNSS-less conditions which are typical for environments with tall buildings, foliage, and tunnels. Emphasis is placed on leveraging multi-view geometry and deep learning to enable the vehicle to localize and perceive in 3D space. This paper presents an overview of the project, and describes the sensor suite and current progress in the areas of calibration, localization, and perception.
This paper investigates a novel active-sensing-based obstacle avoidance paradigm for flying robots in dynamic environments. Instead of fusing multiple sensors to enlarge the field of view (FOV), we introduce an alternative approach that utilizes a stereo camera with an independent rotational DOF to sense the obstacles actively. In particular, the sensing direction is planned heuristically by multiple objectives, including tracking dynamic obstacles, observing the heading direction, and exploring the previously unseen area. With the sensing result, a flight path is then planned based on real-time sampling and uncertainty-aware collision checking in the state space, which constitutes an active sense and avoid (ASAA) system. Experiments in both simulation and the real world demonstrate that this system can well cope with dynamic obstacles and abrupt goal direction changes. Since only one stereo camera is utilized, this system provides a low-cost and effective approach to overcome the FOV limitation in visual navigation.
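The heuristic sensing-direction planner described above combines several objectives. The sketch below scores candidate camera yaw angles with a weighted sum of closeness to dynamic-obstacle bearings, the heading direction, and an unseen-area bearing; the weights and scoring terms are illustrative assumptions rather than the paper's formulation.

```python
"""Hypothetical sketch of multi-objective sensing-direction selection.
Weights, scoring terms, and candidate discretization are assumptions."""

import math


def angular_closeness(a: float, b: float) -> float:
    """1.0 when the angles coincide, falling to 0.0 at 180 degrees apart."""
    diff = abs((a - b + math.pi) % (2 * math.pi) - math.pi)
    return 1.0 - diff / math.pi


def score_direction(yaw, obstacle_bearings, heading, unseen_bearing,
                    w_track=0.5, w_head=0.3, w_explore=0.2):
    """Weighted sum of the three objectives for one candidate yaw angle."""
    track = max((angular_closeness(yaw, b) for b in obstacle_bearings), default=0.0)
    head = angular_closeness(yaw, heading)
    explore = angular_closeness(yaw, unseen_bearing)
    return w_track * track + w_head * head + w_explore * explore


def plan_sensing_direction(obstacle_bearings, heading, unseen_bearing, n=36):
    """Evaluate n evenly spaced yaw candidates and return the best-scoring one."""
    candidates = [2 * math.pi * k / n for k in range(n)]
    return max(candidates,
               key=lambda yaw: score_direction(yaw, obstacle_bearings,
                                               heading, unseen_bearing))


if __name__ == "__main__":
    yaw = plan_sensing_direction(obstacle_bearings=[math.radians(120)],
                                 heading=math.radians(0),
                                 unseen_bearing=math.radians(200))
    print(f"chosen sensing yaw: {math.degrees(yaw):.0f} deg")
```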