
Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments

Published by: Fei Gao
Publication date: 2019
Research field: Informatics Engineering
Language: English





In this paper, we propose a complete and robust motion planning system for the aggressive flight of autonomous quadrotors. The proposed method is built upon a classical teach-and-repeat framework, which is widely adopted in infrastructure inspection, aerial transportation, and search-and-rescue. For these applications, human intention is essential to decide the topological structure of the drone's flight trajectory. However, poor teaching trajectories and changing environments prevent a simple teach-and-repeat system from being applied flexibly and robustly. In this paper, instead of commanding the drone to precisely follow a teaching trajectory, we propose a method to automatically convert a human-piloted trajectory, which can be arbitrarily jerky, into a topologically equivalent one. The generated trajectory is guaranteed to be smooth, safe, and kinodynamically feasible, with an aggressiveness matching the human pilot's preference. Also, to avoid unmapped or dynamic obstacles during flights, a sliding-window local perception and re-planning method is introduced into our system to generate safe local trajectories onboard. We name our system teach-repeat-replan. It can capture the user's intention for a flight mission, convert an arbitrarily jerky teaching path into a smooth repeating trajectory, and generate safe local re-plans to avoid unmapped or moving obstacles. The proposed planning system is integrated into a complete autonomous quadrotor with global and local perception and localization sub-modules. Our system is validated by performing aggressive flights in challenging indoor/outdoor environments. We release all components of our quadrotor system as open-source ROS packages.
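To make the repeat stage concrete, here is a minimal, self-contained sketch of the core idea of converting a jerky taught path into a smooth, kinodynamically feasible trajectory. It is not the paper's optimizer: it uses a plain cubic spline with uniform time scaling against assumed velocity/acceleration limits, and omits the safe-corridor constraints and topological-equivalence guarantee the paper provides.

# A minimal sketch of the repeat-stage idea: turn a jerky, human-piloted
# waypoint sequence into a smooth, dynamically feasible trajectory.
# Assumes scipy; the paper's corridor-based safety constraints are omitted.
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_teaching_path(waypoints, v_max=3.0, a_max=4.0):
    """waypoints: (N, 3) positions recorded while the pilot flies.
    Returns a callable trajectory pos(t) and its total duration."""
    wp = np.asarray(waypoints, dtype=float)
    # Initial time allocation proportional to segment length.
    seg_len = np.linalg.norm(np.diff(wp, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg_len / v_max)])
    spline = CubicSpline(t, wp, axis=0)
    # Uniformly stretch time until sampled velocity/acceleration respect
    # the limits -- a crude stand-in for spatial-temporal optimization.
    for _ in range(50):
        ts = np.linspace(t[0], t[-1], 200)
        v = np.linalg.norm(spline(ts, 1), axis=1).max()
        a = np.linalg.norm(spline(ts, 2), axis=1).max()
        k = max(v / v_max, np.sqrt(a / a_max))
        if k <= 1.0:
            break
        t = t * k
        spline = CubicSpline(t, wp, axis=0)
    return spline, t[-1]

# Example: a deliberately jerky taught path.
jerky = np.array([[0, 0, 1], [1, 0.8, 1], [2, -0.5, 1.5], [3, 0.6, 1.2], [4, 0, 1]])
traj, T = smooth_teaching_path(jerky)
print(f"duration: {T:.2f}s, midpoint: {traj(T / 2)}")

Uniform time scaling suffices here because stretching the knot times by a factor k divides spline velocities by k and accelerations by k squared, so one scale factor can enforce both limits at once.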




Read also

Industrial facilities often require periodic visual inspections of key installations. Examining these points of interest is time-consuming, potentially hazardous, or requires special equipment to reach them. MAVs are ideal platforms to automate this expensive and tedious task. In this work we present a novel system that enables a human operator to teach a visual inspection task to an autonomous aerial vehicle by simply demonstrating the task using a handheld device. To enable robust operation in confined, GPS-denied environments, the system employs the Google Tango visual-inertial mapping framework as the only source of pose estimates. In a first step, the operator records the desired inspection path and defines the inspection points. The mapping framework then computes a feature-based localization map, which is shared with the robot. After take-off, the robot estimates its pose based on this map and plans a smooth trajectory through the waypoints defined by the operator. Furthermore, the system is able to track the poses of other robots or the operator, localized in the same map, and follow them in real time while keeping a safe distance.
In dynamic and cramped industrial environments, achieving reliable Visual Teach and Repeat (VT&R) with a single camera is challenging. In this work, we develop a robust method for non-synchronized multi-camera VT&R. Our contributions are expected Camera Performance Models (CPM), which evaluate the camera streams from the teach step to determine the most informative one for localization during the repeat step. By actively selecting the most suitable camera for localization, we are able to successfully complete missions when one of the cameras is occluded, faces feature-poor locations, or if the environment has changed. Furthermore, we explore the specific challenges of achieving VT&R on a dynamic quadruped robot, ANYmal. The camera does not follow a linear path (due to the walking gait and holonomicity), such that precise path-following cannot be achieved. Our experiments feature forward- and backward-facing stereo cameras showing VT&R performance in cluttered indoor and outdoor scenarios. We compare the trajectories the robot executed during the repeat steps, demonstrating typical tracking precision of less than 10 cm on average. With a view towards omni-directional localization, we show how the approach generalizes to four cameras in simulation. Video: https://youtu.be/iAY0lyjAnqY
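The active camera-selection idea lends itself to a short illustration. The sketch below is hypothetical: expected_matches stands in for whatever the Camera Performance Models actually predict, and the occlusion flag for their stream-health checks.

# A hedged sketch of CPM-style camera selection: score each
# (non-synchronized) stream by how many teach-step landmarks it is
# expected to re-observe, and localize with the best usable one.
from dataclasses import dataclass

@dataclass
class CameraStream:
    name: str
    expected_matches: int   # predicted teach-map features visible now (hypothetical)
    occluded: bool          # e.g. flagged by a brightness/feature-count check

def select_localization_camera(streams):
    """Return the most informative, unoccluded stream, or None if all fail."""
    usable = [s for s in streams if not s.occluded]
    return max(usable, key=lambda s: s.expected_matches, default=None)

streams = [
    CameraStream("front", expected_matches=12, occluded=False),
    CameraStream("rear", expected_matches=87, occluded=False),
    CameraStream("left", expected_matches=140, occluded=True),  # blocked
]
best = select_localization_camera(streams)
print(f"localizing with: {best.name}")  # -> rear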
This paper proposes a systematic solution that uses an unmanned aerial vehicle (UAV) to aggressively and safely track an agile target. The solution properly handles the challenging situations where both the intent of the target and the dense surrounding environment are unknown to the UAV. Our work is divided into two parts: target motion prediction and tracking trajectory planning. The target motion prediction method utilizes target observations to reliably predict the future motion of the target while considering dynamic constraints. The tracking trajectory planner follows the traditional hierarchical workflow. A target-informed kinodynamic searching method is adopted as the front-end, which heuristically searches for a safe tracking trajectory. The back-end optimizer then refines it into a spatial-temporal optimal and collision-free trajectory. The proposed solution is integrated into an onboard quadrotor system. We fully test the system in challenging real-world tracking missions. Moreover, benchmark comparisons validate that the proposed method surpasses cutting-edge methods in time efficiency and tracking effectiveness.
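The two-part pipeline (prediction, then hierarchical planning) can be outlined in a few lines. Everything below is a hypothetical stub: a constant-velocity predictor in place of the paper's dynamics-aware prediction, and a trivial goal-offset front-end in place of its kinodynamic search and spatial-temporal back-end optimizer.

def predict_target(history, horizon=2.0, dt=0.1):
    """Constant-velocity extrapolation as a stand-in for the paper's
    dynamics-aware predictor. history: [(time, [x, y, z]), ...]."""
    (t0, p0), (t1, p1) = history[-2], history[-1]
    v = [(b - a) / (t1 - t0) for a, b in zip(p0, p1)]
    steps = int(horizon / dt)
    return [[p + vi * dt * k for p, vi in zip(p1, v)] for k in range(1, steps + 1)]

def plan_tracking_trajectory(drone_pos, target_pred, standoff=1.5):
    """Front-end stand-in: aim at a point offset from the predicted target.
    A real front-end would run a target-informed kinodynamic search here,
    and a back-end would smooth and collision-check the result."""
    goal = list(target_pred[-1])
    goal[0] -= standoff   # keep an illustrative standoff distance
    return [drone_pos, goal]

history = [(0.0, [0.0, 0.0, 1.0]), (0.5, [0.5, 0.0, 1.0])]
pred = predict_target(history)
plan = plan_tracking_trajectory([0.0, -1.0, 1.0], pred)
print("predicted target endpoint:", pred[-1])
print("coarse plan:", plan)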
We present an open-source system for Micro-Aerial Vehicle autonomous navigation from vision-based sensing. Our system focuses on dense mapping, safe local planning, and global trajectory generation, especially when using narrow field-of-view sensors in very cluttered environments. In addition, details about other necessary parts of the system and special considerations for applications in real-world scenarios are presented. We focus our experiments on evaluating global planning, path smoothing, and local planning methods on real maps built on MAVs in realistic search-and-rescue and industrial inspection scenarios. We also perform thousands of simulations in cluttered synthetic environments, and finally validate the complete system in real-world experiments.
Redundant navigation systems are critical for safe operation of UAVs in high-risk environments. Since most commercial UAVs almost wholly rely on GPS, jamming, interference and multi-pathing are real concerns that usually limit their operations to low-risk environments and Visual Line-Of-Sight. This paper presents a vision-based route-following system for the autonomous, safe return of UAVs under primary navigation failure such as GPS jamming. Using a Visual Teach & Repeat framework to build a visual map of the environment during an outbound flight, we show the autonomous return of the UAV by visually localising the live view to this map when a simulated GPS failure occurs, controlling the vehicle to follow the safe outbound path back to the launch point. Using gimbal-stabilised stereo vision alone, without reliance on external infrastructure or inertial sensing, visual odometry and localisation are achieved at altitudes of 5-25 m and flight speeds up to 55 km/h. We examine the performance of the visual localisation algorithm under a variety of conditions and also demonstrate closed-loop autonomy along a complicated 450 m path.
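The teach-then-return failover described here reduces to a simple structure: record pose-tagged imagery outbound, then retrace it under visual localization when GPS drops. The class and the localize callback below are hypothetical stand-ins for the VT&R pipeline, not its actual interfaces.

class ReturnToHome:
    def __init__(self):
        self.teach_map = []          # poses recorded on the outbound leg

    def outbound_step(self, gps_pose, image):
        # Teach phase: store pose-tagged imagery for later visual localization.
        self.teach_map.append((gps_pose, image))

    def on_gps_failure(self, localize):
        # Repeat phase: visually localize against the teach map and follow
        # the recorded path back to launch, most recent waypoint first.
        for pose, image in reversed(self.teach_map):
            live_pose = localize(image)   # visual localization callback
            yield pose, live_pose         # the controller tracks `pose`

rth = ReturnToHome()
rth.outbound_step((0, 0, 30), "img_0")
rth.outbound_step((50, 0, 30), "img_1")
for target_pose, live in rth.on_gps_failure(lambda img: f"pose_from({img})"):
    print("track:", target_pose, "| visually localized:", live)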