
An Active Sense and Avoid System for Flying Robots in Dynamic Environments

Published by Gang Chen
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This paper investigates a novel active-sensing-based obstacle avoidance paradigm for flying robots in dynamic environments. Instead of fusing multiple sensors to enlarge the field of view (FOV), we introduce an alternative approach that utilizes a stereo camera with an independent rotational DOF to sense obstacles actively. In particular, the sensing direction is planned heuristically according to multiple objectives, including tracking dynamic obstacles, observing the heading direction, and exploring previously unseen areas. Based on the sensing result, a flight path is then planned using real-time sampling and uncertainty-aware collision checking in the state space, which together constitute an active sense and avoid (ASAA) system. Experiments in both simulation and the real world demonstrate that the system copes well with dynamic obstacles and abrupt changes in goal direction. Since only one stereo camera is utilized, the system provides a low-cost and effective approach to overcoming the FOV limitation in visual navigation.
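As a concrete illustration of the heuristic sensing-direction planning described above, the sketch below scores candidate camera yaw angles against the three stated objectives and picks the best one. The weights, the candidate discretization, and the exploration bookkeeping (a per-direction visit count) are assumptions made for illustration, not the paper's actual formulation.

```python
import numpy as np

# Illustrative weights for the three objectives; the paper does not specify these.
W_TRACK, W_HEAD, W_EXPLORE = 1.0, 0.6, 0.4

def score_direction(yaw, obstacle_yaws, heading_yaw, visit_counts, bins):
    """Score one candidate sensing direction (camera yaw) by three heuristics:
    keeping tracked dynamic obstacles in view, observing the heading direction,
    and exploring rarely observed directions."""
    def ang_dist(a, b):
        d = np.abs(a - b) % (2 * np.pi)
        return min(d, 2 * np.pi - d)

    # 1) Track dynamic obstacles: reward closeness to the nearest tracked obstacle.
    track = -min((ang_dist(yaw, o) for o in obstacle_yaws), default=np.pi)
    # 2) Observe the heading direction of the planned flight path.
    head = -ang_dist(yaw, heading_yaw)
    # 3) Explore: prefer angular bins that have been observed least often.
    idx = int(np.digitize(yaw % (2 * np.pi), bins)) - 1
    explore = -visit_counts[idx]

    return W_TRACK * track + W_HEAD * head + W_EXPLORE * explore

def plan_sensing_yaw(obstacle_yaws, heading_yaw, visit_counts, n_candidates=36):
    """Evaluate a discrete set of candidate yaws and return the best-scoring one."""
    bins = np.linspace(0, 2 * np.pi, len(visit_counts) + 1)
    candidates = np.linspace(-np.pi, np.pi, n_candidates, endpoint=False)
    return max(candidates, key=lambda y: score_direction(
        y, obstacle_yaws, heading_yaw, visit_counts, bins))
```

In such a scheme, the selected yaw would be sent to the camera's rotational DOF at each replanning step, and the visit counts updated from the directions actually observed.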




Read also

This paper presents a search-based partial motion planner that generates dynamically feasible trajectories for car-like robots in highly dynamic environments. The planner searches for smooth, safe, and near-time-optimal trajectories by exploring a state graph built on motion primitives, which are generated by discretizing the time dimension and the control space. To enable fast online planning, we first propose an efficient path-searching algorithm based on the aggregation and pruning of motion primitives. We then propose a fast collision-checking algorithm that accounts for the motion of moving obstacles: it linearizes the relative motion between the robot and each obstacle and then checks for collisions by comparing a point-line distance. Benefiting from the fast searching and collision-checking algorithms, the planner can effectively and safely explore the state-time space to generate near-time-optimal solutions. Extensive experiments show that the proposed method generates feasible trajectories within milliseconds while maintaining a higher success rate than state-of-the-art methods, demonstrating its advantages.
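The collision-checking idea described above, linearizing the relative motion and comparing a point-line distance, can be sketched as follows. The constant-velocity obstacle model, the 2D workspace, and all names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def segment_point_distance(p0, p1, q):
    """Minimum distance from point q to the segment p0 -> p1."""
    d = p1 - p0
    denom = float(np.dot(d, d))
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(q - p0, d) / denom, 0.0, 1.0))
    return float(np.linalg.norm(p0 + t * d - q))

def primitive_collides(robot_p0, robot_p1, obs_p0, obs_v, dt, safety_radius):
    """Check one motion primitive against one moving obstacle.

    robot_p0 / robot_p1: robot position at the start / end of the primitive (2D).
    obs_p0, obs_v:       obstacle position and (assumed constant) velocity.
    dt:                  duration of the primitive.
    """
    # Work in the obstacle's frame: the robot's relative start and end points.
    rel_start = robot_p0 - obs_p0
    rel_end = robot_p1 - (obs_p0 + obs_v * dt)
    # Collision if the relative segment passes within the safety radius of the origin.
    return segment_point_distance(rel_start, rel_end, np.zeros(2)) < safety_radius
```

Because the check reduces to a single point-segment distance per obstacle and primitive, it can be evaluated many thousands of times per planning cycle.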
As autonomous robots increasingly become part of daily life, they will often encounter dynamic environments while having only limited information about their surroundings. Unfortunately, due to the possible presence of malicious dynamic actors, it is infeasible to develop an algorithm that can guarantee collision-free operation. Instead, one can attempt to design a control technique that guarantees the robot is not at fault in any collision. In the literature, making such guarantees in real time has been restricted to static environments or specific dynamic models. To ensure not-at-fault behavior, a robot must first correctly sense and predict the world around it within some sufficiently large sensor horizon (the prediction problem), then control itself correctly relative to those predictions (the control problem). This paper addresses the control problem by proposing Reachability-based Trajectory Design for Dynamic environments (RTD-D), which guarantees that a robot with an arbitrary nonlinear dynamic model responds correctly to predictions in arbitrary dynamic environments. RTD-D first computes, offline, a Forward Reachable Set (FRS) of the robot tracking parameterized desired trajectories that include fail-safe maneuvers. Then, for online receding-horizon planning, the method provides a way to discretize predictions of an arbitrary dynamic environment to enable real-time collision checking. The FRS is used to map these discretized predictions to trajectories that the robot can track while provably remaining not at fault. One such trajectory is chosen at each iteration, or the robot executes the fail-safe maneuver from its previous trajectory, which is guaranteed to be not at fault. RTD-D is shown to produce not-at-fault behavior over thousands of simulations and several real-world hardware demonstrations on two robots: a Segway and a small electric vehicle.
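A highly simplified sketch of the online selection step described above is shown below, assuming the offline FRS has already been discretized into per-parameter, per-time-bin sets of reachable workspace cells; the data layout and names are assumptions made for illustration, not the RTD-D code.

```python
# Minimal sketch under strong simplifying assumptions: frs[k] gives, per time bin,
# the set of workspace cells reachable while tracking trajectory parameter k, and
# predictions gives the cells predicted occupied in the same bins.

def select_trajectory(frs, predictions, candidate_params, fallback):
    """Pick a trajectory parameter whose reachable set avoids all predictions,
    otherwise return the fail-safe maneuver from the previous plan.

    frs:          dict param -> list of sets of cells, one set per time bin
    predictions:  list of sets of cells predicted occupied, one set per time bin
    fallback:     the fail-safe maneuver continuing the previous trajectory
    """
    for k in candidate_params:
        reachable_per_bin = frs[k]
        if all(reachable_per_bin[t].isdisjoint(predictions[t])
               for t in range(len(predictions))):
            return k          # tracking k cannot intersect any predicted obstacle
    return fallback           # no safe parameter found: execute the fail-safe maneuver
```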
Pruning is the art of cutting unwanted and unhealthy plant branches and is one of the difficult tasks in field robotics. It becomes even more complex when the plant branches are moving. Moreover, the reproducibility of robot pruning skills is another challenge, owing to the heterogeneous nature of vines in the vineyard. This research proposes a multi-modal framework to deal with dynamic vines, with the aim of sim2real skill transfer. 3D models of vines are constructed in the Blender engine and rendered in a simulated environment as needed for training the robot. A Natural Admittance Controller (NAC) is applied to deal with the dynamics of the vines; it uses force feedback and compensates for friction effects while maintaining the passivity of the system. Faster R-CNN is used to detect the spurs on the vines, and a statistical pattern-recognition algorithm based on K-means clustering is then applied to find the effective pruning points. The proposed framework is tested in both simulated and real environments.
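A minimal sketch of the clustering step is given below, assuming the Faster R-CNN detections have already been converted into 3D spur centers and that the number of pruning points is known in advance; the function name and parameters are illustrative, not the paper's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def pruning_points_from_spurs(spur_centers, n_points=5):
    """Cluster detected spur centers (N x 3 array of 3D positions) with K-means
    and return one candidate pruning point per cluster (the cluster centroid)."""
    spur_centers = np.asarray(spur_centers, dtype=float)
    kmeans = KMeans(n_clusters=n_points, n_init=10, random_state=0)
    kmeans.fit(spur_centers)
    return kmeans.cluster_centers_
```

The returned centroids would then be passed to the admittance-controlled arm as target cut locations.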
In this paper we present a simulation framework for evaluating the navigation and localization metrological performance of a robotic platform. The simulator, based on ROS (Robot Operating System) Gazebo, is targeted at a planetary-like research vehicle and allows various perception and navigation approaches to be tested under specific environment conditions. The possibility of simulating arbitrary sensor setups comprising cameras, LiDARs (Light Detection and Ranging), and IMUs makes Gazebo an excellent resource for rapid prototyping. In this work we evaluate a variety of open-source visual and LiDAR SLAM (Simultaneous Localization and Mapping) algorithms in a simulated Martian environment. Datasets are captured by driving the rover and recording sensor outputs as well as the ground truth for a precise performance evaluation.
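For the kind of ground-truth-based evaluation described above, a commonly used localization metric is the absolute trajectory error. The sketch below computes its RMSE from time-associated estimated and ground-truth positions; this metric and the alignment step are assumptions standing in for whatever the paper actually reports.

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """RMSE of translational error after rigidly aligning the estimated trajectory
    to the ground truth (Horn/Umeyama alignment, no scale). Both inputs are N x 3
    arrays of positions already associated by timestamp."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)

    # Center both trajectories and solve for the optimal rotation via SVD.
    est_c = est - est.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    H = est_c.T @ gt_c
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = gt.mean(axis=0) - R @ est.mean(axis=0)

    # Apply the alignment and measure per-pose position error.
    aligned = (R @ est.T).T + t
    errors = np.linalg.norm(aligned - gt, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```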
The theoretical ability of modular robots to reconfigure in response to complex tasks in a priori unknown environments has frequently been cited as an advantage and remains a major motivator for work in the field. We present a modular robot system capable of autonomously completing high-level tasks by reactively reconfiguring to meet the needs of a perceived, a priori unknown environment. The system integrates perception, high-level planning, and modular hardware, and is validated in three hardware demonstrations. Given a high-level task specification, a modular robot autonomously explores an unknown environment, decides when and how to reconfigure, and manipulates objects to complete its task. The system architecture balances distributed mechanical elements with centralized perception, planning, and control. By providing an example of how a modular robot system can be designed to leverage reactive reconfigurability in unknown environments, we have begun to lay the groundwork for modular self-reconfigurable robots to address tasks in the real world.