
Distance estimation with efference copies and optical flow maneuvers: a stability-based strategy

Published by: Guido De Croon
Publication date: 2015
Research field: Informatics engineering
Language: English
Author: G.C.H.E. de Croon

The visual cue of optical flow plays a major role in the navigation of flying insects, and is increasingly studied for use by small flying robots as well. A major problem is that successful optical flow control seems to require distance estimates, while optical flow is known to provide only the ratio of velocity to distance. In this article, a novel, stability-based strategy is proposed to estimate distances with monocular optical flow and knowledge of the control inputs (efference copies). It is shown analytically that, given a fixed control gain, the stability of a constant divergence control loop only depends on the distance to the approached surface. At close distances, the control loop first starts to exhibit self-induced oscillations, eventually leading to instability. The proposed stability-based strategy for estimating distances has two major attractive characteristics. First, self-induced oscillations are easy for the robot to detect and are hardly influenced by wind. Second, the distance can be estimated during a zero divergence maneuver, i.e., around hover. The stability-based strategy is implemented and tested both in simulation and with a Parrot AR.Drone 2.0. It is shown that it can be used to: (1) trigger a final approach response during a constant divergence landing with fixed gain, (2) estimate the distance in hover, and (3) estimate distances during an entire landing if the robot uses adaptive gain control to continuously stay on the edge of oscillation.
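The mechanism behind the strategy can be sketched in a few lines of code. The following is a minimal illustration, not the paper's model or implementation: it assumes double-integrator vertical dynamics, a proportional controller on the divergence error, and a short sensing delay, with all parameter values invented. It only shows the qualitative effect the paper exploits, namely that a fixed-gain constant-divergence loop responds smoothly far from the surface and develops self-induced oscillations as the distance shrinks.

```python
# Minimal sketch of a constant-divergence control loop (not the paper's code).
# Assumptions: double-integrator vertical dynamics, a proportional controller on
# the divergence error, a fixed sensing delay; all numbers are illustrative only.
import numpy as np

def simulate_constant_divergence(z0, gain=6.0, d_setpoint=0.1, dt=0.02,
                                 delay_steps=10, t_end=5.0):
    """Simulate a fixed-gain constant-divergence descent; return the divergence trace."""
    z, v = z0, 0.0                      # height above the surface [m], vertical speed [m/s]
    d_buffer = [0.0] * delay_steps      # crude sensing/processing delay on the divergence
    trace = []
    for _ in range(int(t_end / dt)):
        d_measured = d_buffer.pop(0)                          # delayed divergence estimate
        u = np.clip(gain * (d_measured - d_setpoint), -3, 3)  # commanded vertical acceleration
        v += u * dt
        z = max(z + v * dt, 0.01)                             # integrate height, keep it positive
        d_buffer.append(-v / z)                               # divergence (positive while descending)
        trace.append(d_buffer[-1])
    return np.array(trace)

def oscillation_amplitude(trace, window=50):
    """Peak-to-peak divergence over the last `window` samples: a simple oscillation cue."""
    tail = trace[-window:]
    return tail.max() - tail.min()

# With the same gain, the loop is well behaved when the surface is far away and
# shows self-induced oscillations when it is close, which is the cue used for
# distance estimation.
for z0 in (10.0, 2.0, 0.5):
    d = simulate_constant_divergence(z0)
    print(f"start height {z0:4.1f} m -> oscillation amplitude {oscillation_amplitude(d):.3f}")
```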


Read also

The rising popularity of driverless cars has led to research and development in the field of autonomous racing, where overtaking is a challenging task. Vehicles have to detect and operate at the limits of dynamic handling, and decisions have to be made at high speed and under high acceleration. One of the most crucial parts of autonomous racing is path planning and decision making for an overtaking maneuver against a dynamic opponent vehicle. In this paper we present the evaluation of a track-based offline policy learning approach for autonomous racing. We define specific track portions and conduct offline experiments to evaluate the probability of an overtaking maneuver based on the speed and position of the ego vehicle. Based on these experiments we define overtaking probability distributions for each of the track portions. Further, we propose a switching MPCC controller setup that incorporates the learnt policies to achieve a higher rate of overtaking maneuvers. Through exhaustive simulations, we show that the proposed algorithm is able to increase the number of overtakes at different track portions.
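As a rough illustration of the decision layer described above, the sketch below looks up an offline-learnt overtaking probability by track portion and ego speed and uses it to switch controller modes. The portion names, the tabulated probabilities, the speed bins and the threshold are all invented; the paper's actual policies and MPCC formulation are not reproduced here.

```python
# Schematic sketch (not the authors' code) of switching controller modes based on
# offline-learnt, per-portion overtaking probabilities. All values are invented.
import bisect

# Offline-learnt probability of a successful overtake, indexed by track portion
# and by ego-speed bin (m/s); in the paper these would come from offline experiments.
OVERTAKE_PROB = {
    "straight_1": {10: 0.85, 15: 0.75, 20: 0.55},
    "chicane":    {10: 0.30, 15: 0.15, 20: 0.05},
    "hairpin":    {10: 0.40, 15: 0.20, 20: 0.10},
}
SPEED_BINS = [10, 15, 20]

def overtaking_probability(portion: str, ego_speed: float) -> float:
    """Look up the learnt probability for the next speed bin at or above the ego speed."""
    table = OVERTAKE_PROB[portion]
    idx = min(bisect.bisect_left(SPEED_BINS, ego_speed), len(SPEED_BINS) - 1)
    return table[SPEED_BINS[idx]]

def select_mode(portion: str, ego_speed: float, threshold: float = 0.5) -> str:
    """Switch between a 'follow' MPCC mode and an 'overtake' MPCC mode."""
    p = overtaking_probability(portion, ego_speed)
    return "overtake" if p >= threshold else "follow"

print(select_mode("straight_1", 14.0))  # high learnt probability -> 'overtake'
print(select_mode("chicane", 14.0))     # low learnt probability  -> 'follow'
```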
Motion planning for vehicles under the influence of flow fields can benefit from the idea of streamline-based planning, which exploits ideas from fluid dynamics to achieve computational efficiency. Important to such planners is an efficient means of computing the travel distance and direction between two points in free space, but this is difficult to achieve in strong incompressible flows such as ocean currents. We propose two useful distance functions in analytical form that combine Euclidean distance with values of the stream function associated with a flow field, and with an estimation of the strength of the opposing flow between two points. Further, we propose steering heuristics that are useful for steering towards a sampled point. We evaluate these ideas by integrating them with RRT* and comparing the algorithm's performance with state-of-the-art methods in an artificial flow field and in actual ocean prediction data in the region of the dominant East Australian Current between Sydney and Brisbane. Results demonstrate the method's computational efficiency and its ability to find high-quality paths that outperform state-of-the-art methods, and show promise for practical use with autonomous marine robots.
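One plausible instantiation of such a distance function (not necessarily the authors' exact form) combines the Euclidean distance with the change in stream-function value between the two points and with a sampled estimate of the opposing flow along the straight segment, as sketched below for a simple point-vortex field. The field, the weights and the sampling density are assumptions made for illustration.

```python
# A plausible flow-aware distance for sampling-based planning (not the authors'
# exact functions): Euclidean distance plus a penalty on the change in stream
# function and on the estimated opposing flow along the segment.
import numpy as np

def stream_function(p):
    """Stream function of a simple point vortex centred at the origin (for illustration)."""
    x, y = p
    return 0.5 * np.log(x**2 + y**2 + 1e-9)

def flow_velocity(p):
    """Velocity of the same vortex: u = dpsi/dy, v = -dpsi/dx for incompressible 2D flow."""
    x, y = p
    r2 = x**2 + y**2 + 1e-9
    return np.array([y / r2, -x / r2])

def flow_aware_distance(p, q, alpha=1.0, beta=1.0, n_samples=10):
    """Euclidean distance + stream-function change + averaged opposing-flow penalty."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    euclid = np.linalg.norm(q - p)
    if euclid < 1e-9:
        return 0.0
    direction = (q - p) / euclid
    # Penalise the component of the current that opposes travel along the segment.
    ts = np.linspace(0.0, 1.0, n_samples)
    opposing = np.mean([max(0.0, -flow_velocity(p + t * (q - p)) @ direction) for t in ts])
    return euclid + alpha * abs(stream_function(q) - stream_function(p)) + beta * opposing * euclid

print(flow_aware_distance([1.0, 0.0], [1.0, 2.0]))   # travelling against the current: costlier
print(flow_aware_distance([1.0, 2.0], [1.0, 0.0]))   # travelling with the current: cheaper
```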
Hengli Wang, Rui Fan, Ming Liu (2020)
The interpretation of ego motion and scene change is a fundamental task for mobile robots. Optical flow information can be employed to estimate motion in the surroundings. Recently, unsupervised optical flow estimation has become a research hotspot. However, unsupervised approaches are often unreliable in partially occluded or texture-less regions. To deal with this problem, we propose CoT-AMFlow, an unsupervised optical flow estimation approach. In terms of the network architecture, we develop an adaptive modulation network that employs two novel module types, flow modulation modules (FMMs) and cost volume modulation modules (CMMs), to remove outliers in challenging regions. As for the training paradigm, we adopt a co-teaching strategy, where two networks simultaneously teach each other about challenging regions to further improve accuracy. Experimental results on the MPI Sintel, KITTI Flow and Middlebury Flow benchmarks demonstrate that our CoT-AMFlow outperforms all other state-of-the-art unsupervised approaches, while still running in real time. Our project page is available at https://sites.google.com/view/cot-amflow.
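The co-teaching idea can be illustrated generically; the sketch below is not the CoT-AMFlow architecture or training code. Two networks are trained side by side and each one's lowest-loss samples in a batch are used to update the other, so each network "teaches" its peer the samples it is most confident about. The toy regression task, network sizes and keep-ratio are invented purely for illustration.

```python
# Generic co-teaching sketch on a toy task (not the CoT-AMFlow training scheme):
# each network selects its smallest-loss samples and the peer is updated on them.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net():
    return nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

net_a, net_b = make_net(), make_net()
opt_a = torch.optim.Adam(net_a.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(net_b.parameters(), lr=1e-3)

# Toy data: y = sin(x) with a fraction of corrupted ("challenging") targets.
x = torch.rand(512, 1) * 6 - 3
y = torch.sin(x)
y[::5] += torch.randn_like(y[::5]) * 2.0      # corrupt every fifth target

keep_ratio = 0.8                              # fraction of a batch treated as reliable

for step in range(200):
    idx = torch.randint(0, 512, (64,))
    xb, yb = x[idx], y[idx]

    loss_a = (net_a(xb) - yb).pow(2).squeeze(1)   # per-sample losses of network A
    loss_b = (net_b(xb) - yb).pow(2).squeeze(1)   # per-sample losses of network B
    k = int(keep_ratio * len(xb))

    # Each network picks its smallest-loss (most trusted) samples for the *peer*.
    sel_a = torch.topk(loss_a, k, largest=False).indices
    sel_b = torch.topk(loss_b, k, largest=False).indices

    opt_a.zero_grad(); loss_a[sel_b].mean().backward(); opt_a.step()
    opt_b.zero_grad(); loss_b[sel_a].mean().backward(); opt_b.step()

print("final fit error:", (net_a(x) - torch.sin(x)).pow(2).mean().item())
```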
Dynamic environments are challenging for visual SLAM since the moving objects occlude the static environment features and lead to wrong camera motion estimation. In this paper, we present a novel dense RGB-D SLAM solution that simultaneously accomplishes dynamic/static segmentation and camera ego-motion estimation as well as static background reconstruction. Our novelty is using optical flow residuals to highlight the dynamic semantics in the RGB-D point clouds and provide more accurate and efficient dynamic/static segmentation for camera tracking and background reconstruction. The dense reconstruction results on public datasets and real dynamic scenes indicate that the proposed approach achieves accurate and efficient performance in both dynamic and static environments compared to state-of-the-art approaches.
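A much-simplified sketch of the flow-residual idea (not the paper's pipeline): predict the flow that camera ego-motion alone would induce at every pixel given its depth, subtract it from the measured flow, and label pixels with a large residual as dynamic. The intrinsics, the threshold and the synthetic inputs below are assumptions.

```python
# Simplified flow-residual segmentation sketch (not the paper's implementation).
import numpy as np

def rigid_flow(depth, K, R, t):
    """Per-pixel flow induced by camera motion (R, t) for a static scene with known depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T      # 3 x N pixels
    rays = np.linalg.inv(K) @ pix                                          # back-project
    pts = rays * depth.reshape(1, -1)                                      # 3D points
    proj = K @ (R @ pts + t.reshape(3, 1))                                 # reproject
    proj = proj[:2] / proj[2:3]
    return (proj - pix[:2]).T.reshape(h, w, 2)

def dynamic_mask(measured_flow, depth, K, R, t, thresh=3.0):
    """Pixels whose measured flow deviates from the rigid prediction are labelled dynamic."""
    residual = measured_flow - rigid_flow(depth, K, R, t)
    return np.linalg.norm(residual, axis=-1) > thresh

# Tiny synthetic example: a static scene with identity rotation and a small translation.
K = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
depth = np.full((48, 64), 5.0)
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
flow = rigid_flow(depth, K, R, t)           # pretend this is the measured flow
flow[10:20, 10:20] += 8.0                   # inject an independently moving patch
print("dynamic pixels:", dynamic_mask(flow, depth, K, R, t).sum())
```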
S. Li, M.M.O.I. Ozo, C. De Wagter (2018)
Drone racing is becoming a popular sport where human pilots have to control their drones to fly at high speed through complex environments and pass a number of gates in a pre-defined sequence. In this paper, we develop an autonomous system for drones to race fully autonomously using only onboard resources. Instead of commonly used visual navigation methods, such as simultaneous localization and mapping and visual inertial odometry, which are computationally expensive for micro aerial vehicles (MAVs), we developed the highly efficient snake gate detection algorithm for visual navigation, which can detect the gate at 20 Hz on a Parrot Bebop drone. Then, with the gate detection result, we developed a robust pose estimation algorithm which has better tolerance to detection noise than a state-of-the-art perspective-n-point method. During the race, sometimes the gates are not in the drone's field of view. For this case, a state prediction-based feed-forward control strategy is developed to steer the drone to fly to the next gate. Experiments show that the drone can fly a half circle with a 1.5 m radius within 2 seconds with only 30 cm of error at the end of the circle, without any position feedback. Finally, the whole system is tested in a complex environment (a showroom in the Faculty of Aerospace Engineering, TU Delft). The result shows that the drone can complete the track of 15 gates at a speed of 1.5 m/s, which is faster than the speeds exhibited at the 2016 and 2017 IROS autonomous drone races.