
Omni-swarm: A Decentralized Omnidirectional Visual-Inertial-UWB State Estimation System for Aerial Swarm

Added by Xu Hao
Publication date: 2021
Language: English





Decentralized state estimation is one of the most fundamental components of autonomous aerial swarm systems operating in GPS-denied areas, yet it remains a highly challenging research topic. To address this gap, this paper proposes Omni-swarm, a decentralized omnidirectional visual-inertial-UWB state estimation system for aerial swarms. To resolve the issues of observability, complicated initialization, insufficient accuracy, and lack of global consistency, we introduce an omnidirectional perception system as the front-end of Omni-swarm. It consists of omnidirectional sensors, namely stereo fisheye cameras and ultra-wideband (UWB) sensors, together with algorithms for fisheye visual-inertial odometry (VIO), multi-drone map-based localization, and visual object detection. A graph-based optimization combined with forward propagation serves as the back-end of Omni-swarm and fuses the measurements from the front-end. Experimental results show that the proposed decentralized state estimation method achieves centimeter-level relative state estimation accuracy on the swarm system while ensuring global consistency. Moreover, supported by Omni-swarm, inter-drone collision avoidance can be accomplished in a fully decentralized scheme without any external device, demonstrating the potential of Omni-swarm to serve as the foundation of autonomous aerial swarm flights in different scenarios.
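To make the back-end idea concrete, here is a minimal sketch of a graph-style optimization that fuses per-drone VIO odometry with inter-drone UWB ranges in a toy 2-D setting. Everything in it (the two-drone scenario, the measurement values, and the use of scipy's least_squares) is an illustrative assumption, not the authors' implementation, which works on full 6-DoF states and runs in a decentralized fashion.

```python
# Toy sketch: fuse VIO odometry and inter-drone UWB ranges in one least-squares
# problem.  All numbers and names here are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

N_DRONES, N_STEPS = 2, 3  # two drones, three time steps, 2-D positions

# Assumed measurements: VIO displacement of each drone between steps,
# and UWB range between drone 0 and drone 1 at each step.
vio_deltas = {
    (0, 0): np.array([1.0, 0.0]), (0, 1): np.array([1.0, 0.1]),
    (1, 0): np.array([0.9, 0.0]), (1, 1): np.array([1.1, -0.1]),
}
uwb_ranges = {0: 2.0, 1: 1.9, 2: 2.1}

def unpack(x):
    """Reshape the flat parameter vector into positions[drone, step, xy]."""
    return x.reshape(N_DRONES, N_STEPS, 2)

def residuals(x):
    p = unpack(x)
    res = []
    # Anchor drone 0's first position to remove the global translation freedom.
    res.extend(p[0, 0] - np.zeros(2))
    # Odometry factors: consecutive positions should match the VIO deltas.
    for (d, k), delta in vio_deltas.items():
        res.extend((p[d, k + 1] - p[d, k]) - delta)
    # UWB factors: inter-drone distance should match the measured range.
    for k, r in uwb_ranges.items():
        res.append(np.linalg.norm(p[0, k] - p[1, k]) - r)
    return np.array(res)

# Initial guess built from the VIO chain (p0 is a view into x0).
x0 = np.zeros(N_DRONES * N_STEPS * 2)
p0 = unpack(x0)
p0[1, :, 1] = -2.0  # assume drone 1 starts roughly 2 m to the side
for (d, k), delta in vio_deltas.items():
    p0[d, k + 1] = p0[d, k] + delta

sol = least_squares(residuals, x0)
print(unpack(sol.x))  # refined positions of both drones over time
```

In the same spirit as the paper's back-end, the odometry terms keep each drone's trajectory locally smooth while the range terms tie the drones' estimates together into a consistent relative frame.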



Related research

Qin Shi, Xiaowei Cui, Wei Li (2019)
Navigation applications relying on the Global Navigation Satellite System (GNSS) are limited in indoor environments and in GNSS-denied outdoor terrains such as dense urban areas or forests. In this paper, we present a novel, accurate, robust and low-cost GNSS-independent navigation system composed of a monocular camera and ultra-wideband (UWB) transceivers. Visual techniques have achieved excellent results in computing the incremental motion of the sensor, and UWB methods have proved to provide promising localization accuracy thanks to the high time resolution of UWB ranging signals. However, monocular visual techniques suffer from scale ambiguity and are therefore unsuitable for applications requiring metric results, while UWB methods assume that the positions of the UWB anchors are pre-calibrated and known, precluding their application in unknown and challenging environments. To this end, we advocate leveraging the monocular camera and UWB together to create a map of visual features and UWB anchors. We propose a visual-UWB Simultaneous Localization and Mapping (SLAM) algorithm that tightly combines visual and UWB measurements into a joint non-linear optimization problem on a Lie manifold. The 6-Degrees-of-Freedom (DoF) states of the vehicles and the map are estimated by minimizing the UWB ranging errors and landmark reprojection errors. Our navigation system starts with an exploratory task that performs real-time visual-UWB SLAM to obtain a global map, followed by a navigation task that reuses this global map. The tasks can be performed by different vehicles of a heterogeneous team, depending on their equipped sensors and payload capability. We validate our system on public datasets, achieving typical centimeter accuracy and 0.1% scale error.
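The joint optimization described above minimizes two kinds of residuals. The sketch below shows, under assumed pinhole-camera and point-anchor models, how a landmark reprojection residual and a UWB ranging residual might be evaluated; the paper itself formulates them on a Lie manifold with full 6-DoF poses, so this is only an illustration.

```python
# Hedged sketch of the two residual types combined in a visual-UWB SLAM cost:
# landmark reprojection errors and UWB ranging errors.  Models, variable
# names, and values are assumptions for illustration.
import numpy as np

def reprojection_residual(K, R_wc, t_wc, landmark_w, observed_px):
    """Pixel error of a world landmark projected into a camera at (R, t)."""
    p_c = R_wc.T @ (landmark_w - t_wc)   # world frame -> camera frame
    uv = (K @ (p_c / p_c[2]))[:2]        # pinhole projection to pixels
    return uv - observed_px

def uwb_residual(p_vehicle, p_anchor, measured_range):
    """Difference between predicted and measured range to a UWB anchor."""
    return np.linalg.norm(p_vehicle - p_anchor) - measured_range

# Example evaluation with made-up values.
K = np.array([[400., 0., 320.], [0., 400., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.zeros(3)
print(reprojection_residual(K, R, t, np.array([0.1, 0.0, 2.0]), np.array([340., 240.])))
print(uwb_residual(np.zeros(3), np.array([3., 4., 0.]), 5.1))
```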
Industrial facilities often require periodic visual inspections of key installations. Examining these points of interest is time consuming, potentially hazardous, or requires special equipment to reach them. Micro aerial vehicles (MAVs) are ideal platforms to automate this expensive and tedious task. In this work we present a novel system that enables a human operator to teach a visual inspection task to an autonomous aerial vehicle by simply demonstrating the task with a handheld device. To enable robust operation in confined, GPS-denied environments, the system employs the Google Tango visual-inertial mapping framework as the only source of pose estimates. In a first step, the operator records the desired inspection path and defines the inspection points. The mapping framework then computes a feature-based localization map, which is shared with the robot. After take-off, the robot estimates its pose based on this map and plans a smooth trajectory through the waypoints defined by the operator. Furthermore, the system is able to track the poses of other robots or of the operator localized in the same map, and follow them in real time while keeping a safe distance.
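As a rough illustration of planning a smooth path through operator-defined waypoints, the sketch below fits cubic splines to a few made-up 3-D inspection points; the waypoints, timing, and the choice of scipy's CubicSpline are assumptions, not the system's actual trajectory planner.

```python
# Minimal sketch (assumed, not the paper's planner): a smooth trajectory
# through hand-picked inspection waypoints via per-axis cubic splines.
import numpy as np
from scipy.interpolate import CubicSpline

waypoints = np.array([[0., 0., 1.], [2., 1., 1.5], [4., 0., 2.], [6., -1., 1.5]])
t_wp = np.linspace(0.0, 1.0, len(waypoints))   # nominal timing per waypoint

spline = CubicSpline(t_wp, waypoints, axis=0)  # one spline per axis (x, y, z)
t = np.linspace(0.0, 1.0, 50)
positions = spline(t)                          # sampled smooth path
velocities = spline(t, 1)                      # first derivative = velocity
print(positions[:3], velocities[:3], sep="\n")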
Microrobotics has the potential to revolutionize many applications, including targeted material delivery, assembly, and surgery. The same properties that promise breakthrough solutions, namely small size and large populations, present unique challenges for controlling motion. Robotic manipulation usually assumes intelligent agents, not particle systems manipulated by a global signal. To identify the key parameters for particle manipulation, we used a collection of online games where players steer swarms of up to 500 particles to complete manipulation challenges. We recorded statistics from over ten thousand players. Inspired by techniques with which human operators performed well, we investigate controllers that use only the mean and variance of the swarm. We prove that the mean position is controllable and provide conditions under which the variance is controllable. We then derive automatic controllers for these quantities and a hysteresis-based switching control to regulate the first two moments of the particle distribution. Finally, we employ these controllers as primitives for an object manipulation task and implement all controllers on 100 kilobots controlled by the direction of a global light source.
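A toy simulation can illustrate why the mean position is controllable even though every particle receives the same command: the sketch below, with assumed first-order dynamics, noise levels, and gains, drives the swarm mean toward a goal using a single global proportional input.

```python
# Toy sketch (assumed dynamics, not the paper's controller): steering the
# MEAN of a particle swarm with one global velocity command shared by all.
import numpy as np

rng = np.random.default_rng(0)
particles = rng.uniform(-1.0, 1.0, size=(500, 2))   # 500 particles in 2-D
goal_mean = np.array([5.0, 3.0])
dt, gain = 0.1, 1.0

for _ in range(200):
    mean = particles.mean(axis=0)
    u = gain * (goal_mean - mean)                    # global proportional command
    noise = rng.normal(scale=0.05, size=particles.shape)
    particles += dt * (u + noise)                    # every particle gets the same u

print("final mean:", particles.mean(axis=0))         # ends near the goal
```

Controlling the variance requires something extra (for example walls or particle-dependent friction), which is why the paper treats it separately and gives explicit conditions.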
In this paper, we present Neural-Swarm, a nonlinear decentralized stable controller for close-proximity flight of multirotor swarms. Close-proximity control is challenging due to the complex aerodynamic interaction effects between multirotors, such as downwash from higher vehicles to lower ones. Conventional methods often fail to properly capture these interaction effects, resulting in controllers that must maintain large safety distances between vehicles, and thus are not capable of close-proximity flight. Our approach combines a nominal dynamics model with a regularized permutation-invariant Deep Neural Network (DNN) that accurately learns the high-order multi-vehicle interactions. We design a stable nonlinear tracking controller using the learned model. Experimental results demonstrate that the proposed controller significantly outperforms a baseline nonlinear tracking controller with up to four times smaller worst-case height tracking errors. We also empirically demonstrate the ability of our learned model to generalize to larger swarm sizes.
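The sketch below illustrates the general "nominal model plus learned residual" structure with an assumed toy example: a known thrust/gravity term plus a permutation-invariant sum of a shared network applied to each neighbor's relative state. The layer sizes, weights, and dynamics are placeholders, not the trained Neural-Swarm model.

```python
# Hedged sketch of combining nominal dynamics with a permutation-invariant
# learned interaction term.  Sizes and weights are illustrative assumptions.
import numpy as np

def nominal_accel(mass, thrust, g=9.81):
    """Known rigid-body vertical acceleration from thrust and gravity."""
    return thrust / mass - g

def learned_interaction(neighbor_rel_states, W1, W2):
    """Permutation-invariant residual: sum a shared MLP over all neighbors."""
    h = np.maximum(neighbor_rel_states @ W1, 0.0)    # shared ReLU layer
    return np.sum(h @ W2, axis=0)                    # summation => order-invariant

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 16)) * 0.1                  # placeholder "trained" weights
W2 = rng.normal(size=(16, 1)) * 0.1
neighbors = np.array([[0.0, 0.0, 1.2], [0.5, 0.3, 0.8]])   # relative positions

accel = nominal_accel(mass=0.75, thrust=7.8) + learned_interaction(neighbors, W1, W2)
print(accel)   # nominal prediction corrected by the interaction term
```

Because the neighbor term is a sum over a shared function, swapping the order of neighbors leaves the prediction unchanged, which is what lets one model generalize to different swarm sizes.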
Among the available solutions for drone swarm simulations, we identified a gap: simulation frameworks that allow easy algorithm prototyping, tuning, debugging and performance analysis, and that do not require the user to interface with multiple programming languages. We present SwarmLab, a software package written entirely in Matlab that aims at creating standardized processes and metrics to quantify the performance and robustness of swarm algorithms, with a particular focus on drones. We showcase the functionalities of SwarmLab by comparing two state-of-the-art algorithms for the navigation of aerial swarms in cluttered environments, Olfati-Saber's and Vasarhelyi's. We analyze the variability of the inter-agent distances and agent speeds during flight. We also study some of the performance metrics presented, i.e., order, inter- and extra-agent safety, union, and connectivity. While Olfati-Saber's approach results in a faster crossing of the obstacle field, Vasarhelyi's approach allows the agents to fly smoother trajectories without oscillations. We believe that SwarmLab is relevant for both the biological and robotics research communities, and for education, since it allows fast algorithm development, automatic collection of simulated data, and systematic analysis of swarming behaviors with performance metrics inherited from the state of the art.
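SwarmLab itself is written in Matlab; purely as an illustration of the kind of "order" metric it reports, the sketch below computes the mean pairwise alignment of agent velocity directions in Python. The exact formula SwarmLab uses may differ, so treat this as an assumption.

```python
# Small sketch of a swarm "order" metric: average cosine similarity between
# all distinct pairs of agent velocities (1 = fully aligned headings).
import numpy as np

def order_metric(velocities):
    unit = velocities / np.linalg.norm(velocities, axis=1, keepdims=True)
    cos = unit @ unit.T
    n = len(velocities)
    return (cos.sum() - n) / (n * (n - 1))           # exclude self-pairs

aligned = np.tile([1.0, 0.2], (5, 1))                # identical headings -> order 1.0
scattered = np.random.default_rng(2).normal(size=(5, 2))
print(order_metric(aligned), order_metric(scattered))
```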