Indoor localization for autonomous micro aerial vehicles (MAVs) requires specific localization techniques, since the Global Positioning System (GPS) is usually not available. We present an efficient onboard computer vision approach that estimates 2D positions of an MAV in real time. This global localization system does not suffer from error accumulation over time and uses a $k$-Nearest Neighbors ($k$-NN) algorithm to predict positions based on textons---small characteristic image patches that capture the texture of an environment. A particle filter aggregates the estimates and resolves positional ambiguities. To predict the performance of the approach in a given setting, we developed an evaluation technique that compares environments and identifies critical areas within them. We conducted flight tests to demonstrate the applicability of our approach. The algorithm achieves a localization accuracy of approximately 0.6 m on a 5 m $\times$ 5 m area with a runtime of 32 ms on board the MAV. Based on random sampling, its computational effort is scalable to different platforms, trading off speed and accuracy.
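As a rough illustration of the pipeline described in this abstract, the sketch below combines a $k$-NN regressor over texton histograms with a simple particle filter that aggregates the estimates. The texton dictionary size, particle count, noise parameters, and variable names are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (assumptions, not the authors' implementation): k-NN position
# regression on texton histograms, aggregated by a basic particle filter.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

N_TEXTONS = 20        # size of the texton dictionary (assumed)
N_PARTICLES = 500     # particle count, trading accuracy for runtime (assumed)

def texton_histogram(patch_labels):
    """Normalized histogram of texton labels extracted from one image."""
    h, _ = np.histogram(patch_labels, bins=N_TEXTONS, range=(0, N_TEXTONS), density=True)
    return h

# Training: map texton histograms of images taken at known 2D positions to those positions.
knn = KNeighborsRegressor(n_neighbors=5)
# X_train: (n_images, N_TEXTONS) histograms, y_train: (n_images, 2) positions in metres
# knn.fit(X_train, y_train)

# Particle filter over 2D position on a 5 m x 5 m area.
particles = np.random.uniform(0.0, 5.0, size=(N_PARTICLES, 2))

def pf_update(particles, z_knn, motion_std=0.05, meas_std=0.5):
    """Diffuse particles, weight them by distance to the k-NN estimate, resample."""
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    w = np.exp(-np.sum((particles - z_knn) ** 2, axis=1) / (2 * meas_std ** 2))
    w /= w.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Per frame: z = knn.predict(texton_histogram(labels)[None, :])[0]
#            particles = pf_update(particles, z)
#            position_estimate = particles.mean(axis=0)
```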
In this work, we address the estimation, planning, control and mapping problems to allow a small quadrotor to autonomously inspect the interior of hazardous damaged nuclear sites. These algorithms run onboard a computationally limited CPU. We investigate the effect of varying illumination on the system performance. To the best of our knowledge, this is the first fully autonomous system of this size and scale applied to inspect the interior of a full-scale mock-up of a Primary Containment Vessel (PCV). The proposed solution opens up new ways to inspect nuclear reactors and to support nuclear decommissioning, which is well known to be a dangerous, long, and tedious process. Experimental results under varying illumination conditions show the ability to navigate a full-scale mock-up PCV pedestal and create a map of the environment, while concurrently avoiding obstacles.
There is an increasing demand for flexible and computationally efficient controllers for micro aerial vehicles (MAVs) due to the high degree of environmental perturbations they face. In this work, an evolving neuro-fuzzy controller, namely the Parsimonious Controller (PAC), is proposed. It features fewer network parameters than conventional approaches due to the absence of rule premise parameters. PAC is built upon a recently developed evolving neuro-fuzzy system known as the parsimonious learning machine (PALM) and adopts new rule growing and pruning modules derived from an approximation of bias and variance. These rule adaptation methods do not rely on user-defined thresholds, thereby increasing PAC's autonomy for real-time deployment. PAC adapts the consequent parameters with sliding mode control (SMC) theory in a single-pass fashion. The boundedness and convergence of the closed-loop control system's tracking error and the controller's consequent parameters are confirmed using the LaSalle-Yoshizawa theorem. Lastly, the controller's efficacy is evaluated on trajectory tracking tasks with a bio-inspired flapping-wing micro aerial vehicle (BI-FWMAV) and a rotary-wing micro aerial vehicle, a hexacopter, and it is compared to three other controllers. Our PAC outperforms the linear PID controller and a feed-forward neural network (FFNN) based nonlinear adaptive controller. Compared to its predecessor, the G-controller, the tracking accuracy is comparable, but PAC requires significantly fewer parameters to attain similar or better performance.
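To make the SMC-based consequent adaptation concrete, the following sketch shows a generic sliding-mode-style single-pass update of fuzzy consequent parameters. The sliding surface, gains, and rule firing strengths are illustrative assumptions in the spirit of the abstract, not PAC's exact adaptation laws.

```python
# Generic sliding-mode-style adaptive update for fuzzy consequent parameters
# (illustrative sketch; gains and firing strengths are assumed values).
import numpy as np

def sliding_surface(e, e_dot, lam=2.0):
    """Classical sliding surface s = de/dt + lambda * e for a tracking error e."""
    return e_dot + lam * e

def smc_consequent_update(theta, phi, s, gamma=0.5, dt=0.01):
    """
    Single-pass update of consequent parameters theta.
    phi   : normalized rule firing strengths (sum to 1)
    s     : sliding-surface value
    gamma : adaptation gain (assumed)
    """
    theta_dot = -gamma * s * phi      # push theta in the direction that reduces |s|
    return theta + theta_dot * dt

# Example: 4 fuzzy rules, one control channel.
theta = np.zeros(4)
phi = np.array([0.1, 0.4, 0.4, 0.1])  # assumed firing strengths
e, e_dot = 0.2, -0.05                 # tracking error and its derivative
s = sliding_surface(e, e_dot)
theta = smc_consequent_update(theta, phi, s)
u = float(theta @ phi)                # control output from the rule consequents
```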
The capabilities of autonomous flight with unmanned aerial vehicles (UAVs) have significantly increased in recent years. However, basic problems such as fast and robust geo-localization in GPS-denied environments remain unsolved. Existing research has primarily concentrated on improving the accuracy of localization at the cost of long and varying computation time, which often necessitates the use of powerful ground station machines. In order to make image-based geo-localization online and practical for lightweight embedded systems on UAVs, we propose a framework that is reliable in changing scenes, flexible about computing resource allocation, and adaptable to common camera placements. The framework comprises two stages: offline database preparation and online inference. In the first stage, color images and depth maps are rendered as seen from potential vehicle poses quantized over the satellite and topography maps of anticipated flying areas. A database is then populated with the global and local descriptors of the rendered images. In the second stage, for each captured real-world query image, top global matches are retrieved from the database and the vehicle pose is further refined via local descriptor matching. We present field experiments of image-based localization on two different UAV platforms to validate our results.
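A minimal sketch of the two-stage retrieve-then-refine scheme described above is given below: an offline database of descriptors computed from rendered views at quantized candidate poses, global-descriptor retrieval of the top matches, and local-descriptor matching with PnP pose refinement. The descriptor choices (ORB features, a mean-pooled global signature) and all names are assumptions, not the authors' exact design.

```python
# Two-stage image-based geo-localization sketch (assumed design choices).
import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=1000)

def local_descriptors(img):
    kp, des = orb.detectAndCompute(img, None)
    return kp, des

def global_descriptor(des):
    """Crude global signature: mean of local descriptors (placeholder assumption)."""
    return des.astype(np.float32).mean(axis=0)

# --- Offline: database of rendered views at quantized candidate poses ---
# db = [{"pose": pose, "global": g, "kp": kp, "des": des, "points3d": pts3d}, ...]
# where pts3d holds the 3D point behind each keypoint, taken from the rendered depth map.

def localize(query_img, db, K, top_k=5):
    kp_q, des_q = local_descriptors(query_img)
    g_q = global_descriptor(des_q)
    # Stage 1: retrieve top-k database views by global-descriptor distance.
    order = np.argsort([np.linalg.norm(g_q - e["global"]) for e in db])[:top_k]
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    best = None
    for i in order:
        e = db[i]
        matches = matcher.match(des_q, e["des"])
        if len(matches) < 6:
            continue
        obj = np.float32([e["points3d"][m.trainIdx] for m in matches])
        img_pts = np.float32([kp_q[m.queryIdx].pt for m in matches])
        # Stage 2: refine the pose against the rendered 3D points with PnP + RANSAC.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img_pts, K, None)
        if ok and inliers is not None and (best is None or len(inliers) > best[0]):
            best = (len(inliers), rvec, tvec)
    return best  # (inlier count, rotation vector, translation vector) or None
```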
Unmanned Aerial Vehicles (UAVs) are getting closer to becoming ubiquitous in everyday life. Among them, Micro Aerial Vehicles (MAVs) have seen an outburst of attention recently, specifically in applications demanding autonomy. A key challenge standing in the way of making MAVs autonomous is that researchers lack a comprehensive understanding of how performance, power, and computational bottlenecks affect MAV applications. MAVs must operate under a stringent power budget, which severely limits their flight endurance time. As such, there is a need for new tools, benchmarks, and methodologies to foster the systematic development of autonomous MAVs. In this paper, we introduce the MAVBench framework, which consists of a closed-loop simulator and an end-to-end application benchmark suite. A closed-loop simulation platform is needed to probe and understand the intra-system (application data flow) and inter-system (system and environment) interactions in MAV applications to pinpoint bottlenecks and identify opportunities for hardware and software co-design and optimization. In addition to the simulator, MAVBench provides a benchmark suite, the first of its kind, consisting of a variety of MAV applications designed to enable computer architects to perform characterization and develop future aerial computing systems. Using our open-source, end-to-end experimental platform, we uncover a hidden, and thus far unexpected, relationship between compute and total system energy in MAVs. Furthermore, we explore the role of compute by presenting three case studies targeting performance, energy, and reliability. These studies confirm that an efficient system design can improve MAVs' battery consumption by up to 1.8X.
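The compute-to-total-system-energy relationship can be illustrated with a back-of-envelope model: slower onboard compute stretches the mission, so the rotors, which dominate the power draw, burn more energy overall. The sketch below is purely illustrative and is not MAVBench itself; all numbers and names are assumed.

```python
# Illustrative mission-energy model showing why compute speed affects total energy
# (all parameter values are assumptions for demonstration only).
def mission_energy_j(path_length_m, speed_mps, rotor_power_w,
                     compute_power_w, planning_latency_s, replans):
    # Extra time spent hovering while perception/planning runs on the onboard processor.
    compute_stall_s = planning_latency_s * replans
    flight_time_s = path_length_m / speed_mps + compute_stall_s
    return (rotor_power_w + compute_power_w) * flight_time_s

fast = mission_energy_j(200, 2.0, 150.0, 15.0, planning_latency_s=0.1, replans=50)
slow = mission_energy_j(200, 2.0, 150.0, 5.0,  planning_latency_s=2.0, replans=50)
print(f"fast compute: {fast/1000:.1f} kJ, slow compute: {slow/1000:.1f} kJ")
# Despite drawing more compute power, the faster processor yields a lower total energy.
```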
High-accuracy absolute localization for a team of vehicles is essential when accomplishing various kinds of tasks. As a promising approach, collaborative localization fuses the individual motion measurements and the inter-vehicle measurements to collaboratively estimate the states. In this paper, we focus on range-only collaborative localization, in which the inter-vehicle measurements are ranging measurements. We first investigate the observability properties of the system and derive that, to achieve bounded localization errors, two vehicles are required to remain static, acting like external infrastructure. Guided by the observability analysis, we then propose our range-only collaborative localization system, which categorizes the ground vehicles into two static vehicles and a set of dynamic vehicles. The vehicles are connected through a UWB network capable of both inter-vehicle ranging and communication. Simulation results validate the observability analysis and demonstrate that collaborative localization achieves higher accuracy when utilizing the inter-vehicle measurements. Extensive experiments are performed with teams of 3 and 5 vehicles. The real-world results illustrate that our proposed system enables accurate and real-time estimation of all vehicles' absolute poses.
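To illustrate the role of the two static vehicles, the sketch below fuses odometry with UWB range measurements to two static "anchor" vehicles in a small EKF for one dynamic vehicle. This is a simplified assumption-laden illustration, not the authors' estimator; all positions, noise covariances, and names are invented for the example.

```python
# EKF sketch: 2D position of one dynamic vehicle, corrected by UWB ranges to
# two static anchor vehicles (illustrative assumptions throughout).
import numpy as np

def ekf_predict(x, P, u, Q):
    """x = [px, py]; u = odometry displacement [dx, dy] over one step."""
    return x + u, P + Q

def ekf_range_update(x, P, z, anchor, R):
    """z: measured range to a static vehicle at known position `anchor`."""
    d = x - anchor
    r_pred = np.linalg.norm(d)
    H = (d / r_pred).reshape(1, 2)        # Jacobian of the range w.r.t. position
    S = H @ P @ H.T + R
    K = P @ H.T / S
    x = x + (K * (z - r_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Two static vehicles at known positions; one dynamic vehicle being estimated.
anchors = [np.array([0.0, 0.0]), np.array([10.0, 0.0])]
x, P = np.array([2.0, 3.0]), np.eye(2)
Q, R = np.eye(2) * 0.01, np.array([[0.05]])

x, P = ekf_predict(x, P, u=np.array([0.2, 0.1]), Q=Q)
true_pos = np.array([2.25, 3.08])                     # simulated ground truth (assumed)
for a in anchors:
    z = np.linalg.norm(true_pos - a)                  # simulated UWB range measurement
    x, P = ekf_range_update(x, P, z, a, R)
```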