Complex aircraft systems are becoming a target for automation. For successful operation, they require an efficient and readable mission execution system. Flight control computer (FCC) units, as well as all important subsystems, are often duplicated. The discrete nature of mission execution systems does not tolerate the small differences in data flow among redundant FCCs that are acceptable for continuous control algorithms. Therefore, mission state consistency has to be maintained explicitly. We present a novel mission execution system that includes FCC state synchronization. To achieve this, we developed a new concept, the Asynchronous Behavior Tree with Memory, and proposed a state synchronization algorithm. The implemented system was tested and shown to work in a real-time simulation of a High Altitude Pseudo Satellite (HAPS) mission.
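For readers unfamiliar with the memory concept, the sketch below shows a generic behavior tree sequence node with memory. It illustrates the general idea only, not the paper's Asynchronous Behavior Tree with Memory; the Status enum and class names are hypothetical. The resume index is exactly the kind of discrete mission state that, in a redundant FCC setup, would have to be kept consistent across computers.

```python
# Illustrative sketch only: a generic "sequence with memory" node, not the
# paper's Asynchronous Behavior Tree with Memory. Names are hypothetical.
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class SequenceWithMemory:
    """Ticks children in order; remembers the last running child so that
    already-succeeded children are not re-ticked on the next tick."""
    def __init__(self, children):
        self.children = children
        self.current = 0  # index of the child to resume from (the "memory")

    def tick(self):
        while self.current < len(self.children):
            status = self.children[self.current].tick()
            if status == Status.RUNNING:
                return Status.RUNNING          # resume here on the next tick
            if status == Status.FAILURE:
                self.current = 0               # reset memory on failure
                return Status.FAILURE
            self.current += 1                  # SUCCESS: advance to next child
        self.current = 0                       # whole sequence succeeded
        return Status.SUCCESS

# Minimal usage with a trivial leaf node:
class AlwaysSucceed:
    def tick(self):
        return Status.SUCCESS

print(SequenceWithMemory([AlwaysSucceed(), AlwaysSucceed()]).tick())  # Status.SUCCESS
```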
There is an increasing demand for a flexible and computationally efficient controller for micro aerial vehicles (MAVs) due to the high degree of environmental perturbations they face. In this work, an evolving neuro-fuzzy controller, the Parsimonious Controller (PAC), is proposed. It features fewer network parameters than conventional approaches due to the absence of rule premise parameters. PAC is built upon a recently developed evolving neuro-fuzzy system known as the parsimonious learning machine (PALM) and adopts new rule growing and pruning modules derived from an approximation of bias and variance. These rule adaptation methods have no reliance on user-defined thresholds, thereby increasing PAC's autonomy for real-time deployment. PAC adapts the consequent parameters with sliding mode control (SMC) theory in a single-pass fashion. The boundedness and convergence of the closed-loop control system's tracking error and the controller's consequent parameters are confirmed using the LaSalle-Yoshizawa theorem. Lastly, the controller's efficacy is evaluated on various trajectory tracking tasks with a bio-inspired flapping-wing micro aerial vehicle (BI-FWMAV) and a rotary-wing micro aerial vehicle, a hexacopter. Furthermore, it is compared against three distinct controllers. PAC outperforms a linear PID controller and a feed-forward neural network (FFNN) based nonlinear adaptive controller. Compared to its predecessor, the G-controller, the tracking accuracy is comparable, but PAC requires significantly fewer parameters to attain similar or better performance.
In this work, we address the estimation, planning, control, and mapping problems that allow a small quadrotor to autonomously inspect the interior of hazardous, damaged nuclear sites. These algorithms run onboard, on a computationally limited CPU. We investigate the effect of varying illumination on system performance. To the best of our knowledge, this is the first fully autonomous system of this size and scale applied to inspecting the interior of a full-scale mock-up of a Primary Containment Vessel (PCV). The proposed solution opens up new ways to inspect nuclear reactors and to support nuclear decommissioning, which is well known to be a dangerous, long, and tedious process. Experimental results under varying illumination conditions show the ability to navigate a full-scale mock-up PCV pedestal and create a map of the environment while concurrently avoiding obstacles.
In this paper, we propose Belief Behavior Trees (BBTs), an extension to Behavior Trees (BTs) that allows a policy controlling a robot in partially observable environments to be created automatically. We extend the semantics of BTs to account for the uncertainty that affects both the condition and action nodes of the BT. The tree is synthesized following a recently proposed planning strategy for BTs: from a set of goal conditions we iteratively select a goal and find the action, or more generally the subtree, that satisfies it. Such an action may have preconditions that do not hold; for those preconditions, we find an action or subtree in the same fashion. We extend this approach by including, in the planner, actions whose purpose is to reduce the uncertainty affecting the value of a condition node in the BT (for example, turning on the lights to obtain better lighting conditions). We demonstrate that BBTs allow task planning with non-deterministic action outcomes. We provide experimental validation of our approach in a real robotic scenario and, for the sake of reproducibility, in a simulated one.
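To make the synthesis strategy concrete, here is a minimal, hypothetical sketch of the iterative expansion loop described above (conditions as strings, actions as dicts with a name and preconditions). It illustrates the "expand an unsatisfied precondition with a fallback subtree" idea rather than the authors' BBT planner, which additionally injects uncertainty-reducing actions.

```python
# Hypothetical sketch of iterative BT expansion: each unsatisfied condition is
# replaced by a fallback "condition OR (preconditions; action)" subtree, and
# the new preconditions are expanded in the same way.

def expand_tree(goal_conditions, find_action, max_iters=100):
    tree = {"type": "sequence", "children": list(goal_conditions)}
    frontier = list(goal_conditions)
    for _ in range(max_iters):
        if not frontier:
            break
        condition = frontier.pop()
        action = find_action(condition)        # action whose effect achieves the condition
        if action is None:
            continue                           # no action known; leave condition as a leaf
        subtree = {"type": "fallback",
                   "children": [condition,
                                {"type": "sequence",
                                 "children": action["preconditions"] + [action["name"]]}]}
        _replace(tree, condition, subtree)
        frontier.extend(action["preconditions"])  # recurse on the new preconditions
    return tree

def _replace(node, target, subtree):
    """Depth-first replacement of a condition leaf with its expanded subtree."""
    children = node.get("children", [])
    for i, child in enumerate(children):
        if child == target:
            children[i] = subtree
        elif isinstance(child, dict):
            _replace(child, target, subtree)

# Toy usage: grasping requires being at the object, achieved by moving to it.
actions = {"object_grasped": {"name": "grasp", "preconditions": ["at_object"]},
           "at_object": {"name": "move_to", "preconditions": []}}
bt = expand_tree(["object_grasped"], lambda c: actions.get(c))
```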
Autonomous mobile cyber-physical machines are part of our future. Specifically, unmanned aerial vehicles have seen a resurgence in activity with use cases such as package delivery. These systems face many challenges, such as low endurance caused by limited onboard energy; hence, improving mission time and energy is important. Such improvements are traditionally delivered through better algorithms, but our premise is that more powerful and efficient onboard compute should also address the problem. This paper investigates how the compute subsystem in a cyber-physical mobile machine, such as a Micro Aerial Vehicle, impacts mission time and energy. Specifically, we pose the question: what is the role of computing for cyber-physical mobile robots? We show that compute and motion are tightly intertwined, and hence a close examination of cyber and physical processes and their impact on one another is necessary. We identify different paths through which compute impacts mission metrics and examine them using analytical models, simulation, and end-to-end benchmarking. To enable similar studies, we open-sourced MAVBench, our tool-set consisting of a closed-loop simulator and a benchmark suite. Our investigations show that cyber-physical co-design, a methodology in which a robot's cyber and physical processes/quantities are developed in consideration of one another, similar to hardware-software co-design, is necessary for optimal robot design.
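As a toy illustration of one such impact path (hypothetical numbers and a made-up velocity/response-time relation, not the paper's models or measurements): a faster perception-and-planning loop permits a higher safe velocity, which can shorten mission time enough to lower total energy even though the faster compute draws more power.

```python
# Toy illustration only: hypothetical numbers, not results from MAVBench.
# A slow perception/planning loop forces a lower safe velocity; the 10/response
# relation below is an assumed placeholder, not the paper's model.

def mission_energy(distance_m, max_velocity_mps, response_time_s,
                   hover_power_w, compute_power_w):
    safe_velocity = min(max_velocity_mps, 10.0 / max(response_time_s, 1e-3))
    mission_time = distance_m / safe_velocity
    return mission_time * (hover_power_w + compute_power_w), mission_time

slow = mission_energy(1000, 8.0, 2.0, hover_power_w=200, compute_power_w=10)
fast = mission_energy(1000, 8.0, 0.5, hover_power_w=200, compute_power_w=30)
print(slow)  # (42000.0 J, 200.0 s): 5 m/s safe velocity, low compute power
print(fast)  # (28750.0 J, 125.0 s): 8 m/s, more compute power yet less total energy
```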
This paper considers the problem of asynchronous distributed multi-agent optimization on a server-based system architecture. In this problem, each agent has a local cost, and the goal of the agents is to collectively find a minimum of their aggregate cost. A standard algorithm for solving this problem is the iterative distributed gradient-descent (DGD) method, implemented collaboratively by the server and the agents. In the synchronous setting, the algorithm proceeds from one iteration to the next only after all the agents complete their expected communication with the server. However, such synchrony can be expensive and even infeasible in real-world applications. We show that waiting for all the agents is unnecessary in many applications of distributed optimization, including distributed machine learning, due to redundancy in the cost functions (or \emph{data}). Specifically, we consider a generic notion of redundancy named $(r,\epsilon)$-redundancy, implying solvability of the original multi-agent optimization problem with $\epsilon$ accuracy despite the removal of up to $r$ (out of $n$ total) agents from the system. We present an asynchronous DGD algorithm in which, at each iteration, the server waits for only (any) $n-r$ agents, instead of all $n$ agents. Assuming $(r,\epsilon)$-redundancy, we show that our asynchronous algorithm converges to an approximate solution with error that is linear in $\epsilon$ and $r$. Moreover, we present a generalization of our algorithm that tolerates some Byzantine faulty agents in the system. Finally, we demonstrate the improved communication efficiency of our algorithm through experiments on MNIST and Fashion-MNIST using the benchmark neural network LeNet.
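A minimal sketch of the server-side update suggested by this description follows. The agent callables and the random choice of "first responders" are stand-ins for asynchronous communication; this is not the authors' implementation, and it omits the Byzantine-tolerant generalization.

```python
# Sketch, under stated assumptions: at each iteration the server applies the
# averaged gradient of whichever n - r agents respond first, instead of
# waiting for all n. Responders are simulated with a random subset.
import numpy as np

def async_dgd_server(x0, agents, r, step_size, num_iters):
    """agents: list of callables; agents[i](x) returns agent i's local gradient
    at x (in a real deployment, an asynchronous RPC to that agent)."""
    x = np.array(x0, dtype=float)
    n = len(agents)
    for _ in range(num_iters):
        responders = np.random.choice(n, size=n - r, replace=False)
        grads = [agents[i](x) for i in responders]   # only n - r gradients used
        x = x - step_size * np.mean(grads, axis=0)   # averaged DGD update
    return x

# Toy usage: quadratic local costs f_i(x) = ||x - c_i||^2 / 2, gradient x - c_i.
centers = [np.array([i, -i], dtype=float) for i in range(5)]
agents = [lambda x, c=c: x - c for c in centers]
x_hat = async_dgd_server(np.zeros(2), agents, r=1, step_size=0.1, num_iters=500)
print(x_hat)  # hovers near the aggregate minimizer (the mean of the centers)
```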