Applications of safety, security, and rescue in robotics, such as multi-robot target tracking, involve the execution of information acquisition tasks by teams of mobile robots. However, in failure-prone or adversarial environments, robots may be attacked, their communication channels jammed, and their sensors may fail, resulting in the withdrawal of robots from the collective task and, consequently, the inability of the remaining active robots to coordinate with each other. As a result, traditional design paradigms become insufficient; instead, resilient designs against system-wide failures and attacks become important. In general, resilient design problems are hard, and even though they often involve objective functions that are monotone or submodular, scalable approximation algorithms for their solution have been hitherto unknown. In this paper, we provide the first algorithm, enabling the following capabilities: minimal communication, i.e., the algorithm is executed by the robots based only on minimal communication between them; system-wide resiliency, i.e., the algorithm is valid for any number of denial-of-service attacks and failures; and provable approximation performance, i.e., for all monotone (and not necessarily submodular) objective functions, the algorithm ensures a solution that is finitely close to the optimal. We quantify our algorithm's approximation performance using a notion of curvature for monotone set functions. We support our theoretical analyses with simulated and real-world experiments, by considering an active information gathering scenario, namely, multi-robot target tracking.
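To make the curvature-based guarantee concrete, one widely used notion of total curvature for a normalized, non-decreasing set function $f$ over a ground set $\mathcal{V}$ (offered here as an illustrative definition; the paper's exact notion for non-submodular functions may differ) is

\[
c_f \;=\; 1 \;-\; \min_{v \in \mathcal{V}} \frac{f(\mathcal{V}) - f(\mathcal{V} \setminus \{v\})}{f(\{v\}) - f(\emptyset)},
\]

so that $c_f = 0$ for additive (modular) objectives, while $c_f$ approaches $1$ when the marginal value of an element can be almost entirely eroded by the rest of the set; approximation bounds of this kind are then stated as functions of $c_f$.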
Gathering is a fundamental coordination problem in cooperative mobile robotics. In short, given a set of robots with arbitrary initial locations and no initial agreement on a global coordinate system, gathering requires that all robots, following their algorithm, reach the exact same but not predetermined location. Gathering is particularly challenging in networks where robots are oblivious (i.e., stateless) and direct communication is replaced by observations of their respective locations. Interestingly, any algorithm that solves gathering with oblivious robots is inherently self-stabilizing if no specific assumption is made on the initial distribution of the robots. In this paper, we significantly extend the studies of deterministic gathering feasibility under different assumptions related to synchrony and faults (crash and Byzantine). Unlike prior work, we consider a larger set of scheduling strategies, such as bounded schedulers. In addition, we extend our study to the feasibility of probabilistic self-stabilizing gathering in both fault-free and fault-prone environments. (This manuscript considerably extends preliminary results presented as an extended abstract at the DISC 2006 conference [7]; the most important results have also been presented at MAC 2010, held in Ottawa, August 15-17, 2010. The manuscript has been under review at the Distributed Computing journal since February 2012 in a previous form, and since 2014 in its current form.)
This paper considers the problem of planning trajectories for a team of sensor-equipped robots to reduce uncertainty about a dynamical process. Optimizing the trade-off between information gain and energy cost (e.g., control effort, distance travelled) is desirable but leads to a non-monotone objective function in the set of robot trajectories. Therefore, common multi-robot planning algorithms based on techniques such as coordinate descent lose their performance guarantees. Methods based on local search provide performance guarantees for optimizing a non-monotone submodular function, but require access to all robots' trajectories, making them unsuitable for distributed execution. This work proposes a distributed planning approach based on local search and shows how lazy/greedy methods can be adopted to reduce the computation and communication of the approach. We demonstrate the efficacy of the proposed method by coordinating robot teams composed of both ground and aerial vehicles with different sensing/control profiles, and evaluate the algorithm's performance in two target tracking scenarios. Compared to the naive distributed execution of local search, our approach saves up to 60% communication and 80--92% computation on average when coordinating up to 10 robots, while outperforming the coordinate-descent-based algorithm in achieving a desirable trade-off between sensing and energy cost.
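As an illustration of the lazy evaluation idea mentioned above, the following sketch shows the classic lazy-greedy pattern for a submodular objective `f` over candidate elements (names are illustrative and not the paper's API; the paper adapts lazy/greedy ideas within distributed local search rather than plain greedy selection):

```python
import heapq

def lazy_greedy(candidates, f, k):
    """Select up to k elements approximately maximizing a submodular set function f.

    Lazy evaluation: under diminishing returns, a cached marginal gain is an
    upper bound on the current one, so only the heap top ever needs re-evaluation.
    """
    selected = []
    f_selected = f(selected)
    # Entries: (negated cached gain, selection size when the gain was computed,
    #           tie-breaking index, candidate).
    heap = [(-(f([c]) - f_selected), 0, i, c) for i, c in enumerate(candidates)]
    heapq.heapify(heap)
    while heap and len(selected) < k:
        neg_gain, computed_at, i, c = heapq.heappop(heap)
        if computed_at == len(selected):
            # Cached gain is current with respect to the selection: accept greedily.
            selected.append(c)
            f_selected = f(selected)
        else:
            # Gain is stale: recompute against the current selection and re-insert.
            gain = f(selected + [c]) - f_selected
            heapq.heappush(heap, (-gain, len(selected), i, c))
    return selected
```

The key point is that most candidates never need to be re-evaluated, which is the source of the computation (and, when marginal gains are exchanged between robots, communication) savings.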
We consider a swarm of $n$ autonomous mobile robots, distributed on a 2-dimensional grid. A basic task for such a swarm is the gathering process: all robots have to gather at one (not predefined) place. A common local model for extremely simple robots is the following: the robots do not have a common compass, only have a constant viewing radius, are autonomous and indistinguishable, can move at most a constant distance in each step, cannot communicate, are oblivious, and do not have flags or states. The only gathering algorithm under this robot model with known runtime bounds needs $\mathcal{O}(n^2)$ rounds and works in the Euclidean plane. The underlying time model for the algorithm is the fully synchronous $\mathcal{FSYNC}$ model. On the other hand, in the case of the 2-dimensional grid, the only known gathering algorithms for the same time model and a similar local model additionally require constant memory, states, and flags to communicate these states to neighbors in viewing range. They gather in time $\mathcal{O}(n)$. In this paper we contribute the (to the best of our knowledge) first gathering algorithm on the grid that works under the same simple local model as the above-mentioned Euclidean plane strategy, i.e., without memory (oblivious), flags, and states. We prove its correctness and an $\mathcal{O}(n^2)$ time bound in the fully synchronous $\mathcal{FSYNC}$ time model. This time bound matches that of the best known algorithm for the Euclidean plane mentioned above. We say gathering is done if all robots are located within a $2\times 2$ square, because in $\mathcal{FSYNC}$ such configurations cannot be solved.
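To make the robot and time model concrete, the sketch below simulates one fully synchronous ($\mathcal{FSYNC}$) Look-Compute-Move round for oblivious robots on a grid; `move_rule` is a placeholder for a gathering rule (the paper's actual rule is not reproduced here), and the global coordinates are only bookkeeping for the simulator, since each robot decides solely from relative positions within its constant viewing radius:

```python
def fsync_round(positions, viewing_radius, move_rule):
    """One FSYNC Look-Compute-Move round for oblivious robots on a grid.

    positions: list of (x, y) grid cells, one per robot.
    move_rule: maps a list of relative neighbor offsets to a step in
               {-1, 0, 1} x {-1, 0, 1}; it carries no state between rounds.
    """
    steps = []
    for i, p in enumerate(positions):
        # Look: relative offsets of all other robots within the viewing radius.
        view = [(q[0] - p[0], q[1] - p[1]) for j, q in enumerate(positions)
                if j != i
                and max(abs(q[0] - p[0]), abs(q[1] - p[1])) <= viewing_radius]
        # Compute: purely a function of the current local view (obliviousness).
        steps.append(move_rule(view))
    # Move: all robots move simultaneously.
    return [(p[0] + s[0], p[1] + s[1]) for p, s in zip(positions, steps)]
```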
We present an active visual search model for finding objects in unknown environments. The proposed algorithm guides the robot towards the sought object using the relevant stimuli provided by the visual sensors. Existing search strategies are either purely reactive or use simplified sensor models that do not exploit all the visual information available. In this paper, we propose a new model that actively extracts visual information via visual attention techniques and, in conjunction with a non-myopic decision-making algorithm, leads the robot to search more relevant areas of the environment. The attention module couples both top-down and bottom-up attention models enabling the robot to search regions with higher importance first. The proposed algorithm is evaluated on a mobile robot platform in a 3D simulated environment. The results indicate that the use of visual attention significantly improves search, but the degree of improvement depends on the nature of the task and the complexity of the environment. In our experiments, we found that performance enhancements of up to 42% in structured and 38% in highly unstructured cluttered environments can be achieved using visual attention mechanisms.
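The coupling of top-down and bottom-up attention can be illustrated with a simple convex combination of saliency maps (a sketch under that assumption; the paper's attention module may weight and couple the two cues differently):

```python
import numpy as np

def fuse_saliency(bottom_up, top_down, w=0.5):
    """Combine stimulus-driven (bottom-up) and task-driven (top-down)
    saliency maps into a single attention map; w trades off the two cues."""
    def normalize(m):
        m = m.astype(float) - m.min()
        return m / m.max() if m.max() > 0 else m
    return w * normalize(top_down) + (1.0 - w) * normalize(bottom_up)

# The peak of the fused map can then seed the next region to search:
# fused = fuse_saliency(bu_map, td_map, w=0.7)
# y, x = np.unravel_index(np.argmax(fused), fused.shape)
```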
The paper focuses on collision-inclusive motion planning for impact-resilient mobile robots. We propose a new deformation recovery and replanning strategy to handle collisions that may occur at run-time. Contrary to collision avoidance methods that generate trajectories only in a conservative local space or require collision checking with high computational cost, our method directly generates (local) trajectories by imposing only waypoint constraints. If a collision occurs, our method estimates the post-impact state and computes from there an intermediate waypoint to recover from the collision. To achieve this, we develop two novel components: 1) a deformation recovery controller that optimizes the robot's states during the post-impact recovery phase, and 2) a post-impact trajectory replanner that adjusts the next waypoint, using information from the collision, so that the robot can pass through, and generates a polynomial-based minimum-effort trajectory. The proposed strategy is evaluated experimentally with an omni-directional impact-resilient wheeled robot. The robot is designed in-house, and it can perceive collisions with the aid of Hall effect sensors embedded between the robot's main chassis and a surrounding deflection ring-like structure.
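As an illustration of the polynomial-based replanning step, the sketch below fits a per-axis cubic segment from an estimated post-impact state to the adjusted recovery waypoint using position and velocity boundary conditions (an assumed simplification; the paper's minimum-effort formulation may use higher-order polynomials and a different cost):

```python
import numpy as np

def cubic_segment(p0, v0, p1, v1, T):
    """Coefficients a0..a3 of p(t) = a0 + a1*t + a2*t^2 + a3*t^3 on [0, T]
    with boundary conditions p(0)=p0, p'(0)=v0, p(T)=p1, p'(T)=v1 (one axis)."""
    A = np.array([
        [1.0, 0.0, 0.0,      0.0],         # p(0)  = p0
        [0.0, 1.0, 0.0,      0.0],         # p'(0) = v0
        [1.0, T,   T**2,     T**3],        # p(T)  = p1
        [0.0, 1.0, 2.0 * T,  3.0 * T**2],  # p'(T) = v1
    ])
    return np.linalg.solve(A, np.array([p0, v0, p1, v1], dtype=float))

# Example: recover from an estimated post-impact state (position 0.0 m,
# velocity -0.3 m/s) to a waypoint 0.5 m away, arriving at rest in 1.2 s.
coeffs = cubic_segment(p0=0.0, v0=-0.3, p1=0.5, v1=0.0, T=1.2)
```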