
Separating Bounded and Unbounded Asynchrony for Autonomous Robots: Point Convergence with Limited Visibility

 Added by Giuseppe Prencipe
 Publication date 2021
Language: English





Among the fundamental problems in distributed computing by autonomous mobile entities, one of the most representative and well-studied is Point Convergence: given an arbitrary initial configuration of identical entities placed in the Euclidean plane, move in such a way that, for every $\epsilon > 0$, a configuration in which the separation between all entities is at most $\epsilon$ is eventually reached and maintained. The problem has previously been studied in a variety of settings, including full visibility, exact measurements (such as distances and angles), and synchronous activation of entities. Our study concerns the minimal assumptions under which entities, moving asynchronously with a limited and unknown visibility range and subject to limited imprecision in measurements, can be guaranteed to converge in this way. We present an algorithm that solves Point Convergence for entities in the plane in such a setting, provided the degree of asynchrony is bounded: while any one entity is active, any other entity can be activated at most $k$ times, for some arbitrarily large but fixed $k$. This provides a strong positive answer to a decade-old open question posed by Katreniak. We also prove that in a comparable setting that permits unbounded asynchrony, Point Convergence in the plane is impossible, contingent on the natural assumption that algorithms maintain the (visible) connectivity among entities present in the initial configuration. This variant, which we call Cohesive Convergence, serves to distinguish the power of bounded and unbounded asynchrony in the control of autonomous mobile entities, settling at the same time the long-standing question of whether, in the Euclidean plane, synchronously scheduled entities are more powerful than asynchronously scheduled ones.
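
As a concrete illustration of the convergence condition, the following Python/NumPy sketch simulates a deliberately simplified, fully synchronous setting in which every entity repeatedly moves halfway toward the centroid of the entities it can see, and checks whether all pairwise separations have dropped below a chosen $\epsilon$. The visibility radius V, the half-step rule, and the initial chain configuration are illustrative assumptions; this is not the paper's algorithm, which additionally copes with bounded asynchrony, an unknown visibility range, and imprecise measurements.

import numpy as np

V = 1.0      # visibility radius (assumed known here, unlike in the paper)
EPS = 1e-3   # target separation for the eps-convergence check

def step(pos):
    """One synchronous Look-Compute-Move round for all entities."""
    new = pos.copy()
    for i, p in enumerate(pos):
        visible = pos[np.linalg.norm(pos - p, axis=1) <= V]  # includes p itself
        target = visible.mean(axis=0)                        # centroid of the visible set
        new[i] = p + 0.5 * (target - p)                      # cautious half-step (heuristic, not a connectivity guarantee)
    return new

def converged(pos, eps=EPS):
    """True iff every pairwise separation is at most eps."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    return d.max() <= eps

rng = np.random.default_rng(0)
pos = np.cumsum(rng.uniform(-0.5, 0.5, size=(12, 2)), axis=0)  # a visibly connected chain
for t in range(10_000):
    if converged(pos):
        print(f"eps-converged after {t} rounds")
        break
    pos = step(pos)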



Related research

Zohir Bouzid (2009)
We study the convergence problem in fully asynchronous, uni-dimensional robot networks that are prone to Byzantine (i.e., malicious) failures. In this setting, oblivious anonymous robots with arbitrary initial positions are required to eventually converge to an a priori unknown position despite a subset of them exhibiting Byzantine behavior. Our contribution is twofold. We propose a deterministic algorithm that solves the problem in the most generic setting: fully asynchronous robots that operate in the non-atomic CORDA model. Our algorithm provides convergence in networks of size 5f+1, where f is the upper bound on the number of Byzantine robots. Additionally, we prove that 5f+1 is a lower bound whenever robot scheduling is fully asynchronous. This contrasts with previous results in partially synchronous robot networks, where 3f+1 robots are necessary and sufficient.
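
To make the role of the bound on f tangible, here is a minimal, hypothetical Python sketch of a standard ingredient of Byzantine-tolerant convergence on a line: each robot discards the f smallest and f largest observed positions, which neutralises up to f arbitrarily placed Byzantine values, and moves toward the midpoint of what remains. It is not the cited paper's algorithm and ignores asynchrony entirely; it only illustrates why correct robots must sufficiently outnumber Byzantine ones.

def trimmed_target(observed, f):
    """Target position after dropping the f extreme values on each side."""
    if len(observed) <= 2 * f:
        raise ValueError("not enough observations to trim f values per side")
    s = sorted(observed)
    core = s[f:len(s) - f]
    return (core[0] + core[-1]) / 2.0

# Example: six correct robots plus f = 1 Byzantine robot reporting a far-away value.
positions = [0.0, 0.2, 0.5, 0.9, 1.1, 1.4, 1000.0]
print(trimmed_target(positions, f=1))  # 0.8, unaffected by the outlier at 1000.0
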
Optimal and Learning Control for Autonomous Robots has been taught in the Robotics, Systems and Control Master's program at ETH Zurich with the aim of teaching optimal control and reinforcement learning for closed-loop control problems from a unified point of view. The starting point is the formulation of an optimal control problem, from which the different types of solutions and algorithms are derived. These lecture notes aim to support this unified view with a unified notation wherever possible, and to offer some help in translating between the terminology and notation of the different fields. The course assumes basic knowledge of Control Theory, Linear Algebra, and Stochastic Calculus.
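
As a small worked example of the kind of problem such a course starts from, the sketch below solves a finite-horizon, discrete-time linear-quadratic regulator with the standard backward Riccati recursion and rolls the resulting time-varying feedback forward. The dynamics, cost weights, and horizon are arbitrary illustrative choices, not taken from the lecture notes.

import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # double-integrator-like dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                            # state cost
R = np.array([[0.1]])                    # control cost
N = 50                                   # horizon

# Backward pass: P_N = Q, then cost-to-go P_k and gain K_k for k = N-1, ..., 0.
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Forward rollout under the optimal time-varying feedback u_k = -K_k x_k.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x + B @ (-K @ x)
print("final state:", x.ravel())
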
Camera anomalies like rain or dust can severely degrade image quality and, with it, downstream tasks such as localization and segmentation. In this work we address this important issue by implementing a pre-processing step that can effectively mitigate such artifacts in a real-time fashion, thus supporting the deployment of autonomous systems with limited compute capabilities. We propose a shallow generator with aggregation, trained in an adversarial setting to solve the ill-posed problem of reconstructing the occluded regions. We add an enhancer to further preserve high-frequency details and image colorization. We also produce one of the largest publicly available datasets to train our architecture, and use realistic synthetic raindrops to obtain an improved initialization of the model. We benchmark our framework on existing datasets and on our own images, obtaining state-of-the-art results while enabling real-time performance, with up to 40x faster inference time than existing approaches.
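
The following is a minimal, hypothetical PyTorch sketch of the general pattern the abstract describes, a shallow generator followed by an enhancer that refines the restored image. The layer counts, channel widths, and activations are invented for illustration and do not reproduce the authors' architecture, adversarial training, or aggregation scheme.

import torch
import torch.nn as nn

class ShallowGenerator(nn.Module):
    """Small encoder-decoder meant to reconstruct occluded regions (illustrative only)."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

class Enhancer(nn.Module):
    """Residual refinement stage intended to restore high-frequency detail."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.body(x), -1.0, 1.0)

# One forward pass on a dummy occluded frame.
frame = torch.randn(1, 3, 128, 128)
restored = Enhancer()(ShallowGenerator()(frame))
print(restored.shape)  # torch.Size([1, 3, 128, 128])
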
Quantifying robot behaviors that were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and of the movements of animals. The temporal sequence of such a behavior can be considered a time series, and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures: they measure the amount of information the past contains about the future and thus quantify the non-random structure in the temporal sequence. However, when using these measures for systems with continuous states, one has to deal with the fact that their values depend on the resolution with which the system's states are observed. For deterministic systems, both measures diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy into a resolution-dependent and a resolution-independent part, and discuss how they depend on the dimensionality of the dynamics, the correlations, and the noise level. For the practical estimation we propose to use estimates based on the correlation integral instead of direct estimation of the mutual information with the nearest-neighbor algorithm of Kraskov et al. (2004), because the latter allows less control over the scale dependencies. Using our algorithm, we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration.
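
The resolution dependence discussed above can be reproduced with a very small experiment: estimate the mutual information between length-k past and future windows of a signal using a plain histogram at a chosen number of bins, and watch the value grow as the discretisation is refined. This naive binned estimator is only a sketch of the quantity involved; it is not the correlation-integral estimator the authors advocate, and the test signal is an arbitrary noisy sine.

import numpy as np

def predictive_information(x, k=2, bins=8):
    """Binned estimate of I(past_k; future_k) at a fixed resolution (bins)."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    s = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)  # discretised signal
    T = len(s) - 2 * k + 1                               # number of (past, future) pairs
    win = np.array([s[t:t + 2 * k] for t in range(T)])
    code = bins ** np.arange(k)
    past, future = win[:, :k] @ code, win[:, k:] @ code  # windows encoded as symbols
    joint = np.zeros((bins ** k, bins ** k))
    np.add.at(joint, (past, future), 1)
    joint /= joint.sum()
    pp, pf = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(pp, pf)[nz])))

t = np.linspace(0, 200 * np.pi, 20000)
signal = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
for b in (4, 8, 16):
    print(b, "bins:", round(predictive_information(signal, k=2, bins=b), 3))
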
We present a number of breakthroughs for coordinated motion planning, in which the objective is to reconfigure a swarm of labeled convex objects by a combination of parallel, continuous, collision-free translations into a given target arrangement. Problems of this type can be traced back to the classic work of Schwartz and Sharir (1983), who gave a method for deciding the existence of a coordinated motion for a set of disks between obstacles; their approach is polynomial in the complexity of the obstacles, but exponential in the number of disks. Other previous work has largely focused on sequential schedules, in which one robot moves at a time. We provide constant-factor approximation algorithms for minimizing the execution time of a coordinated, parallel motion plan for a swarm of robots in the absence of obstacles, provided some amount of separability. Our algorithm achieves a constant stretch factor: if all robots are at most $d$ units from their respective starting positions, the total duration of the overall schedule is $O(d)$. Extensions include unlabeled robots and different classes of robots. We also prove that finding a plan with minimal execution time is NP-hard, even for a grid arrangement without any stationary obstacles. On the other hand, we show that for densely packed disks that cannot be well separated, a stretch factor of $\Omega(N^{1/4})$ may be required. On the positive side, we establish a stretch factor of $O(N^{1/2})$ even in this case.
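
The stretch-factor notion at the heart of these results is easy to state computationally: for a parallel schedule given as per-robot waypoint polylines traversed at unit speed, divide the makespan (the longest path length) by d, the largest start-to-target distance. The small Python sketch below computes this ratio for a made-up two-robot swap; the waypoints are purely illustrative and not an output of the approximation algorithms described in the abstract.

import numpy as np

def makespan(schedules):
    """Longest per-robot path length; robots move in parallel at unit speed."""
    return max(
        sum(np.linalg.norm(np.subtract(b, a)) for a, b in zip(path, path[1:]))
        for path in schedules
    )

def stretch(schedules):
    """Makespan divided by the largest start-to-target distance d."""
    d = max(np.linalg.norm(np.subtract(path[-1], path[0])) for path in schedules)
    return makespan(schedules) / d

# Two robots swapping positions by detouring around each other.
schedules = [
    [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)],   # robot 1: start -> waypoint -> target
    [(2.0, 0.0), (1.0, -1.0), (0.0, 0.0)],  # robot 2
]
print("stretch factor:", round(stretch(schedules), 3))  # ~1.414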
