
Optimal Byzantine resilient convergence in oblivious robot networks

Posted by: Maria Potop-Butucaru
Publication date: 2009
Research field: Informatics Engineering
Paper language: English
Author: Zohir Bouzid





Given a set of robots with arbitrary initial locations and no agreement on a global coordinate system, convergence requires that all robots asymptotically approach the exact same, but a priori unknown, location. Robots are oblivious (they do not recall past computations) and move in a one-dimensional space. Additionally, robots cannot communicate directly; instead, they obtain system-related information only via visual sensors. We draw a connection between the convergence problem in robot networks and the distributed approximate agreement problem (which requires correct processes to decide, for some constant $\epsilon$, values that are at most $\epsilon$ apart and within the range of the initially proposed values). Surprisingly, even though the specifications are similar, implementing convergence in robot networks requires specific assumptions about synchrony and Byzantine resilience. In more detail, we prove necessary and sufficient conditions for the convergence of mobile robots despite a subset of them being Byzantine (i.e. exhibiting arbitrary behavior). Additionally, we propose a deterministic convergence algorithm for robot networks and analyze its correctness and complexity in various synchrony settings. The proposed algorithm tolerates f Byzantine robots in (2f+1)-sized robot networks under fully synchronous scheduling and in (3f+1)-sized networks under semi-synchronous scheduling. These bounds are optimal for the class of cautious algorithms, which guarantee that correct robots always move inside the range of positions of the correct robots.
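The cautious class mentioned above can be made concrete with a small sketch: an activated correct robot discards the f smallest and f largest observed positions and moves inside the remaining interval, which is guaranteed to lie within the range spanned by the correct robots. This is only an illustration of the cautious property under that trimming assumption, not the paper's exact algorithm; the function and parameter names are ours.

```python
# Hedged sketch: one activation of a cautious, trimming-based convergence rule
# for oblivious robots on a line. Illustrative only; not the paper's algorithm.

def cautious_move(observed_positions, f):
    """Target position for an activated correct robot.

    observed_positions: positions of all robots as seen by this robot's sensors.
    f:                  upper bound on the number of Byzantine robots.
    """
    xs = sorted(observed_positions)
    if len(xs) <= 2 * f:
        raise ValueError("need at least 2f+1 observed positions")
    # Discard the f smallest and f largest observations. At most f of them are
    # Byzantine, so the remaining interval is contained in the range of the
    # correct robots; moving inside it keeps the rule cautious.
    trimmed = xs[f:len(xs) - f]
    # Move toward the midpoint of the trimmed interval.
    return (trimmed[0] + trimmed[-1]) / 2.0


# Example: 5 robots (f = 1); the outlier at 100.0 is removed by the trimming.
print(cautious_move([0.0, 1.0, 2.0, 3.0, 100.0], f=1))  # -> 2.0
```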




Read also

Zohir Bouzid (2009)
We propose the first deterministic algorithm that tolerates up to $f$ Byzantine faults in $(3f+1)$-sized networks and operates in the asynchronous CORDA model. Our solution matches the previously established lower bound on the number of tolerated Byzantine robots for the semi-synchronous ATOM model. Our algorithm works under bounded scheduling assumptions for oblivious robots moving in a uni-dimensional space.
Zohir Bouzid (2009)
We study the convergence problem in fully asynchronous, uni-dimensional robot networks that are prone to Byzantine (i.e. malicious) failures. In these settings, oblivious anonymous robots with arbitrary initial positions are required to eventually converge to an a priori unknown position despite a subset of them exhibiting Byzantine behavior. Our contribution is twofold. We propose a deterministic algorithm that solves the problem in the most generic setting: fully asynchronous robots that operate in the non-atomic CORDA model. Our algorithm provides convergence in (5f+1)-sized networks, where f is the upper bound on the number of Byzantine robots. Additionally, we prove that 5f+1 is a lower bound whenever robot scheduling is fully asynchronous. This contrasts with previous results in partially synchronous robot networks, where 3f+1 robots are necessary and sufficient.
This paper introduces the RoboCast communication abstraction. RoboCast allows a swarm of non-oblivious, anonymous robots that are endowed only with visibility sensors and do not share a common coordinate system to asynchronously exchange information. We propose a generic framework that covers a large class of asynchronous communication algorithms and show how our framework can be used to implement fundamental building blocks in robot networks such as gathering or stigmergy. In more detail, we propose a RoboCast algorithm that allows robots to broadcast their local coordinate systems to each other. Our algorithm is further refined with a local collision avoidance scheme. Then, using the RoboCast primitive, we propose algorithms for deterministic asynchronous gathering and binary information exchange.
Kaiyun Li, Xiaojun Chen, Ye Dong (2021)
Distributed learning often suffers from Byzantine failures, and a number of works have studied the problem of distributed stochastic optimization under Byzantine failures, where only a portion of the workers, instead of all the workers in a distributed learning system, compute stochastic gradients at each iteration. These methods, albeit workable under Byzantine failures, suffer from either a sub-optimal convergence rate or a high computation cost. To this end, we propose a new Byzantine-resilient stochastic gradient descent algorithm (BrSGD for short) which is provably robust against Byzantine failures. BrSGD obtains optimal statistical performance and efficient computation simultaneously. In particular, BrSGD can achieve an order-optimal statistical error rate for strongly convex loss functions. The computation complexity of BrSGD is O(md), where d is the model dimension and m is the number of machines. Experimental results show that BrSGD obtains competitive results compared with non-Byzantine machines in terms of effectiveness and convergence.
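The abstract does not spell out BrSGD's aggregation rule, so the sketch below only illustrates the general setting with a generic Byzantine-resilient aggregator (a coordinate-wise trimmed mean over m worker gradients). All names and the choice of trimmed mean are ours, not the paper's method or API.

```python
# Hedged sketch of a generic Byzantine-resilient gradient aggregation step.
# This naive version sorts each coordinate (O(m log m) per coordinate), so it
# does not match the O(md) cost the abstract reports for BrSGD.
import numpy as np

def trimmed_mean_aggregate(gradients, f):
    """Aggregate an (m, d) array of worker gradients, tolerating up to f
    Byzantine workers, by dropping the f largest and f smallest values in
    every coordinate and averaging the rest."""
    g = np.sort(np.asarray(gradients), axis=0)   # sort each coordinate
    return g[f:g.shape[0] - f].mean(axis=0)      # drop extremes, average the rest

# Toy usage: 5 workers, 3-dimensional gradients; the last worker is Byzantine.
grads = np.array([[0.9, 1.1, 1.0],
                  [1.0, 0.9, 1.1],
                  [1.1, 1.0, 0.9],
                  [1.0, 1.0, 1.0],
                  [50., -50., 50.]])             # adversarial gradient
print(trimmed_mean_aggregate(grads, f=1))        # close to [1.0, 1.0, 1.0]
```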
For mitigating Byzantine behaviors in federated learning (FL), most state-of-the-art approaches, such as Bulyan, tend to leverage the similarity of updates from the benign clients. However, in many practical FL scenarios, data is non-IID across clients, so the updates received from even the benign clients are quite dissimilar. Hence, using similarity-based methods results in wasted opportunities to train a model from interesting non-IID data, and also in slower model convergence. We propose DiverseFL to overcome this challenge in heterogeneous data distribution settings. Rather than comparing each client's update with other clients' updates to detect Byzantine clients, DiverseFL compares each client's update with a guiding update for that client. Any client whose update diverges from its associated guiding update is then tagged as a Byzantine node. The FL server in DiverseFL computes the guiding update in every round for each client over a small sample of the client's local data that is received only once before the start of training. However, sharing even a small sample of a client's data with the FL server can compromise the client's data privacy needs. To tackle this challenge, DiverseFL creates a Trusted Execution Environment (TEE)-based enclave to receive each client's sample and to compute its guiding updates. The TEE provides hardware-assisted verification and attestation to each client that its data is not leaked outside the TEE. Through experiments involving neural networks, benchmark datasets and popular Byzantine attacks, we demonstrate that DiverseFL not only performs Byzantine mitigation quite effectively, but also almost matches the performance of OracleSGD, where the server aggregates only the updates from the benign clients.
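The per-client test described in the DiverseFL abstract (compare each client's update with its own guiding update rather than with other clients' updates) can be sketched as below. The abstract does not state the exact divergence criterion, so the cosine-similarity threshold is purely an illustrative assumption, and all names are ours.

```python
# Hedged sketch of a DiverseFL-style per-client filter. Illustrative only;
# the actual divergence test used by DiverseFL is not given in the abstract.
import numpy as np

def filter_clients(client_updates, guiding_updates, sim_threshold=0.0):
    """Return the ids of clients whose update is accepted.

    client_updates, guiding_updates: dicts mapping client id -> 1-D np.ndarray.
    A client is tagged Byzantine when its update points away from its guiding
    update (cosine similarity below sim_threshold).
    """
    accepted = []
    for cid, u in client_updates.items():
        g = guiding_updates[cid]
        sim = float(u @ g) / (np.linalg.norm(u) * np.linalg.norm(g) + 1e-12)
        if sim >= sim_threshold:
            accepted.append(cid)
    return accepted

# Toy usage: client "c2" sends an update opposite to its guiding update.
updates = {"c1": np.array([1.0, 0.5]), "c2": np.array([-2.0, -1.0])}
guiding = {"c1": np.array([0.9, 0.6]), "c2": np.array([ 1.0,  0.4])}
print(filter_clients(updates, guiding))   # -> ['c1']
```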