Self-Stabilization Through the Lens of Game Theory

Posted by: Krzysztof R. Apt
Publication date: 2019
Research field: Informatics engineering
Paper language: English





In 1974, E.W. Dijkstra introduced the seminal concept of self-stabilization, which turned out to be one of the main approaches to fault-tolerant computing. We show how his three solutions can be formalized and reasoned about using the concepts of game theory. We also determine the precise number of steps needed to reach self-stabilization in his first solution.
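
For intuition, Dijkstra's first solution is the K-state token ring: each of n machines in a ring holds a counter in {0, ..., K-1}; machine 0 is privileged when its counter equals its left neighbour's, every other machine when its counter differs from its left neighbour's, and only a privileged machine may move. Below is a minimal simulation sketch of that algorithm in Python, assuming a central daemon that fires one privileged machine per step; the function names and the random daemon are our own choices for illustration, and the sketch does not reproduce the paper's game-theoretic formalization or its exact step count.

import random

def privileged(S, i):
    # Machine 0 holds the privilege when it matches its left neighbour;
    # every other machine holds it when it differs from its left neighbour.
    return S[0] == S[-1] if i == 0 else S[i] != S[i - 1]

def fire(S, i, K):
    # The move of a privileged machine in Dijkstra's first solution.
    if i == 0:
        S[0] = (S[0] + 1) % K
    else:
        S[i] = S[i - 1]

def stabilize(S, K):
    # Run under a central daemon until exactly one privilege remains,
    # i.e. the system has reached a legitimate state; return the step count.
    steps = 0
    while True:
        priv = [i for i in range(len(S)) if privileged(S, i)]
        if len(priv) == 1:
            return steps
        fire(S, random.choice(priv), K)
        steps += 1

n = 6                                          # machines in the ring
K = n                                          # K >= n suffices for convergence
S = [random.randrange(K) for _ in range(n)]    # arbitrary, possibly faulty state
print("stabilized after", stabilize(S, K), "steps")

Once a legitimate state is reached, every move simply passes the single privilege around the ring, so the stopping condition above doubles as the convergence criterion.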




Read also

Lagrangian duality underlies both classical and modern mechanism design. In particular, the dual perspective often permits simple and detail-free characterizations of optimal and approximately optimal mechanisms. This paper applies the same methodology to a close cousin of traditional mechanism design, one which shares conceptual and technical elements with its more mature relative: the burgeoning field of persuasion. The dual perspective permits us to analyze optimal persuasion schemes both in settings that have been analyzed in prior work and in natural generalizations that we are the first to explore in depth. Most notably, we permit combining persuasion policies with payments, which serve to augment the persuasion power of the scheme. In both single- and multi-receiver settings, and under a variety of constraints on payments, we employ duality to obtain structural insights as well as tractable and simple characterizations of optimal policies.
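As a generic reminder of the methodology (a textbook linear-programming instance, not the paper's specific persuasion program), dualizing the constraints of a primal LP via multipliers $\lambda$ yields a dual whose optimal value certifies the primal optimum:
\[
\max_{x \ge 0} \{\, c^\top x : Ax \le b \,\}
\;\le\;
\min_{\lambda \ge 0} \max_{x \ge 0} \big( c^\top x + \lambda^\top (b - Ax) \big)
\;=\;
\min_{\lambda \ge 0} \{\, b^\top \lambda : A^\top \lambda \ge c \,\},
\]
with equality under strong duality; the optimal multipliers play the role of the detail-free certificates mentioned above.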
Payment channels were introduced to solve various eminent cryptocurrency scalability issues. Multiple payment channels build a network on top of a blockchain, the so-called layer 2. In this work, we analyze payment networks through the lens of network creation games. We identify betweenness and closeness centrality as central concepts regarding payment networks. We study the topologies that emerge when players act selfishly and determine the parameter space in which they constitute a Nash equilibrium. Moreover, we determine the social optima depending on the correlation of betweenness and closeness centrality. When possible, we bound the price of anarchy. We also briefly discuss the price of stability.
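For concreteness, both centrality notions can be computed directly; the sketch below uses the standard networkx implementations on a toy channel topology (the graph is made up for illustration and is not from the paper).

import networkx as nx

# Toy payment-channel graph (illustrative only): a hub plus one shortcut edge.
G = nx.Graph([(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)])

# Betweenness: fraction of shortest paths passing through a node,
# a proxy for the routing fees a node can collect.
print(nx.betweenness_centrality(G))

# Closeness: inverse average distance to all other nodes,
# a proxy for how cheaply a node can route its own payments.
print(nx.closeness_centrality(G))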
A traditional assumption in game theory is that players are opaque to one another -- if a player changes strategies, then this change does not affect the choice of other players' strategies. In many situations this is an unrealistic assumption. We develop a framework for reasoning about games where the players may be translucent to one another; in particular, a player may believe that if she were to change strategies, then the other player would also change strategies. Translucent players may achieve significantly more efficient outcomes than opaque ones. Our main result is a characterization of strategies consistent with appropriate analogues of common belief of rationality. Common Counterfactual Belief of Rationality (CCBR) holds if (1) everyone is rational, (2) everyone counterfactually believes that everyone else is rational (i.e., all players $i$ believe that everyone else would still be rational even if $i$ were to switch strategies), (3) everyone counterfactually believes that everyone else is rational and counterfactually believes that everyone else is rational, and so on. CCBR characterizes the set of strategies surviving iterated removal of minimax dominated strategies: a strategy $\sigma_i$ is minimax dominated for $i$ if there exists a strategy $\sigma_i'$ for $i$ such that $\min_{\mu_{-i}} u_i(\sigma_i', \mu_{-i}) > \max_{\mu_{-i}} u_i(\sigma_i, \mu_{-i})$.
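In a finite game the domination check reduces to pure opponent profiles, since the min and max over mixed profiles $\mu_{-i}$ are attained at pure ones. A small illustrative check in Python, with a made-up payoff matrix:

# u[s][t]: payoff to player i when she plays s and the others play profile t
# (the numbers are made up for illustration).
u = [
    [3, 4, 5],   # worst case 3
    [0, 1, 2],   # best case 2 < 3, so minimax dominated by strategy 0
    [2, 6, 1],
]

def minimax_dominated(u, s):
    # s is minimax dominated if some alternative's worst case exceeds
    # s's best case: min_t u[s'][t] > max_t u[s][t].
    return any(min(row) > max(u[s]) for k, row in enumerate(u) if k != s)

for s in range(len(u)):
    print("strategy", s, "minimax dominated:", minimax_dominated(u, s))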
Coded distributed computing (CDC) has emerged as a promising approach because it enables computation tasks to be carried out in a distributed manner while mitigating straggler effects, which often account for the long overall completion times. Specifically, by using polynomial codes, computed results from only a subset of edge servers can be used to reconstruct the final result. However, incentive issues have not been studied systematically for the edge servers to complete the CDC tasks. In this paper, we propose a tractable two-level game-theoretic approach to incentivize the edge servers to complete the CDC tasks. Specifically, in the lower level, a hedonic coalition formation game is formulated where the edge servers share their resources within their coalitions. By forming coalitions, the edge servers have more Central Processing Unit (CPU) power to complete the computation tasks. In the upper level, given the CPU power of the coalitions of edge servers, an all-pay auction is designed to incentivize the edge servers to participate in the CDC tasks. In the all-pay auction, the bids of the edge servers are represented by the allocation of their CPU power to the CDC tasks. The all-pay auction is designed to maximize the utility of the cloud server by determining the allocation of rewards to the winners. Simulation results show that the edge servers are incentivized to allocate more CPU power when multiple rewards are offered, i.e., there are multiple winners, instead of rewarding only the edge server with the largest CPU power allocation. Besides, the utility of the cloud server is maximized when it offers multiple homogeneous rewards, instead of heterogeneous rewards.
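A minimal Python sketch of the upper-level mechanism as described: every bidder sinks its bid (the CPU power it commits, the all-pay property), and only the top bidders collect rewards. The bid values and the two homogeneous rewards are assumptions for illustration, not the paper's parameters.

# Bids: CPU power committed by each edge-server coalition; in an all-pay
# auction every bidder forfeits its bid whether or not it wins.
bids = {"A": 8.0, "B": 5.0, "C": 3.0, "D": 1.0}
rewards = [8.0, 8.0]            # two homogeneous rewards: the top two bids win

ranked = sorted(bids, key=bids.get, reverse=True)
winners = set(ranked[:len(rewards)])

for srv, bid in bids.items():
    prize = rewards[0] if srv in winners else 0.0
    print(srv, "utility:", prize - bid)      # reward minus sunk CPU cost

# The cloud server gains the total CPU power attracted minus the rewards paid.
print("cloud utility:", sum(bids.values()) - sum(rewards))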
Lelia Blin, 2007
Self-stabilization is a versatile technique to withstand any transient fault in a distributed system. Mobile robots (or agents) are one of the emerging trends in distributed computing as they mimic autonomous biological entities. The contribution of this paper is threefold. First, we present a new model for studying mobile entities in networks subject to transient faults. Our model differs from the classical robot model because robots have constraints about the paths they are allowed to follow, and from the classical agent model because the number of agents remains fixed throughout the execution of the protocol. Second, in this model, we study the possibility of designing self-stabilizing algorithms when those algorithms are run by mobile robots (or agents) evolving on a graph. We concentrate on the core building blocks of robot and agent problems: naming and leader election. Not surprisingly, when no constraints are given on the network graph topology and the local execution model, both problems are impossible to solve. Finally, using minimal hypotheses with respect to the impossibility results, we provide deterministic and probabilistic solutions to both problems, and show the equivalence of these problems by an algorithmic reduction mechanism.