
Best-effort Group Service in Dynamic Networks

 Added by Franck Petit
 Publication date 2010
Language: English





We propose a group membership service for dynamic ad hoc networks. It maintains existing groups for as long as possible and ensures that each group's diameter always remains smaller than a constant, fixed according to the application using the groups. The proposed protocol is self-stabilizing and works in dynamic distributed systems. Moreover, it provides a degree of continuity in the service offered to the application while the system is converging, unless the topology changes are too severe. Such best-effort behavior allows applications to rely on the groups before stabilization has been reached, which is very useful in dynamic ad hoc networks.
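As an illustration of the diameter constraint described above, here is a minimal sketch of a centralized validity test. It assumes groups are plain node sets, the network is an undirected adjacency map, and distance is counted in hops; these are assumptions for illustration, not the paper's actual self-stabilizing protocol.

```python
from collections import deque

def group_diameter(adj, members):
    """Hop-count diameter of the subgraph induced by `members`.

    adj: dict node -> iterable of neighbours for the whole network.
    Returns float('inf') if the group is no longer connected.
    """
    members = set(members)
    diameter = 0
    for source in members:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in members and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if len(dist) < len(members):
            return float("inf")  # a topology change disconnected the group
        diameter = max(diameter, max(dist.values()))
    return diameter

def group_is_valid(adj, members, k):
    """A group stays valid while its diameter respects the application bound k."""
    return group_diameter(adj, members) <= k
```

A self-stabilizing implementation would have to enforce this bound with local, repeated actions (splitting or rebuilding groups when it is violated); the sketch only shows the property being maintained.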




Related research

We investigate a special case of hereditary property that we refer to as \emph{robustness}. A property is \emph{robust} in a given graph if it is inherited by all connected spanning subgraphs of this graph. We motivate this definition in different contexts, showing that it plays a central role in highly dynamic networks, although the problem is defined in terms of classical (static) graph theory. In this paper, we focus on the robustness of \emph{maximal independent sets} (MIS). Following the above definition, a MIS is said to be \emph{robust} (RMIS) if it remains a valid MIS in all connected spanning subgraphs of the original graph. We characterize the class of graphs in which \emph{all} possible MISs are robust. We show that, in these particular graphs, the problem of finding a robust MIS is \emph{local}; that is, we present an RMIS algorithm using only a sublogarithmic number of rounds (in the number of nodes $n$) in the $\mathcal{LOCAL}$ model. On the negative side, we show that, in general graphs, the problem is not local. Precisely, we prove an $\Omega(n)$ lower bound on the number of rounds required for the nodes to decide consistently in some graphs. This result implies a separation between the RMIS problem and the MIS problem in general graphs. It also implies that any strategy in this case is asymptotically (in order) as bad as collecting all the network information at one node and solving the problem in a centralized manner. Motivated by this observation, we present a centralized algorithm that computes a robust MIS in a given graph, if one exists, and rejects otherwise. Significantly, this algorithm requires only a polynomial amount of local computation time, despite the fact that exponentially many MISs and exponentially many connected spanning subgraphs may exist.
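The per-subgraph test that the robustness definition quantifies over is just ordinary MIS validity. A minimal sketch of that check follows (illustrative only, using a plain adjacency-dict representation; it is not the paper's RMIS algorithm).

```python
def is_valid_mis(adj, candidate):
    """Check that `candidate` is an independent and maximal set in graph `adj`.

    adj: dict node -> set of neighbours; candidate: iterable of nodes.
    """
    s = set(candidate)
    # independence: no two chosen nodes are adjacent
    if any(v in s for u in s for v in adj[u]):
        return False
    # maximality: every unchosen node has a chosen neighbour in s
    return all(any(v in s for v in adj[u]) for u in adj if u not in s)
```

A set is a robust MIS exactly when this test succeeds on every connected spanning subgraph; since there can be exponentially many such subgraphs, the polynomial-time centralized algorithm mentioned in the abstract cannot simply enumerate them.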
Smoothed analysis is a framework suggested for mediating gaps between worst-case and average-case complexities. In a recent work, Dinitz et al.~[Distributed Computing, 2018] suggested using smoothed analysis to study dynamic networks. Their aim was to explain the gap between real-world networks, which function well despite being dynamic, and the strong theoretical lower bounds for arbitrary networks. To this end, they introduced a basic model of smoothing in dynamic networks, where an adversary picks a sequence of graphs, representing the topology of the network over time, and then each of these graphs is slightly perturbed in a random manner. That model is based on per-round noise, and our aim in this work is to extend it to models of noise better suited to multiple rounds. This is motivated by long-lived networks, where the amount and location of noise may vary over time. To this end, we present several different models of noise. First, we extend the previous model to cases where the amount of noise is very small. Then, we move to more refined models, where the amount of noise can change between different rounds, e.g., as a function of the number of changes the network undergoes. We also study a model where the noise is not spread arbitrarily across the network, but is concentrated, in each round, in the areas where changes have occurred. Finally, we study the power of an adaptive adversary, who can choose its actions in accordance with the changes that have occurred so far. We use the flooding problem as a running case study, which exhibits very different behaviors under the different models of noise, and analyze the flooding time in each model.
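To make the per-round noise idea concrete, here is a toy flooding simulation in which each potential edge of the adversary's graph is independently flipped with some probability every round. The flip probability and the edge-set representation are illustrative assumptions, not the exact smoothing distribution used in the paper.

```python
import random

def flooding_time(round_edges, nodes, source, noise=0.0, rng=random):
    """Rounds of one-hop flooding until all nodes are informed, or None.

    round_edges: list of edge sets {(u, v), ...}, the adversary's graph per round.
    nodes: node identifiers (assumed comparable, e.g. integers).
    noise: per-round probability that any potential edge is flipped
           (a stand-in for the smoothing perturbation).
    """
    nodes = list(nodes)
    informed = {source}
    for t, edges in enumerate(round_edges, start=1):
        edges = {tuple(sorted(e)) for e in edges}
        # smoothing: perturb this round's graph independently of other rounds
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                if rng.random() < noise:
                    edges ^= {tuple(sorted((u, v)))}
        # one hop of flooding over the perturbed graph
        newly = {v for u, v in edges if u in informed and v not in informed}
        newly |= {u for u, v in edges if v in informed and u not in informed}
        informed |= newly
        if len(informed) == len(nodes):
            return t
    return None
```

With noise set to zero this is plain flooding on the adversarial sequence; increasing the noise tends to shorten the flooding time, which is the kind of effect the smoothed analysis quantifies.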
We study the problem of clock synchronization in highly dynamic networks, where communication links can appear or disappear at any time. The nodes in the network are equipped with hardware clocks, but the rate of the hardware clocks can vary arbitrarily within specific bounds, and the estimates that nodes can obtain about the clock values of other nodes are inherently inaccurate. Our goal in this setting is to output a logical clock at each node such that the logical clocks of any two nodes are not too far apart, and nodes that remain close to each other in the network for a long time are better synchronized than distant nodes. This property is called gradient clock synchronization. Gradient clock synchronization has been widely studied in the static setting, where the network topology does not change. We show that the asymptotically optimal bounds obtained for the static case also apply to our highly dynamic setting: if two nodes remain at distance $d$ from each other for sufficiently long, it is possible to upper bound the difference between their clock values by $O(d \log (D / d))$, where $D$ is the diameter of the network. This is known to be optimal even for static networks. Furthermore, we show that our algorithm has optimal stabilization time: when a path of length $d$ appears between two nodes, the time required until the clock skew between the two nodes is reduced to $O(d \log (D / d))$ is $O(D)$, which we prove to be optimal. Finally, the techniques employed for the more intricate analysis of the algorithm for dynamic graphs provide additional insights that are also of interest for the static setting. In particular, we establish self-stabilization of the gradient property within $O(D)$ time.
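The gradient property itself is easy to state as a check. The sketch below assumes a concrete hidden constant (the paper only gives the asymptotic bound $O(d \log(D/d))$) and full knowledge of pairwise distances, so it is a verification aid for small examples rather than a synchronization algorithm.

```python
import math

def gradient_bound(d, D, c=1.0):
    """Illustrative skew bound of the form c * d * log(D / d); c is an assumed constant."""
    return c * d * math.log(max(D / d, 2.0))

def satisfies_gradient(logical_clock, dist, D, c=1.0):
    """Check every recorded node pair against the gradient skew bound.

    logical_clock: dict node -> current logical clock value
    dist: dict (u, v) -> current hop distance between distinct nodes u, v (>= 1)
    """
    return all(
        abs(logical_clock[u] - logical_clock[v]) <= gradient_bound(d, D, c)
        for (u, v), d in dist.items()
    )
```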
Standard approaches to group-based notions of fairness, such as \emph{parity} and \emph{equalized odds}, try to equalize absolute measures of performance across known groups (based on race, gender, etc.). Consequently, a group that is inherently harder to classify may hold back the performance on other groups; and no guarantees can be provided for unforeseen groups. Instead, we propose a fairness notion whose guarantee, on each group $g$ in a class $\mathcal{G}$, is relative to the performance of the best classifier on $g$. We apply this notion to broad classes of groups, in particular, where (a) $\mathcal{G}$ consists of all possible groups (subsets) in the data, and (b) $\mathcal{G}$ is more streamlined. For the first setting, which is akin to groups being completely unknown, we devise the \textsc{PF} (Proportional Fairness) classifier, which guarantees, on any possible group $g$, an accuracy that is proportional to that of the optimal classifier for $g$, scaled by the relative size of $g$ in the data set. Due to including all possible groups, some of which could be too complex to be relevant, the worst-case theoretical guarantees here have to be proportionally weaker for smaller subsets. For the second setting, we devise the \textsc{BeFair} (Best-effort Fair) framework, which seeks an accuracy, on every $g \in \mathcal{G}$, that approximates that of the optimal classifier on $g$, independent of the size of $g$. Aiming for such a guarantee results in a non-convex problem, and we design novel techniques to get around this difficulty when $\mathcal{G}$ is the set of linear hypotheses. We test our algorithms on real-world data sets, and present interesting comparative insights on their performance.
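The shape of the proportional-fairness guarantee (accuracy on each group at least the best achievable on that group, scaled by the group's relative size) can be written as a simple check. The exact constants and the way the per-group optimum is estimated are assumptions here, not the paper's formal statement.

```python
def satisfies_proportional_fairness(acc, best_acc, group_size, total_size):
    """Check the 'proportional to the optimum, scaled by relative size' guarantee.

    acc[g]        : accuracy of the learned classifier on group g
    best_acc[g]   : accuracy of the best classifier evaluated on g alone
    group_size[g] : number of examples belonging to g; total_size: dataset size
    """
    return all(
        acc[g] >= (group_size[g] / total_size) * best_acc[g]
        for g in acc
    )
```

The BeFair setting described above drops the size-dependent factor, asking instead for accuracy close to best_acc[g] on every group in the given class.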
In this paper, we study systems of distributed entities that can actively modify their communication network. This gives rise to distributed algorithms that, apart from communication, can also exploit network reconfiguration in order to carry out a given task. At the same time, the distributed task itself may now require global reconfiguration from a given initial network $G_s$ to a target network $G_f$ from a family of networks having some good properties, like small diameter. With reasonably powerful computational entities, there is a straightforward algorithm that transforms any $G_s$ into a spanning clique in $O(\log n)$ time. The algorithm can then compute any global function on inputs and reconfigure to any target network in one round. We argue that such a strategy is impractical for real applications. In real dynamic networks there is a cost associated with creating and maintaining connections. To formally capture such costs, we define three edge-complexity measures: the \emph{total edge activations}, the \emph{maximum activated edges per round}, and the \emph{maximum activated degree of a node}. The clique formation strategy highlighted above maximizes all of them. We aim at improved algorithms that achieve $\mathrm{polylog}(n)$ time while minimizing the edge-complexity for the general task of transforming any $G_s$ into a $G_f$ of diameter $\mathrm{polylog}(n)$. We give three distributed algorithms. The first runs in $O(\log n)$ time, with at most $2n$ active edges per round, an optimal total of $O(n\log n)$ edge activations, a maximum degree of $n-1$, and a target network of diameter 2. The second achieves bounded degree by paying an additional logarithmic factor in time and in total edge activations, and gives a target network of diameter $O(\log n)$. Our third algorithm shows that if we slightly increase the maximum degree to $\mathrm{polylog}(n)$, then we can achieve a running time of $o(\log^2 n)$.
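The three edge-complexity measures can be read off a trace of the reconfiguration. A small accounting sketch follows; here "activated degree" is counted as a node's total incident activations over the whole run, which is one plausible reading of the measure rather than necessarily the paper's exact definition.

```python
from collections import Counter

def edge_complexity(activations_per_round):
    """Edge-complexity measures of a reconfiguration trace.

    activations_per_round: list of sets of undirected edges (u, v),
    the edges activated in each round.
    Returns (total activations, max activations in a round, max activated degree).
    """
    total = sum(len(edges) for edges in activations_per_round)
    per_round = max((len(edges) for edges in activations_per_round), default=0)
    incident = Counter()
    for edges in activations_per_round:
        for u, v in edges:
            incident[u] += 1
            incident[v] += 1
    max_degree = max(incident.values(), default=0)
    return total, per_round, max_degree
```

On the clique-formation strategy mentioned in the abstract, all three quantities are maximal, which is what motivates the more frugal algorithms.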