
On Simple Back-Off in Unreliable Radio Networks

Posted by: Dominik Pajak
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





In this paper, we study local and global broadcast in the dual graph model, which describes communication in a radio network with both reliable and unreliable links. Existing work proved that efficient solutions to these problems are impossible in the dual graph model under standard assumptions. In real networks, however, simple back-off strategies tend to perform well for solving these basic communication tasks. We address this apparent paradox by introducing a new set of constraints to the dual graph model that better generalize the slow/fast fading behavior common in real networks. We prove that in the context of these new constraints, simple back-off strategies now provide efficient solutions to local and global broadcast in the dual graph model. We also precisely characterize how this efficiency degrades as the new constraints are reduced down to non-existent, and prove new lower bounds that establish this degradation as near optimal for a large class of natural algorithms. We conclude with a preliminary investigation of the performance of these strategies when we include additional generality to the model. These results provide theoretical foundations for the practical observation that simple back-off algorithms tend to work well even amid the complicated link dynamics of real radio networks.
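The paper's formal algorithms and fading constraints are not reproduced on this page, but the following minimal Python sketch illustrates the kind of decay-style back-off local broadcast the abstract refers to, run on a dual graph whose unreliable edges appear at random each round. The function name, the collision rule, and the fading parameter p_fade are simplifying assumptions for illustration, not the authors' construction.

```python
import random

def backoff_local_broadcast(reliable, unreliable, sources, max_rounds=1000, p_fade=0.5):
    """Minimal sketch of a decay-style back-off broadcast on a dual graph.

    `reliable` and `unreliable` map each node to a set of neighbours (the
    two edge sets of the dual graph).  In each round, every node holding the
    message transmits with a probability that cycles through 1, 1/2, 1/4, ...,
    and a node receives only if exactly one of its active neighbours transmits
    while it is silent itself (radio collision rule).  `p_fade` is a crude
    stand-in for fading: each unreliable edge is present independently with
    this probability in each round.
    """
    n = len(reliable)
    informed = set(sources)
    for rnd in range(max_rounds):
        p = 2.0 ** (-(rnd % max(1, n.bit_length())))    # decay schedule
        transmitters = {u for u in informed if random.random() < p}
        heard = {}
        for u in transmitters:
            active = set(reliable[u]) | {v for v in unreliable[u]
                                         if random.random() < p_fade}
            for v in active:
                heard.setdefault(v, []).append(u)
        for v, senders in heard.items():
            if len(senders) == 1 and v not in transmitters:
                informed.add(v)                         # collision-free reception
        # local broadcast goal: every reliable neighbour of a source informed
        if all(set(reliable[s]) <= informed for s in sources):
            return rnd + 1                              # rounds used
    return None                                         # did not finish

# Hypothetical 4-node dual graph: a reliable path plus one unreliable chord.
rel = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
unr = {0: {2}, 1: set(), 2: {0}, 3: set()}
print(backoff_local_broadcast(rel, unr, sources={0}))
```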




Read also

Calvin Newport, 2014
In this paper, we study lower bounds for randomized solutions to the maximal independent set (MIS) and connected dominating set (CDS) problems in the dual graph model of radio networks---a generalization of the standard graph-based model that now includes unreliable links controlled by an adversary. We begin by proving that a natural geographic constraint on the network topology is required to solve these problems efficiently (i.e., in time polylogarithmic in the network size). We then prove the importance of the assumption that nodes are provided advance knowledge of their reliable neighbors (i.e., neighbors connected by reliable links). Combined, these results answer an open question by proving that the efficient MIS and CDS algorithms from [Censor-Hillel, PODC 2011] are optimal with respect to their dual graph model assumptions. They also provide insight into what properties of an unreliable network enable efficient local computation.
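To make the MIS problem mentioned above concrete, here is a minimal centralized greedy sketch on the reliable subgraph G of a dual graph (G, G'). The cited paper studies distributed randomized algorithms and proves lower bounds; it does not use this routine, which is included only as an illustration of the problem itself.

```python
def greedy_mis(reliable):
    """Sequential greedy maximal independent set on the reliable subgraph.

    `reliable` maps node -> set of reliable neighbours (the graph G of a
    dual graph (G, G')).  A node joins the set if none of its neighbours
    has joined before it, so the result is independent and maximal.
    """
    mis, blocked = set(), set()
    for v in sorted(reliable):           # any fixed order works
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked |= reliable[v]       # neighbours can no longer join
    return mis

# Hypothetical 4-node reliable graph: a path 0-1-2-3.
print(greedy_mis({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
```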
In recent years, protocols that are based on the properties of random walks on graphs have found many applications in communication and information networks, such as wireless networks, peer-to-peer networks and the Web. For wireless networks (and other networks), graphs are actually not the correct model of the communication; instead, hyper-graphs better capture the communication over a wireless shared channel. Motivated by this example, we study in this paper random walks on hyper-graphs. First, we formalize the random walk process on hyper-graphs and generalize key notions from random walks on graphs. We then give the novel definition of radio cover time, namely, the expected time of a random walk to be heard (as opposed to visited) by all nodes. We then provide some basic bounds on the radio cover time; in particular, we show that while on graphs the radio cover time is O(mn), in hyper-graphs it is O(mnr), where n, m and r are the number of nodes, the number of edges and the rank of the hyper-graph, respectively. In addition, we define radio hitting times and give a polynomial algorithm to compute them. We conclude the paper with results on specific hyper-graphs that model wireless networks in one and two dimensions.
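The sketch below estimates a radio cover time by Monte-Carlo simulation, using one common generalization of the graph walk to hyper-graphs (from node v, pick an incident hyperedge uniformly at random, then a uniform node of that edge). The paper's exact process and bounds are not reproduced here, and the example hyper-graph is hypothetical.

```python
import random
from statistics import mean

def radio_cover_time(hyperedges, start, trials=200):
    """Monte-Carlo estimate of a radio cover time on a hyper-graph.

    `hyperedges` is a list of node sets.  At each step the walker at node v
    picks a hyperedge containing v uniformly at random and moves to a uniform
    node of that edge; every node of the chosen hyperedge "hears" the step
    (the hyperedge models the shared channel).  The radio cover time is the
    number of steps until all nodes have heard the walk at least once.
    """
    nodes = set().union(*hyperedges)
    incident = {v: [e for e in hyperedges if v in e] for v in nodes}

    def one_walk():
        heard = {start}
        v, steps = start, 0
        while heard != nodes:
            e = random.choice(incident[v])
            heard |= e                      # everyone on the shared channel hears
            v = random.choice(list(e))
            steps += 1
        return steps

    return mean(one_walk() for _ in range(trials))

# Hypothetical 3-uniform hyper-graph on six nodes.
edges = [frozenset({0, 1, 2}), frozenset({2, 3, 4}), frozenset({4, 5, 0})]
print(radio_cover_time(edges, start=0))
```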
Data centres that use consumer-grade disk drives and distributed peer-to-peer systems are unreliable environments in which to archive data without enough redundancy. Most redundancy schemes are not completely effective for providing high availability, durability and integrity in the long term. We propose alpha entanglement codes, a mechanism that creates a virtual layer of highly interconnected storage devices to propagate redundant information across a large-scale storage system. Our motivation is to design flexible and practical erasure codes with high fault-tolerance to improve data durability and availability even in catastrophic scenarios. By flexible and practical, we mean code settings that can be adapted to future requirements and practical implementations with reasonable trade-offs between security, resource usage and performance. The codes have three parameters. Alpha increases storage overhead linearly but increases the possible paths to recover data exponentially. Two other parameters increase fault-tolerance even further without the need for additional storage. As a result, an entangled storage system can provide high availability, durability and additional integrity: it is more difficult to modify data undetectably. We evaluate how several redundancy schemes perform in unreliable environments and show that alpha entanglement codes are flexible and practical codes. Remarkably, they excel at code locality; hence, they reduce repair costs and become less dependent on storage locations with poor availability. Our solution outperforms Reed-Solomon codes in many disaster recovery scenarios.
The Cloud infrastructure offers end users a broad set of heterogeneous computational resources using the pay-as-you-go model. These virtualized resources can be provisioned using different pricing models, like the unreliable model where resources are provided at a fraction of the cost but with no guarantee of uninterrupted processing. However, this enormous gamut of opportunities comes with a great caveat, as resource management and scheduling decisions become increasingly complicated. Moreover, the uncertainty present in optimally selecting resources also has a negative impact on the quality of the solutions delivered by scheduling algorithms. In this paper, we present a dynamic scheduling algorithm, the Uncertainty-Driven Scheduling (UDS) algorithm, for the management of scientific workflows in the Cloud. Our model minimizes both the makespan and the monetary cost by dynamically selecting reliable or unreliable virtualized resources. To cover the uncertainty in decision making, we adopt a Fuzzy Logic Controller (FLC) to derive the pricing model of the resources that will host every task. We evaluate the performance of the proposed algorithm using real workflow applications tested under different probabilities for the revocation of unreliable resources. Numerical results depict the performance of the proposed approach, and a comparative assessment reveals the position of the paper in the relevant literature.
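The abstract does not give the controller's membership functions or rule base, so the following toy Mamdani-style sketch only illustrates how a fuzzy decision between reliable and unreliable (spot-like) pricing could look. The inputs `slack` and `revocation_prob`, both rules, and the 0.5 threshold are assumptions, not the UDS algorithm itself.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def choose_pricing(slack, revocation_prob):
    """Toy Mamdani-style fuzzy controller with a hypothetical rule base.

    Inputs are normalised to [0, 1]: `slack` is how much schedule slack the
    task has before its deadline, `revocation_prob` is the assumed chance
    that an unreliable instance is revoked.  The output score is in [0, 1];
    above 0.5 the task is placed on a cheap unreliable resource, otherwise
    on a reliable on-demand one.
    """
    slack_low, slack_high = tri(slack, -0.5, 0.0, 0.6), tri(slack, 0.4, 1.0, 1.5)
    risk_low, risk_high = tri(revocation_prob, -0.5, 0.0, 0.6), tri(revocation_prob, 0.4, 1.0, 1.5)

    # Rule 1: high slack AND low risk -> prefer unreliable (output 1.0)
    # Rule 2: low slack OR high risk  -> prefer reliable   (output 0.0)
    w1 = min(slack_high, risk_low)
    w2 = max(slack_low, risk_high)
    if w1 + w2 == 0:
        return 0.5
    return (w1 * 1.0 + w2 * 0.0) / (w1 + w2)   # weighted-average defuzzification

print("unreliable" if choose_pricing(0.8, 0.1) > 0.5 else "reliable")
```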
The problem of the reliability of a large distributed system is analyzed via a new mathematical model. A typical framework is a system where a set of files is duplicated on several data servers. When one of these servers breaks down, all copies of files stored on it are lost. In this way, repeated failures may lead to losses of files. The efficiency of such a network is directly related to the performance of the mechanism used to duplicate files on servers. In this paper, we study the evolution of the network under a natural duplication policy giving priority to the files with the least number of copies. We investigate the asymptotic behavior of the network when the number $N$ of servers is large. The analysis is complicated by the large dimension of the state space of the empirical distribution of the state of the network. A stochastic model of the evolution of the network which has values in a state space whose dimension does not depend on $N$ is introduced. Although this description does not have the Markov property, it turns out that it converges in distribution, when the number of nodes goes to infinity, to a nonlinear Markov process. The rate of decay of the network, which is the key characteristic of interest in these systems, can be expressed in terms of this asymptotic process. The corresponding mean-field convergence results are established. A lower bound on the exponential decay, with respect to time, of the fraction of the number of initial files with at least one copy is obtained.
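A toy discrete-time simulation of the least-copies-first duplication policy is sketched below. The paper's actual model is a continuous-time mean-field analysis, so the failure and repair dynamics here (one failure per step, a fixed repair budget, and all default parameters) are simplified assumptions meant only to illustrate the policy.

```python
import random

def simulate_decay(n_servers=50, n_files=200, copies=2, repair_rate=5, steps=2000):
    """Toy discrete-time variant of the duplication model.

    Each step one uniformly random server fails and loses its copies; the
    repair mechanism then re-duplicates up to `repair_rate` files, always
    giving priority to the surviving files with the least number of copies.
    Returns the fraction of the initial files that still have >= 1 copy.
    """
    # location[f] = set of servers currently holding a copy of file f
    location = {f: set(random.sample(range(n_servers), copies))
                for f in range(n_files)}
    for _ in range(steps):
        failed = random.randrange(n_servers)
        for servers in location.values():
            servers.discard(failed)                 # copies on the failed server are lost
        needy = [f for f, s in location.items() if 0 < len(s) < copies]
        needy.sort(key=lambda f: len(location[f]))  # least-copies-first priority
        for f in needy[:repair_rate]:
            spare = list(set(range(n_servers)) - location[f])
            if spare:
                location[f].add(random.choice(spare))
    return sum(1 for s in location.values() if s) / n_files

print(simulate_decay())
```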