
On the Importance of Having an Identity or, is Consensus really Universal?

Posted by Paul Vitanyi
Publication date: 2002
Research field: Informatics Engineering
Language: English
Authors: Harry Buhrman





We show that Naming (the existence of distinct IDs known to all) is a hidden but necessary assumption of Herlihy's universality result for Consensus. We then show, in a very precise sense, that Naming is harder than Consensus, and bring to the surface some important differences between popular shared memory models.
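For readers less familiar with the consensus abstraction underlying Herlihy's universality result, the following Python sketch (an illustration added to this listing, not the paper's construction) shows the standard consensus-object interface: every call to propose(v) returns one agreed value, here fixed by the first proposer. The class name and the lock standing in for an atomic primitive are illustrative assumptions; note that the toy object itself never consults process IDs.

```python
import threading

class ConsensusObject:
    """Toy consensus object: the first proposed value wins and every
    propose() call returns that same decision (agreement + validity).
    A lock stands in for an atomic primitive such as compare-and-swap."""

    def __init__(self):
        self._lock = threading.Lock()
        self._decision = None
        self._decided = False

    def propose(self, value):
        with self._lock:
            if not self._decided:
                self._decision = value
                self._decided = True
            return self._decision

# Example: anonymous threads propose different values but all obtain the
# same decision; no thread needed a distinct ID to use the object.
obj = ConsensusObject()
results = []
threads = [threading.Thread(target=lambda v=v: results.append(obj.propose(v)))
           for v in ("a", "b", "c")]
for t in threads: t.start()
for t in threads: t.join()
assert len(set(results)) == 1
```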




Read also

280 - Calvin Newport 2014
In this paper, we study distributed consensus in the radio network setting. We produce new upper and lower bounds for this problem in an abstract MAC layer model that captures the key guarantees provided by most wireless MAC layers. In more detail, we first generalize the well-known impossibility of deterministic consensus with a single crash failure [FLP 1985] from the asynchronous message passing model to our wireless setting. Proceeding under the assumption of no faults, we then investigate the amount of network knowledge required to solve consensus in our model, an important question given that these networks are often deployed in an ad hoc manner. We prove consensus is impossible without unique ids or without knowledge of network size (in multihop topologies). We also prove a lower bound on optimal time complexity. We then match these lower bounds with a pair of new deterministic consensus algorithms, one for single hop topologies and one for multihop topologies, providing a comprehensive characterization of the consensus problem in the wireless setting. From a theoretical perspective, our results shed new light on the role of network information and the power of MAC layer abstractions in solving distributed consensus. From a practical perspective, given the level of abstraction used by our model, our upper bounds can be easily implemented in real wireless devices on existing MAC layers while preserving their correctness guarantees, facilitating the development of wireless distributed systems.
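As a heavily simplified illustration of consensus over a broadcast MAC abstraction (a toy sketch added for this listing, not one of the paper's algorithms), the Python fragment below covers only the fault-free single-hop case, where every broadcast is heard by all nodes and deciding the minimum value heard trivially gives agreement and validity.

```python
def single_hop_consensus(node_values):
    """Toy illustration (not Newport's algorithm): in a fault-free single-hop
    network where the abstract MAC layer delivers every broadcast to all
    nodes, each node broadcasts its input and all nodes decide the minimum
    value heard; since every node hears the same set, all decisions agree."""
    heard = set()
    for value in node_values.values():  # each iteration models one delivered broadcast
        heard.add(value)
    decision = min(heard)
    return {node: decision for node in node_values}

# Example: three nodes with different inputs all decide the same value.
print(single_hop_consensus({"n1": 7, "n2": 3, "n3": 9}))  # every node decides 3
```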
We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f < n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner. Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem:
- In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcast per time unit to $\Theta(\log f)$ while retaining the same stabilisation time.
- In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\log^{O(1)} f$ while each node broadcasts $\log^{O(1)} f$ bits per time unit.
These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcast bits over the underlying consensus routine.
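The bounds quoted in the two bullet points above can be read side by side; the LaTeX fragment below merely restates them for easier comparison (prior state of the art to the left of each arrow, this work to the right).

```latex
% Summary of the bounds stated in the abstract above.
\begin{align*}
  \text{deterministic:} &\quad \text{stabilisation time } \Theta(f),
    \quad \text{bits per time unit } \Theta(f\log f) \;\to\; \Theta(\log f),\\
  \text{randomised:}    &\quad \text{stabilisation time } \Theta(f) \;\to\; \log^{O(1)} f,
    \quad \text{bits per time unit } O(1) \;\to\; \log^{O(1)} f.
\end{align*}
```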
140 - John G. Hartnett 2011
The Hubble law, determined from the distance moduli and redshifts of galaxies, has for the past 80 years been used as strong evidence for an expanding universe. This claim is reviewed in light of the claimed lack of necessary evidence for time dilation in quasar and gamma-ray burst luminosity variations and other lines of evidence. It is concluded that the observations could be used to describe either a static universe (where the Hubble law results from some as-yet-unknown mechanism) or an expanding universe described by the standard Lambda cold dark matter model. In the latter case, size evolution of galaxies is necessary for agreement with observations. Yet the simple non-expanding Euclidean universe fits most data with the fewest assumptions. From this review it is apparent that there are still many unanswered questions in cosmology, and the title question of this paper is still far from being answered.
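For reference, the two textbook relations behind the phrase "distance moduli and redshifts" are the distance modulus and the low-redshift Hubble law; the LaTeX below states them in standard form (general definitions, not taken from the paper).

```latex
% Distance modulus linking apparent magnitude m, absolute magnitude M and
% luminosity distance d_L, and the low-redshift form of the Hubble law.
\begin{align}
  \mu &= m - M = 5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right), \\
  cz  &\approx H_0\, d_L \qquad (z \ll 1).
\end{align}
```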
186 - J.M. Marcaide 2002
SN1993J is to date the radio supernova whose evolution has been monitored in greatest detail and the one which holds the best promise for a comprehensive theoretical-observational analysis. The shell-like radio structure of SN1993J has expanded in general accord with models of shock-excited emission, showing almost circular symmetry for over 8 years, except for a bright feature at the south-eastern region of the shell that has been observed at every epoch. The spectrum of SN1993J has flattened from $\alpha = -1$ to $\alpha = -0.67$ ($S_\nu \propto \nu^{\alpha}$). The decelerated expansion can be modeled well with a single slope but apparently better with two slopes. There are also intriguing hints of structure in the expansion curve. The results by the two VLBI groups carrying out this research show general agreement, but also some differences. A comparison of the optical and VLBI results on the details of the deceleration shows some discrepancies.
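The two parametrisations quoted in this abstract are standard in radio-supernova work; the LaTeX below spells them out, with the expansion slope written here as a generic symbol $m$ (an assumption of this note, since the abstract does not name it).

```latex
% Spectral index alpha of the radio spectrum, and the power-law expansion of
% the shell radius R with time t whose slope m characterises the deceleration.
\begin{align}
  S_\nu &\propto \nu^{\alpha}, \\
  R(t)  &\propto t^{\,m}, \qquad m < 1 \ \text{for a decelerated expansion}.
\end{align}
```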
The Ripple network is one of the most prominent blockchain platforms, and its native XRP token currently has one of the highest cryptocurrency market capitalizations. The Ripple consensus protocol powers this network and is generally considered to be a Byzantine fault-tolerant agreement protocol, which can reach consensus in the presence of faulty or malicious nodes. In contrast to traditional Byzantine agreement protocols, there is no global knowledge of all participating nodes in Ripple consensus; instead, each node declares a list of other nodes that it trusts and from which it considers votes. Previous work has raised concerns about the liveness and safety of the consensus protocol under the general assumptions stated initially by Ripple, and there is currently no appropriate understanding of its workings and its properties in the literature. This paper closes this gap and makes two contributions. First, it provides a detailed, abstract description of the protocol, derived from the source code. Second, it points out that the abstract protocol may violate safety and liveness in several simple executions under relatively benign network assumptions.
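To make the trust-list voting idea concrete, here is a minimal Python sketch (an illustration only, not the Ripple protocol itself): a node counts only the votes of peers on its own trust list and accepts a proposal once a quorum fraction of them agree. The 0.8 threshold is an illustrative parameter of this sketch, not a claim about Ripple's actual configuration.

```python
def votes_accepted(trusted_nodes, votes, quorum_fraction=0.8):
    """Toy illustration of trust-list voting (not the actual Ripple protocol):
    only votes from peers on this node's trust list are counted, and the
    proposal is accepted once the supporting fraction reaches the quorum."""
    relevant = [votes[peer] for peer in trusted_nodes if peer in votes]
    if not relevant:
        return False
    support = sum(1 for v in relevant if v) / len(relevant)
    return support >= quorum_fraction

# Example: a node trusting four peers, three of which vote "yes".
print(votes_accepted({"p1", "p2", "p3", "p4"},
                     {"p1": True, "p2": True, "p3": True, "p4": False}))  # False (0.75 < 0.8)
```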