
Voting-based probabilistic consensuses and their applications in distributed ledgers

Published by: Sebastian Müller
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

We review probabilistic models known as majority dynamics (also known as threshold voter models) and discuss their possible applications for achieving consensus in cryptocurrency systems. In particular, we show that using this approach straightforwardly for practical consensus in a Byzantine setting can be problematic and requires extensive further research. We then discuss the FPC consensus protocol, which circumvents the problems mentioned above by using external randomness.
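For intuition, below is a minimal Python sketch of synchronous majority dynamics: every node holds a binary opinion and, in each round, adopts the majority opinion of a small random sample of nodes. This is only an illustrative toy model under assumed parameters (network size n, sample size k, initial bias p_one), not the protocol from the paper, and it deliberately omits the hard parts discussed above: Byzantine voters and metastable ties.

import random

def majority_dynamics(n=1000, k=10, p_one=0.55, rounds=20, seed=0):
    """Toy synchronous majority dynamics (threshold voter model):
    each node samples k nodes uniformly at random and adopts the
    sample's majority opinion (ties default to 0). Returns the
    fraction of nodes holding opinion 1 after the given rounds."""
    rng = random.Random(seed)
    opinions = [1 if rng.random() < p_one else 0 for _ in range(n)]
    for _ in range(rounds):
        opinions = [
            1 if 2 * sum(rng.choice(opinions) for _ in range(k)) > k else 0
            for _ in range(n)
        ]
    return sum(opinions) / n

if __name__ == "__main__":
    # With an initial bias towards opinion 1 and no adversary, the
    # network typically converges to all-ones within a few rounds.
    print(majority_dynamics())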


Read also

Advances in mobile computing have paved the way for new types of distributed applications that can be executed solely by mobile devices in device-to-device (D2D) ecosystems (e.g., crowdsensing). Sophisticated applications, like cryptocurrencies, need distributed ledgers to function. Distributed ledgers, such as blockchains and directed acyclic graphs (DAGs), employ consensus protocols to add data in the form of blocks. However, such protocols are designed for resourceful devices that are interconnected via the Internet. Moreover, existing distributed ledgers are not deployable to D2D ecosystems since their storage needs are continuously increasing. In this work, we introduce and analyze Mneme, a DAG-based distributed ledger that can be maintained solely by mobile devices. Mneme utilizes two novel consensus protocols: Proof-of-Context (PoC) and Proof-of-Equivalence (PoE). PoC employs users' context to add data to Mneme. PoE is executed periodically to summarize data and produce equivalent blocks that require less storage. We analyze Mneme's security and justify the ability of PoC and PoE to guarantee the characteristics of distributed ledgers: persistence and liveness. Furthermore, we analyze potential attacks from malicious users and prove that the probability of a successful attack is inversely proportional to the square of the number of mobile users who maintain Mneme.
Jay Jay Billings, 2018
Sharing provenance across workflow management systems automatically is not currently possible, but the value of such a capability is high since it could greatly reduce the amount of duplicated workflows, accelerate the discovery of new knowledge, and verify the integrity of past and present analyses. Although numerous technological challenges exist to efficiently share provenance information across workflow management systems, permissioned distributed ledgers could surmount many of them. The primary benefit of permissioned distributed ledgers over other technologies is that their distribution is over a peer-to-peer network that encodes transactions across the network into an immutable hash list and achieves consensus on the validity of the new data through a common consensus mechanism. This work discusses provenance and distributed ledgers on their own and then presents an argument that distributed ledgers naturally satisfy many of the requirements of workflow provenance, that provenance information can exist in the ledger in multiple ways, and that a number of novel research areas exist based on this strategy.
In public distributed ledger technologies (DLTs), such as Blockchains, nodes can join and leave the network at any time. A major challenge occurs when a new node joining the network wants to retrieve the current state of the ledger. Indeed, that node may receive conflicting information from honest and Byzantine nodes, making it difficult to identify the current state. In this paper, we are interested in protocols that are stateless, i.e., a new joining node should be able to retrieve the current state of the ledger just using a fixed amount of data that characterizes the ledger (such as the genesis block in Bitcoin). We define three variants of stateless DLTs: weak, strong, and probabilistic. Then, we analyze this property for DLTs using different types of consensus.
We describe here a structured system for distributed mechanism design appropriate for both Intranet and Internet applications. In our approach, the players dynamically form a network in which they know neither their neighbours nor the size of the network and interact to jointly take decisions. The only assumption concerning the underlying communication layer is that for each pair of processes there is a path of neighbours connecting them. This allows us to deal with arbitrary network topologies. We also discuss the implementation of this system, which consists of a sequence of layers. The lower layers deal with the operations that implement the basic primitives of distributed computing, namely low-level communication and distributed termination, while the upper layers use these primitives to implement high-level communication among players, including broadcasting and multicasting, and distributed decision making. This yields a highly flexible distributed system whose specific applications are realized as instances of its top layer. This design is implemented in Java. The system supports fault tolerance at various levels and includes a provision for distributed policing, the purpose of which is to exclude 'dishonest' players. Also, it can be used for repeated creation of dynamically formed networks of players interested in joint decision making implemented by means of a tax-based mechanism. We illustrate its flexibility by discussing a number of implemented examples.
Cheng Luo, Lei Qu, Youshan Miao, 2021
Distributed deep learning workloads include throughput-intensive training tasks on GPU clusters, where distributed Stochastic Gradient Descent (SGD) incurs significant communication delays after backward propagation and forces workers to wait for gradient synchronization, either via a centralized parameter server or directly among decentralized workers. We present CrossoverScheduler, an algorithm that enables the communication cycles of a distributed training application to be filled by other applications by pipelining communication and computation. With CrossoverScheduler, the running performance of distributed training can be significantly improved without sacrificing convergence rate or network accuracy. We achieve this by introducing Crossover Synchronization, which allows multiple distributed deep learning applications to time-share the same GPU alternately. A prototype of CrossoverScheduler is built and integrated with Horovod. Experiments on a variety of distributed tasks show that CrossoverScheduler achieves a 20% speedup for image classification tasks on the ImageNet dataset.
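As a rough illustration of why filling communication gaps helps, the Python sketch below (a back-of-the-envelope calculation, not CrossoverScheduler or Horovod code) compares the makespan of two training jobs sharing one GPU sequentially versus in an idealized interleaving where one job's computation overlaps the other's communication. The per-step times are invented and switching overhead is ignored, so this is only the ideal upper bound; practical gains, such as the 20% reported above, are smaller.

def makespan_sequential(steps, compute_ms, comm_ms):
    """Two jobs run back to back on the same GPU; every step is
    computation followed by communication, with no overlap."""
    return 2 * steps * (compute_ms + comm_ms)

def makespan_interleaved(steps, compute_ms, comm_ms):
    """Idealized time-sharing: while one job communicates, the other
    computes, so the jobs alternate in slots whose length is the
    longer of the two phases (switching overhead ignored)."""
    return (2 * steps - 1) * max(compute_ms, comm_ms) + compute_ms + comm_ms

if __name__ == "__main__":
    # Hypothetical numbers: 100 steps, 30 ms compute, 30 ms communication.
    print(makespan_sequential(100, 30, 30))   # 12000 ms
    print(makespan_interleaved(100, 30, 30))  # 6030 ms, the ideal ~2x bound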