
Resource Pools and the CAP Theorem

Added by: Tim Roughgarden
Publication date: 2020
Language: English





Blockchain protocols differ in fundamental ways, including the mechanics of selecting users to produce blocks (e.g., proof-of-work vs. proof-of-stake) and the method to establish consensus (e.g., longest chain rules vs. BFT-inspired protocols). These fundamental differences have hindered apples-to-apples comparisons between different categories of blockchain protocols and, in turn, the development of theory to formally discuss their relative merits. This paper presents a parsimonious abstraction sufficient for capturing and comparing properties of many well-known permissionless blockchain protocols, simultaneously capturing essential properties of both proof-of-work and proof-of-stake protocols, and of both longest-chain-type and BFT-type protocols. Our framework black-boxes the precise mechanics of the user selection process, allowing us to isolate the properties of the selection process which are significant for protocol design. We illustrate our framework's utility with two results. First, we prove an analog of the CAP theorem from distributed computing for our framework in a partially synchronous setting. This theorem shows that a fundamental dichotomy holds between protocols (such as Bitcoin) that are adaptive, in the sense that they can function given unpredictable levels of participation, and protocols (such as Algorand) that have certain finality properties. Second, we formalize the idea that proof-of-work (PoW) protocols and non-PoW protocols can be distinguished by the forms of permission that users are given to carry out updates to the state.
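
To make the adaptivity/finality dichotomy concrete, here is a minimal Python sketch (our illustration, not the paper's formal model): a longest-chain-style confirmation rule keeps making progress for any nonzero level of participation, while a quorum-based BFT-style rule stalls once participation drops below a fixed quorum, which is the price it pays for finality. TOTAL_VALIDATORS and QUORUM are made-up parameters.

# Toy illustration (not from the paper): adaptivity vs. finality.
# A longest-chain rule makes progress whenever any producers are online,
# so it tolerates unpredictable participation. A BFT-style rule needs a
# fixed quorum, so it halts when participation drops, but in exchange it
# can offer finality.

TOTAL_VALIDATORS = 100
QUORUM = 67  # e.g., more than 2/3 of the registered validators

def longest_chain_is_live(online: int) -> bool:
    # Adaptive: any nonzero set of online producers extends the chain.
    return online > 0

def bft_is_live(online: int) -> bool:
    # Final: progress only when a full quorum participates.
    return online >= QUORUM

for online in (100, 80, 50, 10):
    print(f"{online:3d}/{TOTAL_VALIDATORS} online | "
          f"longest-chain live: {longest_chain_is_live(online)} | "
          f"BFT live: {bft_is_live(online)}")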



Related Research

In distributed applications, Brewer's CAP theorem tells us that when networks become partitioned, there is a tradeoff between consistency and availability. Consistency is agreement on the values of shared variables across a system, and availability is the ability to respond to reads and writes accessing those shared variables. We quantify these concepts, giving numerical values to inconsistency and unavailability. Recognizing that network partitioning is not an all-or-nothing proposition, we replace the P in CAP with L, a numerical measure of apparent latency, and derive the CAL theorem, an algebraic relation between inconsistency, unavailability, and apparent latency. This relation shows that if latency becomes unbounded (e.g., the network becomes partitioned), then one of inconsistency and unavailability must also become unbounded, and hence the CAP theorem is a special case of the CAL theorem. We describe two distributed coordination mechanisms, which we have implemented as an extension of the Lingua Franca coordination language, that support arbitrary tradeoffs between consistency and availability as apparent latency varies. With centralized coordination, inconsistency remains bounded by a chosen numerical value at the cost that unavailability becomes unbounded under network partitioning. With decentralized coordination, unavailability remains bounded by a chosen numerical quantity at the cost that inconsistency becomes unbounded under network partitioning. Our centralized coordination mechanism is an extension of techniques that have historically been used for distributed simulation, an application where consistency is paramount. Our decentralized coordination mechanism is an extension of techniques that have been used in distributed databases when availability is paramount.
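The following Python snippet is a hypothetical numerical illustration of that tradeoff; it is not Lingua Franca's actual API nor the paper's exact CAL algebra. It models a replica that either waits out the apparent latency L before answering (centralized coordination: inconsistency stays within a chosen bound, unavailability tracks L) or answers within a fixed deadline (decentralized coordination: unavailability stays within a chosen bound, inconsistency tracks L); as L grows without bound, the unbounded quantity grows with it.

# Hypothetical illustration of the CAL tradeoff (not the paper's formal
# relation). L is the apparent latency; float("inf") models a partition.

def centralized(L: float, consistency_bound: float = 0.0):
    # Wait for remote inputs: inconsistency stays within the chosen bound,
    # but responses are delayed by (roughly) the remaining latency.
    inconsistency = consistency_bound
    unavailability = max(0.0, L - consistency_bound)
    return inconsistency, unavailability

def decentralized(L: float, availability_bound: float = 1.0):
    # Answer within a fixed deadline: unavailability is capped, but the
    # answer may reflect state that is stale by the excess latency.
    unavailability = min(L, availability_bound)
    inconsistency = max(0.0, L - availability_bound)
    return inconsistency, unavailability

for L in (0.5, 2.0, float("inf")):
    print(f"L={L}: centralized (inc, unav) = {centralized(L)}, "
          f"decentralized (inc, unav) = {decentralized(L)}")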
Motivated, in part, by the rise of permissionless systems such as Bitcoin where arbitrary nodes (whose identities are not known a priori) can join and leave at will, we extend established research in scalable Byzantine agreement to a more practical model where each node (initially) does not know the identity of other nodes. A node can send to new destinations only by sending to random (or arbitrary) nodes, or responding (if it chooses) to messages received from those destinations. We assume a synchronous and fully-connected network, with a full-information, but static Byzantine adversary. A general drawback of existing Byzantine protocols is that the communication cost incurred by the honest nodes may not be proportional to that incurred by the Byzantine nodes; in fact, it can be significantly higher. Our goal is to design Byzantine protocols for fundamental problems which are resource competitive, i.e., the number of bits sent by honest nodes is not much more than those sent by Byzantine nodes. We describe a randomized scalable algorithm to solve Byzantine agreement, leader election, and committee election in this model. Our algorithm sends an expected $O((T+n)\log n)$ bits and has latency $O(\mathrm{polylog}(n))$, where $n$ is the number of nodes, and $T$ is the minimum of $n^2$ and the number of bits sent by adversarially controlled nodes. The algorithm is resilient to $(1/4-\epsilon)n$ Byzantine nodes for any fixed $\epsilon > 0$, and succeeds with high probability. Our work can be considered as a first application of resource-competitive analysis to fundamental Byzantine problems. To complement our algorithm we also show lower bounds for resource-competitive Byzantine agreement. We prove that, in general, one cannot hope to design Byzantine protocols that have communication cost that is significantly smaller than the cost of the Byzantine adversary.
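As a back-of-the-envelope illustration of the stated bound (with a hypothetical hidden constant c), the sketch below evaluates the honest nodes' expected communication $O((T+n)\log n)$ for a few adversary budgets, showing that honest cost stays within roughly a log factor of what the Byzantine nodes spend once T dominates n:

# Illustrative only: evaluate the stated honest-communication bound
# O((T + n) log n), where T = min(n^2, bits sent by Byzantine nodes).
# The hidden constant c is hypothetical.

import math

def honest_bits_bound(n: int, byzantine_bits: int, c: float = 1.0) -> float:
    T = min(n * n, byzantine_bits)
    return c * (T + n) * math.log2(n)

n = 1 << 10  # 1024 nodes
for adversary_bits in (0, n, n * n, 10 * n * n):
    print(f"adversary spends {adversary_bits:>12} bits -> "
          f"honest bound ~ {honest_bits_bound(n, adversary_bits):.3e} bits")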
Longest-chain blockchain protocols, such as Bitcoin, guarantee liveness even when the number of actively participating users is variable, i.e., they are adaptive. However, they are not safe under network partitions, i.e., they do not guarantee finality. On the other hand, classical blockchain protocols, like PBFT, achieve finality but not adaptivity. Indeed, the CAP theorem in the context of blockchains asserts that no protocol can simultaneously offer both adaptivity and finality. We propose a new blockchain protocol, called the checkpointed longest chain, that offers individual users the choice between finality and adaptivity instead of imposing it at a system-wide level. This protocol's salient feature is that it supports two distinct confirmation rules: one that guarantees adaptivity and the other finality. The more optimistic adaptive rule always confirms blocks that are marked as finalized by the more conservative rule, and may possibly confirm more blocks during variable participation levels. Clients (users) make a local choice between the confirmation rules as per their personal preference, while miners follow a fixed block proposal rule that is consistent with both confirmation rules. The proposed protocol has the additional benefit of intrinsic validity: the finalized blocks always lie on a single blockchain, and therefore miners can attest to the validity of transactions while proposing blocks. Our protocol builds on the notion of a finality gadget, a popular technique for adding finality to longest-chain protocols.
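Here is a minimal sketch of the two confirmation rules on a single fork-free chain, assuming made-up block names, a checkpoint height, and a confirmation depth k (all our own illustrative parameters, not the paper's protocol). In line with the abstract, the adaptive rule confirms at least the blocks the conservative rule has finalized, and possibly more:

# Toy sketch (ours) of the two confirmation rules on one fork-free chain.
# The conservative rule confirms up to the latest checkpoint (finality);
# the optimistic rule confirms everything buried at least k deep, and
# never less than the finalized prefix (adaptivity).

def finalized_prefix(chain, checkpoint_height):
    return chain[: checkpoint_height + 1]

def adaptive_prefix(chain, checkpoint_height, k):
    return chain[: max(checkpoint_height + 1, len(chain) - k)]

chain = [f"B{i}" for i in range(10)]  # B0 (genesis) .. B9
checkpoint_height = 4                 # B4 is the latest checkpointed block
k = 3                                 # confirmation depth for the adaptive rule

fin = finalized_prefix(chain, checkpoint_height)
ada = adaptive_prefix(chain, checkpoint_height, k)
assert fin == ada[: len(fin)]  # adaptive rule confirms all finalized blocks
print("finalized:", fin)
print("adaptive: ", ada)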
A determinacy race occurs if two or more logically parallel instructions access the same memory location and at least one of them tries to modify its content. Races often lead to nondeterministic and incorrect program behavior. A data race is a special case of a determinacy race which can be eliminated by associating a mutual-exclusion lock or allowing atomic accesses to the memory location. However, such solutions can reduce parallelism by serializing all accesses to that location. For associative and commutative updates, reducers allow parallel race-free updates at the expense of using some extra space. We ask the following question. Given a fixed budget of extra space to mitigate the cost of races in a parallel program, which memory locations should be assigned reducers and how should the space be distributed among the reducers in order to minimize the overall running time? We argue that the races can be captured by a directed acyclic graph (DAG), with nodes representing memory cells and arcs representing read-write dependencies between cells. We then formulate our optimization problem on DAGs. We concentrate on a variation of this problem where space reuse among reducers is allowed by routing extra space along a source to sink path of the DAG and using it in the construction of reducers along the path. We consider two reducers and the corresponding duration functions (i.e., reduction time as a function of space budget). We generalize our race-avoiding space-time tradeoff problem to a discrete resource-time tradeoff problem with general non-increasing duration functions and resource reuse over paths. For general DAGs, the offline problem is strongly NP-hard under all three duration functions, and we give approximation algorithms. We also prove hardness of approximation for the general resource-time tradeoff problem and give a pseudo-polynomial time algorithm for series-parallel DAGs.
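For intuition, here is a toy brute-force instance of the space-time tradeoff on the simplest possible DAG, a single source-to-sink path, with invented non-increasing duration functions; the paper's actual problem concerns general DAGs with space reuse along paths and is strongly NP-hard in general:

# Toy instance (ours, not the paper's) on the simplest DAG: a single
# source-to-sink path. Each node has a non-increasing duration function
# d(s) (more space -> weakly less time); one integer space budget is
# routed along the path and split among the nodes. Brute-force the split
# that minimizes the path's total running time.

from itertools import product

durations = [
    lambda s: 10 / (1 + s),       # node A
    lambda s: 6 - min(s, 4),      # node B
    lambda s: 8 / (1 + 2 * s),    # node C
]

BUDGET = 5  # units of extra space available for reducers

best_time, best_split = float("inf"), None
for split in product(range(BUDGET + 1), repeat=len(durations)):
    if sum(split) <= BUDGET:
        total = sum(d(s) for d, s in zip(durations, split))
        if total < best_time:
            best_time, best_split = total, split

print(f"best split: {best_split}, total time: {best_time:.2f}")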
The celebrated 1999 Asynchronous Computability Theorem (ACT) of Herlihy and Shavit characterized the distributed tasks that are wait-free solvable, and thus uncovered a deep connection with algebraic topology. We present a novel interpretation of this theorem, through the notion of continuous task, defined by an input/output specification that is a continuous function. To do so, we introduce a chromatic version of a foundational result for algebraic topology: the simplicial approximation theorem. In addition to providing a different proof of the ACT, the notion of continuous task seems interesting in itself. Indeed, besides the fact that certain distributed problems are naturally specified by continuous functions, continuous tasks have an expressive power that also allows one to specify the density of desired outputs for each combination of possible inputs, for example.