
A General Difficulty Control Algorithm for Proof-of-Work Based Blockchains

Published by: Shulai Zhang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Designing an efficient difficulty control algorithm is an essential problem in Proof-of-Work (PoW) based blockchains because the network hash rate changes randomly. This paper proposes a general difficulty control algorithm and provides insights into difficulty adjustment rules for PoW based blockchains. The proposed algorithm consists of a two-layer neural network. It has a low memory cost while satisfying the fast-updating and low-volatility requirements for difficulty adjustment. Real data from Ethereum are used in simulations to show that the proposed algorithm achieves better performance in controlling block difficulty.
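The abstract specifies a two-layer neural network but not its inputs, layer sizes, or update rule. The sketch below is only a minimal illustration of the idea under assumed choices: a window of recent block intervals (normalized against a target block time) goes in, and a bounded multiplicative difficulty update comes out. The window size, hidden width, step size, and random weights are all hypothetical, not the paper's trained parameters.

```python
# Illustrative sketch only: a two-layer network mapping recent block
# intervals to a bounded difficulty update. Architecture and constants
# are assumptions; the paper's exact design is not in the abstract.
import numpy as np

TARGET_INTERVAL = 13.0  # assumed target block time in seconds (Ethereum-like)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 4))   # layer 1: 8 recent intervals -> 4 hidden units
W2 = rng.normal(scale=0.1, size=(4, 1))   # layer 2: 4 hidden units -> 1 adjustment signal

def adjust_difficulty(difficulty: float, recent_intervals: np.ndarray) -> float:
    """Map the last 8 observed block intervals to a multiplicative update."""
    x = recent_intervals / TARGET_INTERVAL - 1.0   # normalized deviation from target
    h = np.tanh(x @ W1)                            # hidden layer
    delta = np.tanh(h @ W2).item()                 # bounded adjustment in (-1, 1)
    return difficulty * (1.0 + 0.1 * delta)        # small step -> low volatility

# Usage: feed the observed intervals of the last 8 blocks.
intervals = np.array([15.2, 12.1, 13.8, 11.0, 14.5, 12.9, 13.3, 16.0])
print(adjust_difficulty(2.5e15, intervals))
```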




Read also

An important feature of Proof-of-Work (PoW) blockchains is full dynamic availability, allowing miners to go online and offline while requiring only that a majority of the online miners be honest. Existing Proof-of-Stake (PoS), Proof-of-Space and related protocols achieve this property only partially, either by adding the assumption that adversarial nodes are online from the beginning and no new adversarial nodes come online afterwards, or by using additional trust assumptions for newly joining nodes. We propose a new PoS protocol, PoSAT, which provably achieves full dynamic availability without any additional assumptions. The protocol is based on the longest chain and uses a Verifiable Delay Function for the block-proposal lottery to provide an arrow of time. The security analysis of the protocol draws on the recently proposed technique of Nakamoto blocks as well as the theory of branching random walks. An additional feature of PoSAT is the complete unpredictability of who will get to propose a block next, even by the winner itself. This unpredictability is at the same level as that of PoW protocols and is stronger than that of existing PoS protocols using Verifiable Random Functions.
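As a rough illustration of the VDF-based proposal lottery described above, the toy sketch below uses iterated SHA-256 as a stand-in for sequential computation. A real VDF additionally admits fast verification, which this omits entirely, and the stake-proportional threshold rule is an assumption, not PoSAT's exact construction.

```python
# Toy sketch of a VDF-style block-proposal lottery in the spirit of PoSAT.
# Iterated SHA-256 only stands in for sequential work; a real VDF is also
# efficiently verifiable. The threshold rule below is an assumption.
import hashlib

def propose(parent_hash: bytes, secret_key: bytes, stake_fraction: float,
            max_steps: int = 1_000_000):
    """Iterate a sequential hash; win when the output lands under a
    stake-proportional threshold. Returns the winning step, or None."""
    threshold = int(stake_fraction * 2**256)
    state = hashlib.sha256(parent_hash + secret_key).digest()
    for step in range(1, max_steps + 1):
        state = hashlib.sha256(state).digest()      # sequential: no parallel speedup
        if int.from_bytes(state, "big") < threshold:
            return step  # unpredictable, even to the winner, until reached
    return None

print(propose(b"parent", b"node-key", stake_fraction=1e-4))
```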
A blockchain is a database of sequential events that is maintained by a distributed group of nodes. A key consensus problem in blockchains is determining the next block (data element) in the sequence. Many blockchains address this by electing a new node to propose each new block. The new block is (typically) appended to the tip of the proposer's local blockchain and subsequently broadcast to the rest of the network. Without network delay (or adversarial behavior), this procedure would give a perfect chain, since each proposer would have the same view of the blockchain. A major challenge in practice is forking. Due to network delays, a proposer may not yet have the most recent block and may therefore create a side chain that branches from the middle of the main chain. Forking reduces throughput, since only a single main chain can survive and all other blocks are discarded. We propose a new P2P protocol for blockchains called Barracuda, in which each proposer, prior to proposing a block, polls $\ell$ other nodes for their local blocktree information. Under a stochastic network model, we prove that this lightweight primitive improves throughput as if the entire network were a factor of $\ell$ faster. We provide guidelines on how to implement Barracuda in practice, guaranteeing robustness against several real-world factors.
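A minimal sketch of the polling primitive under assumed scaffolding (the Node class and the depth-only view of a blocktree are hypothetical simplifications): before proposing, a node samples ell peers and extends the deepest tip it learns of.

```python
# Minimal sketch of Barracuda's polling primitive: before proposing, ask
# `ell` random peers for their local chain tips and build on the deepest
# one seen. The Node/network scaffolding here is hypothetical.
import random

class Node:
    def __init__(self):
        self.tip_depth = 0   # depth of the deepest block this node has seen

def propose_with_polling(proposer: Node, peers: list, ell: int) -> int:
    """Poll ell peers, adopt the deepest known tip, then append a block."""
    sampled = random.sample(peers, ell)
    best = max(proposer.tip_depth, *(p.tip_depth for p in sampled))
    proposer.tip_depth = best + 1   # new block extends the deepest known tip
    return proposer.tip_depth

# Usage: one peer is ahead; polling 5 of 20 peers may discover its tip.
peers = [Node() for _ in range(20)]
peers[3].tip_depth = 7
print(propose_with_polling(Node(), peers, ell=5))
```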
The blockchain data structure maintained via the longest-chain rule, popularized by Bitcoin, is a powerful algorithmic tool for consensus algorithms. Such algorithms achieve consistency for blocks in the chain as a function of their depth from the end of the chain. While the analysis of Bitcoin guarantees consistency with error $2^{-k}$ for blocks of depth $O(k)$, the state of the art for proof-of-stake (PoS) blockchains suffers from a quadratic dependence on $k$: these protocols, exemplified by Ouroboros (Crypto 2017), Ouroboros Praos (Eurocrypt 2018) and Sleepy Consensus (Asiacrypt 2017), can only establish that depth $\Theta(k^2)$ is sufficient. Whether this quadratic gap is an intrinsic limitation of PoS, due to issues such as the nothing-at-stake problem, has been an urgent open question, as deployed PoS blockchains further rely on consistency for protocol correctness. We give an axiomatic theory of blockchain dynamics that permits rigorous reasoning about the longest-chain rule and achieves, in broad generality, a $\Theta(k)$ dependence on depth for consistency error $2^{-k}$. In particular, for the first time, we show that PoS protocols can match proof-of-work protocols for linear consistency. We analyze the associated stochastic process, give a recursive relation for the critical functionals of this process, and derive tail bounds in both i.i.d. and martingale settings via associated generating functions.
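For quick reference, the consistency bounds claimed in this abstract can be written side by side (here $d$ is a block's depth from the chain tip and $\varepsilon$ the consistency error):

```latex
\begin{align*}
  \text{PoW (Bitcoin analysis):} \quad & d = O(k) \;\Rightarrow\; \varepsilon \le 2^{-k},\\
  \text{Prior PoS analyses:}     \quad & d = \Theta(k^2) \text{ suffices for } \varepsilon \le 2^{-k},\\
  \text{This work (PoS):}        \quad & d = \Theta(k) \text{ suffices for } \varepsilon \le 2^{-k}.
\end{align*}
```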
We study efficiency in a proof-of-work blockchain with non-zero latencies, focusing in particular on the (inequality in) individual miners' efficiencies. Prior work attributed differences in miners' efficiencies mostly to attacks, but we pursue a different question: can inequality in miners' efficiencies be explained by delays, even when all miners are honest? Traditionally, such efficiency-related questions were tackled only at the level of the overall system, and in a peer-to-peer (P2P) setting where miners connect directly to one another. Although it is common today for miners to pool compute capacities in a mining pool managed by a centralized coordinator, efficiency in such a coordinated setting has barely been studied. In this paper, we propose a simple model of a proof-of-work blockchain with latencies for both the P2P and the coordinated settings. We derive a closed-form expression for the efficiency in the coordinated setting with an arbitrary number of miners and arbitrary latencies, both for the overall system and for each individual miner. We leverage this result to show that inequalities arise from variability in the delays, but that if all miners are equidistant from the coordinator, they have equal efficiency irrespective of their compute capacities. We then prove that, under a natural consistency condition, the overall system efficiency in the P2P setting is higher than that in the coordinated setting. Finally, we perform a simulation-based study to demonstrate that even in the P2P setting delays between miners introduce inequalities, and that there is a more complex interplay between delays and compute capacities.
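The paper's closed-form expression is not reproduced in the abstract, but a toy event-driven simulation can illustrate the qualitative claim that delay variability, not compute capacity, drives efficiency inequality in the coordinated setting. The acceptance rule below (a block is wasted when its finder's view of the chain is stale) is an assumed simplification, not the paper's exact model.

```python
# Toy simulation (assumptions throughout): miners attached to a central
# coordinator, with miner i learning of each new block after delay d[i].
# A block found while its finder's view is stale is discarded, so
# per-miner efficiency = accepted blocks / found blocks.
import random

def simulate(rates, delays, horizon=50_000.0, seed=1):
    random.seed(seed)
    t, last_block = 0.0, -1e9
    found, accepted = [0] * len(rates), [0] * len(rates)
    total = sum(rates)
    while t < horizon:
        t += random.expovariate(total)                        # next mining event
        i = random.choices(range(len(rates)), weights=rates)[0]
        found[i] += 1
        if t - last_block >= delays[i]:                       # view is fresh: accepted
            accepted[i] += 1
            last_block = t
    return [a / f for a, f in zip(accepted, found)]

# Equal hash rates, unequal distances from the coordinator: efficiencies differ.
print(simulate(rates=[1.0, 1.0, 1.0], delays=[0.1, 1.0, 5.0]))
```

Note that in this toy model acceptance depends only on a miner's delay, so equidistant miners come out equally efficient regardless of their rates, matching the abstract's claim.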
Swanand Kadhe, Jichan Chung, 2019
Full nodes, which synchronize the entire blockchain history and independently validate all blocks, form the backbone of any blockchain network by playing a vital role in ensuring security properties. On the other hand, a user running a full node pays a heavy price in storage costs. For example, the Bitcoin blockchain has grown to over 215 GB despite its low throughput, while the ledger of the high-throughput blockchain Ripple has already reached 9 TB and is growing at an astonishing rate of 12 GB per day. In this paper, we propose an architecture based on fountain codes, a class of erasure codes, that enables any full node to encode validated blocks into a small number of coded blocks, thereby reducing its storage costs by orders of magnitude. In particular, our proposed Secure Fountain (SeF) architecture can achieve a near-optimal trade-off between the storage savings per node and the bootstrap cost, in terms of the number of (honest) storage-constrained nodes a new node needs to contact to recover the blockchain. A key technical innovation in SeF codes is making fountain codes secure against adversarial nodes that can provide maliciously formed coded blocks. Our idea is to use the header-chain as side information to check whether a coded block is maliciously formed while it is being decoded. Further, the rateless property of fountain codes helps achieve high decentralization and scalability. Our experiments demonstrate that SeF codes tuned to achieve 1000x storage savings enable full nodes to encode the 191 GB Bitcoin blockchain into 195 MB on average. A new node can recover the blockchain from an arbitrary set of storage-constrained nodes as long as the set contains about 1100 honest nodes on average. Note that for 1000x storage savings, the fundamental bound on the number of honest nodes to contact is 1000: we need about 10% more in practice.
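A much-simplified sketch of the fountain-coding idea behind SeF: each stored "droplet" is the XOR of a random subset of validated blocks, and a peeling decoder recovers the originals from enough droplets. The degree distribution, block framing, and the point where SeF's header-chain check would occur are all simplified assumptions here.

```python
# Minimal LT-fountain-style sketch in the spirit of SeF. A full node keeps
# a few coded droplets (XORs of random block subsets) instead of all blocks.
import random

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks, num_droplets, seed=0):
    """Each droplet = (indices of the XORed blocks, XOR of their payloads)."""
    rng = random.Random(seed)
    droplets = []
    for _ in range(num_droplets):
        degree = rng.choice([1, 2, 2, 3, 4])            # toy degree distribution
        idx = rng.sample(range(len(blocks)), degree)
        payload = blocks[idx[0]]
        for i in idx[1:]:
            payload = xor_bytes(payload, blocks[i])
        droplets.append((idx, payload))
    return droplets

def decode(droplets, num_blocks):
    """Peeling decoder: repeatedly resolve degree-1 droplets and subtract."""
    known = {}
    pending = [(set(idx), payload) for idx, payload in droplets]
    progress = True
    while progress and len(known) < num_blocks:
        progress = False
        for idx, payload in pending:
            unresolved = idx - known.keys()
            if len(unresolved) == 1:
                i = unresolved.pop()
                for j in idx - {i}:
                    payload = xor_bytes(payload, known[j])
                known[i] = payload  # SeF would check this against the header-chain
                progress = True
    return known

blocks = [bytes([i]) * 32 for i in range(8)]
recovered = decode(encode(blocks, num_droplets=24), num_blocks=8)
print(f"recovered {len(recovered)} of {len(blocks)} blocks")
```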