Sharding distributed ledgers is the most promising on-chain solution for scaling blockchain technology. In this work, we define and analyze the properties a sharded distributed ledger should fulfill. More specifically, we show that a sharded blockchain cannot be scalable under a fully adaptive adversary, but it can scale up to $O(n/\log n)$ under an epoch-adaptive adversary. This is possible only if the distributed ledger creates succinct proofs of the valid state updates at the end of each epoch. Our model builds upon and extends the Bitcoin backbone protocol by defining consistency and scalability. Consistency encompasses the need for atomic execution of cross-shard transactions to preserve safety, whereas scalability encapsulates the speedup a sharded system can gain in comparison to a non-sharded system. We introduce a protocol abstraction and highlight the components sufficient for secure and efficient sharding in our model. To demonstrate the power of our framework, we analyze the most prominent sharded blockchains (Elastico, Monoxide, OmniLedger, RapidChain) and pinpoint where they fail to meet the desired properties.
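The consistency property above requires cross-shard transactions to execute atomically: either every involved shard applies its part of the transaction or none does. The following is a minimal sketch of a lock/commit (two-phase-commit style) pattern that illustrates this requirement; the `Shard` class, its fields, and the transaction format are assumptions made for the example, not the construction analyzed in this work.

```python
# Illustrative sketch of atomic cross-shard execution (lock/commit pattern).
# The data model here is assumed for the example and is not the paper's protocol.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Shard:
    name: str
    balances: Dict[str, int] = field(default_factory=dict)
    locks: Dict[str, Tuple[str, int]] = field(default_factory=dict)

    def prepare(self, tx_id: str, account: str, amount: int) -> bool:
        """Phase 1: lock the requested input if the balance covers it."""
        if self.balances.get(account, 0) >= amount:
            self.balances[account] -= amount
            self.locks[tx_id] = (account, amount)
            return True
        return False

    def commit(self, tx_id: str) -> None:
        """Phase 2 (commit): consume the lock, making the debit final."""
        self.locks.pop(tx_id, None)

    def abort(self, tx_id: str) -> None:
        """Phase 2 (abort): release the lock and restore the balance."""
        if tx_id in self.locks:
            account, amount = self.locks.pop(tx_id)
            self.balances[account] += amount

    def credit(self, account: str, amount: int) -> None:
        self.balances[account] = self.balances.get(account, 0) + amount


def atomic_cross_shard_tx(tx_id: str,
                          inputs: List[Tuple[Shard, str, int]],
                          output: Tuple[Shard, str, int]) -> bool:
    """Either all input shards debit and the output shard credits, or no state changes."""
    prepared = []
    for shard, account, amount in inputs:
        if shard.prepare(tx_id, account, amount):
            prepared.append(shard)
        else:
            for p in prepared:          # any failed lock -> roll back everything
                p.abort(tx_id)
            return False
    for shard in prepared:
        shard.commit(tx_id)
    out_shard, out_account, out_amount = output
    out_shard.credit(out_account, out_amount)
    return True


if __name__ == "__main__":
    s1, s2, s3 = Shard("A", {"alice": 5}), Shard("B", {"bob": 4}), Shard("C")
    ok = atomic_cross_shard_tx("tx-1",
                               [(s1, "alice", 5), (s2, "bob", 4)],
                               (s3, "carol", 9))
    print(ok, s1.balances, s2.balances, s3.balances)
    # True {'alice': 0} {'bob': 0} {'carol': 9}
```

If any input shard cannot lock its funds, all previously acquired locks are released, so no shard is left in a partially updated state; this all-or-nothing behavior is what the consistency property demands of cross-shard transactions.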
Blockchain- and smart-contract-enabled security mechanisms for IoT applications have recently been reported for urban, financial, and network services. However, due to the power-intensive and low-throughput consensus mechanisms of existing blockchains …
In this paper we present the initial design of the Minerva consensus protocol for Truechain, along with other technical details. Currently, it is widely believed in the blockchain community that a public chain cannot simultaneously achieve high performance, decentralization …
Existing blockchain systems scale poorly because of their distributed consensus protocols. Current attempts at improving blockchain scalability are limited to cryptocurrency. Scaling blockchain systems under general workloads (i.e., non-cryptocurrency …
Cryptocurrencies, implemented with blockchain protocols, promise to become a global payment system if they can overcome performance limitations. Rapidly advancing architectures improve on latency and throughput, but most require all participating servers …
Scale of data and scale of computation infrastructures together enable the current deep learning renaissance. However, training large-scale deep architectures demands both algorithmic improvement and careful system configuration. In this paper, we focus …