
Identifying Impacts of Protocol and Internet Development on the Bitcoin Network

Published by Kazuyuki Shudo
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Improving transaction throughput is an important challenge for Bitcoin. However, shortening the block generation interval or increasing the block size to improve throughput makes sharing blocks within the network slower and increases the number of orphan blocks; consequently, the security of the blockchain is sacrificed. To mitigate this, the block propagation delay must be reduced. Thanks to new Bitcoin protocols and improvements to the Internet, the block propagation delay in the Bitcoin network has been shortened in recent years. In this study, we identify the impacts of compact block relay, an up-to-date Bitcoin protocol, and of Internet improvements on the block propagation delay and fork rate in the Bitcoin network from 2015 to 2019. Existing measurement studies could not separate these factors, but our simulation makes this possible. The experimental results reveal that compact block relay contributes more to shortening the block propagation delay than Internet improvements do: the block propagation delay is reduced by 64.5% for the 50th percentile and 63.7% for the 90th percentile due to Internet improvements, and by 90.1% for the 50th percentile and 87.6% for the 90th percentile due to compact block relay.
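The mechanism described above, where slower block propagation relative to the block generation interval raises the orphan (fork) rate, can be illustrated with a small back-of-the-envelope simulation. The sketch below is not the simulator used in the study; it assumes a memoryless mining process and a single network-wide propagation delay, both simplifications introduced here for illustration only.

```python
import random

def simulate_fork_rate(block_interval, propagation_delay, num_blocks=100_000):
    """Estimate the fork (orphan) rate under a memoryless mining model:
    a fork occurs when a competing block is found somewhere in the network
    before the previous block has finished propagating."""
    forks = 0
    for _ in range(num_blocks):
        # time until a competing block is found (exponential inter-block times)
        time_to_next_block = random.expovariate(1.0 / block_interval)
        if time_to_next_block < propagation_delay:
            forks += 1
    return forks / num_blocks

# Shorter propagation delays directly reduce the fork rate.
print(simulate_fork_rate(600, 10))  # ~1.7% with a 10 s propagation delay
print(simulate_fork_rate(600, 5))   # ~0.8% with a 5 s propagation delay
```

In this toy model, halving the propagation delay roughly halves the fork rate, which is why protocols that cut propagation delay, such as compact block relay, directly strengthen the chain's resistance to forks.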




Read also

Due to the pseudo-anonymity of the Bitcoin network, users can hide behind bitcoin addresses that can be generated in unlimited quantity, on the fly, without any formal links between them. Thus, the network is used for payment transfer by the actors involved in ransomware and other illegal activities. The other activity we consider is gambling, since gambling is often used for transferring illegal funds. The question addressed here is: given temporally limited graphs of Bitcoin transactions, to what extent can one identify common patterns associated with these fraudulent activities and apply them to find other ransomware actors? The problem is rather complex, given that thousands of addresses can belong to the same actor without any obvious links between them or any common pattern of behavior. The main contribution of this paper is to introduce and apply new algorithms for local clustering and supervised graph machine learning for identifying malicious actors. We show that very local subgraphs of the known such actors are sufficient to differentiate between ransomware, random, and gambling actors with 85% prediction accuracy on the test data set.
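As a rough illustration of this kind of pipeline, the sketch below extracts simple statistics from the local (ego) subgraph around each labelled address and feeds them to an off-the-shelf classifier. The toy graph, the feature set, and the RandomForestClassifier stand-in are assumptions made here for illustration; they are not the local clustering or graph machine learning algorithms introduced in the paper.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def local_subgraph_features(G, address, radius=2):
    """Simple statistics of the radius-hop ego subgraph around an address."""
    sub = nx.ego_graph(G, address, radius=radius)
    degrees = [d for _, d in sub.degree()]
    return [
        sub.number_of_nodes(),
        sub.number_of_edges(),
        float(np.mean(degrees)),
        float(np.max(degrees)),
        nx.density(sub),
    ]

# Toy stand-in for a Bitcoin transaction graph and labelled seed addresses.
G = nx.gnm_random_graph(200, 600, seed=1)
labelled_addresses = list(range(60))
labels = ["ransomware" if a % 3 == 0 else "gambling" if a % 3 == 1 else "random"
          for a in labelled_addresses]  # hypothetical labels for illustration

X = [local_subgraph_features(G, a) for a in labelled_addresses]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([local_subgraph_features(G, 150)]))  # classify an unseen address
```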
This is the first paper to address the topology structure of the Job Edge-Fog interconnection network from the perspective of network creation games. A two-level network creation game model is given, in which the first level is similar to the traditional network creation game with a total-distance objective to other nodes. The second level adopts two types of cost functions: one based on the Jackson-Wolinsky type of distance-based utility, and another based on the Network-Only Cost from the IoT literature. We analyze the performance of this two-level game (Price of Anarchy). This work discloses how the selfish strategies of each individual device can influence the global topology structure of the job edge-fog interconnection network and provides theoretical foundations for IoT infrastructure construction. A significant advantage of this framework is that it avoids solving the traditional, expensive, and impractical quadratic assignment problem, which was the typical framework used to study this task. Furthermore, it can control the systematic performance based on only one or two cost parameters of the job edge-fog networks, independently and in a distributed way.
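For context on the distance-based utility mentioned above, a minimal sketch of the standard Jackson-Wolinsky connections-model payoff is given below: benefits decay geometrically with graph distance, and each incident link carries a cost. The delta and link_cost parameters are illustrative assumptions; the paper's actual second-level cost functions are defined differently and are not reproduced here.

```python
import networkx as nx

def jackson_wolinsky_utility(G, i, delta=0.5, link_cost=1.0):
    """Distance-based payoff of node i: sum of delta**d(i, j) over all other
    reachable nodes j, minus a cost for each link node i maintains."""
    dist = nx.single_source_shortest_path_length(G, i)
    benefit = sum(delta ** d for node, d in dist.items() if node != i)
    return benefit - link_cost * G.degree(i)

# Example: the centre of a star reaches everyone in one hop but pays for all links.
star = nx.star_graph(4)                    # node 0 is the centre
print(jackson_wolinsky_utility(star, 0))   # 4*0.5 - 4*1.0 = -2.0
print(jackson_wolinsky_utility(star, 1))   # 0.5 + 3*0.25 - 1.0 = 0.25
```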
We study a process of averaging in a distributed system with noisy communication. Each of the agents in the system starts with some value, and the goal of each agent is to compute the average of all the initial values. In each round, one pair of agents is drawn uniformly at random from the whole population; the two agents communicate with each other, and each updates its local value based on its own value and the received message. The communication is noisy: whenever an agent sends a value $v$, the receiving agent receives $v+N$, where $N$ is a zero-mean Gaussian random variable. The two quality measures of interest are (i) the total sum of squares $TSS(t)$, which measures the sum of square distances from the current loads to the initial average, and (ii) $\bar{\phi}(t)$, which measures the sum of square distances from the current loads to the running average (the average at time $t$). It is known that the simple averaging protocol, in which an agent sends its current value and sets its new value to the average of the received value and its current value, eventually converges to a state where $\bar{\phi}(t)$ is small. It has been observed that $TSS(t)$, due to the noise, eventually diverges, and previous research, mostly in control theory, has focused on showing eventual convergence with respect to the running average. We obtain the first probabilistic bounds on the convergence time of $\bar{\phi}(t)$ and precise bounds on the drift of $TSS(t)$, showing that although $TSS(t)$ eventually diverges, for a wide and interesting range of parameters it stays small for a number of rounds that is polynomial in the number of agents. Our results extend to the synchronous setting and to settings where the agents are restricted to discrete values and perform rounding.
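A minimal simulation of the pairwise averaging protocol described above, assuming Gaussian channel noise with a fixed standard deviation sigma; the function and parameter names are illustrative, not taken from the paper.

```python
import random

def noisy_averaging(values, rounds, sigma):
    """Pairwise averaging with zero-mean Gaussian channel noise.
    Returns, per round, (TSS, phi_bar): squared distances of the loads to the
    initial average and to the current running average, respectively."""
    x = list(values)
    n = len(x)
    mu0 = sum(x) / n                        # initial average, fixed
    history = []
    for _ in range(rounds):
        i, j = random.sample(range(n), 2)   # one pair, drawn uniformly at random
        # each agent receives the other's value corrupted by Gaussian noise
        recv_i = x[j] + random.gauss(0.0, sigma)
        recv_j = x[i] + random.gauss(0.0, sigma)
        x[i] = (x[i] + recv_i) / 2
        x[j] = (x[j] + recv_j) / 2
        mu_t = sum(x) / n                   # running average at this round
        history.append((sum((v - mu0) ** 2 for v in x),
                        sum((v - mu_t) ** 2 for v in x)))
    return history

# phi_bar shrinks quickly while TSS slowly drifts upwards due to the noise.
trace = noisy_averaging([random.uniform(0, 100) for _ in range(50)], 5000, sigma=0.1)
print(trace[0], trace[-1])
```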
Bo Fang, Daoce Wang, Sian Jin (2021)
In recent years, the increasing complexity of scientific simulations and emerging demands for training heavy artificial intelligence models require massive and fast data accesses, which urges high-performance computing (HPC) platforms to be equipped with more advanced storage infrastructures such as solid-state disks (SSDs). While SSDs offer high-performance I/O, the reliability challenges faced by HPC applications under SSD-related failures remain unclear, in particular for failures resulting in data corruption. The goal of this paper is to understand the impact of SSD-related faults on the behavior of complex HPC applications. To this end, we propose FFIS, a FUSE-based fault injection framework that systematically introduces storage faults into the application layer to model errors originating from SSDs. FFIS is able to plant different I/O-related faults into the data returned from underlying file systems, which enables investigation of the error resilience characteristics of scientific file formats. We demonstrate the use of FFIS with three representative real HPC applications, showing how each application reacts to data corruption, and provide insights on the error resilience of the widely adopted HDF5 file format for HPC applications.
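A user-space stand-in for the idea of injecting storage faults between the file system and the application is sketched below. It is not FFIS, which works at the FUSE layer; the function names and the corruption probability are hypothetical and serve only to illustrate bit-flip style corruption on the read path.

```python
import random

def flip_random_bit(buf: bytes) -> bytes:
    """Return a copy of buf with one randomly chosen bit flipped,
    mimicking a silent data corruption in data read back from storage."""
    if not buf:
        return buf
    data = bytearray(buf)
    pos = random.randrange(len(data))
    data[pos] ^= 1 << random.randrange(8)
    return bytes(data)

def read_with_injected_fault(path, corruption_probability=0.01):
    """Read a file and, with some probability, corrupt the bytes
    handed back to the application layer."""
    with open(path, "rb") as f:
        data = f.read()
    if random.random() < corruption_probability:
        data = flip_random_bit(data)
    return data

# Example (hypothetical file): feed possibly-corrupted input to an application
# routine and observe whether it detects the corruption, crashes, or silently degrades.
# payload = read_with_injected_fault("simulation_output.h5")
```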
Bitcoin is the first fully decentralized permissionless blockchain protocol and achieves a high level of security: the ledger it maintains has guaranteed liveness and consistency properties as long as the adversary has less compute power than the honest nodes. However, its throughput is only 7 transactions per second, and the confirmation latency can be up to hours. Prism is a new blockchain protocol designed to achieve a natural scaling of Bitcoin's performance while maintaining its full security guarantees. We present an implementation of Prism which achieves a throughput of 70,000 transactions per second and confirmation latencies of tens of seconds.