
From Innovations to Prospects: What Is Hidden Behind Cryptocurrencies?

Added by Ang Jia
Publication date: 2021
Research language: English





The great influence of Bitcoin has promoted the rapid development of blockchain-based digital currencies, especially altcoins, since 2013. However, most altcoins share similar source code, raising concerns about the extent of their code innovation. In this paper, we carry out an empirical study on existing altcoins to offer a thorough understanding of various aspects of altcoin innovation. First, we construct a dataset of altcoins, including source code repositories, GitHub fork relations, and market capitalizations. Then, we analyze altcoin innovation from the perspective of source code similarity. The results demonstrate that more than 85% of altcoin repositories exhibit high code similarity. Next, we propose a temporal clustering algorithm to mine the inheritance relationships among altcoins. We construct the family pedigrees of altcoins, in which altcoins exhibit evolutionary features similar to those found in biology, such as a power-law distribution of family sizes and variety in family evolution. Finally, we investigate the correlation between code innovation and market capitalization. Although we fail to predict altcoin prices from their code similarities, the results show that altcoins with higher innovation have better market prospects.
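To illustrate the kind of similarity measurement involved, the following minimal sketch computes a token-level Jaccard similarity between two source snippets. This is a stand-in, not the paper's actual metric: the snippets, the tokenizer, and the scoring function are all hypothetical.

```python
# Illustrative sketch only: a token-level Jaccard similarity as a
# stand-in for repository-level code similarity. The snippets below
# are invented for this example.
import re

def tokenize(source: str) -> set[str]:
    """Split source code into a set of identifier/keyword tokens."""
    return set(re.findall(r"[A-Za-z_]\w+", source))

def jaccard_similarity(src_a: str, src_b: str) -> float:
    """Jaccard similarity of two token sets: |A & B| / |A | B|."""
    a, b = tokenize(src_a), tokenize(src_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

bitcoin_snippet = "int main() { return verify_block(chain); }"
altcoin_snippet = "int main() { return verify_block(alt_chain); }"
score = jaccard_similarity(bitcoin_snippet, altcoin_snippet)
print(f"similarity: {score:.2f}")  # -> similarity: 0.67
```

Applied pairwise across repositories, scores near 1.0 would flag the kind of near-duplicate codebases the study reports for most altcoins.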



Related research

We show that coalescence of nucleons emitted prior to thermalization in highly excited nuclei can explain the anomaly in the kinetic energies of helium clusters. A new coalescence algorithm has been included in the statistical approach to nuclear reactions formerly used to describe intermediate-mass fragment production.
Different studies have reported a power-law mass-size relation $M \propto R^q$ for ensembles of molecular clouds. In the case of nearby clouds, the index of the power law, $q$, is close to 2. However, for clouds spread all over the Galaxy, indexes larger than 2 are reported. We show that indexes larger than 2 could be the result of line-of-sight superposition of emission that does not belong to the cloud itself. We found that a random factor of gas contamination, between 0.001% and 10% of the line of sight, allows us to reproduce the mass-size relation with $q \sim 2.2$-$2.3$ observed in Galactic CO surveys. Furthermore, for dense cores within a single cloud, or molecular clouds within a single galaxy, we argue that, even in these cases, there is observational and theoretical evidence that some degree of superposition may be occurring. However, additional effects may be present in each case, and these are briefly discussed. We also argue that defining the fractal dimension of clouds via the mass-size relation is not adequate, since the mass is not necessarily a proxy for the area, and the size reported in $M$-$R$ relations is typically obtained from the square root of the area, rather than from an estimate of the size independent of the area. Finally, we argue that a statistical analysis finding clouds that satisfy Larson's relations does not mean that each individual cloud is in virial equilibrium.
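The power-law index $q$ in $M \propto R^q$ is conventionally recovered as the slope of a least-squares fit in log-log space. The sketch below demonstrates this on synthetic clouds that follow an exact $M = R^2$ relation; the radii and masses are invented for illustration.

```python
# Minimal sketch: recover the power-law index q in M ∝ R^q as the
# least-squares slope of log M versus log R. The synthetic clouds
# below follow an exact M = R^2 relation, so the fit returns q = 2.
import math

def fit_power_law_index(radii, masses):
    """Least-squares slope of log M vs log R, i.e. the index q."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

radii = [1.0, 2.0, 4.0, 8.0, 16.0]
masses = [r ** 2 for r in radii]  # exact M = R^2 by construction
print(f"q = {fit_power_law_index(radii, masses):.2f}")  # -> q = 2.00
```

Adding line-of-sight contamination to the masses before fitting, as the abstract describes, would bias the recovered slope above 2.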
Jesus Zavala Ruiz, 2019
Software engineering (SE) is undergoing an ontological crisis, and it lacks a theory. Why? Among other reasons, because it has always succumbed to the pragmatism demanded by commercial and political interests and abandoned any intention of becoming a science rather than a professional discipline. To begin a discussion toward defining a theory of software, one first needs to know what software is.
Software systems have been continuously evolved and delivered with high quality thanks to the widespread adoption of automated tests. A recurring issue in this scenario is the presence of flaky tests: test cases that may pass or fail non-deterministically. A promising approach, though one still lacking empirical evidence, is to collect static data from automated tests and use it to predict their flakiness. In this paper, we conducted an empirical study to assess the use of code identifiers to predict test flakiness. To do so, we first replicated most parts of the previous study of Pinto et al. (MSR 2020). This replication was extended by using a different ML Python platform (Scikit-learn) and adding different learning algorithms to the analyses. Then, we validated the performance of the trained models using datasets with other flaky tests and from different projects. We successfully replicated the results of Pinto et al. (2020), with minor differences under Scikit-learn; the additional algorithms performed similarly to the ones used previously. Concerning the validation, we noticed that the recall of the trained models was smaller, and the classifiers presented a varying range of decreases. This was observed in both intra-project and inter-project test flakiness prediction.
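As a toy illustration of identifier-based flakiness prediction: the study itself trains Scikit-learn classifiers over real test corpora, but the idea can be sketched in pure Python with a 1-nearest-neighbour rule over identifier sets. The classifier choice and the labelled examples below are stand-ins invented here, not the study's setup.

```python
# Toy sketch of identifier-based flaky-test prediction. A 1-nearest-
# neighbour rule over identifier sets stands in for the Scikit-learn
# classifiers used in the study; the labelled tests are invented.
import re

def identifiers(test_source: str) -> set[str]:
    """Extract identifier tokens from a test's source code."""
    return set(re.findall(r"[A-Za-z_]\w+", test_source))

def predict_flaky(candidate: str, labelled: list[tuple[str, bool]]) -> bool:
    """Label the candidate like its most identifier-similar labelled test."""
    def overlap(src: str) -> float:
        a, b = identifiers(candidate), identifiers(src)
        return len(a & b) / max(len(a | b), 1)
    _, best_label = max(labelled, key=lambda item: overlap(item[0]))
    return best_label

training = [
    ("def test_sum(): assert add(1, 2) == 3", False),          # stable
    ("def test_api(): sleep(1); assert fetch_remote().ok", True),  # flaky
]
new_test = "def test_timeout(): sleep(2); assert fetch_remote().status == 200"
print(predict_flaky(new_test, training))  # -> True
```

The new test shares timing- and network-related identifiers (`sleep`, `fetch_remote`) with the flaky example, so it is predicted flaky, which is the intuition behind using static code identifiers as features.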
We study the constraints imposed by perturbative unitarity on the new-physics interpretation of the muon $g-2$ anomaly. Within a Standard Model Effective Field Theory (SMEFT) approach, we find that scattering amplitudes sourced by effective operators saturate perturbative unitarity at about 1 PeV. This corresponds to the highest energy scale that needs to be probed in order to resolve the new-physics origin of the muon $g-2$ anomaly. On the other hand, simplified models (e.g. scalar-fermion Yukawa theories) in which renormalizable couplings are pushed to the boundary of perturbativity still imply new on-shell states below 200 TeV. We finally suggest that the highest new-physics scale responsible for the anomalous effect can be reached in non-renormalizable models at the PeV scale.
