The Good is Blondie, a wandering gunman with a strong personal sense of honor. The Bad is Angel Eyes, a sadistic hitman who always hits his mark. The Ugly is Tuco, a Mexican bandit who's always looking out only for himself. Against the backdrop of the BOWS contest, they search for a watermark in gold buried in three images. Each knows only a portion of the gold's exact location, so for the moment they're dependent on each other. However, none are particularly inclined to share...
DNS has long been criticized for its inherent design flaws, which make the system vulnerable to various kinds of attacks. Moreover, DNS domain names are not fully controlled by their users and can easily be taken down by authorities and registrars. Since blockchains offer unique properties such as immutability and decentralization, it appears promising to build a decentralized name service on a blockchain. Ethereum Name Service (ENS), a novel name service built atop Ethereum, has received great attention from the community. Yet, no existing work has systematically studied this emerging system, especially its security issues and misbehaviors. To fill this void, we present the first large-scale study of ENS by collecting and analyzing millions of ENS-related event logs. We characterize the ENS system from a number of perspectives. Our findings suggest that ENS has been gradually gaining popularity over its four years of evolution, mainly due to its distributed and open nature: ENS domain names can be set to any kind of record, even censored and malicious content. We have identified several security issues and misbehaviors, including traditional DNS security issues and new issues introduced by ENS smart contracts. Attackers are abusing the system with thousands of squatted ENS names, a number of scam blockchain addresses, malicious websites, etc. Our exploration suggests that our community should invest more effort into the detection and mitigation of issues in Blockchain-based Name Services towards building an open and trustworthy name service.
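As an illustration of the data-collection step only, the sketch below pulls ENS registry event logs with web3.py. This is not the authors' measurement pipeline: the RPC endpoint is a placeholder, the block range is arbitrary, and the registry address should be verified before use.

```python
# Minimal sketch (not the study's pipeline): fetching ENS registry event logs
# with web3.py. Endpoint and block range are placeholders; verify the registry
# address before relying on it.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.example/rpc"))  # hypothetical endpoint
ENS_REGISTRY = "0x00000000000C2E074eC69A0dFb2997BA6C7d2e1e"  # ENS registry (verify)

# topic0 of the NewOwner event, emitted whenever a (sub)name changes owner
new_owner_topic = Web3.keccak(text="NewOwner(bytes32,bytes32,address)").hex()

logs = w3.eth.get_logs({
    "fromBlock": 9_000_000,          # illustrative block range
    "toBlock": 9_001_000,
    "address": ENS_REGISTRY,
    "topics": [new_owner_topic],
})
for log in logs:
    # each entry records the parent node, the label hash, and the new owner
    print(log["blockNumber"], log["transactionHash"].hex())
```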
There has been considerable progress in the design and construction of quantum annealing devices. However, a conclusive detection of quantum speedup over traditional silicon-based machines remains elusive, despite multiple careful studies. In this work we outline strategies to design hard tunable benchmark instances based on insights from the study of spin glasses - the archetypal random benchmark problem for novel algorithms and optimization devices. We propose to complement head-to-head scaling studies that compare quantum annealing machines to state-of-the-art classical codes with an approach that compares the performance of different algorithms and/or computing architectures on different classes of computationally hard tunable spin-glass instances. The advantage of such an approach lies in only having to compare the performance hit felt by a given algorithm and/or architecture when the instance complexity is increased. Furthermore, we propose a methodology that might not directly translate into the detection of quantum speedup, but might elucidate whether quantum annealing has a 'quantum advantage' over corresponding classical algorithms such as simulated annealing. Our results on a 496-qubit D-Wave Two quantum annealing device are compared to recently used state-of-the-art thermal simulated annealing codes.
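For readers unfamiliar with the classical baseline, the following is a minimal simulated-annealing sketch on a random ±J Ising spin glass. The system size, couplings, and temperature schedule are illustrative choices, not those of the paper's optimized benchmarking codes.

```python
# Minimal sketch of simulated annealing on a random +/-J Ising spin glass,
# the kind of classical baseline discussed in the abstract. Size, couplings,
# and the annealing schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 64
J = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), k=1)  # random +/-J couplings
J = J + J.T                                              # symmetric, zero diagonal

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1.0, 1.0], size=N)       # random initial spin configuration
betas = np.linspace(0.1, 3.0, 2000)       # linear inverse-temperature schedule
for beta in betas:
    for _ in range(N):                    # one sweep of single-spin Metropolis updates
        i = rng.integers(N)
        dE = 2.0 * s[i] * (J[i] @ s)      # energy change from flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]

print("final energy per spin:", energy(s) / N)
```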
We examine three approaches to the problem of source classification in catalogues. Our goal is to determine the confidence with which the elements in these catalogues can be distinguished into populations on the basis of their spectral energy distribution (SED). Our analysis is based on the projection of the measurements onto a comprehensive SED model of the main signals in the considered range of frequencies. We first consider likelihood analysis, which is halfway between supervised and unsupervised methods. Next, we investigate an unsupervised clustering technique. Finally, we consider a supervised classifier based on Artificial Neural Networks. We illustrate the approach and results using catalogues from various surveys, i.e., X-ray (MCXC), optical (SDSS), and millimetric (Planck Sunyaev-Zeldovich (SZ)). We show that the statistical classifications from the three methods are in very good agreement with each other, although the supervised neural-network-based classification shows better performance, allowing the best separation of catalogue sources into reliable and unreliable populations. This last method was applied to the SZ sources detected by the Planck satellite, leading to a classification consistent with the reliability assessment published in the Planck SZ catalogue. Our method could easily be applied to catalogues from future large surveys such as SRG/eROSITA and Euclid.
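As a purely illustrative sketch of the supervised step (not the pipeline used for the Planck catalogues), the snippet below trains a small neural-network classifier on synthetic SED-like feature vectors; the number of bands, the labels, and the network size are placeholder assumptions.

```python
# Illustrative sketch only: an MLP separating "reliable" from "unreliable"
# sources using SED-like feature vectors. Features and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_sources, n_bands = 2000, 9                  # e.g. fluxes in 9 bands (assumed)
X = rng.normal(size=(n_sources, n_bands))
y = (X[:, :3].sum(axis=1) > 0).astype(int)    # toy "reliable" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```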
We address a non-unique parameter fitting problem in the context of materials science. In particular, we propose to resolve ambiguities in parameter space by augmenting a black-box artificial neural network (ANN) model with two different levels of expert knowledge, and we benchmark the resulting models against a pure black-box model.
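The toy example below conveys the underlying idea on a deliberately simplified parametric fit rather than an ANN: the data constrain only the product of two parameters, so the fit is non-unique, and an "expert knowledge" penalty resolves the ambiguity. All parameter values and weights are assumptions made for illustration.

```python
# Toy sketch (not the paper's model): the data determine only a*b, so the fit
# is non-unique; a prior penalty on a plays the role of expert knowledge.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y_obs = 2.0 * 3.0 * x + 0.05 * rng.normal(size=x.size)   # true a*b = 6

def loss(theta, prior_weight):
    a, b = theta
    data_term = np.mean((a * b * x - y_obs) ** 2)         # only sees the product a*b
    prior_term = (a - 2.0) ** 2                           # assumed expert prior: a ~ 2
    return data_term + prior_weight * prior_term

black_box = minimize(loss, x0=[1.0, 1.0], args=(0.0,))    # ambiguous: any a*b ~ 6
informed = minimize(loss, x0=[1.0, 1.0], args=(1.0,))     # pinned near a = 2
print("black-box fit   (a, b):", black_box.x)
print("expert-informed (a, b):", informed.x)
```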
Ethereum smart contracts are distributed programs running on top of the Ethereum blockchain. Since program flaws can cause significant monetary losses and can hardly be fixed due to the immutable nature of the blockchain, there is a strong need for automated analysis tools that provide formal security guarantees. Designing such analyzers, however, has proved to be challenging and error-prone. We review the existing approaches to automated, sound, static analysis of Ethereum smart contracts and highlight prevalent issues in the state of the art. Finally, we give an overview of eThor, a recent static analysis tool that we developed following a principled design and implementation approach based on rigorous semantic foundations to overcome the problems of past works.