In late 2017, a sudden proliferation of malicious JavaScript was reported on the Web: browser-based mining exploited the CPU time of website visitors to mine the cryptocurrency Monero. Several studies measured the deployment of such code and developed defenses. However, previous work did not establish how many users were actually exposed to the identified mining sites, or whether common browsing behavior entailed a real risk. In this paper, we present a retroactive analysis to close this research gap. We pool large-scale, longitudinal data from several vantage points, gathered during the peak of illicit cryptomining, to measure the impact on web users. We leverage data from passive traffic monitoring of university networks and a large European ISP, combined with suspected mining sites identified in previous active scans. We corroborate our results with data from a browser extension with a large user base that tracks site visits. We also monitor open HTTP proxies and the Tor network for malicious injection of code. We find that the risk for most web users was always very low, much lower than deployment scans suggested, and that any exposure period was very brief. However, we also identify a previously unknown and exploited attack vector on mobile devices.
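The cross-referencing step described above amounts to checking each site visit observed in passive traffic against the list of suspected mining domains produced by the active scans. Purely as an illustration (the domain names and log entries below are hypothetical, not from the study), a minimal matching routine might look like this:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical blocklist of suspected mining domains (placeholders). */
static const char *suspected[] = { "miner-a.example", "miner-b.example" };

static int is_suspected(const char *host) {
    for (size_t i = 0; i < sizeof suspected / sizeof suspected[0]; i++)
        if (strcmp(host, suspected[i]) == 0)
            return 1;
    return 0;
}

int main(void) {
    /* hypothetical hostnames extracted from a passive traffic log */
    const char *visits[] = { "news.example", "miner-a.example", "shop.example" };
    size_t total = sizeof visits / sizeof visits[0], hits = 0;
    for (size_t i = 0; i < total; i++)
        if (is_suspected(visits[i]))
            hits++;
    printf("exposed visits: %zu of %zu\n", hits, total);
    return 0;
}
```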
The Web is a primary and essential service for sharing information among users and organizations all over the world. Despite the current significance of such traffic on the Internet, the so-called Surface Web is estimated to account for only about 5% of the total. The rest of this traffic corresponds to the portion of the Web known as the Deep Web. These contents are not accessible to search engines because they are protected by authentication or are only reachable through so-called darknets. Browsing darknet websites requires special authorization or specific software and configurations. Although Tor is the most widely used darknet today, there are alternatives such as I2P or Freenet, which offer different features to end users. In this work, we analyze the connectivity of websites in the I2P network (named eepsites) to discover whether they follow patterns and relationships different from those of the legacy Web, and to gain insight into the network's size and structure. For this purpose, we developed a novel tool and deployed it in a distributed scenario. Our main results confirm the decentralized nature of the I2P network: there is a structural core of interconnected eepsites, while several other nodes remain isolated, probably due to their intermittent presence in the network.
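The structural finding above, an interconnected core plus isolated eepsites, corresponds to computing connected components of the crawled link graph. As a minimal sketch (the edge list is hypothetical; the paper's tool and data are not shown in the abstract), union-find over eepsite links separates the core from the isolated nodes:

```c
#include <stdio.h>

#define N 8                      /* number of crawled eepsites (toy data) */
int parent[N];

int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }
void unite(int a, int b) { parent[find(a)] = find(b); }

int main(void) {
    /* hypothetical link edges between eepsites; sites 6 and 7 are isolated */
    int edges[][2] = { {0, 1}, {1, 2}, {2, 3}, {4, 5} };
    for (int i = 0; i < N; i++) parent[i] = i;
    for (size_t e = 0; e < sizeof edges / sizeof edges[0]; e++)
        unite(edges[e][0], edges[e][1]);
    for (int i = 0; i < N; i++)
        printf("eepsite %d -> component root %d\n", i, find(i));
    return 0;
}
```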
To accommodate the explosive growth of the Internet of Things (IoT), incorporating interference alignment (IA) into existing multiple access (MA) schemes is under investigation. However, when IA is applied in MIMO networks to improve system capacity, an information-delay problem arises that conflicts with low-latency requirements. Therefore, in this paper, we first propose a new metric, degree of delay (DoD), to quantify information delay, and we characterize DoD for three typical transmission schemes: TDMA, beamforming-based TDMA (BD-TDMA), and retrospective interference alignment (RIA). Our analysis shows that DoD mainly depends on three factors: the delay-sensitive factor, the size of the data set, and the queueing delay slot. The first two reflect the relationship between quality of service (QoS) and information delay sensitivity, and normalize the time cost per symbol, respectively. These two factors are independent of the transmission scheme, so we aim to reduce the queueing delay slot to improve DoD. To this end, we propose three novel joint IA schemes for MIMO downlink networks with different numbers of users: a hybrid antenna array based partial interference elimination and retrospective interference regeneration scheme (HAA-PIE-RIR), an HAA-based improved PIE and RIR scheme (HAA-IPIE-RIR), and an HAA-based cyclic interference elimination and RIR scheme (HAA-CIE-RIR). The second scheme extends the first from the $2$-user to the $K$-user scenario at the cost of a heavy computational burden; the third relieves this burden, though it incurs a certain degree-of-freedom (DoF) loss due to insufficient utilization of spatial resources.
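The abstract names the three factors that determine DoD but not its closed form. Purely as a labeled assumption, a multiplicative combination illustrates why fixing the first two factors leaves the queueing delay slot as the only lever:

```latex
% Illustrative assumption only: the paper's actual definition of DoD is
% not given in the abstract. \beta is the delay-sensitive factor, S the
% size of the data set, and T_q the queueing delay slot.
\[
  \mathrm{DoD} \;\propto\; \beta \cdot S \cdot T_q
\]
```

Under such a form, $\beta$ and $S$ are pinned down by the QoS requirement and the per-symbol normalization, so the proposed schemes can only improve DoD by shrinking $T_q$.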
We performed the first systematic study of a new attack on Ethereum that steals cryptocurrency. The attack exploits unprotected JSON-RPC endpoints exposed by Ethereum nodes, which attackers can use to transfer Ether and ERC20 tokens to attacker-controlled accounts. This study aims to shed light on the attack, including the malicious behaviors and the profits of the attackers. Specifically, we first designed and implemented a honeypot that captures real attacks in the wild. We then deployed the honeypot and report the results of six months of collected data. In total, our system captured more than 308 million requests from 1,072 distinct IP addresses. We further grouped the attackers into 36 groups with 59 distinct Ethereum accounts. Among them, the attackers of 34 groups were stealing Ether, while the other two groups targeted ERC20 tokens. Further behavioral analysis showed that attackers follow a three-step pattern to steal Ether. Moreover, we observed an interesting type of transaction, called a zero-gas transaction, which attackers have leveraged to steal ERC20 tokens. Finally, we estimated the overall profits of the attackers. To engage the whole community, the dataset of captured attacks is released at https://github.com/zjuicsr/eth-honey.
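For concreteness, the honeypot idea can be sketched as a server that listens on Ethereum's default JSON-RPC port and logs every request it receives. This is a minimal illustration under assumptions, not the authors' implementation (which lives in the linked repository); error handling is omitted:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8545);          /* default geth JSON-RPC port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);
    for (;;) {
        struct sockaddr_in peer;
        socklen_t len = sizeof peer;
        int c = accept(srv, (struct sockaddr *)&peer, &len);
        char buf[4096];
        ssize_t n = read(c, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            /* log source IP and raw request body for later analysis */
            printf("request from %s:\n%s\n", inet_ntoa(peer.sin_addr), buf);
        }
        /* reply with an empty JSON-RPC result to keep probers engaged */
        const char *body = "{\"jsonrpc\":\"2.0\",\"id\":1,\"result\":null}";
        char resp[256];
        int m = snprintf(resp, sizeof resp,
                         "HTTP/1.1 200 OK\r\n"
                         "Content-Type: application/json\r\n"
                         "Content-Length: %zu\r\n\r\n%s",
                         strlen(body), body);
        write(c, resp, m);
        close(c);
    }
}
```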
This retrospective paper describes the RowHammer problem in Dynamic Random Access Memory (DRAM), which was initially introduced by Kim et al. at the ISCA 2014 conference~\cite{rowhammer-isca2014}. RowHammer is a prime (and perhaps the first) example of how a circuit-level failure mechanism can cause a practical and widespread system security vulnerability. It is the phenomenon that repeatedly accessing a row in a modern DRAM chip causes bit flips in physically-adjacent rows at consistently predictable bit locations. RowHammer is caused by a hardware failure mechanism called {\em DRAM disturbance errors}, which is a manifestation of circuit-level cell-to-cell interference in a scaled memory technology. Researchers from Google Project Zero demonstrated in 2015 that this hardware failure mechanism can be effectively exploited by user-level programs to gain kernel privileges on real systems. Many other follow-up works demonstrated other practical attacks exploiting RowHammer. In this article, we comprehensively survey the scientific literature on RowHammer-based attacks as well as mitigation techniques to prevent RowHammer. We also discuss what other related vulnerabilities may be lurking in DRAM and other types of memories, e.g., NAND flash memory or Phase Change Memory, that can potentially threaten the foundations of secure systems, as the memory technologies scale to higher densities. We conclude by describing and advocating a principled approach to memory reliability and security research that can enable us to better anticipate and prevent such vulnerabilities.
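The access pattern behind RowHammer is well documented in the literature cited above: two addresses mapping to different rows of the same DRAM bank are read alternately, with cache flushes in between so that every access reaches DRAM rather than the cache. A minimal x86 sketch of that pattern (GCC/Clang inline assembly; the selection of same-bank addresses via the physical address mapping is elided):

```c
#include <stdint.h>

/* Alternately read two addresses in different rows of the same DRAM bank,
 * flushing both from the cache each iteration so every read activates a
 * DRAM row. Repeated activation can flip bits in physically adjacent rows. */
void hammer(volatile uint8_t *a, volatile uint8_t *b, long iterations) {
    for (long i = 0; i < iterations; i++) {
        (void)*a;                                   /* activate row of a */
        (void)*b;                                   /* activate row of b */
        asm volatile("clflush (%0)" :: "r"(a) : "memory");
        asm volatile("clflush (%0)" :: "r"(b) : "memory");
    }
}
```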
The last decade has seen vast interest in blockchain-based cryptocurrencies, with a specific focus on applications of this technology. However, slow transaction confirmation times and unforeseeably high fees hamper their wide adoption for micro-payments. Establishing payment channel networks is one of many proposed solutions to this scalability issue: nodes use smart contracts to establish payment channels between each other and perform transactions off-chain. However, due to the way these channels are created, each side has a certain one-way capacity for making transactions. Consequently, if one side exceeds this one-way capacity, the channel becomes useless in that direction, which causes payment failures and eventually creates an imbalance in the overall network. To keep the payment channel network sustainable, we aim in this paper to increase the overall success rate of payments by exploiting the fact that end users are usually connected to the network at multiple points (i.e., gateways), any of which can be used to initiate a payment. We propose an efficient method for selecting a user's gateway based on the gateways' ratio of inbound to outbound payment traffic. We then augment this method with a split-payment capability to further increase the success rate, especially for large transactions. Our evaluation shows that, compared with greedy and max-flow-based approaches, the proposed method achieves much higher success rates, which are further improved by split payments.
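A minimal sketch of the ratio-based gateway choice described above, under the assumption that a gateway with high inbound relative to outbound traffic is more likely to have spare one-way capacity in the sending direction (the struct fields, values, and tie-breaking are illustrative, not the paper's exact algorithm):

```c
#include <stdio.h>

/* Hypothetical per-gateway payment-traffic statistics. */
struct gateway { const char *name; double inbound; double outbound; };

/* Pick the gateway with the highest inbound/outbound traffic ratio. */
int pick_gateway(const struct gateway *g, int n) {
    int best = 0;
    for (int i = 1; i < n; i++) {
        double ri = g[i].inbound / (g[i].outbound + 1e-9);
        double rb = g[best].inbound / (g[best].outbound + 1e-9);
        if (ri > rb) best = i;
    }
    return best;
}

int main(void) {
    struct gateway gws[] = { {"gw0", 120.0, 300.0}, {"gw1", 250.0, 100.0} };
    printf("initiate payment via %s\n", gws[pick_gateway(gws, 2)].name);
    return 0;
}
```

The split-payment extension would divide a large transaction across several gateways chosen this way, so that no single channel's one-way capacity has to cover the full amount.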