Substantial efforts are invested in improving network security, but the threat landscape is rapidly evolving, particularly with the recent interest in programmable network hardware. We explore a new security threat from an attacker who has gained control of such devices. While it should be obvious that such attackers can trivially cause substantial damage, the challenge and novelty lie in doing so while preventing quick diagnosis by the operator. We find that compromised programmable devices can easily degrade networked applications by orders of magnitude while evading even the most sophisticated network diagnosis methods in deployment. Two key observations yield this result: (a) targeting a small number of packets is often enough to cause disproportionate performance degradation; and (b) new programmable hardware is an effective enabler of careful, selective targeting of packets. Our results also point to recommendations for minimizing the damage from such attacks, ranging from known, easy-to-implement techniques like encryption and redundant requests to more complex considerations that could limit some intended uses of programmable hardware. For data center contexts, we also discuss application-aware monitoring and response as a potential mitigation.
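As a toy illustration of observation (a), the following Python sketch (entirely hypothetical; the flow identifiers, fractions, and delays are our assumptions, not the paper's experiments) models a compromised switch that delays only a tiny fraction of one victim flow's packets:

```python
# Hypothetical illustration (not from the paper): a compromised programmable
# switch that selectively delays a tiny fraction of packets belonging to one
# victim flow, modeled here as a simple Python simulation.

import random

class CompromisedSwitch:
    """Forwards packets normally, but adds latency to a small, targeted subset."""

    def __init__(self, victim_flow, target_fraction=0.01, added_delay_ms=50.0):
        self.victim_flow = victim_flow          # (src, dst, dst_port) to target
        self.target_fraction = target_fraction  # fraction of victim packets hit
        self.added_delay_ms = added_delay_ms    # extra delay injected per hit

    def forward(self, packet):
        delay_ms = 0.05  # nominal forwarding latency
        if packet["flow"] == self.victim_flow and random.random() < self.target_fraction:
            delay_ms += self.added_delay_ms  # selective, hard-to-attribute damage
        return delay_ms

# Hitting ~1% of one flow's packets can stall TCP (timeouts, cwnd collapse)
# while aggregate loss/latency statistics barely move, evading coarse monitors.
switch = CompromisedSwitch(victim_flow=("10.0.0.1", "10.0.0.2", 443))
delays = [switch.forward({"flow": ("10.0.0.1", "10.0.0.2", 443)}) for _ in range(10_000)]
print(f"mean delay: {sum(delays) / len(delays):.3f} ms")
```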
802.11 device fingerprinting is the act of characterizing a target device through its wireless traffic. The result is a signature that may be used for identification, network monitoring, or intrusion detection. A fingerprinting method can be active, sending traffic to the target device, or passive, merely observing the traffic the target device sends. Many passive fingerprinting methods rely on the observation of a single network feature, such as the rate-switching behavior or the transmission pattern of probe requests. In this work, we evaluate a set of global wireless network parameters with respect to their ability to identify 802.11 devices. We restrict ourselves to parameters that can be observed passively using a standard wireless card. We evaluate these parameters under two different tests: i) the identification test, which returns a single result, the closest match for the target device; and ii) the similarity test, which returns a set of devices that are close to the target device. We find that the network parameters transmission time and frame inter-arrival time perform best compared to the other parameters considered. Finally, we focus on inter-arrival times, the most promising parameter for device identification, and show its dependence on several device characteristics, such as the wireless card and driver, but also on running applications.
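As a minimal sketch of how an inter-arrival-time signature might be built and matched for the identification test (the log-spaced binning and L1 distance are our assumptions, not the paper's exact method):

```python
# Hypothetical sketch: fingerprint a device by the distribution of its frame
# inter-arrival times, then identify an unknown trace by the closest stored
# signature.

import numpy as np

BINS = np.logspace(-5, 0, 50)  # inter-arrival bins from 10 us to 1 s (assumed)

def signature(frame_timestamps):
    """Normalized histogram of frame inter-arrival times."""
    deltas = np.diff(np.sort(np.asarray(frame_timestamps)))
    hist, _ = np.histogram(deltas, bins=BINS)
    return hist / max(hist.sum(), 1)

def identify(unknown_ts, known_signatures):
    """Identification test: return the single closest known device."""
    sig = signature(unknown_ts)
    return min(known_signatures, key=lambda dev: np.abs(sig - known_signatures[dev]).sum())

# Usage: build signatures from labeled captures, then match an unknown capture.
rng = np.random.default_rng(0)
known = {"dev_a": signature(np.cumsum(rng.exponential(0.01, 5000))),
         "dev_b": signature(np.cumsum(rng.exponential(0.03, 5000)))}
print(identify(np.cumsum(rng.exponential(0.01, 5000)), known))  # -> dev_a
```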
Device-to-device (D2D) communication is seen as a major technology for overcoming the imminent wireless capacity crunch and for enabling new application services. In this paper, we propose a social-aware approach to optimizing D2D communication by exploiting two layers: the social network layer and the physical wireless layer. First, we formulate the physical-layer D2D network according to users' encounter histories. Subsequently, we propose an approach based on the Indian Buffet Process to model the distribution of content across users' online social networks. Given the social relations collected by the Evolved Node B (eNB), we jointly optimize the traffic offloading process in D2D communication. In addition, we derive the Chernoff bound and an approximate cumulative distribution function (CDF) of the offloaded traffic, and we verify their accuracy in simulation. Numerical results based on real traces show that the proposed approach successfully offloads traffic from the eNBs.
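The Indian Buffet Process itself can be sampled in a few lines; the sketch below illustrates the standard IBP generative process (not the paper's full offloading model), drawing a binary user-by-content ownership matrix:

```python
# Illustrative sketch of the standard Indian Buffet Process (IBP): each user
# "tastes" previously seen contents in proportion to their popularity, plus a
# Poisson(alpha/i) number of brand-new contents.

import numpy as np

def sample_ibp(num_users, alpha, rng=np.random.default_rng(0)):
    """Return a binary user-by-content matrix drawn from IBP(alpha)."""
    content_counts = []  # how many earlier users hold each content
    rows = []
    for i in range(1, num_users + 1):
        # Existing content k is picked with probability count_k / i.
        row = [rng.random() < c / i for c in content_counts]
        for k, picked in enumerate(row):
            content_counts[k] += picked
        # Brand-new contents introduced by this user.
        new = rng.poisson(alpha / i)
        row.extend([True] * new)
        content_counts.extend([1] * new)
        rows.append(row)
    width = len(content_counts)
    return np.array([r + [False] * (width - len(r)) for r in rows])

Z = sample_ibp(num_users=20, alpha=3.0)
print(Z.shape, Z.sum(axis=0))  # contents discovered and their popularity
```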
System noise can negatively impact the performance of HPC systems, and the interconnection network is one of the main factors contributing to this problem. To mitigate this effect, adaptive routing sends packets along non-minimal paths if those are less congested. However, while this may mitigate interference caused by congestion, it also generates more traffic, since packets traverse additional hops, in turn causing congestion for other applications and for the application itself. In this paper, we first describe how to estimate network noise. Building on these estimates, we show how noise can be reduced by routing algorithms that select minimal paths with higher probability. We exploit this knowledge to design an algorithm that adjusts the probability of selecting minimal paths according to the application's characteristics. We validate our solution with microbenchmarks and real-world applications on two systems built around a Dragonfly interconnection network, showing reduced noise and improved performance.
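A minimal sketch of the kind of per-packet decision involved, assuming a UGAL-style comparison of local queue occupancies (the knobs `min_bias` and `explore_p` are hypothetical stand-ins for the paper's tuning parameters):

```python
# Minimal sketch (assumptions, not the paper's algorithm): a routing decision
# where a bias knob raises the probability of taking the minimal path,
# trading adaptivity against the extra traffic and noise of non-minimal hops.

import random

def choose_path(min_queue, nonmin_queue, min_bias=2.0, explore_p=0.9):
    """Pick 'minimal' or 'nonminimal' from local queue occupancies.

    min_bias > 1 penalizes the non-minimal path (it uses roughly 2x the hops);
    explore_p is the probability of honoring the congestion comparison at all,
    so a lower explore_p biases the router toward minimal paths and less noise.
    """
    if random.random() > explore_p:
        return "minimal"  # deterministic-minimal: less traffic, less noise
    # Non-minimal wins only if it is substantially less congested.
    return "minimal" if min_queue <= min_bias * nonmin_queue else "nonminimal"

# A noise-sensitive application phase might lower explore_p (favor minimal
# paths); a congestion-prone, latency-tolerant phase might raise it.
print(choose_path(min_queue=12, nonmin_queue=4))
```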
Virtual reality (VR) is an emerging technology that enables new applications but also introduces privacy risks. In this paper, we focus on Oculus VR (OVR), the leading platform in the VR space, and provide the first comprehensive analysis of personal data exposed by OVR apps and the platform itself, from a combined networking and privacy-policy perspective. We experimented with the Quest 2 headset and tested the most popular VR apps available on the official Oculus and SideQuest app stores. We developed OVRseen, a methodology and system for collecting, analyzing, and comparing network traffic and privacy policies on OVR. On the networking side, we captured and decrypted the network traffic of VR apps, which was previously not possible on OVR, and extracted data flows, defined as <app, data type, destination> tuples. We found that the OVR ecosystem, compared to the mobile and other app ecosystems, is more centralized and driven by tracking and analytics rather than by third-party advertising. We show that the data types exposed by VR apps include personally identifiable information (PII), device information that can be used for fingerprinting, and VR-specific data types. By comparing the data flows found in the network traffic with statements made in the apps' privacy policies, we discovered that approximately 70% of OVR data flows were not properly disclosed. Furthermore, we provide additional context for these data flows, including their purpose, which we extracted from the privacy policies, and observe that 69% were sent for purposes unrelated to the core functionality of the apps.
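A minimal sketch of the flow-extraction and disclosure-check step (the record fields and policy representation are our assumptions, not OVRseen's actual pipeline):

```python
# Hypothetical sketch: extract <app, data type, destination> flows from
# decrypted traffic records and flag flows the app's privacy policy never
# discloses.

from typing import NamedTuple

class DataFlow(NamedTuple):
    app: str
    data_type: str    # e.g., "email", "device_id", "play_area"
    destination: str  # eTLD+1 of the receiving server

def extract_flows(records):
    """Deduplicate (app, data type, destination) tuples from traffic records."""
    return {DataFlow(r["app"], r["data_type"], r["dest_domain"]) for r in records}

def undisclosed(flows, disclosed_types_by_app):
    """Flows whose data type the app's privacy policy does not mention."""
    return {f for f in flows
            if f.data_type not in disclosed_types_by_app.get(f.app, set())}

records = [
    {"app": "game_x", "data_type": "device_id", "dest_domain": "analytics.example.com"},
    {"app": "game_x", "data_type": "email", "dest_domain": "cdn.example.net"},
]
policy = {"game_x": {"device_id"}}  # policy discloses device_id only
print(undisclosed(extract_flows(records), policy))  # -> the email flow
```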
Targeted attacks against network infrastructure are notoriously difficult to guard against. In the case of communication networks, such attacks can leave users vulnerable to censorship and surveillance, even when cryptography is used. Much of the existing work on network fault-tolerance focuses on random faults and does not apply to adversarial faults (attacks). Centralized networks have single points of failure by definition, which has driven growing interest in decentralized architectures and protocols for greater fault-tolerance. However, centralized network structure can arise even when protocols are decentralized. Despite their decentralized protocols, the Internet and World-Wide Web have been shown, both theoretically and historically, to be highly susceptible to attack, in part due to emergent structural centralization. When single points of failure exist, they are potentially vulnerable to non-technological (i.e., coercive) attacks, suggesting the importance of a structural approach to attack-tolerance. We show how the assumption of partial trust transitivity, while more realistic than the assumption underlying webs of trust, can be used to quantify the effective redundancy of a network as a function of trust transitivity. We also prove that the effective redundancy of the wrap-around butterfly topology increases exponentially with trust transitivity, and we describe a novel concurrent multipath routing algorithm for constructing paths that utilize this redundancy. When portions of network structure can be dictated, our results can be used to create scalable, attack-tolerant infrastructures. More generally, our results provide a theoretical formalism for evaluating the effects of network structure on adversarial fault-tolerance.
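The paper's concurrent multipath routing algorithm is not reproduced here; as a generic illustration of why node-disjoint paths confer attack-tolerance, the sketch below splits a message into XOR secret shares, one per path, so that an eavesdropper must control all k paths to learn anything:

```python
# Generic illustration (not the paper's algorithm): send a message as XOR
# secret shares over k node-disjoint paths. Fewer than k compromised paths
# reveal nothing, so structural redundancy translates into attack-tolerance.

import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(message: bytes, k: int) -> list[bytes]:
    """Split message into k XOR shares; all k are needed to reconstruct."""
    shares = [secrets.token_bytes(len(message)) for _ in range(k - 1)]
    shares.append(reduce(xor_bytes, shares, message))  # final share closes the XOR
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

shares = make_shares(b"attack-tolerant payload", k=4)  # one share per disjoint path
assert reconstruct(shares) == b"attack-tolerant payload"
print(len(shares), "shares; any k-1 of them reveal nothing about the message")
```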