Eliminating unnecessary exposure is a principle of server security. The huge IPv6 address space enhances security by making scanning infeasible; however, with recent advances in IPv6 scanning technology, network scanning is once again threatening server security. In this paper, we propose a new model named the addressless server, which separates the server into an entrance module and a main service module, and assigns an IPv6 prefix instead of a single IPv6 address to the main service module. The entrance module generates a legitimate IPv6 address under this prefix by encrypting the client address, so that the client reaches the main server at a destination address that differs for each connection. In this way, the model isolates the main server, prevents network scanning, and minimizes exposure. Moreover, it provides a novel framework that supports flexible load balancing, high availability, and other desirable features. The model is simple and requires no modification to the client or the network. We implement a prototype, and experiments show that our model prevents the main server from being scanned at only a slight performance cost.
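As an illustration of the address-minting mechanism described above, here is a minimal Python sketch. The abstract does not specify the encryption scheme, so a keyed MAC stands in for it, and the prefix, key, interface-ID layout, and function names are all assumptions for illustration:

```python
import hmac, hashlib, ipaddress, os

SERVICE_PREFIX = ipaddress.IPv6Network("2001:db8:1::/64")  # hypothetical prefix for the main service
SECRET_KEY = os.urandom(32)  # assumed shared by the entrance and main service modules

def mint_address(client_addr: str) -> ipaddress.IPv6Address:
    """Entrance module: derive a fresh, verifiable destination for one connection.
    Assumed 64-bit interface-ID layout: 16-bit random nonce || 48-bit keyed MAC."""
    nonce = os.urandom(2)  # makes the destination differ per connection
    client = ipaddress.IPv6Address(client_addr)
    tag = hmac.new(SECRET_KEY, client.packed + nonce, hashlib.sha256).digest()[:6]
    return SERVICE_PREFIX[int.from_bytes(nonce + tag, "big")]

def verify_address(dst_addr: str, client_addr: str) -> bool:
    """Main service module: accept only destinations minted for this client,
    so a scanner probing the /64 at random hits no legitimate address."""
    iid = (int(ipaddress.IPv6Address(dst_addr))
           - int(SERVICE_PREFIX.network_address)).to_bytes(8, "big")
    client = ipaddress.IPv6Address(client_addr)
    expected = hmac.new(SECRET_KEY, client.packed + iid[:2], hashlib.sha256).digest()[:6]
    return hmac.compare_digest(iid[2:], expected)
```

The design point is that legitimate destinations are a cryptographically sparse subset of the 2^64 addresses under the prefix, so random scanning of the prefix is as infeasible as guessing the MAC key.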
The growing size of data center and HPC networks poses unprecedented requirements on the scalability of simulation infrastructure. The ability to simulate such large-scale interconnects on a simple PC would facilitate research efforts. Unfortunately, as we first show in this work, existing shared-memory packet-level simulators do not scale to the sizes of the largest networks considered today. We then present a feasibility analysis and a set of enhancements that enable the simple packet-level htsim simulator to scale to unprecedented simulation sizes on a single PC. Our code is available online and can be used to design novel schemes in the coming era of omnipresent data centers and HPC clusters.
To keep up with demand, servers will scale up to handle hundreds of thousands of clients simultaneously. Much of the community's focus has been on scaling servers in terms of aggregate traffic intensity (packets transmitted per second). However, bottlenecks caused by the increasing number of concurrent clients, which results in a large number of concurrent flows, have received little attention. In this work, we focus on identifying such bottlenecks. In particular, we define two broad categories of problems: admitting more packets into the network stack than can be handled efficiently, and increasing per-packet overhead within the stack. We show that these problems contribute to high CPU usage and to network performance degradation in terms of aggregate throughput and RTT. Our measurements and analysis are performed in the context of the Linux networking stack, the most widely used publicly available networking stack. Further, we discuss the relevance of our findings to other network stacks. The goal of our work is to highlight the considerations required in the design of future networking stacks to enable efficient handling of large numbers of clients and flows.
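A sketch of the kind of concurrency-scaling measurement the abstract describes follows. The sink server, payload size, and flow counts are placeholders, and a real experiment at hundreds of thousands of flows would use an epoll-based or kernel-bypass load generator rather than one thread per flow:

```python
import socket, threading, time

def flood(host: str, port: int, payload: bytes, stop: threading.Event, counter: list):
    """One client flow: send continuously, tallying bytes delivered."""
    with socket.create_connection((host, port)) as s:
        while not stop.is_set():
            counter[0] += s.send(payload)

def aggregate_throughput(host: str, port: int, n_flows: int, seconds: float = 10.0) -> float:
    """Open n_flows concurrent connections to a sink server and measure
    aggregate send rate; sweeping n_flows exposes how per-flow state and
    per-packet overhead erode total throughput as concurrency grows."""
    stop = threading.Event()
    counters = [[0] for _ in range(n_flows)]
    threads = [threading.Thread(target=flood, args=(host, port, b"x" * 1448, stop, c),
                                daemon=True) for c in counters]
    for t in threads:
        t.start()
    time.sleep(seconds)
    stop.set()
    return sum(c[0] for c in counters) / seconds  # bytes per second
```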
Detecting anomalous behaviors such as network failures or intentional attacks in the large-scale Internet is a vital but challenging task. While numerous techniques based on Internet traffic have been developed over the past years, anomaly detection on structured datasets using complex network analysis has only recently come into focus. In this paper, an anomaly detection method for large-scale Internet topology is proposed that considers the topology changes caused by network crashes. To quantify the dynamic changes of the Internet topology, we put forward the network path changes coefficient (NPCC), which highlights the abnormal state of the Internet after it has been attacked continuously. Furthermore, we propose a decision function, inspired by the Fibonacci sequence, to determine whether the Internet is abnormal: the current Internet state is abnormal if its NPCC falls outside the normal domain constructed from the previous k NPCCs of the Internet topology. Finally, the new anomaly detection method is tested on topology data from three Internet anomaly events. The results show that the detection accuracy for all events is over 97%, and the detection precision for the three events is 90.24%, 83.33%, and 66.67%, respectively, when k = 36. According to the experimental values of the F_1 index, we find that the larger k is, the better the detection performance, and that our method performs better for anomalies caused by network failures than for those caused by intentional attacks. Compared with traditional anomaly detection, our approach may be simpler yet more powerful for governments or organizations in detecting large-scale abnormal events.
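The abstract gives neither the exact NPCC formula nor the Fibonacci-inspired construction of the normal domain, so the sketch below shows only the sliding-window decision rule, with a mean-and-deviation band standing in for the paper's normal domain:

```python
from collections import deque
import statistics

def make_detector(k: int = 36, width: float = 3.0):
    """Sliding-window anomaly decision over a stream of NPCC values.

    The paper builds the 'normal domain' from the previous k NPCCs with a
    Fibonacci-inspired rule; since that rule is not given in the abstract,
    a mean +/- width*stdev band is used here as a stand-in."""
    history = deque(maxlen=k)

    def is_abnormal(npcc: float) -> bool:
        if len(history) < k:          # not enough history yet: assume normal
            history.append(npcc)
            return False
        mu = statistics.mean(history)
        sigma = statistics.stdev(history)
        abnormal = abs(npcc - mu) > width * sigma
        if not abnormal:              # only normal samples extend the baseline
            history.append(npcc)
        return abnormal

    return is_abnormal
```

The key behavioral choice, matching the abstract's description, is that the domain is built only from the previous k observations, so a continuously attacked topology keeps registering as abnormal rather than being absorbed into the baseline.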
Disasters lead to devastating structural damage not only to buildings and transport infrastructure, but also to other critical infrastructure, such as the power grid and communication backbones. Following such an event, however, the availability of minimal communication services is crucial to allow efficient and coordinated disaster response, to enable timely public information, and to provide individuals in need with a default mechanism for posting emergency messages. The Internet of Things consists of the massive deployment of heterogeneous devices, most of which are battery-powered and interconnected via wireless network interfaces. Typical IoT communication architectures enable such IoT devices not only to connect to the communication backbone (i.e., the Internet) using an infrastructure-based wireless network paradigm, but also to communicate with one another autonomously, without the help of any infrastructure, using a spontaneous wireless network paradigm. In this paper, we argue that the vast deployment of IoT-enabled devices could bring benefits in terms of data-network resilience in the face of disaster. Leveraging their spontaneous wireless networking capabilities, IoT devices could enable minimal communication services (e.g., emergency micro-message delivery) while the conventional communication infrastructure is out of service. We identify the main challenges that must be addressed in order to realize this potential in practice. These challenges concern various technical aspects, including physical connectivity requirements, network protocol stack enhancements, and data traffic prioritization schemes, as well as social and political aspects.
From biosystems to complex systems, the study of life has always been an important area. Inspired by hyper-cycle theory about the evolution of non-living systems, we study the metabolism, self-replication, and mutation behavior of the Internet based on the node entities, connection relationships, and functional subgraphs (motifs) of its network topology. First, a framework of complex network evolution is proposed to analyze the birth and death phenomena of the Internet topology from January 1998 to August 2013. We then observe the Internet's metabolic behavior from the node and motif levels up to the global topology: a newborn node is first simply added to the Internet and subsequently takes part in local reconstruction activities, while other nodes and motifs die. During local reconstruction, although the Internet repeatedly replicates motifs through additions and removals, its system characteristics and global structure are not destroyed. Statistics on the motif M3, a fully connected subgraph, show that its metabolism fluctuates, and that these fluctuations cause mutations of the Internet. Furthermore, we find that mutation is the Internet's instinctive reaction to influences from its internal or external environment, such as the Internet bubble, the rise of social networks, and the financial crisis. The metabolism, self-replication, and mutation of the Internet indicate its life-like character as a complex artificial life. Our work may inspire the study of life-like phenomena in other complex systems from the perspective of topological structure.
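For concreteness, motif M3 on an undirected topology is a fully connected 3-node subgraph, i.e. a triangle, and its count per monthly snapshot can be tracked with a few lines. A sketch using networkx follows; how the snapshots are loaded is assumed:

```python
import networkx as nx

def m3_count(graph: nx.Graph) -> int:
    """Count occurrences of motif M3 (a triangle) in one topology snapshot."""
    # nx.triangles returns, per node, the number of triangles containing it;
    # each triangle is therefore counted once at each of its three nodes.
    return sum(nx.triangles(graph).values()) // 3

def m3_series(snapshots) -> list:
    """M3 counts across an ordered sequence of topology snapshots.
    Large month-over-month swings in this series are the fluctuations
    the abstract associates with mutations of the Internet."""
    return [m3_count(g) for g in snapshots]
```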