Data transfer is one of the main functions of the Internet. The Internet consists of a large number of interconnected subnetworks or domains, known as Autonomous Systems. Due to privacy and other reasons, the information about which routes to use to reach devices within other Autonomous Systems is not readily available to any given Autonomous System. The Border Gateway Protocol is responsible for discovering and distributing this reachability information to all Autonomous Systems. Since the topology of the Internet is highly dynamic, all Autonomous Systems constantly exchange and update this reachability information in small chunks, known as routing control packets or Border Gateway Protocol updates. Motivated by scalability and predictability concerns about the dynamics of these updates in the quickly growing Internet, we conduct a systematic time series analysis of Border Gateway Protocol update rates. We find that Border Gateway Protocol update time series are extremely volatile and exhibit long-term correlations and memory effects, similar to seismic time series or temperature and stock market price fluctuations. The presented statistical characterization of Border Gateway Protocol update dynamics can serve as a ground truth for validating existing models of Internet interdomain routing and for developing better ones.
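To illustrate the kind of time series analysis described above, the following minimal sketch computes the sample autocorrelation of a per-minute update-count series at increasing lags, which is one way to expose slowly decaying, long-range correlations. The Poisson series below is a synthetic stand-in for a real BGP collector feed, so all numbers are purely illustrative.

    import numpy as np

    def autocorrelation(x, max_lag):
        """Sample autocorrelation of a 1-D series for lags 1..max_lag."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        var = np.dot(x, x)
        return [np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)]

    # Synthetic stand-in for a per-minute BGP update count series.
    rng = np.random.default_rng(0)
    updates = rng.poisson(lam=50, size=10_000).astype(float)

    acf = autocorrelation(updates, max_lag=60)
    for lag in (1, 10, 60):
        print(f"lag {lag:3d}: acf = {acf[lag - 1]:+.3f}")

For an uncorrelated series such as this synthetic one, the autocorrelation drops to near zero after the first lag; a long-range correlated series would instead show a slow, power-law-like decay.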
As networks continue to grow exponentially, the number of nodes they contain grows proportionately. With the advent of the Internet of Things (IoT), many more devices are expected to be connected to the existing network infrastructure. As a result, monitoring is expected to become more complex for administrators as networks become more heterogeneous. Moreover, addressing in the IoT will be more complex given the scale at which devices will be added to the network, and monitoring is therefore bound to become an uphill task because a much larger range of addresses must be managed. This paper sheds light on the kinds of monitoring mechanisms that can be deployed in the Internet of Things and on their overall effectiveness.
We consider a communication system in which status updates arrive at a source node and should be transmitted through a network to the intended destination node. The status updates are samples of a random process under observation, transmitted as packets, which also contain a time stamp identifying when the sample was generated. The age of the information available to the destination node is the time elapsed since the last received update was generated. In this paper, we model the source-destination link using queueing theory, and we assume that the time it takes to successfully transmit a packet to the destination is an exponentially distributed service time. We analyze the age of information in the case where the source node has the capability to manage the arriving samples, possibly discarding packets in order to avoid wasting network resources on the transmission of stale information. In addition to characterizing the average age, we propose a new metric, called peak age, which captures the maximum value of the age, reached immediately before an update is received.
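To make the two metrics concrete, the following sketch computes the time-average age and the peak ages directly from the sawtooth age process, using made-up generation and reception timestamps (the paper's analysis is queueing-theoretic; this is only a numerical illustration of the definitions).

    # Hypothetical (generation_time, reception_time) pairs for received updates,
    # ordered by reception time; units are seconds.
    updates = [(0.0, 1.2), (2.0, 3.5), (5.0, 5.9), (6.5, 9.0)]

    def peak_ages(updates):
        """Age immediately before each reception (defined from the 2nd update on)."""
        return [r_cur - g_prev
                for (g_prev, _), (_, r_cur) in zip(updates, updates[1:])]

    def average_age(updates):
        """Time-average age over [first reception, last reception] (sawtooth area)."""
        area, total = 0.0, 0.0
        for (g_prev, r_prev), (_, r_cur) in zip(updates, updates[1:]):
            a0 = r_prev - g_prev              # age right after the previous reception
            dt = r_cur - r_prev               # age then grows linearly for dt seconds
            area += a0 * dt + 0.5 * dt * dt   # trapezoid under the linear growth
            total += dt
        return area / total

    print("peak ages:", peak_ages(updates))
    print("average age:", round(average_age(updates), 3))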
In complex networks, the failure of one or a very few nodes may cause cascading failures. When this dynamical process stops in a steady state, the size of the giant component formed by the remaining un-failed nodes can be used to measure the severity of the cascading failures, which is critically important for estimating the robustness of networks. In this paper, we propose a cascading overload failure model with a local load sharing mechanism, and then explore the threshold of node capacity at which large-scale cascading failures occur and the un-failed nodes in the steady state can no longer connect to each other to form a large connected sub-network. We derive this threshold analytically for degree-degree uncorrelated networks and validate the effectiveness of the derivation in simulation. This threshold provides guidance for improving network robustness under the premise of limited capacity resources when creating a network and assigning load. Therefore, this threshold is useful and important for analyzing the robustness of networks.
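A minimal simulation sketch of this type of model: each node carries a load, has a capacity proportional to its initial load via a tolerance parameter alpha, and when a node fails its load is shared equally among its surviving neighbors (local load sharing); severity is measured by the size of the largest connected component of surviving nodes. The degree-proportional initial load, the random-graph topology, and all parameters are illustrative assumptions, not the paper's exact model.

    import networkx as nx

    def cascade(G, alpha=0.3, seed_node=0):
        """Trigger a single-node failure and propagate overload failures."""
        load = {v: float(G.degree(v)) for v in G}      # initial load ~ degree (illustrative)
        cap = {v: (1.0 + alpha) * load[v] for v in G}  # capacity with tolerance alpha
        failed = {seed_node}
        frontier = [seed_node]
        while frontier:
            new_frontier = []
            for v in frontier:
                alive = [u for u in G.neighbors(v) if u not in failed]
                if not alive:
                    continue
                share = load[v] / len(alive)           # local load sharing
                for u in alive:
                    load[u] += share
                    if load[u] > cap[u]:
                        failed.add(u)
                        new_frontier.append(u)
            frontier = new_frontier
        survivors = G.subgraph(set(G) - failed)
        giant = max((len(c) for c in nx.connected_components(survivors)), default=0)
        return len(failed), giant

    G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=1)
    n_failed, giant = cascade(G, alpha=0.3, seed_node=0)
    print(f"failed nodes: {n_failed}, giant component of survivors: {giant}")

Sweeping alpha in such a simulation and recording where the surviving giant component collapses gives a numerical estimate of the capacity threshold that the paper derives analytically.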
The emergence of OpenFlow and Software Defined Networks brings new perspectives on how we design next generation networks, in which the number of base stations/access points as well as the number of devices per subscriber will be dramatically higher. In such dense environments, devices can communicate with each other directly and can attach to multiple base stations (or access points) for simultaneous data communication over multiple paths. This paper explores how networks can maximally enable this multi-path diversity through network programmability. In particular, we propose programmable flow clustering and set policies for inter-group as well as intra-group wireless scheduling. Further, we propose programmable demultiplexing of a single network flow onto multiple paths before congestion areas and multiplexing them back together after the congestion areas. We show the benefits of such programmability first for legacy applications that cannot take advantage of multi-homing without it. We then evaluate the benefits for smart applications that take advantage of multi-homing by either opening multiple TCP connections over multiple paths or utilizing a transport protocol such as MP-TCP designed to support such environments. More specifically, we built an emulation environment over Mininet for our experiments. Our evaluations using synthetic and trace-driven channel models indicate that the proposed programmability in wireless scheduling and flow splitting can increase the throughput substantially for both legacy applications and the current state of the art.
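A toy sketch of the demultiplex/multiplex idea: packets of one flow are assigned to several paths in proportion to each path's currently estimated free capacity, and a reassembly stage restores in-order delivery where the paths merge again. The class names and capacity figures below are illustrative placeholders, not part of the paper's controller design.

    import heapq

    class Demux:
        """Split one flow's packets over multiple paths, weighted by capacity estimates."""
        def __init__(self, capacities):
            self.capacities = dict(capacities)        # path -> estimated free capacity (Mb/s)
            self.sent = {p: 0.0 for p in capacities}  # bytes already assigned per path

        def pick_path(self, pkt_size):
            # Deficit-style choice: path with the lowest normalized backlog.
            path = min(self.sent, key=lambda p: self.sent[p] / self.capacities[p])
            self.sent[path] += pkt_size
            return path

    class Mux:
        """Restore in-order delivery after the paths merge again."""
        def __init__(self):
            self.heap, self.next_seq = [], 0

        def receive(self, seq, payload):
            heapq.heappush(self.heap, (seq, payload))
            released = []
            while self.heap and self.heap[0][0] == self.next_seq:
                released.append(heapq.heappop(self.heap)[1])
                self.next_seq += 1
            return released  # packets handed to the application, in order

    demux = Demux({"path_a": 10.0, "path_b": 30.0})   # illustrative capacities
    for seq in range(8):
        print(seq, demux.pick_path(pkt_size=1500))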
Software defined networking (SDN) has emerged as a promising paradigm for making the control of communication networks flexible. SDN separates the data packet forwarding plane, i.e., the data plane, from the control plane and employs a central controller. Network virtualization allows the flexible sharing of physical networking resources by multiple users (tenants). Each tenant runs its own applications over its virtual network, i.e., its slice of the actual physical network. The virtualization of SDN networks promises to allow networks to leverage the combined benefits of SDN networking and network virtualization and has therefore attracted significant research attention in recent years. A critical component for virtualizing SDN networks is an SDN hypervisor that abstracts the underlying physical SDN network into multiple logically isolated virtual SDN networks (vSDNs), each with its own controller. We comprehensively survey hypervisors for SDN networks in this article. We categorize the SDN hypervisors according to their architecture into centralized and distributed hypervisors. We furthermore sub-classify the hypervisors according to their execution platform into hypervisors running exclusively on general-purpose compute platforms, or on a combination of general-purpose compute platforms with general- or special-purpose network elements. We exhaustively compare the network attribute abstraction and isolation features of the existing SDN hypervisors. As part of the future research agenda, we outline the development of a performance evaluation framework for SDN hypervisors.
The virtualization and softwarization of modern computer networks enable the definition and fast deployment of novel network services called service chains: sequences of virtualized network functions (e.g., firewalls, caches, traffic optimizers) through which traffic is routed between source and destination. This paper addresses the problem of admitting and embedding a maximum number of service chains, i.e., a maximum number of source-destination pairs that are routed via a sequence of to-be-allocated, capacitated network functions. We consider an online variant of this maximum Service Chain Embedding Problem, OSCEP for short, in which requests arrive over time in a worst-case manner. Our main contribution is a deterministic O(log L)-competitive online algorithm, under the assumption that capacities are at least logarithmic in L. We show that this is asymptotically optimal within the class of both deterministic and randomized online algorithms. We also explore lower bounds for offline approximation algorithms, and prove that the offline problem is APX-hard already for unit capacities and small L > 2, and even Poly-APX-hard in general, when there is no bound on L. These approximation lower bounds may be of independent interest, as they also extend to other problems such as Virtual Circuit Routing. Finally, we present an exact algorithm based on 0-1 programming, implying that the general offline SCEP is in NP and, by the above hardness results, NP-complete for constant L.
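To convey the flavor of such online admission schemes, the sketch below uses the classical exponential congestion-price idea that underlies many logarithmically competitive routing results: each network function keeps a price that grows exponentially with its relative load, and a request is admitted only if some candidate embedding has total price below a threshold. This is a generic illustration under assumed parameters, not the paper's exact construction or analysis.

    def make_price(capacity, L):
        """Exponential congestion price: cheap while lightly loaded, prohibitive near capacity."""
        mu = 2.0 * L  # base tied to the chain-length bound L (illustrative choice)
        return lambda load: mu ** (load / capacity) - 1.0

    def admit(candidate_chains, load, capacity, L, threshold):
        """Pick the cheapest candidate embedding; admit it only if its price is below threshold."""
        price = make_price(capacity, L)
        best, best_cost = None, float("inf")
        for chain in candidate_chains:               # candidate sequences of NF instances
            cost = sum(price(load[nf]) for nf in chain)
            if cost < best_cost:
                best, best_cost = chain, cost
        if best is not None and best_cost <= threshold:
            for nf in best:
                load[nf] += 1                        # one capacity unit per admitted chain
            return best
        return None

    # Illustrative instance: three NF instances of capacity 8, chains of length L = 2.
    load = {"fw1": 0, "fw2": 0, "cache1": 0}
    candidates = [["fw1", "cache1"], ["fw2", "cache1"]]
    print(admit(candidates, load, capacity=8, L=2, threshold=1.0))
    print(load)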
Due to the presence of buffers in the inner network nodes, each congestion event leads to buffer queueing and thus to an increased end-to-end delay. For delay-sensitive applications, a large delay might not be acceptable, and a solution that properly manages congestion events while maintaining a low end-to-end delay is required. Delay-based congestion control algorithms are a viable solution, as they aim to limit the experienced end-to-end delay. Unfortunately, they do not perform well when sharing the bandwidth with congestion control algorithms that are not regulated by delay constraints (e.g., loss-based algorithms). Our goal is to fill this gap by proposing a novel congestion control algorithm for delay-constrained communication over best-effort packet-switched networks. The proposed algorithm is able to maintain a bounded queueing delay when competing with other delay-based flows and to avoid starvation when competing with loss-based flows. We adopt the well-known price-based distributed congestion control mechanism, but: 1) we introduce a novel non-linear mapping between the experienced delay and the price function, and 2) we combine both delay and loss information into a single price term based on packet inter-arrival measurements. We then provide a stability analysis of the novel algorithm and evaluate its performance in simulations carried out in the ns-3 framework. The simulation results demonstrate that the proposed algorithm achieves good intra-protocol fairness, controls the end-to-end delay efficiently, and protects the flow from starvation when other flows cause the queueing delay to grow excessively.
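A simplified sketch of a price-based rate update of this kind: the sender maps the measured queueing delay through a non-linear (here exponential, saturating-threshold) function, blends it with a loss indication into a single price, and adapts its rate inversely to that price. The specific functional forms and constants are assumptions made for illustration, not the paper's exact design.

    import math

    def delay_price(queue_delay, target=0.05, steepness=40.0):
        """Non-linear mapping: negligible below the target delay, growing fast above it."""
        return math.expm1(steepness * max(0.0, queue_delay - target))

    def combined_price(queue_delay, loss_rate, loss_weight=5.0):
        """Fold delay and loss information into a single congestion price."""
        return delay_price(queue_delay) + loss_weight * loss_rate

    def update_rate(rate, price, gain=0.05, min_rate=0.1, max_rate=100.0):
        """Multiplicative rate adaptation driven by the price signal (rate in Mb/s)."""
        rate *= math.exp(gain * (1.0 - price))   # grow when price < 1, back off when price > 1
        return min(max(rate, min_rate), max_rate)

    rate = 1.0
    for queue_delay, loss in [(0.01, 0.0), (0.08, 0.0), (0.12, 0.02), (0.04, 0.0)]:
        price = combined_price(queue_delay, loss)
        rate = update_rate(rate, price)
        print(f"delay={queue_delay:.2f}s loss={loss:.2%} price={price:6.2f} rate={rate:5.2f} Mb/s")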
The recent progress in the area of self-interference cancellation (SIC) design has enabled the development of full-duplex (FD) single- and multiple-antenna systems. In this paper, we propose a design for an FD eNodeB (eNB) and user equipment (UE) for 5G networks. FD operation enables simultaneous in-band uplink and downlink transmission, thereby cutting the spectrum requirement in half. FD operation requires the same subcarrier allocation to a UE in both uplink and downlink. Long Term Evolution (LTE) uses orthogonal frequency division multiple access (OFDMA) for the downlink. To enable FD operation, we propose using single carrier frequency division multiple access (SC-FDMA) for the downlink in addition to its conventional use for the uplink. Taking advantage of channel reciprocity, singular value decomposition (SVD) based beamforming in the downlink allows multiple users (MU) to operate on the same set of subcarriers. In the uplink, a frequency-domain minimum mean square error (MMSE) equalizer combined with a successive interference cancellation with optimal ordering (SSIC-OO) algorithm is used to decouple the signals of users operating on the same set of subcarriers. The work includes simulations showing efficient FD operation at the UE and the eNB for the downlink and the uplink, respectively.
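The core linear-algebra steps named above can be sketched in a few lines: an SVD of the (reciprocal) downlink channel yields transmit and receive beamforming vectors, and a frequency-domain MMSE equalizer inverts a per-subcarrier channel with a noise-dependent regularizer. The random matrices and dimensions below are placeholders for illustration, not an LTE-grade link simulation.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- SVD-based beamforming over a 2x4 MIMO channel (illustrative sizes) ---
    H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    tx_beam = Vh.conj().T[:, 0]        # transmit along the strongest right singular vector
    rx_beam = U[:, 0]                  # matched receive combining
    effective_gain = rx_beam.conj() @ H @ tx_beam
    print("effective |gain| =", abs(effective_gain), "vs sigma_max =", s[0])

    # --- Per-subcarrier frequency-domain MMSE equalizer ---
    def mmse_equalize(Hk, y, noise_var):
        """x_hat = (H^H H + sigma^2 I)^{-1} H^H y for one subcarrier."""
        HhH = Hk.conj().T @ Hk
        W = np.linalg.solve(HhH + noise_var * np.eye(HhH.shape[0]), Hk.conj().T)
        return W @ y

    Hk = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
    y = Hk @ x + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
    print("x_hat =", np.round(mmse_equalize(Hk, y, noise_var=0.005), 3))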
Performance characterization is a fundamental issue in wireless networks for real-time routing, wireless network simulation, and so on. There are four basic wireless operations that need to be modeled: unicast, anycast, broadcast, and multicast. As observed in many recent works, the temporal and spatial distribution of packet receptions can have a significant impact on wireless performance involving multiple links (anycast/broadcast/multicast). However, existing performance models and simulations overlook these two wireless behaviors, leading to biased performance estimation and simulation results. In this paper, we first explicitly identify the necessary three dimensions of information for wireless performance modeling, i.e., packet reception rate (PRR), the spatial distribution of PRR, and its temporal distribution. We then propose a comprehensive modeling approach considering this 3-Dimension Wireless information (called the 3DW model). Further, we demonstrate the generality and wide applicability of the 3DW model through two case studies: 3DW-based network simulation and a 3DW-based real-time routing protocol. Extensive simulation and testbed experiments have been conducted. The results show that the 3DW model achieves much more accurate performance estimation for both anycast and broadcast/multicast. 3DW-based simulation can effectively preserve the end-to-end performance metrics of the input empirical traces. 3DW-based routing can select more efficient senders, achieving better transmission efficiency.
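As a small illustration of why the average PRR alone is not enough, the following sketch computes, from a hypothetical reception trace, each link's PRR together with a simple temporal statistic (the reception rate conditioned on the previous packet having been lost), which exposes temporal correlation that the marginal PRR hides. The trace and link names are made up for this example.

    # Hypothetical trace: per-link list of 1/0 reception outcomes for consecutive broadcasts.
    trace = {
        ("A", "B"): [1, 1, 0, 0, 1, 1, 1, 0, 1, 1],
        ("A", "C"): [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    }

    def prr(outcomes):
        """Packet reception rate: fraction of packets received on the link."""
        return sum(outcomes) / len(outcomes)

    def prr_after_loss(outcomes):
        """Reception rate given the previous packet was lost (temporal dimension)."""
        pairs = [(prev, cur) for prev, cur in zip(outcomes, outcomes[1:]) if prev == 0]
        return sum(cur for _, cur in pairs) / len(pairs) if pairs else float("nan")

    for link, outcomes in trace.items():
        print(link, "PRR =", prr(outcomes),
              " PRR | previous lost =", round(prr_after_loss(outcomes), 2))

Both example links have the same marginal PRR, yet their conditional reception rates after a loss differ, which is exactly the kind of temporal behavior the 3DW model is meant to capture.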