This paper criticises the notion that long-range dependence is an important contributor to the queuing behaviour of real Internet traffic. The idea is questioned in two different ways. Firstly, a class of models used to simulate Internet traffic is shown to have important theoretical flaws; the queuing behaviour these models predict is inconsistent with that of real traffic traces. Secondly, the notion that long-range correlations significantly affect the queuing performance of traffic is investigated by destroying those correlations in real traffic traces (by reordering). It is shown that the longer-range correlations are not important except in one case with an extremely high load.
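To illustrate the kind of reordering experiment described above, here is a minimal Python sketch (not the authors' code): it destroys correlations beyond a chosen block size by permuting fixed-size blocks of a per-interval arrival series, then compares queue behaviour via a simple Lindley recursion. The synthetic trace, block sizes, and service rate are all illustrative assumptions.

```python
import numpy as np

def shuffle_blocks(arrivals, block):
    """Permute fixed-size blocks of the series: correlations at lags
    longer than `block` are destroyed, while short-range structure and
    the marginal distribution are preserved."""
    n = len(arrivals) // block * block
    blocks = arrivals[:n].reshape(-1, block)
    rng = np.random.default_rng(0)
    return blocks[rng.permutation(len(blocks))].reshape(-1)

def max_queue(arrivals, service):
    """Lindley recursion: queue length under a constant service rate."""
    q, qmax = 0.0, 0.0
    for a in arrivals:
        q = max(q + a - service, 0.0)
        qmax = max(qmax, q)
    return qmax

# Stand-in series: a real experiment would load a long-range-dependent
# packet trace here; the i.i.d. heavy-tailed sample is only a placeholder.
rng = np.random.default_rng(1)
trace = rng.pareto(1.5, 100_000) + 1.0
service = 1.2 * trace.mean()          # moderately high load
for blk in (1, 100, 10_000):
    print(blk, max_queue(shuffle_blocks(trace, blk), service))
```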
In this paper, we study the stability in light traffic achieved by a scheduling algorithm suitable for heterogeneous traffic networks. Since analyzing such a scheduling algorithm is intractable with conventional mathematical tools, our goal is to minimize the largest queue-overflow probability achieved by the algorithm. In the large-deviations setting, this problem is equivalent to maximizing the asymptotic decay rate of the largest queue-overflow probability. We first derive an upper bound on the decay rate of the queue-overflow probability as the queue-overflow threshold approaches infinity. Then, we study several structural properties of the minimum-cost path to overflow of the queue with the largest length, whose cost is essentially the decay rate of the largest queue-overflow probability. Given these properties, we prove that the queue with the largest length follows a sample path with linear growth. For certain parameter values, the scheduling algorithm is asymptotically optimal in reducing the largest queue length. Through numerical results, we illustrate the large-deviations properties of the queue length in settings typical of practice while varying one parameter of the algorithm.
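In the standard large-deviations formulation (our notation, not necessarily the paper's), if $Q_{\max}$ denotes the largest queue length and $B$ the overflow threshold, the quantity being maximized is the decay rate
\[
I \;=\; \lim_{B \to \infty} -\frac{1}{B} \log \mathbb{P}\left(Q_{\max} > B\right),
\]
so an upper bound on $I$ corresponds to a lower bound on the overflow probability, and the cost of the minimum-cost path to overflow yields $I$ itself.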
The dynamics of User Datagram Protocol (UDP) traffic over Ethernet between two computers are analyzed using tools from nonlinear dynamics, which show that there are two clear regimes in the data flow: free flow and saturated. The two most important variables affecting this transition are the packet size and the packet flow rate. However, the transition is due to a transcritical bifurcation rather than to a phase transition, as in models of vehicular traffic or of theorized large-scale computer network congestion. It is hoped this model will help lay the groundwork for further research on the dynamics of networks, especially computer networks.
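For reference, the normal form of a transcritical bifurcation (standard textbook form, not taken from the paper) is
\[
\dot{x} = \mu x - x^{2},
\]
whose fixed points $x = 0$ and $x = \mu$ exchange stability as $\mu$ crosses zero. Under this reading, the flow moves continuously from the free regime to the saturated one as the control parameter (set by packet size and flow rate) passes its critical value, rather than jumping discontinuously as in a phase transition.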
We develop analytical models for estimating the energy spent by stations (STAs) in infrastructure WLANs when performing TCP-controlled file downloads. We focus on the energy spent in radio communication when the STAs are in the Continuously Active Mode (CAM) or in the static Power Save Mode (PSM). Our approach is to develop accurate models for obtaining the fractions of time the STA radios spend idling, receiving, and transmitting. We discuss two traffic models for each mode of operation: (i) each STA performs one large file download, and (ii) the STAs perform short file transfers. We evaluate the rate of STA energy expenditure with long file downloads, and show that static PSM is worse than just using CAM. For short file downloads, we compute the number of file downloads that can be completed with a given battery capacity, and show that PSM performs better than CAM in this case. We validate our analytical models using the NS-2 simulator. In contrast to earlier work on analytical modeling of PSM, our models capture the details of the interactions between the 802.11 MAC in PSM and certain aspects of TCP.
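A minimal Python sketch of the kind of energy accounting such models perform, assuming hypothetical per-state power draws and time fractions (all values illustrative, not from the paper):

```python
# Hypothetical radio power draws in watts for each state.
P = {"tx": 1.4, "rx": 0.9, "idle": 0.8, "sleep": 0.05}

def energy_rate(fractions):
    """Average power = sum over states of (fraction of time) x (power)."""
    return sum(frac * P[state] for state, frac in fractions.items())

# Illustrative time fractions: CAM never sleeps; static PSM trades some
# idle time for sleep (and, per the paper, lengthens long downloads).
cam = {"tx": 0.05, "rx": 0.30, "idle": 0.65}
psm = {"tx": 0.05, "rx": 0.30, "idle": 0.15, "sleep": 0.50}
print(energy_rate(cam), energy_rate(psm))  # joules per second
```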
We propose a limited packet-delivering capacity model for traffic dynamics in scale-free networks. In this model, the total packet-delivering capacity of the nodes is fixed, and the capacity allocated to node $i$ is proportional to $k_i^{\phi}$, where $k_i$ is the degree of node $i$ and $\phi$ is an adjustable parameter. We apply this model to the shortest-path routing strategy as well as the local routing strategy, and find that there exists an optimal value of the parameter $\phi$ leading to the maximal network capacity under both routing strategies. We provide some explanations for the emergence of the optimal $\phi$.
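In symbols, with the total capacity $C$ fixed, the allocation described above can be written (our reconstruction from the text) as
\[
C_i \;=\; C \, \frac{k_i^{\phi}}{\sum_{j} k_j^{\phi}},
\]
so that $\phi = 0$ spreads capacity uniformly over nodes, while $\phi = 1$ allocates it in proportion to degree.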
Packet classification according to a multi-field ruleset is a key component of many network applications. Emerging software-defined networking and cloud computing need to update rulesets frequently for flexible policy configuration, and their success depends on a new generation of classifiers that support both fast ruleset updates and high-speed packet classification. However, existing packet classification approaches focus either on high-speed classification or on fast rule updates; no known scheme meets both requirements. In this paper, we propose Range-Vector Hash (RVH) to effectively accelerate packet classification with a hash-based algorithm while ensuring fast rule updates. RVH is built on our key observation that the distinct combinations of field prefix lengths are not evenly distributed across rules. To reduce the number of hash tables needed for fast classification, we introduce a novel concept, the range-vector, each of which specifies the length range of each field prefix of the rules mapped to it. RVH overcomes the major obstacle that hinders hash-based packet classification by balancing the number of hash tables against the probability of hash collision. Experimental results demonstrate that RVH can achieve classification speeds up to 15.7 times and update speeds up to 2.3 times those of state-of-the-art algorithms on average, while consuming 44% less memory.
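To see why the number of distinct prefix-length combinations drives lookup cost in hash-based classification, consider this toy tuple-space-style sketch in Python (illustrative only; RVH's actual range-vector grouping is more involved and priority resolution is omitted). Rules are grouped by their (src-len, dst-len) pair, and a lookup probes one hash table per group, so fewer groups means fewer probes.

```python
from collections import defaultdict

def mask(ip, length):
    """Keep the top `length` bits of a 32-bit address."""
    return ip >> (32 - length) << (32 - length) if length else 0

# rule = (src_prefix, src_len, dst_prefix, dst_len, action)
rules = [
    (0x0A000000, 8,  0xC0A80000, 16, "drop"),    # 10.0.0.0/8 -> 192.168.0.0/16
    (0x0A010000, 16, 0xC0A80100, 24, "allow"),   # 10.1.0.0/16 -> 192.168.1.0/24
]

tables = defaultdict(dict)  # one hash table per prefix-length combination
for src, sl, dst, dl, action in rules:
    tables[(sl, dl)][(mask(src, sl), mask(dst, dl))] = action

def classify(src, dst):
    # Each distinct length combination costs one hash probe; grouping
    # combinations into length ranges (as RVH does) reduces this count.
    for (sl, dl), table in tables.items():
        hit = table.get((mask(src, sl), mask(dst, dl)))
        if hit:
            return hit
    return "default"

print(classify(0x0A020304, 0xC0A80001))  # -> "drop" (matches only the /8,/16 rule)
```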