
Towards High-Performance Network Application Identification With Aggregate-Flow Cache

Posted by Fei He
Publication date: 2011
Research field: Informatics Engineering
Paper language: English





Classifying network traffic according to its application-layer protocol is an important task in modern networks for traffic management and network security. Existing payload-based or statistical methods of application identification cannot deliver both high performance and accurate identification at the same time. We propose an application identification framework that classifies traffic at the aggregate-flow level by leveraging an aggregate-flow cache. A traffic classifier designed on this framework is presented in detail to improve the throughput of payload-based identification methods. We further optimize the classifier by proposing an efficient design of the aggregate-flow cache. The cache design employs a frequency-based, recency-aware replacement algorithm derived from an analysis of the temporal locality of aggregate flows. Experiments on real-world traces show that our traffic classifier with the aggregate-flow cache can reduce the workload of the back-end identification engine by up to 95%. The proposed cache replacement algorithm outperforms well-known replacement algorithms and achieves 90% of the optimal performance using only 15% of the memory. The throughput of a payload-based identification system, L7-filter [1], is increased by up to 5.1 times with our traffic classifier design.
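
The abstract describes the cache only at a high level, and the paper's exact data structure and replacement algorithm are not reproduced here. The following is a minimal Python sketch under stated assumptions: a hypothetical aggregate-flow key of (source IP, destination IP, destination port, protocol), and a simple combined frequency-recency score for choosing the eviction victim; the recency_weight knob is illustrative, not a parameter from the paper.

    import time

    class AggregateFlowCache:
        """Illustrative aggregate-flow cache with frequency-based,
        recency-aware eviction (a sketch, not the paper's algorithm)."""

        def __init__(self, capacity=1024, recency_weight=0.5):
            self.capacity = capacity
            self.recency_weight = recency_weight   # hypothetical tuning knob
            self.entries = {}  # key -> [app_label, hit_count, last_access_time]

        @staticmethod
        def aggregate_key(src_ip, dst_ip, dst_port, proto):
            # Hypothetical aggregation: drop the source port so all flows from
            # one host to one service share a single cache entry.
            return (src_ip, dst_ip, dst_port, proto)

        def lookup(self, key):
            entry = self.entries.get(key)
            if entry is None:
                return None            # miss: packet goes to the back-end engine
            entry[1] += 1              # bump frequency
            entry[2] = time.monotonic()  # refresh recency
            return entry[0]            # cached application label

        def insert(self, key, app_label):
            if key not in self.entries and len(self.entries) >= self.capacity:
                self._evict()
            self.entries[key] = [app_label, 1, time.monotonic()]

        def _evict(self):
            # Score = frequency discounted by idle time; evict the lowest score.
            now = time.monotonic()
            victim = min(
                self.entries,
                key=lambda k: self.entries[k][1]
                / (1.0 + self.recency_weight * (now - self.entries[k][2])),
            )
            del self.entries[victim]

On a cache hit a packet inherits the cached application label and bypasses the payload-based engine (e.g. L7-filter); on a miss it is passed to the engine and the resulting label is inserted. That bypass is what lets such a cache shed most of the back-end workload, as reported in the abstract.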




Read also

This study is a first attempt to experimentally explore the range of performance bottlenecks that 5G mobile networks can experience. To this end, we leverage a wide range of measurements obtained with a prototype testbed that captures the key aspects of a cloudified mobile network. We investigate the relevance of the metrics and a number of approaches to accurately and efficiently identify bottlenecks across the different locations of the network and layers of the system architecture. Our findings validate the complexity of this task in the multi-layered architecture and highlight the need for novel monitoring approaches that intelligently fuse metrics across network layers and functions. In particular, we find that distributed analytics performs reasonably well both in terms of bottleneck identification accuracy and incurred computational and communication overhead.
Understanding network and application performance is essential for debugging, improving user experience, and performance comparison. Meanwhile, modern mobile systems are optimized for energy-efficient computation and communications, which may limit the performance of networks and applications. In recent years, several tools have emerged that analyze the network performance of mobile applications in situ with the help of the VPN service. There is a limited understanding of how these measurement tools and system optimizations affect network and application performance. In this study, we first demonstrate that mobile systems employ energy-aware system hardware tuning, which affects application performance and network throughput. We next show that VPN-based application performance measurement tools, such as Lumen, PrivacyGuard, and Video Optimizer, yield ambiguous network performance measurements and degrade application performance. Our findings suggest that sound application and network performance measurement on Android devices requires a good understanding of the device, networks, measurement tools, and applications.
A problem which has recently attracted research attention is that of estimating the distribution of flow sizes in Internet traffic. On high-traffic links it is sometimes impossible to record every packet. Researchers have approached the problem of estimating flow lengths from sampled packet data in two separate ways. Firstly, different sampling methodologies can be tried to more accurately measure the desired system parameters. One such method is the sample-and-hold method where, if a packet is sampled, all subsequent packets in that flow are sampled. Secondly, statistical methods can be used to "invert" the sampled data and produce an estimate of flow lengths from a sample. In this paper we propose, implement and test two variants of the sample-and-hold method. In addition we show how the sample-and-hold method can be inverted to obtain an estimate of the genuine distribution of flow sizes. Experiments are carried out on real network traces to compare standard packet sampling with three variants of sample-and-hold. The methods are compared on their ability to reconstruct the genuine distribution of flow sizes in the traffic.
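
To make the sample-and-hold idea above concrete (the paper's specific variants and its inversion procedure are not given in this teaser), the sketch below samples each packet with probability p, but once a flow has been sampled, every later packet of that flow is also recorded; the function name and interface are hypothetical.

    import random
    from collections import defaultdict

    def sample_and_hold(packets, p=0.01, seed=0):
        """Illustrative sample-and-hold: 'packets' is an iterable of flow keys,
        one element per observed packet. Returns per-flow sampled packet counts."""
        rng = random.Random(seed)
        held = set()                  # flows already being held
        counts = defaultdict(int)     # flow key -> number of sampled packets
        for flow in packets:
            if flow in held:
                counts[flow] += 1     # hold: every later packet of this flow is sampled
            elif rng.random() < p:
                held.add(flow)
                counts[flow] += 1     # first sampled packet starts the hold
        return counts

Because short flows are less likely to ever trigger the hold, recovering the true flow-size distribution from these counts requires the kind of statistical inversion the abstract mentions; that correction is not sketched here.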
In the current Internet, there is no clean way for affected parties to react to poor forwarding performance: when a domain violates its Service Level Agreement (SLA) with a contractual partner, the partner must resort to ad-hoc probing-based monitoring to determine the existence and extent of the violation. Instead, we propose a new, systematic approach to the problem of forwarding-performance verification. Our mechanism relies on voluntary reporting, allowing each domain to disclose its loss and delay performance to its neighbors; it does not disclose any information regarding the participating domains' topology or routing policies beyond what is already publicly available. Most importantly, it enables verifiable performance measurements, i.e., domains cannot abuse it to significantly exaggerate their performance. Finally, our mechanism is tunable, allowing each participating domain to determine independently (i.e., without any inter-domain coordination) how many resources to devote to it, exposing a controllable trade-off between performance-verification quality and resource consumption. Our mechanism comes at the cost of deploying modest functionality at the participating domains' border routers; we show that it requires reasonable processing and memory resources within modern network capabilities.
The capacity of offloading data and control tasks to the network is becoming increasingly important, especially if we consider the faster growth of network speed when compared to CPU frequencies. In-network compute alleviates the host CPU load by running tasks directly in the network, enabling additional computation/communication overlap and potentially improving overall application performance. However, sustaining the bandwidths provided by next-generation networks, e.g., 400 Gbit/s, can become a challenge. sPIN is a programming model for in-NIC compute, where users specify handler functions that are executed on the NIC for each incoming packet belonging to a given message or flow. It enables CUDA-like acceleration, where the NIC is equipped with lightweight processing elements that process network packets in parallel. We investigate the architectural specialties that a sPIN NIC should provide to enable high-performance, low-power, and flexible packet processing. We introduce PsPIN, a first open-source sPIN implementation, based on a multi-cluster RISC-V architecture and designed according to the identified architectural specialties. We investigate the performance of PsPIN with cycle-accurate simulations, showing that it can process packets at 400 Gbit/s for several use cases, introducing minimal latencies (26 ns for 64 B packets) and occupying a total area of 18.5 mm² (22 nm FDSOI).
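
As a rough illustration of the handler-per-packet model described above (a sketch only: the class, method names, and signatures below are hypothetical Python, not the sPIN or PsPIN API, and real handlers would run as native code on the NIC's processing elements), a message is associated with per-message handlers and the payload handler runs once for each arriving packet:

    class Message:
        """Generic illustration of per-packet handlers; not the sPIN interface."""

        def __init__(self, header_handler=None, payload_handler=None,
                     completion_handler=None):
            self.header_handler = header_handler
            self.payload_handler = payload_handler
            self.completion_handler = completion_handler
            self.state = {}  # per-message scratch state shared by the handlers

        def deliver(self, packets):
            if self.header_handler:
                self.header_handler(self.state, packets[0])   # first packet only
            for pkt in packets:
                if self.payload_handler:
                    self.payload_handler(self.state, pkt)     # once per packet
            if self.completion_handler:
                self.completion_handler(self.state)           # after the last packet

    # Example: count the bytes of a message as its packets "arrive".
    msg = Message(
        payload_handler=lambda st, pkt: st.__setitem__("bytes",
                                                       st.get("bytes", 0) + len(pkt)),
        completion_handler=lambda st: print("message size:", st.get("bytes", 0)),
    )
    msg.deliver([b"\x00" * 64, b"\x00" * 64, b"\x00" * 32])

The independence of the per-packet handler invocations is what allows a NIC with many lightweight processing elements to run them in parallel and keep up with line rate.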