This study is a first attempt to experimentally explore the range of performance bottlenecks that 5G mobile networks can experience. To this end, we leverage a wide range of measurements obtained with a prototype testbed that captures the key aspects of a cloudified mobile network. We investigate the relevance of the metrics and a number of approaches to accurately and efficiently identify bottlenecks across the different locations of the network and layers of the system architecture. Our findings confirm the complexity of this task in the multi-layered architecture and highlight the need for novel monitoring approaches that intelligently fuse metrics across network layers and functions. In particular, we find that distributed analytics performs reasonably well in terms of both bottleneck-identification accuracy and incurred computational and communication overhead.
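To make the metric-fusion idea concrete, here is a minimal, purely illustrative sketch (not the paper's actual analytics pipeline): each layer's monitor reduces its raw metric stream to a single anomaly score, and a central step fuses those scores to flag the suspect layer, so only one number per layer crosses the network. All names, baselines, and thresholds below are hypothetical.

```python
# Illustrative sketch (not the paper's pipeline): each layer's monitor ships
# one anomaly score instead of its raw metric stream; a central fusion step
# flags the most suspicious layer. All names/thresholds are hypothetical.
from statistics import mean, stdev

def anomaly_score(samples, baseline):
    """Local analytics at one layer: standardized deviation from a baseline."""
    return abs(mean(samples) - mean(baseline)) / (stdev(baseline) or 1.0)

def fuse(scores, threshold=3.0):
    """Central step: receives one float per layer, not the raw metrics."""
    layer, score = max(scores.items(), key=lambda kv: kv[1])
    return layer if score > threshold else None

# Usage: per-layer monitors send only their score upstream.
scores = {
    "ran_cpu":    anomaly_score([92, 95, 97], baseline=[40, 45, 42]),
    "edge_io":    anomaly_score([12, 11, 13], baseline=[12, 13, 11]),
    "core_queue": anomaly_score([5, 6, 5],    baseline=[5, 5, 6]),
}
print(fuse(scores))  # -> "ran_cpu" under these toy numbers
```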
Crowdsourcing mobile users' network performance has become an effective way of understanding and improving mobile network performance and user quality of experience. However, current measurement methods are still based on the landline paradigm, in which a measurement app measures the path to fixed (measurement or web) servers. In this work, we introduce a new paradigm of measuring per-app mobile network performance. We design and implement MopEye, an Android app that measures the network round-trip delay of each app whenever that app generates traffic. This opportunistic measurement runs automatically, without user intervention, and can therefore support large-scale, long-term crowdsourcing of mobile network performance. In the course of implementing MopEye, we overcame a suite of challenges to make the continuous latency monitoring lightweight and accurate. We deployed MopEye on Google Play for an IRB-approved crowdsourcing study spanning ten months, which yielded over five million measurements from 6,266 Android apps on 2,351 smartphones. Our analysis reveals a number of new findings on per-app network performance and mobile DNS performance.
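MopEye itself is an Android app built on the VpnService API; as a language-neutral illustration of its core idea, timing the TCP handshake of connections that apps open anyway rather than injecting probe traffic, here is a minimal Python sketch. The app name, host, and port are placeholders.

```python
# Toy sketch of opportunistic per-app RTT measurement. MopEye does this
# timing inline on relayed app traffic inside an Android VpnService; this
# Python version only illustrates the handshake-timing idea.
import socket
import time
from collections import defaultdict

rtts = defaultdict(list)  # app name -> list of handshake RTTs (ms)

def measure_handshake(app, host, port=443, timeout=3.0):
    """Approximate network RTT as the TCP three-way-handshake time."""
    t0 = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        rtt_ms = (time.monotonic() - t0) * 1000.0
    rtts[app].append(rtt_ms)
    return rtt_ms

# Hypothetical usage: triggered whenever the named app opens a connection.
print(measure_handshake("com.example.app", "example.com"))
```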
In contrast to the classic approach of designing distributed end-to-end (e2e) TCP schemes for cellular networks (CNs), we explore another design space: having the CN assist the transport-control task. We show that in emerging cellular architectures such as mobile/multi-access edge computing (MEC), where servers are located close to the radio access network (RAN), significant improvements can be achieved by leveraging the logically centralized network measurements available at the RAN and passing information such as the minimum e2e delay and access-link capacity to each server. In particular, a Network Assistance module located at the mobile edge pairs up with the wireless scheduler to provide feedback to each server and facilitate congestion control. To that end, we present two network-assisted schemes: NATCP (a clean-slate design replacing TCP at the end hosts) and NACubic (a backward-compatible design requiring no change to TCP at the end hosts). Our preliminary evaluations using real cellular traces show that both schemes dramatically outperform existing schemes in both single-flow and multi-flow scenarios.
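A minimal sketch of the NACubic idea follows, under our own simplifying assumptions rather than the authors' exact implementation: the end host keeps running CUBIC unmodified, and the edge feedback (access-link capacity and minimum e2e delay) is used only to cap the congestion window at the path's bandwidth-delay product.

```python
# Hedged sketch of network-assisted CUBIC: CUBIC stays untouched at the end
# host; the edge-reported capacity and minimum delay merely bound cwnd at
# (a multiple of) the path BDP. Constants are illustrative stand-ins.
def bdp_packets(capacity_bps, min_rtt_s, mss_bytes=1448):
    """Bandwidth-delay product expressed in MSS-sized packets."""
    return capacity_bps * min_rtt_s / (8 * mss_bytes)

def nacubic_cwnd(cubic_cwnd, feedback, k=1.0):
    """Apply network assistance: never exceed k * BDP reported by the edge."""
    cap = k * bdp_packets(feedback["capacity_bps"], feedback["min_rtt_s"])
    return min(cubic_cwnd, cap)

# Hypothetical feedback from the Network Assistance module at the edge.
feedback = {"capacity_bps": 50e6, "min_rtt_s": 0.020}
print(nacubic_cwnd(cubic_cwnd=900, feedback=feedback))  # capped to ~86 pkts
```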
Classifying network traffic according to its application-layer protocol is an important task in modern networks for traffic management and network security. Existing payload-based and statistical methods of application identification cannot deliver high performance and accurate identification at the same time. We propose an application identification framework that classifies traffic at the aggregate-flow level by leveraging an aggregate-flow cache. We describe a traffic classifier designed on this framework that improves the throughput of payload-based identification methods, and we further optimize it with an efficient design of the aggregate-flow cache. The cache employs a frequency-based, recency-aware replacement algorithm derived from an analysis of the temporal locality of aggregate flows. Experiments on real-world traces show that our traffic classifier with the aggregate-flow cache can reduce the workload of the backend identification engine by up to 95%. The proposed replacement algorithm outperforms well-known replacement algorithms, achieving 90% of the optimal performance while using only 15% of the memory. Using our classifier design, the throughput of a payload-based identification system, L7-filter [1], increases by up to 5.1 times.
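As an illustration of what a frequency-based, recency-aware replacement policy can look like, here is a toy aggregate-flow cache in Python. The exponential-decay scoring rule is a generic stand-in, not the paper's exact algorithm; a cache hit returns the cached application label so the packet can bypass the backend payload-inspection engine entirely.

```python
# Toy aggregate-flow cache with frequency-based, recency-aware replacement.
# Keys could be, e.g., dst-IP/port aggregates mapped to an application label.
import time

class AggFlowCache:
    def __init__(self, capacity, half_life_s=60.0):
        self.capacity, self.half_life = capacity, half_life_s
        self.store = {}  # key -> (app_label, freq, last_seen)

    def _score(self, freq, last_seen, now):
        # Frequency discounted by recency: old hits count exponentially less.
        return freq * 0.5 ** ((now - last_seen) / self.half_life)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None  # miss -> send the packet to the backend DPI engine
        label, freq, _ = entry
        self.store[key] = (label, freq + 1, time.monotonic())
        return label  # hit -> skip payload inspection

    def put(self, key, label):
        now = time.monotonic()
        if len(self.store) >= self.capacity and key not in self.store:
            # Evict the entry with the lowest decayed-frequency score.
            victim = min(self.store, key=lambda k: self._score(
                self.store[k][1], self.store[k][2], now))
            del self.store[victim]
        self.store[key] = (label, 1, now)
```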
With the proliferation of mobile computing devices, the demand for continuous network connectivity regardless of physical location has spurred interest in mobile ad hoc networks. Since the Transmission Control Protocol (TCP) is the standard transport protocol of the Internet, any wireless network offering Internet service needs to be compatible with TCP. TCP is tuned to perform well in traditional wired networks, where packet losses occur mostly because of congestion. TCP connections in ad hoc mobile networks, however, are plagued by problems such as high bit error rates, frequent route changes, multipath routing, and temporary network partitions. TCP throughput over such connections is unsatisfactory because TCP misinterprets packet loss or delay as congestion and invokes its congestion control and avoidance algorithms. In this work, we study the performance of TCP in ad hoc mobile networks under high bit error rates (BER) and mobility. A simulation model is implemented and experiments are performed using the Network Simulator 2 (NS2).
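A back-of-the-envelope calculation shows why non-congestion loss is so damaging: convert a bit error rate into a packet loss probability and plug it into the well-known Mathis et al. throughput approximation, throughput ≤ (MSS/RTT)·sqrt(3/2)/sqrt(p). This is a textbook estimate, not output from the paper's NS2 experiments, yet it already predicts an order-of-magnitude throughput drop as BER grows.

```python
# Textbook model of TCP under random (non-congestion) loss:
#   p = 1 - (1 - BER)^(8 * packet_bytes)         packet loss probability
#   throughput <= (MSS/RTT) * sqrt(3/2) / sqrt(p)  (Mathis et al. estimate)
from math import sqrt

def packet_loss_prob(ber, packet_bytes=1500):
    return 1.0 - (1.0 - ber) ** (8 * packet_bytes)

def mathis_throughput_bps(ber, rtt_s=0.1, mss_bytes=1460):
    p = packet_loss_prob(ber)
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(p)

# Illustrative RTT/MSS values; each 10x increase in BER roughly
# costs a 3x drop in achievable throughput (since p scales ~linearly).
for ber in (1e-7, 1e-6, 1e-5):
    print(f"BER={ber:.0e}  p={packet_loss_prob(ber):.4f}  "
          f"~{mathis_throughput_bps(ber) / 1e6:.2f} Mbit/s")
```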
Despite the growing popularity of video streaming over the Internet, problems such as rebuffering and high startup latency continue to plague users. In this paper, we present an end-to-end characterization of Yahoo's video streaming service, analyzing over 500 million video chunks downloaded over a two-week period. We gain unique visibility into the causes of performance degradation by instrumenting both the CDN server and the client player at the chunk level, while also collecting frequent snapshots of TCP variables from the server's network stack. We uncover a range of performance issues, including an asynchronous disk-read timer and cache misses at the server, high latency and latency variability in the network, and buffering delays and dropped frames at the client. Looking across chunks in the same session, or destined to the same IP prefix, we see that some performance problems are relatively persistent, depending on the video's popularity, the distance between the client and server, and the client's operating system, browser, and Flash runtime.
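The server-side half of such instrumentation can be approximated with Linux's TCP_INFO socket option, which exposes the kernel's per-connection state (smoothed RTT, congestion window, retransmissions). The sketch below is Linux-specific and illustrative, not the paper's collection pipeline; the struct offsets follow linux/tcp.h and should be verified against your kernel headers.

```python
# Minimal sketch of chunk-level TCP-state snapshotting on the serving socket
# via Linux's TCP_INFO. Field offsets follow linux/tcp.h's struct tcp_info.
import socket
import struct

def tcp_info_snapshot(sock):
    fmt = "B" * 8 + "I" * 24            # leading u8 fields, then u32 fields
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO,
                          struct.calcsize(fmt))
    f = struct.unpack(fmt, raw)
    return {
        "lost":      f[8 + 6],          # tcpi_lost
        "retrans":   f[8 + 7],          # tcpi_retrans
        "rtt_us":    f[8 + 15],         # tcpi_rtt (smoothed, microseconds)
        "rttvar_us": f[8 + 16],         # tcpi_rttvar
        "cwnd_pkts": f[8 + 18],         # tcpi_snd_cwnd
    }

# Usage: sample once per chunk (or on a timer) while the chunk is served.
sock = socket.create_connection(("example.com", 80))
print(tcp_info_snapshot(sock))
```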