
In Situ Network and Application Performance Measurement on Android Devices and the Imperfections

Published by: Mohammad Ashraful Hoque
Publication date: 2020
Research field: Informatics Engineering
Language: English





Understanding network and application performance is essential for debugging, improving user experience, and comparing performance. At the same time, modern mobile systems are optimized for energy-efficient computation and communication, which may limit network and application performance. In recent years, several tools have emerged that analyze the network performance of mobile applications in situ with the help of the VPN service. However, there is limited understanding of how these measurement tools and system optimizations affect network and application performance. In this study, we first demonstrate that mobile systems employ energy-aware system hardware tuning, which affects application performance and network throughput. We next show that VPN-based application performance measurement tools, such as Lumen, PrivacyGuard, and Video Optimizer, result in ambiguous network performance measurements and degrade application performance. Our findings suggest that sound application and network performance measurement on Android devices requires a good understanding of the device, the network, the measurement tools, and the applications.
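For context on the measurement mechanism: tools such as Lumen, PrivacyGuard, and Video Optimizer build on Android's VpnService API, which lets an unprivileged app route all device traffic through a user-space TUN interface and inspect it packet by packet. The sketch below is a minimal illustration of that interposition point, not code from any of these tools; the class name and constants are ours, and the user-consent flow (VpnService.prepare) and packet forwarding are omitted.

```java
// Minimal sketch of VPN-based traffic interposition on Android.
// Illustrative only: real tools also forward packets via protected sockets.
import android.content.Intent;
import android.net.VpnService;
import android.os.ParcelFileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;

public class MeasurementVpnService extends VpnService {
    private ParcelFileDescriptor tun;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Route all IPv4 traffic through a TUN interface owned by this app.
        tun = new Builder()
                .addAddress("10.0.0.2", 32)   // virtual interface address
                .addRoute("0.0.0.0", 0)       // capture every IPv4 packet
                .setSession("measurement")
                .establish();
        new Thread(this::readLoop).start();
        return START_STICKY;
    }

    private void readLoop() {
        byte[] packet = new byte[32767];
        try (FileInputStream in = new FileInputStream(tun.getFileDescriptor())) {
            while (true) {
                int len = in.read(packet);
                if (len <= 0) continue;
                // A real tool timestamps the packet here, updates per-flow
                // counters, and forwards it onward. This extra user-space hop
                // is exactly where measurement overhead can skew RTT and
                // throughput readings and add CPU load on the device.
            }
        } catch (IOException ignored) { }
    }
}
```

Because every packet crosses the kernel/user boundary twice more than it would without the VPN, the measuring tool itself competes with the measured application for CPU time, which is one plausible source of the ambiguity reported above.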


Read also

The growing use of aerial user equipment (UE) in various applications requires ubiquitous and reliable connectivity for safe control and data exchange between these devices and ground stations. Key questions that need to be addressed when planning the deployment of aerial UEs are whether the cellular network is a suitable candidate for enabling such connectivity, and how the inclusion of aerial UEs might impact overall network efficiency. This paper provides an in-depth analysis of user- and network-level performance of a cellular network that serves both unmanned aerial vehicles (UAVs) and ground users in the downlink. Our results show that the favorable propagation conditions that UAVs enjoy due to their height often backfire on them, as the increased co-channel interference received from neighboring ground base stations (BSs) is not compensated by the improved signal strength. When compared with a ground user in an urban area, our analysis shows that a UAV flying at 100 meters can experience a throughput decrease by a factor of 10 and a coverage drop from 76% to 30%. Motivated by these findings, we develop UAV- and network-based solutions to enable adequate integration of UAVs into cellular networks. In particular, we show that optimal tilting of the UAV antenna can increase coverage and throughput from 23% to 89% and from 3.5 b/s/Hz to 5.8 b/s/Hz, respectively, outperforming ground UEs. Furthermore, our findings reveal that, depending on UAV altitude, aerial user performance can scale with network density better than that of a ground user. Finally, our results show that network densification and the use of micro cells limit UAV performance. While UAV usage has the potential to increase the area spectral efficiency (ASE) of cellular networks with a moderate number of cells, it might hamper the development of future ultra-dense networks.
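The coverage collapse described above can be read off the standard downlink SINR expression: altitude gives the UAV near line-of-sight paths not only to its serving cell but also to many interfering BSs, so the interference sum in the denominator grows faster than the serving-cell power in the numerator. The notation below is generic textbook notation, not taken from the paper:

```latex
\mathrm{SINR}_{\mathrm{UAV}}
  = \frac{P_0 \, g_0}{\sum_{i \neq 0} P_i \, g_i + \sigma^2}
```

where BS 0 is the serving cell, \(P_i\) and \(g_i\) are the transmit power and overall channel gain (path loss, antenna pattern, fading) of BS \(i\), and \(\sigma^2\) is the noise power. Tilting the UAV antenna, as proposed in the paper, effectively shapes the gains so that \(g_i\) toward interferers drops more than \(g_0\) toward the serving cell.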
We conduct, to our knowledge, the first measurement study of commercial 5G performance on smartphones by closely examining the 5G networks of three carriers (two mmWave carriers, one mid-band carrier) in three U.S. cities. We conduct extensive field tests on 5G performance in diverse urban environments. We systematically analyze the handoff mechanisms in 5G and their impact on network performance. We explore the feasibility of using location, and possibly other environmental information, to predict network performance. We also study app performance (web browsing and HTTP download) over 5G. Our study consumes more than 15 TB of cellular data. Conducted when 5G had just made its debut, it provides a baseline for studying how 5G performance evolves, and identifies key research directions for improving 5G users' experience in a cross-layer manner. We have released the data collected in our study (referred to as 5Gophers) at https://fivegophers.umn.edu/www20.
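As an illustration of the app-level methodology (HTTP download), a throughput probe can be as simple as timing a bulk transfer at the application layer. The sketch below is ours, with a placeholder URL; it is not the paper's measurement harness:

```java
// Illustrative HTTP download goodput probe; URL and buffer size are placeholders.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownloadProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/testfile.bin"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        byte[] buf = new byte[64 * 1024];
        long bytes = 0;
        long start = System.nanoTime();
        try (InputStream in = conn.getInputStream()) {
            int n;
            while ((n = in.read(buf)) != -1) {
                bytes += n;
            }
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        // This yields application-layer goodput only; radio-state changes
        // and 5G<->4G handoffs below the socket are invisible at this layer.
        System.out.printf("Downloaded %d bytes in %.2f s: %.2f Mbit/s%n",
                bytes, seconds, bytes * 8 / seconds / 1e6);
    }
}
```

Correlating such app-layer samples with handoff events and location, as the study does, requires additional instrumentation beyond this probe.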
High-performance computing (HPC) researchers have long envisioned scenarios where application workflows could be improved through the use of programmable processing elements embedded in the network fabric. Recently, vendors have introduced programmable Smart Network Interface Cards (SmartNICs) that enable computations to be offloaded to the edge of the network. There is great interest in both the HPC and high-performance data analytics communities in understanding the roles these devices may play in the data paths of upcoming systems. This paper focuses on characterizing both the networking and computing aspects of NVIDIA's new BlueField-2 SmartNIC when used in an Ethernet environment. For the networking evaluation, we conducted multiple transfer experiments between processors located at the host, the SmartNIC, and a remote host. These tests illuminate how much processing headroom is available on the SmartNIC during transfers. For the computing evaluation, we used the stress-ng benchmark to compare the BlueField-2 to other servers and place realistic bounds on the types of offload operations that are appropriate for the hardware. Our findings indicate that while the BlueField-2 provides a flexible means of processing data at the network's edge, great care must be taken not to overwhelm the hardware. While the host can easily saturate the network link, the SmartNIC's embedded processors may not have enough computing resources to sustain more than half the expected bandwidth when using kernel-space packet processing. From a computational perspective, encryption operations, memory operations under contention, and on-card IPC operations on the SmartNIC perform significantly better than the general-purpose servers used for comparison in our experiments. Therefore, applications that mainly focus on these operations may be good candidates for offloading to the SmartNIC.
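A minimal version of the transfer experiments described (bulk data pushed between the host, the SmartNIC's ARM cores, and a remote host) can be sketched as a plain TCP sender/receiver pair. The roles, port, and transfer size below are illustrative assumptions, not the paper's actual harness:

```java
// Sketch of a point-to-point bulk-transfer test of the kind used to probe
// host <-> SmartNIC bandwidth; port, peer address, and size are placeholders.
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BulkSend {
    public static void main(String[] args) throws Exception {
        boolean server = args.length > 0 && args[0].equals("server");
        int port = 5000;            // placeholder port
        long total = 1L << 30;      // 1 GiB per run
        byte[] buf = new byte[1 << 20];

        if (server) {
            try (ServerSocket ss = new ServerSocket(port);
                 Socket s = ss.accept()) {
                // Drain the stream; a real harness would also sample CPU load
                // on the SmartNIC's cores to quantify processing headroom.
                while (s.getInputStream().read(buf) != -1) { }
            }
        } else {
            try (Socket s = new Socket(args[1], port)) { // args[1] = peer host
                OutputStream out = s.getOutputStream();
                long start = System.nanoTime(), sent = 0;
                while (sent < total) { out.write(buf); sent += buf.length; }
                out.flush();
                double sec = (System.nanoTime() - start) / 1e9;
                System.out.printf("%.2f Gbit/s%n", sent * 8 / sec / 1e9);
            }
        }
    }
}
```

Running the receiver on the SmartNIC's embedded processors versus on the host is what exposes the kernel-space packet-processing bottleneck the paper reports.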
New challenges in Astronomy and Astrophysics (A&A) are urging the need for a large number of exceptionally computationally intensive simulations. Exascale (and beyond) computational facilities are mandatory to address the size of theoretical problems and of the data coming from the new generation of observational facilities in A&A. Currently, the High Performance Computing (HPC) sector is undergoing a profound phase of innovation, in which the primary challenge on the way to Exascale is power consumption. The goal of this work is to give some insight into the performance and energy footprint of contemporary architectures for a real astrophysical application in an HPC context. We use a state-of-the-art N-body application that we re-engineered and optimized to fully exploit the heterogeneous underlying hardware. We quantitatively evaluate the impact of computation on energy consumption when running on four different platforms. Two of them represent current HPC systems (Intel-based and equipped with NVIDIA GPUs), one is a micro-cluster based on ARM MPSoCs, and one is a prototype towards Exascale equipped with ARM MPSoCs tightly coupled with FPGAs. We investigate the behavior of the different devices, where the high-end GPUs excel in time-to-solution while MPSoC-FPGA systems outperform GPUs in power consumption. Our experience reveals that considering FPGAs for computationally intensive applications seems very promising, as their performance is improving to meet the requirements of scientific applications. This work can be a reference for the development of future platforms for astrophysics applications where computationally intensive calculations are required.
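The computational core of such direct N-body codes is the pairwise acceleration evaluation, which costs O(N^2) work per timestep and is precisely what makes GPU, MPSoC, and FPGA accelerators attractive. The softened form below is the standard textbook expression, not reproduced from the paper:

```latex
\mathbf{a}_i = G \sum_{j \neq i}^{N} m_j \,
  \frac{\mathbf{r}_j - \mathbf{r}_i}
       {\left( \lVert \mathbf{r}_j - \mathbf{r}_i \rVert^{2} + \varepsilon^{2} \right)^{3/2}}
```

where \(m_j\) and \(\mathbf{r}_j\) are the mass and position of body \(j\), and the softening length \(\varepsilon\) keeps the force finite during close encounters. Energy-to-solution then depends on how efficiently each platform streams these N^2 interactions per timestep.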