Understanding Model Drift in a Large Cellular Network

Published by: Shinan Liu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

Operational networks increasingly use machine learning models for a variety of tasks, including detecting anomalies, inferring application performance, and forecasting demand. Accurate models are important, yet accuracy can degrade over time due to concept drift, whereby either the characteristics of the data change over time (data drift) or the relationship between the features and the target predictor changes over time (model drift). Drift is important to detect because changes in the properties of the underlying data or in their relationship to the target prediction can require model retraining, which can be time-consuming and expensive. Concept drift occurs in operational networks for a variety of reasons, ranging from software upgrades to seasonality to changes in user behavior. Yet, despite the prevalence of drift in networks, its extent and effects on prediction accuracy have not been extensively studied. This paper presents an initial exploration of concept drift in a large cellular network in the United States, for a major metropolitan area, in the context of demand forecasting. We find that concept drift arises largely due to data drift, and it appears across different key performance indicators (KPIs), models, training set sizes, and time intervals. We identify the sources of concept drift for the particular problem of forecasting downlink volume. Weekly and seasonal patterns introduce both high- and low-frequency model drift, while disasters and upgrades result in sudden drift due to exogenous shocks. Regions with high population density, lower traffic volumes, and higher speeds also tend to correlate with more concept drift. The features that contribute most significantly to concept drift are User Equipment (UE) downlink packets, UE uplink packets, and Real-time Transport Protocol (RTP) total received packets.
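As a rough illustration of the kind of data-drift check that motivates this line of work, the sketch below compares the distribution of a single KPI feature in a reference (training) window against a recent window using a two-sample Kolmogorov-Smirnov test. This is a minimal, generic example, not the detection method used in the paper; the KPI values, window lengths, and significance threshold are all synthetic assumptions.

```python
# Minimal sketch of one common data-drift check: compare the distribution of a
# KPI feature in a reference (training) window against a recent window with a
# two-sample Kolmogorov-Smirnov test. Illustration only, not the paper's method;
# the KPI samples below are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical hourly samples of a KPI such as UE downlink packets.
reference_window = rng.normal(loc=100.0, scale=10.0, size=24 * 28)  # four weeks of training data
recent_window = rng.normal(loc=115.0, scale=12.0, size=24 * 7)      # most recent week (shifted distribution)

statistic, p_value = ks_2samp(reference_window, recent_window)

ALPHA = 0.01  # significance threshold (assumption; tune per KPI)
if p_value < ALPHA:
    print(f"Data drift suspected (KS={statistic:.3f}, p={p_value:.2e}); consider retraining.")
else:
    print(f"No significant drift detected (KS={statistic:.3f}, p={p_value:.2e}).")
```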

Read also

Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements. This includes user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters and not to survey all existing solutions (which is virtually impossible due to the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks that connect geographically dispersed datacenters, which have been receiving increasing attention recently and pose interesting and novel research problems.
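To make one of the surveyed mechanisms concrete, the toy sketch below implements strict-priority scheduling over a few hypothetical traffic classes (interactive, deadline, background). It only illustrates the basic idea, not a specific scheme from the tutorial; real datacenter schedulers must also handle fairness, starvation, and deadline awareness.

```python
# Toy sketch of strict-priority packet scheduling: always transmit the pending
# packet from the highest-priority class. Class names and sizes are hypothetical.
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class Packet:
    priority: int                        # lower value = higher priority
    seq: int                             # tie-breaker to keep FIFO order within a class
    size_bytes: int = field(compare=False)
    flow: str = field(compare=False)

queue, seq = [], count()
heapq.heappush(queue, Packet(2, next(seq), 9000, "background"))   # long-running bulk transfer
heapq.heappush(queue, Packet(0, next(seq), 1500, "interactive"))  # user-facing, latency-sensitive
heapq.heappush(queue, Packet(1, next(seq), 1500, "deadline"))     # deadline-constrained flow

while queue:
    pkt = heapq.heappop(queue)
    print(f"transmit {pkt.flow} packet ({pkt.size_bytes} B)")
```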
The growing size of data center and HPC networks poses unprecedented requirements on the scalability of simulation infrastructure. The ability to simulate such large-scale interconnects on a simple PC would facilitate research efforts. Unfortunately, as we first show in this work, existing shared-memory packet-level simulators do not scale to the sizes of the largest networks considered today. We then present a feasibility analysis and a set of enhancements that enable a simple packet-level htsim simulator to scale to unprecedented simulation sizes on a single PC. Our code is available online and can be used to design novel schemes in the coming era of omnipresent data centers and HPC clusters.
In this paper, we propose a transceiver architecture for a full-duplex (FD) eNodeB (eNB) and an FD user equipment (UE). For FD communication, i.e., simultaneous in-band uplink and downlink operation, the same subcarriers can be allocated to a UE in both uplink and downlink. Hence, contrary to traditional LTE, we propose using single-carrier frequency-division multiple access (SC-FDMA) for the downlink along with the conventional method of using it for the uplink. The use of multiple antennas at the eNB and singular value decomposition (SVD) in the downlink allows multiple users (MU) to operate on the same set of subcarriers. In the uplink, a successive interference cancellation with optimal ordering (SSIC-OO) algorithm is used to decouple the signals of UEs operating on the same set of subcarriers. A smart antenna approach is adopted that prevents interference, in the downlink of a UE, from the uplink signals of other UEs sharing the same subcarriers. The approach uses multiple antennas at the UEs to form directed beams towards the eNB and nulls towards other UEs. The proposed architecture results in a significant improvement in the overall spectral efficiency per cell of the cellular network.
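The SVD step mentioned above can be illustrated in isolation: decomposing the MIMO channel as H = U S V^H, precoding with V, and combining with U^H turns the link into parallel, decoupled streams. The sketch below is a generic single-link numpy example with a synthetic random channel, not the paper's full multi-user transceiver; the antenna counts, symbols, and noise level are assumptions.

```python
# Minimal numpy sketch of SVD-based channel decomposition: precode with the right
# singular vectors and combine with the left ones so each stream sees only its own
# singular value. Synthetic channel and symbols; not the paper's MU design.
import numpy as np

rng = np.random.default_rng(1)
NT, NR = 4, 4  # transmit / receive antennas (assumed)

# Random flat-fading MIMO channel and its SVD: H = U @ diag(s) @ Vh.
H = (rng.standard_normal((NR, NT)) + 1j * rng.standard_normal((NR, NT))) / np.sqrt(2)
U, s, Vh = np.linalg.svd(H)

x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=NT) / np.sqrt(2)  # QPSK symbols

tx = Vh.conj().T @ x                                               # precode with V
noise = 0.01 * (rng.standard_normal(NR) + 1j * rng.standard_normal(NR))
y = H @ tx + noise                                                 # propagate over the channel
r = U.conj().T @ y                                                 # combine with U^H

x_hat = r / s                                                      # per-stream scaling by singular values
print(np.round(x_hat - x, 3))                                      # residual error is just the noise
```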
This paper presents the design and implementation of a signaling splitting scheme in a hyper-cellular network on a software defined radio platform. The hyper-cellular network is a novel architecture for future mobile communication systems in which signaling and data are decoupled at the air interface to mitigate signaling overhead and allow energy-efficient operation of base stations. On an open source software defined radio platform, OpenBTS, we investigate the feasibility of signaling splitting for the GSM protocol and implement a novel system that demonstrates the proposed concept. Standard GSM handsets can camp on the network with the help of the signaling base station, and a data base station is appointed to handle phone calls on demand. Our work initiates a systematic approach to studying the hyper-cellular concept in a real wireless environment with both software and hardware implementations.
Underwater wireless optical communication is an emerging solution to the expanding demand for broadband links in oceans and seas. In this paper, a cellular underwater wireless optical code-division multiple-access (UW-OCDMA) network is proposed to provide broadband links for commercial and military applications. Optical orthogonal codes (OOC) are employed as signature codes of underwater mobile users. Fundamental aspects of the network, such as its backhaul architecture, potential applications, and design challenges, are presented. In particular, the proposed network is used as the infrastructure of centralized, decentralized, and relay-assisted underwater sensor networks for high-speed real-time monitoring. Furthermore, a promising underwater localization and positioning scheme based on this cellular network is presented. Finally, probable design challenges such as cell-edge coverage, blockage avoidance, power control, and increasing the network capacity are addressed.
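The property that makes optical orthogonal codes usable as CDMA signature codes is their bounded periodic auto- and cross-correlation. The short sketch below checks that property for a small (13, 3, 1) OOC-style pair of codewords chosen purely for illustration; the codewords are not taken from the paper.

```python
# Check the correlation property of a small pair of OOC-style codewords:
# auto-correlation sidelobes and cross-correlation should stay at or below 1.
# The codewords are an illustrative (13, 3, 1) example, not from the paper.
import numpy as np

def periodic_correlation(a, b):
    """Periodic correlation of two equal-length 0/1 sequences for every cyclic shift."""
    a, b = np.asarray(a), np.asarray(b)
    return np.array([np.sum(a * np.roll(b, k)) for k in range(len(a))])

c1 = np.zeros(13, dtype=int); c1[[0, 1, 4]] = 1   # pulses at positions 0, 1, 4
c2 = np.zeros(13, dtype=int); c2[[0, 2, 7]] = 1   # pulses at positions 0, 2, 7

auto = periodic_correlation(c1, c1)
cross = periodic_correlation(c1, c2)
print("auto-correlation sidelobes :", auto[1:])   # at most 1 away from the zero shift
print("cross-correlation          :", cross)      # at most 1 for every shift
```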
