
Technical Rate of Substitution of Spectrum in Future Mobile Broadband Provisioning

Posted by Yanpeng Yang
Publication date: 2015
Research field: Information Engineering
Paper language: English





Dense deployment of base stations (BSs) and multi-antenna techniques are considered key enablers for future mobile networks. Meanwhile, spectrum sharing techniques and the utilization of higher frequency bands make more bandwidth available. An important question for future system design is which of these elements is the most effective. In this paper, we introduce the concept of technical rate of substitution (TRS) from microeconomics and study the TRS of spectrum in terms of BS density and antenna number per BS. Numerical results show that the TRS grows with the user data rate requirement, suggesting that spectrum is the most effective means of provisioning extremely fast mobile broadband.
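To make the borrowed microeconomic concept concrete, the sketch below evaluates a TRS numerically as the ratio of marginal rate gains along a constant-rate contour. The user-rate model and every constant in it are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Minimal sketch (assumed toy rate model, not the paper's): average user
# rate as a function of bandwidth W [Hz], BS density lam [BS/km^2], and
# antennas per BS M, for a fixed user density U [users/km^2].
U = 100.0  # users per km^2 (assumed)

def user_rate(W, lam, M):
    """Toy model: each user gets a lam/U share of the spectrum and a
    spectral efficiency that grows logarithmically with M."""
    spectral_eff = np.log2(1.0 + M)      # bits/s/Hz (assumption)
    return W * (lam / U) * spectral_eff  # bits/s per user

def trs_spectrum_vs_density(W, lam, M, rel=1e-6):
    """TRS of spectrum in terms of BS density: BS density saved per extra
    Hz of spectrum at constant user rate, i.e. the ratio of marginal
    rates, estimated with central finite differences."""
    hW, hl = rel * W, rel * lam
    dR_dW = (user_rate(W + hW, lam, M) - user_rate(W - hW, lam, M)) / (2 * hW)
    dR_dlam = (user_rate(W, lam + hl, M) - user_rate(W, lam - hl, M)) / (2 * hl)
    return dR_dW / dR_dlam  # BS/km^2 per Hz

print(trs_spectrum_vs_density(W=20e6, lam=10.0, M=4))
```

For a rate function of this product form the TRS reduces analytically to lam/W, which provides a quick sanity check on the finite-difference estimate.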


Read also

In this paper, we study the trade-off between reliability and latency in machine-type communication (MTC) between a single transmitter and receiver over a Rayleigh fading channel. We assume that the transmitter does not know the channel conditions and therefore transmits at a fixed rate. The fixed-rate transmission is modeled as a two-state continuous-time Markov process, from which the optimum transmission rate is obtained. Moreover, we analyze the performance of different arrival traffic originating from the MTC device via its effective transmission rate. The arrival traffic is modeled as a Markovian process, namely a discrete-time Markov process, a fluid Markov process, or a Markov-modulated Poisson process, under delay-violation constraints. Using effective bandwidth and effective capacity theories, we evaluate the trade-off between reliability and latency, identify the QoS (quality of service) requirements, and derive lower and upper bounds on the effective capacity subject to channel memory decay-rate limits.
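As a concrete illustration of the effective capacity theory invoked above, the sketch below estimates the effective capacity of a Rayleigh block-fading channel by Monte Carlo. The bandwidth, block length, SNR, and QoS exponents are assumed values, not the paper's.

```python
import numpy as np

# Minimal sketch (assumed parameters): Monte Carlo estimate of the
# effective capacity of a Rayleigh block-fading channel,
#   EC(theta) = -(1/(theta*T)) * ln E[exp(-theta*T*C)],
# where C = B*log2(1 + snr*g) is the per-block service rate and
# g ~ Exp(1) is the Rayleigh channel power gain.
rng = np.random.default_rng(0)

def effective_capacity(theta, B=1e6, T=1e-3, snr=10.0, n=200_000):
    g = rng.exponential(scale=1.0, size=n)  # channel power gains
    c = B * np.log2(1.0 + snr * g)          # service rate [bit/s]
    return -np.log(np.mean(np.exp(-theta * T * c))) / (theta * T)

# A stricter QoS exponent theta (tighter delay-violation constraint)
# lowers the maximum supportable constant arrival rate.
for theta in (1e-5, 1e-4, 1e-3):
    print(theta, effective_capacity(theta))
```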
Tarun Mangla, 2021
Understanding and improving mobile broadband deployment is critical to bridging the digital divide and targeting future investments. Yet accurately mapping mobile coverage is challenging. In 2019, the Federal Communications Commission (FCC) released a report on the progress of mobile broadband deployment in the United States. This report received a significant amount of criticism, with claims that cellular coverage, mainly available through Long-Term Evolution (LTE), was over-reported in some areas, especially those that are rural and/or tribal [12]. We evaluate the validity of this criticism using a quantitative analysis of both the dataset on which the FCC based its report and a crowdsourced LTE coverage dataset. Our analysis focuses on the state of New Mexico, a region characterized by a diverse mix of demographics and geography and by poor broadband access. We then performed a controlled measurement campaign in northern New Mexico during May 2019. Our findings reveal significant disagreement between the crowdsourced dataset and the FCC dataset regarding the presence of LTE coverage in rural and tribal census blocks, with the FCC dataset reporting higher coverage than the crowdsourced dataset. Interestingly, both the FCC and the crowdsourced data report higher coverage than our on-the-ground measurements. Based on these findings, we discuss our recommendations for improved LTE coverage measurements, whose importance has only increased in the COVID-19 era of working and schooling from home, especially in rural and tribal areas.
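A minimal sketch of the kind of per-census-block dataset comparison described above; the block IDs and coverage labels are hypothetical, not drawn from either dataset.

```python
# Minimal sketch (hypothetical data): compare LTE coverage labels per
# census block across two sources, flagging blocks where the FCC data
# reports coverage that the crowdsourced data does not.
fcc = {"350010001": True, "350010002": True, "350010003": False}
crowd = {"350010001": True, "350010002": False, "350010003": False}

disagree = [b for b in fcc if fcc[b] != crowd.get(b)]
over_report = [b for b in disagree if fcc[b] and not crowd.get(b)]
print(f"{len(disagree)}/{len(fcc)} blocks disagree; "
      f"FCC-only coverage in blocks: {over_report}")
```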
The concept of fog computing is centered on providing computation resources at the edge of the network, thereby reducing latency and improving quality of service. However, it is still desirable to investigate how, and where at the edge of the network, computation capacity should be provisioned. To this end, we propose a hierarchical capacity provisioning scheme. In particular, we consider a two-tier network architecture consisting of shallow and deep cloudlets and explore the benefits of hierarchical capacity via queueing analysis. Moreover, we explore two network scenarios: one in which the network delay between the two tiers is negligible, and one in which the deep cloudlet is located deeper in the network so that the delay is significant. More importantly, we model the first scenario with bufferless shallow cloudlets and the second with finite-buffer shallow cloudlets, and formulate an optimization problem for each model. We use stochastic ordering to solve the optimization problem formulated for the first model and propose an upper-bound-based technique for the second. The performance of the proposed scheme is evaluated via simulations, in which we show the accuracy of the proposed upper-bound technique as well as of the queue length estimation approach, for both randomly generated input and real trace data.
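The sketch below shows one way such a two-tier model could be evaluated: a bufferless shallow cloudlet whose blocked jobs (Erlang-B) overflow to a deep cloudlet approximated as an M/M/1 queue behind a fixed network delay. This is an assumed toy model, not the paper's formulation; in particular, treating the overflow stream as Poisson is itself an approximation.

```python
# Minimal sketch (assumed toy model): mean job delay in a two-tier
# edge architecture. Jobs arrive at rate lam; a bufferless shallow
# cloudlet with c servers blocks a fraction of them (Erlang-B), and
# blocked jobs go to a deep cloudlet modeled as an M/M/1 queue reached
# over a fixed network delay d.
def erlang_b(c, a):
    """Blocking probability of c servers offered a Erlangs (recursive form)."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

def mean_delay(lam, mu_shallow, c, mu_deep, d):
    a = lam / mu_shallow                     # offered load in Erlangs
    p_block = erlang_b(c, a)
    lam_deep = lam * p_block                 # overflow rate to deep tier
    assert lam_deep < mu_deep, "deep cloudlet must be stable"
    t_shallow = 1.0 / mu_shallow             # bufferless: service time only
    t_deep = d + 1.0 / (mu_deep - lam_deep)  # network delay + M/M/1 sojourn
    return (1 - p_block) * t_shallow + p_block * t_deep

print(mean_delay(lam=8.0, mu_shallow=1.0, c=10, mu_deep=12.0, d=0.05))
```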
Predictably sharing the network is critical to achieving high utilization in the datacenter. Past work has focused on providing bandwidth to endpoints, but often we want to allocate resources among multi-node services. In this paper, we present Parley, which provides service-centric minimum bandwidth guarantees that can be composed hierarchically. Parley also supports service-centric weighted sharing of bandwidth in excess of these guarantees. Further, we show how to configure these policies so services can achieve low latencies even at high network load. We evaluate Parley on a multi-tiered oversubscribed network connecting 90 machines, each with a 10Gb/s network interface, and demonstrate that Parley is able to meet its goals.
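A minimal sketch of the guarantee-plus-weighted-excess policy described above, applied to a single link. The function and service names are hypothetical; this is not Parley's implementation.

```python
# Minimal sketch (hypothetical API): allocate link capacity among
# services by first meeting each service's minimum guarantee, then
# sharing the remaining capacity in proportion to per-service weights.
def allocate(capacity, guarantees, weights):
    assert sum(guarantees.values()) <= capacity, "guarantees not admissible"
    alloc = dict(guarantees)                 # step 1: meet minimums
    excess = capacity - sum(guarantees.values())
    total_w = sum(weights.values())
    for svc in alloc:                        # step 2: weighted excess share
        alloc[svc] += excess * weights[svc] / total_w
    return alloc

# Example: a 10 Gb/s link shared by two hypothetical services.
print(allocate(10e9, {"web": 2e9, "db": 3e9}, {"web": 1, "db": 3}))
```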
Sarabjot Singh, 2019
Wireless traffic attributable to machine learning (ML) inference workloads is increasing with the proliferation of applications and smart wireless devices leveraging ML inference. Owing to the limited compute capabilities of these edge devices, achieving high inference accuracy often requires coordination with a remote compute node or cloud over the wireless cellular network. The accuracy of this distributed inference is thus impacted by the communication rate and reliability offered by the cellular network. In this paper, an analytical framework is proposed to characterize inference accuracy as a function of cellular network design. Using the developed framework, it is shown that the cellular network should be provisioned with a minimum density of access points (APs) to guarantee a target inference accuracy, and that the inference accuracy achievable at asymptotically high AP density is limited by the air-interface bandwidth. Furthermore, the minimum accuracy required of edge inference to deliver a target inference accuracy is shown to be inversely proportional to the density of APs and the bandwidth.
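The sketch below illustrates the shape of this result under assumed monotone models: the achievable rate grows with AP density but saturates at a bandwidth-limited ceiling, accuracy is a monotone function of rate, and bisection finds the minimum AP density meeting a target accuracy. All models and constants are illustrative, not the paper's analysis.

```python
import math

# Minimal sketch (assumed saturating models): find the minimum AP
# density that meets a target distributed-inference accuracy.
B, S_MAX, D0, R0, A_MAX = 20e6, 2.0, 10.0, 5e6, 0.98  # illustrative

def rate(d):
    # Rate grows with AP density d but saturates at the ceiling B*S_MAX,
    # mimicking the bandwidth-limited regime at high density.
    return B * S_MAX * d / (d + D0)

def accuracy(r):
    # Assumed monotone accuracy-vs-rate curve, saturating at A_MAX.
    return A_MAX * (1.0 - math.exp(-r / R0))

def min_ap_density(target, lo=1e-3, hi=1e4, iters=60):
    if accuracy(rate(hi)) < target:
        return None  # unreachable: accuracy is bandwidth-limited
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if accuracy(rate(mid)) >= target else (mid, hi)
    return hi

print(min_ap_density(0.90))  # APs per km^2 (illustrative units)
```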