A critical task in 5G networks with heterogeneous services is spectrum slicing of the shared radio resources, through which each service receives performance guarantees. In this paper, we consider a setup in which a Base Station (BS) must serve two types of traffic in the downlink: enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC). Two resource allocation strategies are considered, non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA). A framework for power minimization is presented in which the BS knows the channel state information (CSI) of the eMBB users only. Nevertheless, due to the resource sharing, it is shown that this knowledge can also be used to the benefit of the URLLC users. The numerical results show that NOMA leads to lower power consumption than OMA for every simulation parameter under test.
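To make the OMA/NOMA power comparison concrete, the following is a minimal numeric sketch, not the paper's power-minimization framework: it assumes a two-user downlink with unit noise power, fixed rate targets, and perfect CSI for both users (whereas the paper assumes CSI of the eMBB users only). All gains and targets are illustrative.

```python
# Hedged sketch: transmit power needed by OMA (time sharing) vs. NOMA
# (superposition coding with SIC) to hit fixed per-user rate targets.
import numpy as np

g_embb, g_urllc = 1.0, 0.2      # channel gains (URLLC user is weaker)
r_embb, r_urllc = 2.0, 0.5      # rate targets in bits/s/Hz

# OMA: each user gets a fraction of the slot at full power;
# average power is weighted by the time-sharing fraction alpha
alpha = 0.5
p_oma = (alpha * (2 ** (r_embb / alpha) - 1) / g_embb
         + (1 - alpha) * (2 ** (r_urllc / (1 - alpha)) - 1) / g_urllc)

# NOMA: the strong (eMBB) user cancels the URLLC signal via SIC;
# the weak (URLLC) user decodes treating the eMBB signal as noise
p_embb = (2 ** r_embb - 1) / g_embb
p_urllc = (2 ** r_urllc - 1) * (1 + p_embb * g_urllc) / g_urllc
p_noma = p_embb + p_urllc

print(f"OMA power: {p_oma:.2f}   NOMA power: {p_noma:.2f}")
```

With these illustrative numbers the NOMA total is noticeably lower, in line with the trend reported above.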
Wireless applications that use high-reliability low-latency links depend critically on the capability of the system to predict link quality. This dependence is especially acute at the high carrier frequencies used by mmWave and THz systems, where the links are susceptible to blockages. Predicting blockages with high reliability requires a large number of data samples to train effective machine learning models. With the aim of mitigating data requirements, we introduce a framework based on meta-learning, whereby data from distinct deployments are leveraged to optimize a shared initialization that decreases the dataset size necessary for any new deployment. Predictors of two different events are studied: (1) at least one blockage occurs in a time window, and (2) the link is blocked for the entire time window. The results show that an RNN-based predictor trained using meta-learning can predict blockages after observing fewer samples than predictors trained using standard methods.
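As an illustration of the shared-initialization idea, here is a Reptile-style meta-learning sketch in PyTorch for a GRU-based blockage predictor. The model size, the toy task generator, and all hyperparameters are assumptions made for illustration; they do not reproduce the paper's procedure or data.

```python
# Hedged sketch: meta-learn a shared initialization across deployments
# (tasks), so that a new deployment needs only a few adaptation steps.
import copy
import torch
import torch.nn as nn

class BlockagePredictor(nn.Module):
    def __init__(self, in_dim=1, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # logit: P(blockage in next window)

    def forward(self, x):                  # x: (batch, time, in_dim)
        h, _ = self.rnn(x)
        return self.head(h[:, -1])         # use the last hidden state

def sample_task_batch(n=64, T=50):
    """Stand-in for one deployment's data: toy traces and labels."""
    x = torch.randn(n, T, 1)
    y = (x.mean(dim=1) < 0).float()        # illustrative blockage label
    return x, y

meta_model = BlockagePredictor()
meta_lr, inner_lr, inner_steps = 0.1, 1e-2, 5
loss_fn = nn.BCEWithLogitsLoss()

for meta_iter in range(100):               # outer loop over deployments
    task_model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):           # few-shot adaptation on the task
        x, y = sample_task_batch()
        opt.zero_grad()
        loss_fn(task_model(x), y).backward()
        opt.step()
    with torch.no_grad():                  # Reptile outer update: move the
        for p_meta, p_task in zip(meta_model.parameters(),
                                  task_model.parameters()):
            p_meta += meta_lr * (p_task - p_meta)   # init toward adapted weights
```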
Wireless Virtual Reality (VR) and Augmented Reality (AR) will contribute to people increasingly working and socializing remotely. However, the VR/AR experience is very susceptible to various delays and timing discrepancies, which can lead to motion sickness and discomfort. This paper models and exploits the existence of multiple paths and redundancy to improve the timing performance of wireless VR communications. We consider Multiple Description Coding (MDC), a scheme in which the video stream is encoded into Q streams (Q = 2 in this paper), known as descriptors, and delivered independently over multiple paths. We also consider an alternating scheme that simply switches between the paths. We analyze the full distribution of two relevant metrics: the packet delay and the Peak Age of Information (PAoI), which measures the freshness of the information at the receiver. The results show interesting trade-offs between picture quality, frame rate, and latency: full duplication results in fewer lost frames, but a higher latency than schemes with less redundancy. Even the simple alternating scheme can outperform duplication in terms of PAoI, but MDC can exploit the independent decodability of the descriptors to deliver a basic version of the frames faster, while still obtaining the full-quality frames with a slightly higher delay.
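The per-frame delay trade-off among the three schemes can be sketched with a small Monte Carlo experiment. The i.i.d. exponential path delays below are an illustrative assumption, not the paper's channel model:

```python
# Hedged sketch: frame-delay comparison of duplication, alternating,
# and MDC over two independent paths with random delays.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
d1 = rng.exponential(1.0, n)   # delay on path 1
d2 = rng.exponential(1.0, n)   # delay on path 2

dup = np.minimum(d1, d2)                        # duplication: first copy decodes
alt = np.where(np.arange(n) % 2 == 0, d1, d2)   # alternating: one path per frame
mdc_basic = np.minimum(d1, d2)                  # MDC: one descriptor -> basic quality
mdc_full = np.maximum(d1, d2)                   # both descriptors -> full quality

for name, d in [("duplication", dup), ("alternating", alt),
                ("MDC basic", mdc_basic), ("MDC full", mdc_full)]:
    print(f"{name:12s} mean delay: {np.mean(d):.3f}")
```

The sketch shows the qualitative pattern described above: MDC delivers a basic frame as fast as duplication, while the full-quality frame arrives with the slower of the two descriptors.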
We consider globally optimal precoder design for rate splitting multiple access in Gaussian multiple-input single-output downlink channels with respect to weighted sum rate and energy efficiency maximization. The proposed algorithm solves an instance of the joint multicast and unicast beamforming problem and includes multicast- and unicast-only beamforming as special cases. Numerical results show that it outperforms state-of-the-art algorithms in terms of numerical stability and converges almost twice as fast.
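For reference, a standard one-layer rate-splitting formulation of the weighted sum rate problem reads as follows; the notation is assumed here for illustration and is not taken from the paper. User k has channel \(\mathbf{h}_k\) and weight \(u_k\), the common precoder \(\mathbf{w}_c\) carries the common-rate shares \(c_k\), and \(P\) is the power budget:

```latex
\begin{aligned}
\max_{\mathbf{w}_c,\{\mathbf{w}_k\},\{c_k\ge 0\}} \quad
  & \sum_{k=1}^{K} u_k \left( c_k + R_k \right) \\
\text{s.t.} \quad
  & \sum_{k=1}^{K} c_k \le \min_{j} \log_2\!\left(
      1 + \frac{|\mathbf{h}_j^{H} \mathbf{w}_c|^2}
               {\sum_{i=1}^{K} |\mathbf{h}_j^{H} \mathbf{w}_i|^2 + \sigma^2}\right), \\
  & R_k = \log_2\!\left(
      1 + \frac{|\mathbf{h}_k^{H} \mathbf{w}_k|^2}
               {\sum_{i \ne k} |\mathbf{h}_k^{H} \mathbf{w}_i|^2 + \sigma^2}\right),
  \qquad
  \|\mathbf{w}_c\|^2 + \sum_{k=1}^{K} \|\mathbf{w}_k\|^2 \le P .
\end{aligned}
```

Setting \(\mathbf{w}_k = \mathbf{0}\) for all k recovers multicast-only beamforming, and setting \(\mathbf{w}_c = \mathbf{0}\) recovers unicast-only beamforming, which is consistent with the special cases mentioned above.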
IoT systems typically involve separate data collection and processing, and the former faces a scalability issue as the number of nodes increases. For some tasks, only the result of data fusion is needed; the whole process can then be realized efficiently by integrating data collection and fusion into one step via over-the-air computation (AirComp). Its shortcoming, however, is signal distortion when the channel gains of the nodes differ, which transmission power control alone cannot resolve under deep fading. To address this issue, in this paper we propose a multi-slot over-the-air computation (MS-AirComp) framework for sum estimation in fading channels. Compared with conventional data collection (one slot per node) and AirComp (one slot for all nodes), MS-AirComp is an intermediate policy that exploits multiple slots to improve channel gains and thereby facilitate power control. Specifically, the transmissions are distributed over multiple slots and a channel-gain threshold is set for distributed transmission scheduling. Each node transmits its signal only once: in the first slot in which its channel gain exceeds the threshold, or in the last slot if its channel gain remains below the threshold throughout. Theoretical analysis gives a closed-form expression for the computation error in fading channels, based on which the optimal parameters are found. Since the computation error tends to be reduced at the cost of higher transmission power, a method is suggested to control the increase in transmission power. Simulations confirm that the proposed method effectively reduces the computation error compared with state-of-the-art methods.
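The scheduling rule itself is simple enough to sketch directly. The following simulation is a hedged illustration of the threshold policy described above; the Rayleigh fading model, power budget, and all parameters are assumptions, not the paper's exact setup:

```python
# Hedged sketch of the MS-AirComp rule: each node transmits once, in the
# first of S slots where its fading gain exceeds a threshold, or in the
# last slot otherwise; the receiver sums the per-slot superpositions.
import numpy as np

rng = np.random.default_rng(1)
N, S = 20, 4                             # nodes, slots
threshold, p_max, noise_std = 1.0, 1.0, 0.1
x = rng.uniform(0.0, 1.0, N)             # readings to be summed over the air

y = np.zeros(S)                          # superposed signal in each slot
for i in range(N):
    for s in range(S):
        g = rng.rayleigh(1.0)            # fresh fading gain in slot s
        if g > threshold or s == S - 1:
            # channel inversion clipped to the power budget; only the
            # forced last-slot transmissions risk residual misalignment
            amp = min(1.0 / g, np.sqrt(p_max))
            y[s] += g * amp * x[i]
            break
y += rng.normal(0.0, noise_std, S)       # receiver noise in each slot

estimate = y.sum()                       # fuse the per-slot partial sums
print(f"true sum {x.sum():.3f}, MS-AirComp estimate {estimate:.3f}")
```

Waiting for a good slot makes full channel inversion feasible for most nodes within the power budget, which is exactly the distortion-reduction mechanism the framework exploits.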
Wireless connectivity creates a computing paradigm that merges communication and inference. A basic operation in this paradigm is one where a device offloads classification tasks to the edge servers. We term this remote classification, with the potential to enable intelligent applications. Remote classification is challenged by the finite and variable data rate of the wireless channel, which affects the capability to transfer high-dimensional features and thus limits the classification resolution. We introduce a set of metrics under the name of classification capacity, defined as the maximum number of classes that can be discerned over a given communication channel while meeting a target classification error probability. The objective is to choose a subset of classes from a library that offers satisfactory performance over a given channel. We treat two cases of subset selection. In the first, a device selects the subset by pruning the class library until arriving at a subset that meets the target error probability while maximizing the classification capacity. Adopting a subspace data model, we prove the equivalence of classification capacity maximization to Grassmannian packing. The results show that the classification capacity grows exponentially with the instantaneous communication rate, and super-exponentially with the dimensions of each data cluster. This also holds for the ergodic and outage capacities under fading if the instantaneous rate is replaced with an average rate and a fixed rate, respectively. In the second case, a device has a preferred class subset for every communication rate, modeled as an instance of uniformly sampling the library. Without class selection, the classification capacity and its ergodic and outage counterparts are proved to scale only linearly with their corresponding communication rates, in contrast to the exponential growth in the first case.
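In symbols, the definition above can be written as follows, with notation assumed here for illustration: for instantaneous communication rate \(R\), classification error probability \(P_e(N, R)\) over \(N\) classes, and target \(\epsilon\),

```latex
N^{*}(R, \epsilon) \;=\; \max\bigl\{\, N \;:\; P_e(N, R) \le \epsilon \,\bigr\},
```

and the scaling reported above corresponds to \(N^{*}(R,\epsilon)\) growing as \(2^{\kappa R}\) for some constant \(\kappa > 0\) when the class subset is optimized, versus only linearly in \(R\) under uniform sampling of the library.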
The classical definition of network delay has recently been augmented by the concept of information timeliness, or Age of Information (AoI). We analyze the network delay and the AoI in a multi-hop satellite network that relays status updates from satellite 1, which receives uplink traffic from ground devices, to satellite K, using K-2 intermediate satellite nodes. The last node, K, is the closest satellite with connectivity to a ground station. The satellite formation is modeled as a network of M/M/1 queues connected in series. The scenario is then generalized to the case in which all satellites receive uplink traffic from the ground while simultaneously relaying the packets from the previous nodes. The results show that the minimum average AoI is attained at a decreasing system utilization as the number of nodes increases. Furthermore, unloading the first nodes of the chain reduces the queueing time and therefore the average AoI. These findings provide insights for designing multi-hop satellite networks for latency-sensitive applications.
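Each hop of such a tandem builds on the classic average-AoI expression for a single M/M/1 FCFS queue, a known result quoted here for context rather than taken from this abstract. With arrival rate \(\lambda\), service rate \(\mu\), and utilization \(\rho = \lambda/\mu\),

```latex
\bar{\Delta} \;=\; \frac{1}{\mu}\left(1 + \frac{1}{\rho} + \frac{\rho^{2}}{1-\rho}\right),
```

which is minimized at \(\rho \approx 0.53\) for a single hop. The finding above is that, as hops are added, the utilization that minimizes the end-to-end average AoI shifts below this single-hop optimum.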
Dense constellations of Low Earth Orbit (LEO) small satellites are envisioned to make extensive use of the inter-satellite link (ISL). Within the same orbital plane, the inter-satellite distances are preserved and the links are rather stable. In contrast, the relative motion between planes makes the inter-plane ISL challenging. In a dense set-up, each spacecraft has several satellites in its coverage volume, but the duration of each of these links is short and the maximum number of active connections is limited by the hardware. We analyze the matching problem of connecting satellites through the inter-plane ISL for unicast transmissions. We present and evaluate two solutions to the matching problem for any number of orbital planes and up to two transceivers per satellite: a heuristic solution that aims to minimize the total cost, and a Markovian solution that maintains ongoing connections for as long as possible. The Markovian algorithm reduces the time needed to solve the matching by up to 1000x relative to the optimal solution and 10x relative to the heuristic solution, without compromising the total cost. Our model includes power adaptation and uses the network energy consumption as the exemplary cost in the evaluations, but any other QoS-oriented KPI can be used instead.
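As a hedged illustration of the heuristic flavor of solution, the sketch below greedily matches the cheapest feasible satellite pairs until the per-satellite transceiver limit is reached. The symmetric random cost matrix and the limits are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch: greedy minimum-cost matching with a per-node degree
# cap of two, mimicking the two-transceiver hardware constraint.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_sats, max_links = 8, 2                  # satellites in view, transceivers each
cost = rng.uniform(1.0, 10.0, (n_sats, n_sats))   # e.g. required TX power
cost = (cost + cost.T) / 2                # symmetric link cost

links_used = np.zeros(n_sats, dtype=int)
matching = []
# consider all candidate pairs, cheapest first
pairs = sorted(itertools.combinations(range(n_sats), 2),
               key=lambda p: cost[p])
for a, b in pairs:
    if links_used[a] < max_links and links_used[b] < max_links:
        matching.append((a, b))           # activate this inter-plane ISL
        links_used[a] += 1
        links_used[b] += 1

total = sum(cost[a, b] for a, b in matching)
print(f"{len(matching)} links, total cost {total:.2f}")
```

A Markovian variant in the spirit of the abstract would, at each matching epoch, keep still-feasible pairs from the previous epoch before greedily filling the remaining transceivers, trading a small cost increase for far fewer re-computations.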