Future wireless access networks need to support the diversified quality-of-service (QoS) metrics required by various types of Internet-of-Things (IoT) devices, e.g., age of information (AoI) for status-generating sources and ultra-low latency for safety information in vehicular networks. In this paper, a novel inner-state driven random access (ISDA) framework is proposed based on distributed policy learning, in particular the cross-entropy method. Conventional random access schemes, e.g., $p$-CSMA, assume stateless terminals and thus assign equal priorities to all of them. In ISDA, the inner state of each terminal is described by a time-varying state vector, and the transmission probabilities of terminals in the contention period are determined by their respective inner states. Neural networks are leveraged to approximate the mappings from inner states to transmission probabilities, and an iterative approach is adopted to improve these mappings in a distributed manner. Experimental results show that ISDA improves the QoS of heterogeneous terminals simultaneously compared to conventional CSMA schemes.
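The iterative policy improvement described above can be illustrated with a toy cross-entropy method (CEM) loop. The state dimension, the logistic policy form, and the placeholder QoS reward below are all illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def tx_prob(theta, states):
    """Map inner-state vectors to transmission probabilities via a
    logistic function of a linear score (assumed policy form)."""
    return 1.0 / (1.0 + np.exp(-states @ theta))

def reward(theta):
    """Placeholder QoS objective: prefer transmitting when the first
    state component (think: age of information) is large."""
    states = rng.normal(size=(64, 3))
    p = tx_prob(theta, states)
    return float(np.mean(p * states[:, 0]) - 0.1 * np.mean(p))

# Cross-entropy method: sample candidate policies, keep the elite
# fraction, refit the sampling distribution, repeat.
mu, sigma = np.zeros(3), np.ones(3)
for _ in range(30):
    thetas = rng.normal(mu, sigma, size=(50, 3))
    scores = np.array([reward(t) for t in thetas])
    elite = thetas[np.argsort(scores)[-10:]]          # top 20%
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print(mu)  # learned policy parameters
```

In the paper the mapping is a neural network rather than this linear score, and each terminal runs the improvement step distributively; the sample-evaluate-refit structure is the same.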
Multi-user multi-armed bandits have emerged as a good model for uncoordinated spectrum access problems. In this paper we consider the scenario where users cannot communicate with each other. In addition, the environment may appear differently to different users, i.e., the mean rewards observed by different users for the same channel may differ. Under this setup, we present a policy that achieves a regret of $O(\log T)$. This paper has been accepted at the Asilomar Conference on Signals, Systems, and Computers 2019.
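As background for the $O(\log T)$ guarantee, here is a minimal single-user UCB1 sketch, the classical index policy that multi-user bandit algorithms typically build on; the channels and their mean rewards are illustrative assumptions, and the paper's multi-user, no-communication policy is considerably more involved:

```python
import math
import random

random.seed(1)
means = [0.2, 0.5, 0.8]   # per-channel mean rewards (unknown to the user)
counts = [0] * 3
sums = [0.0] * 3

T = 5000
for t in range(1, T + 1):
    if t <= 3:
        arm = t - 1        # play each channel once to initialize
    else:
        # UCB index: empirical mean + exploration bonus
        arm = max(range(3), key=lambda a: sums[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    r = 1.0 if random.random() < means[arm] else 0.0  # Bernoulli reward
    counts[arm] += 1
    sums[arm] += r

print(counts)  # the best channel (index 2) gets the bulk of the plays
```

Each suboptimal channel is pulled only $O(\log T)$ times, which is what bounds the regret.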
Non-Orthogonal Multiple Access (NOMA) and caching are two proposed approaches to increase the capacity of future 5G wireless systems. Typically in NOMA systems, signals at the receiver are decoded using successive interference cancellation in order to achieve capacity in multi-user systems. The leveraging of caching in the physical layer to further improve on the benefits of NOMA is investigated, termed cache-aided NOMA. Specific attention is given to the caching cases where a user with weaker channel conditions possesses a cache of the information requested by a user with stronger channel conditions. The probability that any of the users is in outage for any of the rates required by this NOMA system, defined as the union-outage probability, is derived for the case of fixed power allocation, and the power allocation strategy that minimizes the union-outage probability is derived. Simulation results confirm the analytical results, which demonstrate the benefits of cache-aided NOMA in reducing the union-outage probability.
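The union-outage notion can be sketched by Monte Carlo simulation for a two-user downlink. The Rayleigh channel statistics, transmit SNR, power split, and target rates below are illustrative assumptions, and the cache-aided aspect is simplified to the weak user cancelling the strong user's (cached) signal before decoding its own:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
snr = 100.0            # transmit SNR (20 dB), assumed
a_w, a_s = 0.8, 0.2    # fixed power fractions for weak/strong users
R_w, R_s = 1.0, 1.0    # target rates in bits/s/Hz, assumed

g_w = rng.exponential(size=N)   # weak user's Rayleigh-faded power gain
g_s = rng.exponential(size=N)   # strong user's Rayleigh-faded power gain

# Weak user: with the strong user's message cached, it cancels that
# interference and decodes its own signal interference-free.
rate_w = np.log2(1 + a_w * snr * g_w)
# Strong user: SIC decodes the weak user's signal first, then its own.
rate_s_sic = np.log2(1 + a_w * snr * g_s / (a_s * snr * g_s + 1))
rate_s = np.log2(1 + a_s * snr * g_s)

# Union-outage: any user fails any of the rates it must support.
union_outage = float(np.mean(
    (rate_w < R_w) | (rate_s_sic < R_w) | (rate_s < R_s)))
print(round(union_outage, 3))
```

Sweeping `a_w` in such a simulation is one way to see how the power split trades the two users' outage events against each other, which is what the paper's optimal power allocation formalizes.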
Energy efficiency (EE) is a major issue in 5th-generation (5G) wireless communications, as the number of access user equipments (UEs) is exploding and more antennas must be equipped at each base station (BS). Prior EE studies focus on the energy consumption of a single, separate BS coverage area, or address efficiency through scheduling mechanisms or network coding, while ignoring other elements such as the energy consumption of the machine room and circuits. In this paper, to come closer to reality, we modify the traditional Cloud Radio Access Network (C-RAN) structure so that a sleeping mechanism with on-off selection can be deployed more easily in practice, and study the EE issue from a comprehensive view that takes these additional elements into consideration. We first analyze the sum capacity of one cluster in the modified structure, then formulate EE as a mathematical optimization problem and solve it with a tractable method based on the sleeping technique. Compared with prior works, the proposal performs better owing to its comprehensive vision and easier deployment.
This paper designs a cooperative activity detection framework for massive grant-free random access in sixth-generation (6G) cell-free wireless networks based on the covariance of the received signals at the access points (APs). In particular, multiple APs cooperatively detect device activity by exchanging only low-dimensional intermediate local information with their neighbors. The cooperative activity detection problem is non-smooth and its unknown variables are coupled with each other, so conventional approaches are inapplicable. Therefore, this paper proposes a covariance-based algorithm exploiting sparsity-promoting and similarity-promoting terms on the device state vectors among neighboring APs. An approximate splitting approach based on the proximal gradient method is proposed for solving the formulated problem. Simulation results show that the proposed algorithm is efficient for large-scale activity detection problems while requiring shorter pilot sequences than state-of-the-art algorithms to achieve the same system performance.
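The proximal gradient step at the heart of such methods can be sketched with ISTA on a toy sparse recovery problem: a smooth data-fit term handled by a gradient step, and a non-smooth $\ell_1$ sparsity-promoting penalty handled by soft-thresholding. The least-squares data fit, pilot matrix, and problem sizes below are illustrative stand-ins for the paper's covariance fitting term:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))                      # pilot matrix (assumed)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0     # sparse device activity
y = A @ x_true                                      # observations

lam = 0.1                                           # l1 penalty weight
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1/L for the smooth part

x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - y)                        # gradient of smooth term
    z = x - step * grad                             # gradient step
    # proximal step for lam*||x||_1: soft-thresholding
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(np.sort(np.argsort(x)[-5:]))  # estimated support (largest entries)
```

The paper's algorithm additionally splits the problem across APs and adds similarity-promoting coupling between neighbors, but each local update follows this gradient-then-prox pattern.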
Cloud applications are increasingly shifting from large monolithic services to large numbers of loosely-coupled, specialized microservices. Despite their advantages in terms of facilitating development, deployment, modularity, and isolation, microservices complicate resource management, as dependencies between them introduce backpressure effects and cascading QoS violations. We present Sinan, a data-driven cluster manager for interactive cloud microservices that is online and QoS-aware. Sinan leverages a set of scalable and validated machine learning models to determine the performance impact of dependencies between microservices, and allocates appropriate resources per tier in a way that preserves the end-to-end tail latency target. We evaluate Sinan both on dedicated local clusters and on large-scale deployments on Google Compute Engine (GCE) across representative end-to-end applications built with microservices, such as social networks and hotel reservation sites. We show that Sinan always meets QoS while also keeping cluster utilization high, in contrast to prior work, which leads to unpredictable performance or sacrifices resource efficiency. Furthermore, Sinan's techniques are explainable, meaning that cloud operators can gain insights from the ML models on how to better deploy and design their applications to reduce unpredictable performance.