
Federated Echo State Learning for Minimizing Breaks in Presence in Wireless Virtual Reality Networks

Published by Mingzhe Chen
Publication date: 2018
Research field: Information engineering
Paper language: English





In this paper, the problem of enhancing the virtual reality (VR) experience for wireless users is investigated by minimizing the occurrence of breaks in presence (BIP) that can detach the users from their virtual world. To measure the BIP for wireless VR users, a novel model that jointly considers the VR application type, transmission delay, VR video quality, and the users' awareness of the virtual environment is proposed. In the developed model, the base stations (BSs) transmit VR videos to the wireless VR users over directional transmission links so as to provide high data rates and, thus, reduce the number of BIPs for each user. Since the body movements of a VR user may block its wireless link, the locations and orientations of VR users must also be considered when minimizing BIP. The BIP minimization problem is formulated as an optimization problem that jointly considers the predictions of the users' locations and orientations and their BS associations. To predict the orientations and locations of VR users, a distributed learning algorithm based on the machine learning framework of deep echo state networks (ESNs) is proposed. The proposed algorithm uses concepts from federated learning to enable multiple BSs to locally train their deep ESNs on their collected data and to cooperatively build a learning model that predicts the locations and orientations of all users. Using these predictions, the user association policy that minimizes BIP is derived. Simulation results demonstrate that the developed algorithm reduces the users' BIP by up to 16% and 26% compared to centralized ESN and deep learning algorithms, respectively.
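A minimal sketch of the federated ESN idea is given below. It is not the authors' implementation: the single-reservoir ESN (rather than a deep ESN), the shared reservoir weights, the toy random-walk motion traces, and all shapes and hyperparameters are illustrative assumptions. Each BS fits a ridge-regression readout on its own users' motion data, and the readouts are then averaged, federated-averaging style, into a shared location/orientation predictor.

```python
# Illustrative sketch (not the authors' code): each base station trains an echo
# state network (ESN) readout on its own users' motion traces, then the readouts
# are averaged FedAvg-style to form a shared predictor of location/orientation.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Random input and reservoir weights, rescaled for the echo state property."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def reservoir_states(W_in, W, inputs):
    """Run an input sequence through the fixed reservoir and collect its states."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def local_readout(W_in, W, inputs, targets, ridge=1e-2):
    """Ridge-regression readout trained on one BS's local data only."""
    X = reservoir_states(W_in, W, inputs)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)

# Toy setup: 4 BSs; inputs = past (x, y, yaw) samples, targets = next sample.
n_in, n_res, n_bs, T = 3, 50, 4, 200
W_in, W = make_reservoir(n_in, n_res)          # reservoir shared by all BSs
local_models = []
for _ in range(n_bs):
    traj = np.cumsum(rng.normal(scale=0.1, size=(T + 1, n_in)), axis=0)
    local_models.append(local_readout(W_in, W, traj[:-1], traj[1:]))

# Federated step: average the readouts instead of sharing raw motion data.
W_out_global = np.mean(local_models, axis=0)

# A BS predicts a user's next (x, y, yaw) sample from the shared readout.
pred = reservoir_states(W_in, W, traj[:-1])[-1] @ W_out_global
```

Averaging the readouts rather than exchanging raw motion traces is what keeps each user's mobility data local to its serving BS.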




Read also

Motivated by the increasing computational capacity of wireless user equipments (UEs), e.g., smartphones, tablets, or vehicles, as well as increasing concerns about sharing private data, a new machine learning model has emerged, namely federated learning (FL), which allows a decoupling of data acquisition and computation at the central unit. Unlike centralized learning taking place in a data center, FL usually operates in a wireless edge network where the communication medium is resource-constrained and unreliable. Due to limited bandwidth, only a portion of the UEs can be scheduled for updates at each iteration. Due to the shared nature of the wireless medium, transmissions are subject to interference and are not guaranteed. The performance of an FL system in such a setting is not well understood. In this paper, an analytical model is developed to characterize the performance of FL in wireless networks. In particular, tractable expressions are derived for the convergence rate of FL in a wireless setting, accounting for effects from both scheduling schemes and inter-cell interference. Using the developed analysis, the effectiveness of three different scheduling policies, i.e., random scheduling (RS), round robin (RR), and proportional fair (PF), is compared in terms of FL convergence rate. It is shown that running FL with PF outperforms RS and RR if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low. Moreover, the FL convergence rate decreases rapidly as the SINR threshold increases, confirming the importance of compression and quantization of the update parameters. The analysis also reveals a trade-off between the number of scheduled UEs and the subchannel bandwidth under a fixed amount of available spectrum.
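The three scheduling policies compared above can be sketched as follows; the fading model, the fraction of UEs scheduled per round, and the exponentially averaged PF metric are illustrative assumptions rather than the paper's exact setup.

```python
# Hedged sketch of the three FL scheduling policies: random scheduling (RS),
# round robin (RR), and proportional fair (PF). All rates and parameters are toy.
import numpy as np

rng = np.random.default_rng(1)
n_ue, per_round = 20, 5                       # 20 UEs, bandwidth for 5 per round

def random_sched(t, inst_rate, avg_rate):
    """RS: pick UEs uniformly at random."""
    return rng.choice(n_ue, per_round, replace=False)

def round_robin(t, inst_rate, avg_rate):
    """RR: cycle deterministically through the UE population."""
    return (t * per_round + np.arange(per_round)) % n_ue

def prop_fair(t, inst_rate, avg_rate):
    """PF: favour UEs whose instantaneous rate is high relative to their average."""
    return np.argsort(inst_rate / avg_rate)[-per_round:]

avg_rate = np.ones(n_ue)
for t in range(10):                            # ten FL communication rounds
    inst_rate = rng.exponential(1.0, n_ue)     # per-UE rates under fading/interference
    chosen = prop_fair(t, inst_rate, avg_rate)
    # Only the scheduled UEs whose SINR clears the decoding threshold would
    # actually deliver a local update for this round's global aggregation.
    avg_rate = 0.9 * avg_rate
    avg_rate[chosen] += 0.1 * inst_rate[chosen]
```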
Mengyuan Lee, Guanding Yu, 2019
Resource allocation in wireless networks, such as device-to-device (D2D) communications, is usually formulated as a mixed integer nonlinear programming (MINLP) problem, which is generally NP-hard and difficult to solve optimally. Traditional methods for solving these MINLP problems are based on mathematical optimization techniques, such as the branch-and-bound (B&B) algorithm, which converges slowly and has prohibitive complexity for real-time implementation. Therefore, machine learning (ML) has recently been used to address MINLP problems in wireless communications. In this paper, we use an imitation learning method to accelerate the B&B algorithm. With invariant problem-independent features and appropriate problem-dependent feature selection for D2D communications, a good auxiliary prune policy can be learned in a supervised manner to speed up the most time-consuming branching process of the B&B algorithm. Moreover, we develop a mixed training strategy to further reinforce the generalization ability, and a deep neural network (DNN) with a novel loss function to achieve better dynamic control over optimality and computational complexity. Extensive simulations demonstrate that the proposed method achieves good optimality and reduces computational complexity simultaneously.
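A generic illustration of plugging a learned prune policy into branch-and-bound is sketched below. It does not reproduce the paper's D2D resource-allocation formulation: the knapsack-style toy objective, the hand-coded stand-in for the trained classifier, and the two node features are assumptions chosen only to show where the learned decision enters the search.

```python
# Generic B&B with a learned prune policy (toy problem, not the paper's model):
# at each node, problem features are scored and the node is either expanded,
# pruned by the classical bound test, or pruned by the learned policy.
import heapq
import numpy as np

values = np.array([6.0, 5.0, 4.0, 3.0])
weights = np.array([4.0, 3.0, 2.0, 1.0])
capacity = 6.0

def upper_bound(fixed, level):
    """LP-style bound: take the remaining items fractionally by value density."""
    val = values[:level] @ fixed
    cap = capacity - weights[:level] @ fixed
    for v, w in sorted(zip(values[level:], weights[level:]),
                       key=lambda t: -t[0] / t[1]):
        take = min(1.0, max(0.0, cap / w))
        val += take * v
        cap -= take * w
    return val

def learned_prune(features):
    """Stand-in for a trained prune classifier (e.g. a small DNN)."""
    bound_gap, depth_frac = features
    return bound_gap < 0.05 and depth_frac > 0.5   # prune near-hopeless deep nodes

best_val, best_sol = 0.0, None
heap = [(-upper_bound(np.array([]), 0), 0, ())]    # (negative bound, level, fixed vars)
while heap:
    neg_bound, level, fixed = heapq.heappop(heap)
    if -neg_bound <= best_val:
        continue                                   # classic bound-based prune
    if level == len(values):
        best_val, best_sol = values @ np.array(fixed), fixed
        continue
    feats = ((-neg_bound - best_val) / max(best_val, 1.0), level / len(values))
    if learned_prune(feats):
        continue                                   # ML-accelerated prune
    for x in (0, 1):
        cand = fixed + (x,)
        if weights[:level + 1] @ np.array(cand) <= capacity:
            heapq.heappush(heap, (-upper_bound(np.array(cand), level + 1),
                                  level + 1, cand))

print("best value:", best_val, "assignment:", best_sol)
```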
The deployment of federated learning in a wireless network, called federated edge learning (FEEL), exploits low-latency access to distributed mobile data to efficiently train an AI model while preserving data privacy. In this work, we study the spatial (i.e., spatially averaged) learning performance of FEEL deployed in a large-scale cellular network with spatially randomly distributed devices. Both digital and analog transmission schemes are considered, supporting error-free uploading and over-the-air aggregation of local model updates by devices, respectively. The derived spatial convergence rate for digital transmission is found to be constrained by a limited number of active devices regardless of device density, and it converges exponentially fast to the ground-truth rate as that number grows. The population of active devices depends on network parameters such as the processing gain and the signal-to-interference threshold for decoding. On the other hand, no such limit exists for uncoded analog transmission. In this case, the spatial convergence rate is slowed down by the direct exposure of signals to the perturbation of inter-cell interference. Nevertheless, the effect diminishes when devices are dense, as interference is averaged out by aggressive over-the-air aggregation. In terms of learning latency (in seconds), analog transmission is preferred to the digital scheme, as the former dramatically reduces multi-access latency by enabling simultaneous access.
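The contrast between the two uploading modes can be sketched in a few lines; the update statistics, the interference and noise levels, and the unit power scaling are illustrative assumptions.

```python
# Hedged sketch of digital vs. analog (over-the-air) aggregation in FEEL:
# digital uploads are decoded error-free and averaged at the server, while the
# analog multi-access channel sums the transmitted updates and exposes the
# aggregate directly to inter-cell interference and noise.
import numpy as np

rng = np.random.default_rng(2)
n_devices, model_dim = 50, 16
local_updates = rng.normal(size=(n_devices, model_dim))   # local SGD updates

# Digital scheme: decoded updates are exact, then averaged at the server.
digital_avg = local_updates.mean(axis=0)

# Analog scheme: simultaneous transmission; the channel adds the signals,
# and interference plus noise perturb the aggregate before normalization.
interference = rng.normal(scale=0.3, size=model_dim)
noise = rng.normal(scale=0.05, size=model_dim)
analog_avg = (local_updates.sum(axis=0) + interference + noise) / n_devices

print("aggregation error:", np.linalg.norm(analog_avg - digital_avg))
```

Increasing `n_devices` in the sketch shrinks the aggregation error, mirroring the observation that interference is averaged out when devices are dense.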
Relay networks having $n$ source-to-destination pairs and $m$ half-duplex relays, all operating in the same frequency band in the presence of block fading, are analyzed. This setup has attracted significant attention and several relaying protocols have been reported in the literature. However, most of the proposed solutions require either centrally coordinated scheduling or detailed channel state information (CSI) at the transmitter side. Here, an opportunistic relaying scheme is proposed, which alleviates these limitations. The scheme entails a two-hop communication protocol, in which sources communicate with destinations only through half-duplex relays. The key idea is to schedule at each hop only a subset of nodes that can benefit from multiuser diversity. To select the source and destination nodes for each hop, the scheme requires only CSI at the receivers (relays for the first hop, and destination nodes for the second hop) and an integer-valued CSI feedback to the transmitters. For the case when $n$ is large and $m$ is fixed, it is shown that the proposed scheme achieves a system throughput of $m/2$ bits/s/Hz. In contrast, the information-theoretic upper bound of $(m/2)\log\log n$ bits/s/Hz is achievable only with more demanding CSI assumptions and cooperation between the relays. Furthermore, it is shown that, under the condition that the product of the block duration and the system bandwidth scales faster than $\log n$, the achievable throughput of the proposed scheme scales as $\Theta(\log n)$. Notably, this is proven to be the optimal throughput scaling even if centralized scheduling is allowed, thus establishing the optimality of the proposed scheme in the scaling-law sense.
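An illustrative sketch of the opportunistic two-hop idea follows; the channel model, the simple max-selection rule, and the omitted buffering and pairing bookkeeping are all assumptions, not the paper's exact protocol.

```python
# Opportunistic two-hop relaying sketch: at each hop only the nodes with the
# strongest instantaneous channels to the half-duplex relays are scheduled,
# exploiting multiuser diversity with receiver-side CSI and an integer index
# fed back per relay.
import numpy as np

rng = np.random.default_rng(3)
n_pairs, m_relays = 100, 4

# Hop 1: each relay measures its channels to all sources, schedules the
# strongest one, and feeds back only that source's integer index.
h_sr = rng.exponential(1.0, size=(m_relays, n_pairs))   # |h|^2 under Rayleigh fading
scheduled_sources = h_sr.argmax(axis=1)

# Hop 2: each relay forwards a buffered packet to the destination it currently
# reaches best (the per-pair buffering bookkeeping is omitted in this sketch).
h_rd = rng.exponential(1.0, size=(m_relays, n_pairs))
scheduled_dests = h_rd.argmax(axis=1)

# With many pairs the selected links are strong with high probability; the
# half-duplex relays alternate between the two hops, hence roughly m/2 bits/s/Hz.
hop1_rates = np.log2(1 + h_sr[np.arange(m_relays), scheduled_sources])
print("scheduled sources:", scheduled_sources, "hop-1 sum rate:", hop1_rates.sum())
```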
Virtual reality (VR) over wireless is emerging as an important use case of 5G networks. An immersive VR experience requires the delivery of huge amounts of data at ultra-low latency, thus demanding an ultra-high transmission rate. This challenge can be largely addressed by the recent network architecture known as mobile edge computing (MEC), which enables caching and computing capabilities at the edge of wireless networks. This paper presents a novel MEC-based mobile VR delivery framework that is able to cache parts of the fields of view (FOVs) in advance and run certain post-processing procedures on the mobile VR device. To optimize resource allocation at the mobile VR device, we formulate a joint caching and computing decision problem to minimize the average required transmission rate while meeting a given latency constraint. When FOVs are homogeneous, we obtain a closed-form expression for the optimal joint policy, which reveals interesting communications-caching-computing tradeoffs. When FOVs are heterogeneous, we obtain a local optimum of the problem by transforming it into a linearly constrained indefinite quadratic problem and then applying the concave-convex procedure. Numerical results demonstrate the great promise of the proposed mobile VR delivery framework in saving communication bandwidth while meeting the low-latency requirement.
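A toy version of the joint caching/computing decision is sketched below; the FOV sizes, the local post-processing delays, the cache and latency budgets, and the brute-force search (in place of the paper's closed-form and concave-convex solutions) are all assumptions used only to illustrate the trade-off.

```python
# Toy joint caching/computing decision: each field of view (FOV) is either
# cached at the VR device, locally post-processed from a smaller download, or
# streamed in full; we pick the assignment minimizing the average required
# transmission load under cache and latency budgets.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n_fov = 6
full_size = rng.uniform(8, 12, n_fov)         # Mbit if streamed fully rendered
raw_size = 0.4 * full_size                    # Mbit if post-processed on device
compute_delay = rng.uniform(2, 6, n_fov)      # ms of local post-processing
cache_budget, latency_budget = 20.0, 15.0     # Mbit of cache, ms per FOV

best = None
for decision in itertools.product(("cache", "compute", "stream"), repeat=n_fov):
    cached = sum(full_size[i] for i, d in enumerate(decision) if d == "cache")
    if cached > cache_budget:
        continue                               # cache capacity violated
    if any(compute_delay[i] > latency_budget
           for i, d in enumerate(decision) if d == "compute"):
        continue                               # latency constraint violated
    # Average amount that must still be sent over the air per FOV request.
    load = np.mean([0.0 if d == "cache" else
                    raw_size[i] if d == "compute" else full_size[i]
                    for i, d in enumerate(decision)])
    if best is None or load < best[0]:
        best = (load, decision)

print("min average transmission load:", best)
```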