
Proactive Received Power Prediction Using Machine Learning and Depth Images for mmWave Networks

Published by: Takayuki Nishio
Publication date: 2018
Research field: Informatics Engineering
Language: English





This study demonstrates the feasibility of proactive received power prediction that leverages spatiotemporal visual sensing information toward reliable millimeter-wave (mmWave) networks. Since the received power on a mmWave link can attenuate aperiodically due to human blockage, a long-term series of future received power values cannot be predicted by analyzing the received signals before the blockage occurs. We propose a novel mechanism that predicts a time series of the received power from the next moment up to several hundred milliseconds ahead. The key idea is to leverage camera imagery and machine learning (ML). Time-sequential images capture the spatial geometry and the mobility of obstacles, which govern mmWave signal propagation. ML is used to build the prediction model from a dataset of sequential images, each labeled with the received power measured several hundred milliseconds after the image was obtained. Simulation and experimental evaluations using IEEE 802.11ad devices and a depth camera show that the proposed mechanism, employing a convolutional LSTM, predicted a time series of the received power up to 500 ms ahead with an inference time of less than 3 ms and a root-mean-square error of 3.5 dB.
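To make the approach concrete, the following is a minimal Keras sketch of a convolutional-LSTM regressor that maps a short sequence of depth images to future received-power values. The 16-frame window, 64x64 image size, layer widths, and 10-step output horizon are illustrative assumptions, not the paper's exact configuration.

# Sketch: sequence of depth frames -> vector of future received-power values.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W = 16, 64, 64   # frames per sample and depth-image size (assumed)
HORIZON = 10                 # future power samples to predict, e.g. 500 ms at 50 ms steps (assumed)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, 1)),              # grayscale depth frames
    layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                      return_sequences=False),            # spatiotemporal features
    layers.BatchNormalization(),
    layers.MaxPooling2D(4),                               # shrink the feature map
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(HORIZON),                                # received power [dB] per future step
])
model.compile(optimizer="adam", loss="mse")

# Training pairs: image sequence -> received power measured HORIZON steps ahead.
x = np.random.rand(8, SEQ_LEN, H, W, 1).astype("float32")   # placeholder depth data
y = np.random.rand(8, HORIZON).astype("float32")            # placeholder power labels
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0).shape)                # (1, HORIZON)

In practice, the labels would come from time-aligned received-power logs of the 802.11ad link, with each image sequence paired to the power trace that follows it.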


Read also

For millimeter-wave networks, this paper presents a paradigm shift toward leveraging time-consecutive camera images in handover decision problems. While making handover decisions, it is important to proactively predict future long-term performance, e.g., the cumulative sum of time-varying data rates, to avoid making myopic decisions. However, this study experimentally observes that the time variation of the received powers is not necessarily informative for proactively predicting the rapid degradation of data rates caused by moving obstacles. To overcome this challenge, this study proposes a proactive framework wherein handover timings are optimized while obstacle-caused data rate degradations are predicted before they occur. The key idea is to expand the state space to include time-consecutive camera images, which comprise informative features for predicting such data rate degradations. To overcome the difficulty of handling the large dimensionality of the expanded state space, we use deep reinforcement learning for deciding the handover timings. Evaluations performed on experimentally obtained camera images and received powers demonstrate that the expanded state space facilitates (i) the prediction of obstacle-caused data rate degradations 500 ms before they occur and (ii) superior performance compared with a handover framework without the state space expansion.
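As an illustration of the expanded-state idea, here is a small sketch of a deep Q-network whose input stacks recent camera frames alongside a scalar received-power measurement, with two actions: keep the current link or hand over. The frame count, image size, and layer widths are assumptions, and the training machinery (replay buffer, target network) is omitted.

# Sketch: Q-network over an expanded state = stacked frames + received power.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, H, W = 4, 48, 48     # stacked grayscale frames (assumed)
N_ACTIONS = 2                # 0: keep current link, 1: trigger handover

img_in = layers.Input(shape=(H, W, FRAMES))      # frames stacked as channels
rss_in = layers.Input(shape=(1,))                # current received power (dBm)
z = layers.Conv2D(16, 5, strides=2, activation="relu")(img_in)
z = layers.Conv2D(32, 3, strides=2, activation="relu")(z)
z = layers.Flatten()(z)
z = layers.Concatenate()([z, rss_in])            # expand state with link measurement
z = layers.Dense(128, activation="relu")(z)
q_values = layers.Dense(N_ACTIONS)(z)            # Q(s, a) for each action
q_net = models.Model([img_in, rss_in], q_values)

state = [np.random.rand(1, H, W, FRAMES), np.random.rand(1, 1)]  # placeholder state
action = int(np.argmax(q_net.predict(state, verbose=0)))         # greedy handover decision
print(action)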
With the continuous trend of data explosion, delivering packets from data servers to end users places increased stress on both the fronthaul and backhaul traffic of mobile networks. To mitigate this problem, caching popular content closer to the end users has emerged as an effective method for reducing network congestion and improving user experience. To find the optimal locations for content caching, many conventional approaches construct various mixed integer linear programming (MILP) models. However, such methods may fail to support online decision making due to the inherent curse of dimensionality. In this paper, a novel framework for proactive caching is proposed. This framework merges model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image. For parallel training and simplicity of design, the proposed MILP model is first decomposed into a number of sub-problems, and convolutional neural networks (CNNs) are then trained to predict the content caching locations of these sub-problems. Furthermore, since the MILP decomposition neglects the interactions among sub-problems, the CNN outputs risk being infeasible solutions. Therefore, two algorithms are provided: the first uses the CNN predictions as an extra constraint to reduce the number of decision variables; the second employs the CNN outputs to accelerate local search. Numerical results show that the proposed scheme reduces computation time by 71.6% at only 0.8% additional performance cost compared with the MILP solution, providing high-quality decision making in real time.
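The first of the two algorithms can be sketched in a few lines: confident CNN predictions are frozen as fixed caching decisions, so the exact solver only optimizes over the remaining uncertain binary variables. The 0.9/0.1 confidence thresholds and the toy problem size below are assumptions, not values from the paper.

# Sketch: use confident CNN outputs to shrink the MILP's decision space.
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(12)           # stand-in for CNN-predicted caching probabilities

fixed_one = p > 0.9          # confidently cache at these locations
fixed_zero = p < 0.1         # confidently do not cache here
free = ~(fixed_one | fixed_zero)   # only these remain as MILP variables

print(f"variables fixed: {int(fixed_one.sum() + fixed_zero.sum())}, "
      f"left for the MILP solver: {int(free.sum())}")
# An MILP backend would now optimize only over `free`, with the fixed
# entries added as equality constraints, cutting the search space.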
Wenbo Wang, Amir Leshem (2021)
This paper focuses on the problem of joint beamforming control and power allocation in ad-hoc mmWave networks. Over a shared spectrum, a number of multiple-input multiple-output links attempt to minimize their supply power by simultaneously finding the locally optimal power allocation and beamformers in a self-interested manner. Our design considers a category of non-convex quality-of-service constraints, which are a function of the coupled strategies adopted by the mutually interfering ad-hoc links. We propose a two-stage, decentralized searching scheme in which the adaptation of power levels and beamformer filters is performed iteratively in two separate sub-stages at each link. Using an analysis based on the generalized Nash equilibrium, we provide a theoretical proof of the convergence of the proposed power adaptation algorithm, which is based on the local best response together with an iterative minimum mean square error receiver. Several transmit beamforming schemes requiring different levels of information exchange are compared. Our simulation results show that with a minimum-level requirement on channel state information acquisition, a locally optimal transmit filter design based on optimizing the local signal-to-interference-plus-noise ratio achieves an acceptable tradeoff between link performance and the need for decentralization.
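Setting the beamforming sub-stage aside, the power-adaptation sub-stage can be illustrated with a scalar best-response iteration in which each link scales its power toward a common SINR target given the current interference (a Foschini-Miljanic-style update). The link gains, noise power, and target below are toy assumptions; the paper's actual scheme operates on beamformed MIMO links with an MMSE receiver.

# Sketch: decentralized per-link power best response toward an SINR target.
import numpy as np

G = np.array([[1.0, 0.1, 0.2],
              [0.1, 1.0, 0.1],
              [0.2, 0.2, 1.0]])        # toy link gains G[i, j]: tx j -> rx i
noise, gamma = 0.01, 2.0               # noise power and SINR target (assumed)
p = np.ones(3)                         # initial transmit powers

for _ in range(50):                    # iterate local best responses
    interference = G @ p - np.diag(G) * p
    sinr = np.diag(G) * p / (interference + noise)
    p = gamma / sinr * p               # each link's minimal-power response

print(np.round(p, 3), np.round(sinr, 3))  # converged powers, SINR near the target

Each link needs only its own measured SINR to update, which is what makes the best-response structure compatible with a decentralized, self-interested implementation.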
A head tracker is a crucial part of head-mounted display systems, as it tracks the head of the pilot in the plane/cockpit simulator. The operational flaws of head trackers also depend on environmental conditions such as different lighting conditions and stray-light interference. In this letter, an optical tracker is employed to gather 6-DoF data of head movements under different environmental conditions. In addition, the effect of different environmental conditions, and of the variation in distance between the receiver and the optical transmitter, on the 6-DoF data is analyzed.
In this paper, we address inter-beam inter-cell interference mitigation in 5G networks that employ millimeter-wave (mmWave), beamforming, and non-orthogonal multiple access (NOMA) techniques. These techniques play a key role in improving network capacity and spectral efficiency by multiplexing users in both the spatial and power domains. In addition, the coverage areas of multiple beams from different cells can intersect, allowing more flexibility in user-cell association. However, the intersection of coverage areas also implies increased inter-beam inter-cell interference, i.e., interference among beams formed by nearby cells. Therefore, joint user-cell association and inter-beam power allocation stand as a promising solution to mitigate inter-beam inter-cell interference. In this paper, we consider a 5G mmWave network and propose a reinforcement learning algorithm that performs joint user-cell association and inter-beam power allocation to maximize the sum rate of the network. The proposed algorithm is compared to a uniform power allocation that divides power equally among the beams of each cell. Simulation results show a performance enhancement of 13-30% in network sum rate, corresponding to the lowest and highest traffic loads, respectively.
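As a toy illustration of the joint decision space, the following tabular Q-learning sketch treats each action as a (serving cell, power level) pair and the reward as a stand-in for the resulting network sum rate. The state and action encodings, the environment, and the reward are illustrative assumptions, not the paper's setup.

# Sketch: tabular Q-learning over joint (cell, power-level) actions.
import numpy as np

rng = np.random.default_rng(1)
N_STATES, N_CELLS, N_PLEVELS = 8, 2, 3
N_ACTIONS = N_CELLS * N_PLEVELS        # joint (cell, power) choices
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.2      # learning rate, discount, exploration

def step(s, a):
    # Stand-in environment: random next state, reward = assumed sum rate.
    return int(rng.integers(N_STATES)), float(rng.random())

s = 0
for _ in range(5000):
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Q-learning update
    s = s2

cell, plevel = divmod(int(Q[0].argmax()), N_PLEVELS)         # decode best joint action
print(cell, plevel)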