
Localization with Deep Neural Networks using mmWave Ray Tracing Simulations

Added by Udita Bhattacherjee
Publication date: 2020
Language: English

The world is moving towards faster data transmission, with efficient localization of users being a preliminary requirement. This work investigates the use of a deep learning technique for wireless localization, considering both millimeter-wave (mmWave) and sub-6 GHz frequencies. The ability of a neural network to learn the mapping from channel observations to position makes the localization process easier and faster. In this study, a Deep Neural Network (DNN) was used to localize User Equipment (UE) in two static scenarios. We propose two different methods to train the neural network, one using channel parameters (features) and the other using a channel response vector, and compare their performance using preliminary computer simulations. We observe that the former approach produces high localization accuracy provided that all users have a fixed number of multipath components (MPCs); the method is therefore reliant on the number of MPCs. The latter approach, in contrast, is independent of the number of MPCs, but performs relatively poorly compared to the first.
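
As a rough illustration of the two training approaches, the sketch below (PyTorch) builds a small regression DNN for each input representation. The layer widths, the number of MPCs K, and the channel-vector length N are illustrative assumptions, not values from the paper.

    # A minimal sketch of the two input representations described above.
    import torch
    import torch.nn as nn

    K = 5                      # assumed fixed number of multipath components
    N = 64                     # assumed length of the channel response vector

    # Approach 1: per-MPC features (e.g. delay, AoA, power) -> 3*K inputs.
    # Works only when every user has exactly K MPCs.
    feature_net = nn.Sequential(
        nn.Linear(3 * K, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 2),      # (x, y) position of the UE
    )

    # Approach 2: the raw channel response vector -> N inputs.
    # Independent of the number of MPCs, at some cost in accuracy.
    response_net = nn.Sequential(
        nn.Linear(N, 256), nn.ReLU(),
        nn.Linear(256, 64), nn.ReLU(),
        nn.Linear(64, 2),
    )

    # Both are trained as regressors against the true UE coordinates:
    loss_fn = nn.MSELoss()
    pos_hat = feature_net(torch.randn(32, 3 * K))   # dummy batch of 32 users
    loss = loss_fn(pos_hat, torch.randn(32, 2))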

Related Research

Millimeter wave channels exhibit structure that allows beam alignment with fewer channel measurements than an exhaustive beam search. From a compressed sensing (CS) perspective, the received channel measurements are usually obtained by multiplying a CS matrix with a sparse representation of the channel matrix. Due to the constraints imposed by analog processing, however, designing CS matrices that efficiently exploit the channel structure is challenging. In this paper, we propose an end-to-end deep learning technique to design a structured CS matrix that is well suited to the underlying channel distribution, leveraging both sparsity and the particular spatial structure that appears in vehicular channels. The channel measurements acquired with the designed CS matrix are then used to predict the best beam for link configuration. Simulation results for vehicular communication channels indicate that our deep learning-based approach achieves better beam alignment than standard CS techniques that use random phase-shift-based designs.
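
A minimal sketch of the idea, assuming a phase-shift-only analog constraint: the CS matrix entries are restricted to unit modulus, so only their phases are trained end-to-end together with a beam classifier. The dimensions and the simple MLP head are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    M, N, B = 16, 64, 32       # measurements, antennas, candidate beams (assumed)

    class LearnedCSBeamPredictor(nn.Module):
        def __init__(self):
            super().__init__()
            # Analog constraint: each CS-matrix entry is a unit-modulus
            # phase shift, so only the phases are learnable.
            self.theta = nn.Parameter(torch.rand(M, N) * 2 * torch.pi)
            self.head = nn.Sequential(
                nn.Linear(2 * M, 128), nn.ReLU(),
                nn.Linear(128, B),     # logits over candidate beams
            )

        def forward(self, h):          # h: complex channel, shape (batch, N)
            A = torch.exp(1j * self.theta)      # phase-shift CS matrix
            y = h @ A.T                         # compressed measurements
            feats = torch.cat([y.real, y.imag], dim=-1)
            return self.head(feats)

    model = LearnedCSBeamPredictor()
    h = torch.randn(8, N, dtype=torch.cfloat)   # dummy channels
    best_beam = model(h).argmax(dim=-1)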
Zhen Fu, Bo Wang, Fei Chen (2021)
Hearing-impaired listeners usually have trouble attending to a target talker in multi-talker scenes, even with hearing aids (HAs). The problem can be solved with eye-gaze-steered HAs, which require the listener's eye-gaze to be on the target. In situations where the head rotates, eye-gaze is governed by both saccades and head rotation. However, existing methods of eye-gaze estimation did not work reliably, since listeners' eye-gaze strategies vary and the measurements of the two behaviors were not properly combined. In addition, existing methods were based on hand-crafted features, which could overlook important information. In this paper, a head-fixed and a head-free experiment were conducted. We used horizontal electrooculography (HEOG) and neck electromyography (NEMG), which separately measure saccades and head rotation, to jointly estimate eye-gaze. Besides traditional classifiers and hand-crafted features, deep neural networks (DNNs) were introduced to automatically extract features from intact waveforms. Evaluation results showed that when the input was HEOG with an inertial measurement unit, the best of our proposed DNN classifiers achieved 93.3% accuracy; when HEOG was combined with NEMG, the accuracy reached 72.6%, higher than with HEOG (about 71.0%) or NEMG (about 35.7%) alone. These results indicate the feasibility of estimating eye-gaze with HEOG and NEMG.
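
To illustrate the feature-learning claim, the sketch below shows a small 1-D convolutional network that classifies gaze direction directly from raw two-channel (HEOG + NEMG) waveform windows. The window length, channel count, and number of gaze classes are assumed, not taken from the paper.

    import torch
    import torch.nn as nn

    T, C, CLASSES = 1000, 2, 5     # samples per window, channels, gaze classes

    gaze_net = nn.Sequential(
        nn.Conv1d(C, 16, kernel_size=7, padding=3), nn.ReLU(),
        nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),   # pool over time -> fixed-size embedding
        nn.Flatten(),
        nn.Linear(32, CLASSES),
    )

    x = torch.randn(8, C, T)       # dummy batch of raw waveform windows
    logits = gaze_net(x)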
This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter wave (mmWave) networks compared to conventional IA. By utilizing a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs) collected with a reduced number of beams to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs from only a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms conventional IA's beam prediction accuracy in both line-of-sight (LoS) and non-line-of-sight (NLoS) mmWave channel conditions.
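
A minimal sketch of the DeepIA mapping: a DNN takes RSSs from a reduced beam sweep and outputs logits over the full codebook. The beam counts and layer widths are illustrative assumptions.

    import torch
    import torch.nn as nn

    SWEPT, TOTAL = 8, 64           # beams actually swept vs. full codebook size

    deepia = nn.Sequential(
        nn.Linear(SWEPT, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, TOTAL),     # logits over all candidate beams
    )

    rss = torch.randn(16, SWEPT)   # dummy RSS vectors from the reduced sweep
    predicted_beam = deepia(rss).argmax(dim=-1)

Under these assumed counts, the sweep covers SWEPT/TOTAL = 1/8 of the codebook, which is where the IA time saving comes from.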
Future cellular networks that utilize millimeter wave signals provide new opportunities in positioning and situational awareness. Large bandwidths combined with large antenna arrays provide unparalleled delay and angle resolution, enabling not only high-accuracy localization but also the construction of a map of the environment. Even the most basic filters intended for simultaneous localization and mapping exhibit high computational overhead, since the methods rely on sigma-point or particle-based approximations. In this paper, a first-order Taylor series based Gaussian approximation of the filtering distribution is used, and it is demonstrated that the developed extended Kalman probability hypothesis density (PHD) filter is computationally very efficient. In addition, the results imply that efficiency does not come at the expense of estimation accuracy, since the method nearly achieves the position error bound.
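
The key computational step is replacing sigma-point or particle approximations with a first-order (extended Kalman) linearization. The sketch below shows one EKF measurement update for a single landmark under an assumed 2-D range/bearing measurement model; it is a sketch of the linearization step only, not the full PHD filter.

    import numpy as np

    def h(x, rx):
        # Delay/angle-style measurement: range and bearing from receiver rx.
        d = x - rx
        return np.array([np.hypot(d[0], d[1]), np.arctan2(d[1], d[0])])

    def jacobian_h(x, rx):
        # First-order Taylor linearization of h around the current mean.
        d = x - rx
        r = np.hypot(d[0], d[1])
        return np.array([[d[0] / r, d[1] / r],
                         [-d[1] / r**2, d[0] / r**2]])

    def ekf_update(m, P, z, R, rx):
        # One EKF measurement update for landmark mean m and covariance P.
        H = jacobian_h(m, rx)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        m_new = m + K @ (z - h(m, rx))
        P_new = (np.eye(2) - K @ H) @ P
        return m_new, P_new

    # Assumed toy values: landmark prior, noise levels, one noisy measurement.
    m, P = np.array([10.0, 5.0]), np.eye(2) * 4.0
    R = np.diag([0.1**2, np.deg2rad(1.0)**2])
    z = h(np.array([10.5, 4.8]), rx=np.zeros(2)) + np.array([0.05, 0.01])
    m, P = ekf_update(m, P, z, R, rx=np.zeros(2))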
We study the problem of optimal power allocation in a single-hop ad hoc wireless network. In solving this problem, we propose a hybrid neural architecture inspired by the algorithmic unfolding of the iterative weighted minimum mean squared error (WMMSE) method, which we denote unfolded WMMSE (UWMMSE). The learnable weights within UWMMSE are parameterized using graph neural networks (GNNs), where the time-varying underlying graphs are given by the fading interference coefficients in the wireless network. These GNNs are trained through gradient descent on multiple instances of the power allocation problem. Once trained, UWMMSE achieves performance comparable to that of WMMSE while significantly reducing the computational complexity. Numerical experiments illustrate this behavior, along with the method's robustness and generalization to wireless networks of different densities and sizes.
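
As a rough sketch of the unfolding idea, the layer below applies the classical scalar WMMSE updates and lets a tiny one-hop GNN over the fading matrix reweight the w-step. The exact parameterization, the GNN, and all dimensions here are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class SimpleGNN(nn.Module):
        # One-hop graph filter on the fading matrix H with a per-node MLP.
        def __init__(self, hidden=16):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))

        def forward(self, H, x):       # H: (n, n) gains, x: (n,) node signal
            agg = H @ x                # one-hop aggregation over the graph
            return self.mlp(torch.stack([x, agg], dim=-1)).squeeze(-1)

    def uwmmse_layer(H, v, sigma2, gnn_a, gnn_b, pmax=1.0):
        hdiag = torch.diagonal(H)
        interf = (H ** 2) @ (v ** 2) + sigma2      # received power per user
        u = hdiag * v / interf                     # receiver (MMSE) step
        w = 1.0 / (1.0 - u * hdiag * v)            # classical weight step
        w = gnn_a(H, w) * w + gnn_b(H, w)          # learned reweighting
        num = w * u * hdiag
        den = (H.T ** 2) @ (w * u ** 2)
        return (num / den).clamp(0.0, pmax ** 0.5) # transmit step, power box

    n = 10
    H = torch.rand(n, n) * 0.1 + torch.eye(n)      # dummy fading graph
    v = torch.full((n,), 0.5)
    gnn_a, gnn_b = SimpleGNN(), SimpleGNN()
    for _ in range(4):                             # a few unfolded layers
        v = uwmmse_layer(H, v, sigma2=1e-2, gnn_a=gnn_a, gnn_b=gnn_b)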
