
Resource Constrained Neural Networks for 5G Direction-of-Arrival Estimation in Micro-controllers

Posted by Shivam Chandhok
Publication date: 2021
Research field: Electronic Engineering
Paper language: English





With the introduction of shared spectrum sensing and beamforming-based multi-antenna transceivers, 5G networks demand spectrum sensing to identify opportunities in the time, frequency, and spatial domains. Narrow beamforming makes it difficult to perform spatial sensing (direction-of-arrival, DoA, estimation) in a centralized manner, and with the evolution of paradigms such as the Artificial Intelligence of Things (AIoT), ultra-reliable low-latency communication (URLLC) services, and distributed networks, intelligence at the edge (Edge-AI) is highly desirable. It reduces the data-communication overhead compared to cloud-AI-centric networks, is more secure, and is free from scalability limitations. However, achieving the desired functional accuracy is a challenge on edge devices such as microcontroller units (MCUs) due to area, memory, and power constraints. In this work, we propose a low-complexity neural network-based algorithm for accurate DoA estimation and its efficient mapping onto off-the-shelf MCUs. An ad-hoc graphical user interface (GUI) is developed to configure the STM32 NUCLEO-H743ZI2 MCU with the proposed algorithm and to validate its functionality. The performance of the proposed algorithm is analyzed for different signal-to-noise ratios (SNRs), word lengths, numbers of antennas, and DoA resolutions. In-depth experimental results show that it outperforms the conventional statistical spatial-sensing approach.
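The abstract does not spell out the network itself, but the general recipe for grid-based DoA classification on a microcontroller can be illustrated as follows. The sketch below is a hypothetical example, assuming an 8-element array, a 1-degree DoA grid, and sample-covariance features; the layer sizes, input features, and fixed-point/word-length pipeline of the actual work may differ.

```python
# Hypothetical grid-classification DoA network; sizes and features are assumptions.
import numpy as np
import torch
import torch.nn as nn

N_ANT = 8          # number of antenna elements (assumed)
N_CLASSES = 181    # 1-degree grid over -90..+90 degrees (assumed)

class TinyDoANet(nn.Module):
    """Small fully connected classifier sized for MCU flash/RAM budgets."""
    def __init__(self, n_ant=N_ANT, n_classes=N_CLASSES):
        super().__init__()
        # Input: real and imaginary parts of the upper triangle of the
        # sample covariance matrix R = X X^H / T (a common DoA feature).
        n_in = n_ant * (n_ant + 1)
        self.net = nn.Sequential(
            nn.Linear(n_in, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)          # logits over the DoA grid

def covariance_features(snapshots: np.ndarray) -> np.ndarray:
    """snapshots: (n_ant, T) complex array -> flattened covariance features."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    iu = np.triu_indices(R.shape[0])
    r = R[iu]
    return np.concatenate([r.real, r.imag]).astype(np.float32)
```

On an STM32-class target, a model like this would typically be quantized (e.g. to 8-bit weights) and exported with a vendor toolchain before flashing, which is where the word-length analysis mentioned in the abstract becomes relevant.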


Read also

The problem of estimating the number of sources and their angles of arrival from a single antenna-array observation has been an active area of research in the signal processing community for the last few decades. When the number of sources is large, the maximum likelihood estimator is intractable due to its very high complexity, and therefore alternative signal processing methods have been developed with some loss of performance. In this paper, we apply a deep neural network (DNN) approach to the problem and analyze its advantages over signal processing algorithms. We show that an appropriately designed network can attain maximum likelihood performance with feasible complexity and outperform other feasible signal processing estimation methods over a range of signal-to-noise ratios and array response inaccuracies.
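For context, the classical subspace baseline that such learned estimators are usually compared against can be sketched in a few lines. The snippet below is an illustrative MUSIC pseudospectrum for a uniform linear array with half-wavelength spacing and a known source count k; it is not code from the paper.

```python
# Conventional subspace baseline (MUSIC) often used as a reference for
# DNN-based DoA estimators; assumes a half-wavelength uniform linear array.
import numpy as np

def music_spectrum(snapshots: np.ndarray, k: int,
                   grid_deg=np.arange(-90.0, 90.5, 0.5)):
    """snapshots: (n_ant, T) complex; returns (angles, pseudospectrum)."""
    n_ant = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Noise subspace: eigenvectors of the n_ant - k smallest eigenvalues.
    _, V = np.linalg.eigh(R)
    En = V[:, : n_ant - k]
    theta = np.deg2rad(grid_deg)
    n = np.arange(n_ant)[:, None]
    A = np.exp(1j * np.pi * n * np.sin(theta)[None, :])   # steering matrix
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return grid_deg, 1.0 / np.maximum(denom, 1e-12)

# DoA estimates are the k largest peaks of the pseudospectrum; a learned
# estimator replaces this hand-crafted pipeline with a trained network.
```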
In this paper, we show that a multi-mode antenna (MMA) is an interesting alternative to a conventional phased antenna array for direction-of-arrival (DoA) estimation. By MMA we mean a single physical radiator with multiple ports, which excite different characteristic modes. In contrast to phased arrays, a closed-form mathematical model of the antenna response, such as a steering vector, is not straightforward to define for MMAs. Instead, one has to rely on calibration measurements or electromagnetic field (EMF) simulation data, which are discrete. To perform DoA estimation, the array interpolation technique (AIT) and wavefield modeling (WM) are suggested as methods with inherent interpolation capabilities, fully taking antenna non-idealities such as mutual coupling into account. We present a non-coherent DoA estimator for low-cost receivers and show how coherent DoA estimation and joint DoA and polarization estimation can be performed with MMAs. Using these methods, we assess the DoA estimation performance of an MMA prototype in simulations for both 2D and 3D cases. The results show that WM outperforms AIT at high SNR. Coherent estimation is superior to non-coherent estimation, especially in 3D, because the non-coherent approach suffers from estimation ambiguities. In conclusion, DoA estimation with a single MMA is feasible and accurate.
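To make the calibration-data issue concrete, the toy estimator below does a plain grid search over a discrete table of measured port responses. It is not the paper's AIT or WM method; the function, variable names, and normalization are illustrative assumptions only.

```python
# Illustrative lookup over a measured (discrete) port-response table, meant
# only to show why interpolation of calibration/EMF data matters for an MMA.
import numpy as np

def doa_from_calibration_table(y: np.ndarray,
                               cal_angles_deg: np.ndarray,
                               cal_responses: np.ndarray) -> float:
    """
    y:              (n_ports,) complex received vector for a single source.
    cal_angles_deg: (n_cal,) angles of the calibration grid.
    cal_responses:  (n_cal, n_ports) complex port responses from measurement/EMF.
    Returns the calibration angle whose response best matches y.
    """
    # Normalized matched-filter score per calibrated direction.
    num = np.abs(cal_responses.conj() @ y) ** 2
    den = np.sum(np.abs(cal_responses) ** 2, axis=1) * np.sum(np.abs(y) ** 2)
    score = num / np.maximum(den, 1e-12)
    return float(cal_angles_deg[np.argmax(score)])

# Accuracy is limited by the calibration grid spacing; AIT/WM effectively
# provide a continuous model between grid points, which this lookup lacks.
```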
In this article, we study a Radio Resource Allocation (RRA) task formulated as a non-convex optimization problem whose main aim is to maximize spectral efficiency subject to satisfaction guarantees in multiservice wireless systems. This problem has already been investigated in the literature, and efficient heuristics have been proposed. However, in order to assess the performance of Machine Learning (ML) algorithms when solving optimization problems in the context of RRA, we revisit that problem and propose a solution based on a Reinforcement Learning (RL) framework. Specifically, a distributed optimization method based on multi-agent deep RL is developed, where each agent makes its decisions to find a policy by interacting with the local environment until convergence. This article thus focuses on an application of RL, and our main proposal is a new deep-RL-based approach to jointly handle RRA, satisfaction guarantees, and Quality of Service (QoS) constraints in multiservice cellular networks. Lastly, through computational simulations we compare state-of-the-art solutions from the literature with our proposal and show near-optimal performance of the latter in terms of throughput and outage rate.
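As a rough illustration of the distributed decision structure only (not the paper's deep RL agents, reward, or environment), the toy below runs independent tabular Q-learning, with each agent choosing a power-level action from a local state; all sizes, dynamics, and the reward are placeholder assumptions.

```python
# Toy independent Q-learning over placeholder states/rewards, illustrating
# one Q-table per agent and local epsilon-greedy decisions.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_STATES, N_ACTIONS = 4, 8, 3        # assumed sizes
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))  # one Q-table per agent

def local_reward(state, action):
    # Placeholder: spectral-efficiency-like term minus a crude outage penalty.
    return np.log2(1.0 + (action + 1) * rng.random()) - 0.5 * (state == 0)

state = rng.integers(N_STATES, size=N_AGENTS)
for step in range(5000):
    # Epsilon-greedy action selection, independently per agent.
    greedy = Q[np.arange(N_AGENTS), state].argmax(axis=1)
    explore = rng.integers(N_ACTIONS, size=N_AGENTS)
    action = np.where(rng.random(N_AGENTS) < EPS, explore, greedy)
    reward = np.array([local_reward(state[i], action[i]) for i in range(N_AGENTS)])
    next_state = rng.integers(N_STATES, size=N_AGENTS)   # placeholder dynamics
    # Standard Q-learning update per agent.
    td_target = reward + GAMMA * Q[np.arange(N_AGENTS), next_state].max(axis=1)
    Q[np.arange(N_AGENTS), state, action] += ALPHA * (
        td_target - Q[np.arange(N_AGENTS), state, action])
    state = next_state
```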
In this paper, the problem of opportunistic spectrum sharing for the next generation of wireless systems empowered by the cloud radio access network (C-RAN) is studied. More precisely, low-priority users employ cooperative spectrum sensing to detect a vacant portion of the spectrum that is not currently used by high-priority users. The scheme is designed to maximize the overall throughput of the low-priority users while guaranteeing the quality of service of the high-priority users. This objective is attained by optimally adjusting the spectrum sensing time with respect to imposed target probabilities of detection and false alarm, as well as by dynamically allocating and assigning C-RAN resources, i.e., transmit powers, sub-carriers, remote radio heads (RRHs), and base-band units. The resulting optimization problem is non-convex and NP-hard, and thus extremely hard to tackle directly. To solve it, a low-complexity iterative approach is proposed in which the sensing time, user-association parameters, and transmit powers of the RRHs are alternately assigned and optimized at every step. Numerical results demonstrate the necessity of adjusting the sensing time in such systems as well as balancing the sensing-throughput tradeoff.
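The sensing-throughput tradeoff that motivates adjusting the sensing time can be reproduced numerically with the standard energy-detection model. The sweep below uses illustrative parameter values and ignores the C-RAN power/RRH-assignment part of the paper.

```python
# Classical sensing-throughput tradeoff: longer sensing lowers the false-alarm
# probability but leaves less of the frame for transmission. Parameters are
# illustrative assumptions, not values from the paper.
import numpy as np
from scipy.stats import norm

T_frame = 100e-3           # frame duration [s]
fs = 6e6                   # sampling rate [Hz]
snr_p = 10 ** (-15 / 10)   # primary-signal SNR at the sensing receiver
Pd_target = 0.9            # required detection probability
C0 = np.log2(1 + 20)       # throughput factor when the band is correctly found idle

tau = np.linspace(0.1e-3, 20e-3, 500)   # candidate sensing times [s]
# False-alarm probability of an energy detector meeting the Pd target.
Pf = norm.sf(np.sqrt(2 * snr_p + 1) * norm.isf(Pd_target)
             + np.sqrt(tau * fs) * snr_p)
# Secondary throughput: transmit in the remaining frame when no false alarm.
R = (T_frame - tau) / T_frame * (1 - Pf) * C0

tau_opt = tau[np.argmax(R)]
print(f"optimal sensing time ~ {tau_opt*1e3:.2f} ms, "
      f"throughput {R.max():.3f} bit/s/Hz")
```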
Tom Tirer, Oded Bialer (2020)
Estimating the directions of arrival (DOAs) of multiple sources from a single snapshot obtained by a coherent antenna array is a well-known problem, which can be addressed by sparse signal reconstruction methods, where the DOAs are estimated from the peaks of the recovered high-dimensional signal. In this paper, we consider a more challenging DOA estimation task in which the array is composed of non-coherent sub-arrays (i.e., sub-arrays that observe different unknown phase shifts due to using low-cost unsynchronized local oscillators). We formulate this problem as the reconstruction of a joint sparse and low-rank matrix and solve its convex relaxation. While the DOAs can be estimated from the solution of the convex problem, we further show that an improvement is obtained if one instead uses this solution to estimate the phase shifts, creates phase-corrected observations, and applies a final (plain, coherent) sparsity-based DOA estimation. Numerical experiments show that the proposed approach outperforms strategies based on non-coherent processing of the sub-arrays as well as other sparsity-based methods.
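The convex-relaxation step can be prototyped with a generic solver. The sketch below minimizes an entrywise l1 norm plus a nuclear norm under a data-fidelity constraint; the dictionary, observations, and weights are random real-valued stand-ins (the actual array model is complex-valued), so it only illustrates the shape of the relaxation, not the paper's exact formulation.

```python
# Generic sparse-plus-low-rank convex relaxation with random stand-in data.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, g, s = 16, 60, 4               # sensors, DoA grid size, sub-arrays (assumed)
A = rng.standard_normal((m, g))   # stand-in dictionary (real for simplicity)
Y = rng.standard_normal((m, s))   # stand-in sub-array observations

X = cp.Variable((g, s))
lam, eps = 1.0, 1e-1
# Entrywise l1 promotes sparsity over the DoA grid; the nuclear norm promotes
# low rank across sub-arrays.
objective = cp.Minimize(cp.sum(cp.abs(X)) + lam * cp.normNuc(X))
constraints = [cp.norm(A @ X - Y, "fro") <= eps]
cp.Problem(objective, constraints).solve(solver=cp.SCS)

row_energy = np.linalg.norm(X.value, axis=1)   # peaks over the grid suggest DoAs
```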