Radiating wireless power transfer (WPT) brings forth the possibility of cost-efficiently charging wireless devices without requiring a wiring infrastructure. As such, it is expected to play a key role in the deployment of limited-battery communicating devices, as part of the 6G-enabled Internet-of-Everything (IoE) vision. To date, radiating WPT technologies are mainly studied and designed assuming that the devices are located in the far-field region of the power-radiating antenna, resulting in a relatively low energy transfer efficiency. However, with the transition of 6G systems to mmWave frequencies combined with the usage of large-scale antennas, future WPT devices are likely to operate in the radiating near-field (Fresnel) region. In this article, we provide an overview of the opportunities and challenges which arise from radiating near-field WPT. In particular, we discuss the possibility of realizing beam focusing in near-field radiating conditions, and highlight its possible implications for WPT in future IoE networks. In addition, we overview some of the design challenges and research directions which arise from this emerging paradigm, including its simultaneous operation with wireless communications, radiating waveform considerations, hardware aspects, and operation with typical antenna architectures.
We propose a computationally efficient algorithm for seizure detection. Instead of using a purely data-driven approach, we develop a hybrid model-based/data-driven method, combining convolutional neural networks with factor graph inference. On the CHB-MIT dataset, we demonstrate that the proposed method generalizes well in a 6-fold leave-4-patients-out evaluation. Moreover, our algorithm achieves up to a 5% absolute improvement in performance compared to previous data-driven methods. This is achieved while the computational complexity of the proposed technique is a fraction of that of prior work, making it suitable for real-time seizure detection.
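As a rough illustration of the hybrid idea (not the paper's actual architecture), the factor-graph inference step can be sketched as sum-product (forward-backward) smoothing on a two-state seizure/non-seizure Markov chain. The per-window likelihoods, which in the actual method would come from a CNN, are hard-coded here; the transition matrix and scores are illustrative assumptions:

```python
import numpy as np

def forward_backward(likelihoods, trans, prior):
    """Sum-product (forward-backward) inference on a two-state
    Markov chain. In the hybrid detector, the per-window
    likelihoods would be produced by a small CNN; here they
    are given as fixed illustrative scores."""
    T = len(likelihoods)
    alpha = np.zeros((T, 2))
    beta = np.ones((T, 2))
    # Forward pass (normalized for numerical stability)
    alpha[0] = prior * likelihoods[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = likelihoods[t] * (trans.T @ alpha[t - 1])
        alpha[t] /= alpha[t].sum()
    # Backward pass
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (likelihoods[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# "Sticky" transitions encode that seizures persist across windows.
trans = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
prior = np.array([0.9, 0.1])
# Noisy per-window scores with one isolated false alarm at t=2.
lik = np.array([[0.9, 0.1],
                [0.8, 0.2],
                [0.3, 0.7],   # spurious high seizure score
                [0.9, 0.1],
                [0.9, 0.1]])
post = forward_backward(lik, trans, prior)
```

The temporal smoothing suppresses the isolated spurious window, which is precisely the role the factor graph plays on top of the per-window neural scores.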
Real-time state estimation of dynamical systems is a fundamental task in signal processing and control. For systems that are well-represented by a fully known linear Gaussian state space (SS) model, the celebrated Kalman filter (KF) is a low-complexity optimal solution. However, both the linearity of the underlying SS model and accurate knowledge of it are often not encountered in practice. Here, we present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics with partial information. By integrating a dedicated recurrent neural network module into the flow of the KF alongside the structural SS model, we retain the data efficiency and interpretability of the classic algorithm while implicitly learning complex dynamics from data. We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods operating with both mismatched and accurate domain knowledge.
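To fix ideas, the sketch below shows the classic model-based KF recursion on a toy scalar random walk; the line computing the gain `K` is the step that KalmanNet replaces with an RNN, so that exact knowledge of the noise covariances `Q` and `R` is no longer required. All parameter values are illustrative:

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One predict/update cycle of the classic Kalman filter.
    In KalmanNet, the gain K below is produced by a recurrent
    neural network instead of the model-based recursion."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update (model-based gain; the learned variant replaces these lines)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy scalar random walk: the state drifts, observed in heavy noise.
rng = np.random.default_rng(0)
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x_true, x_est, P = np.zeros(1), np.zeros(1), np.eye(1)
err_filt, err_raw = 0.0, 0.0
for _ in range(500):
    x_true = F @ x_true + rng.normal(scale=0.1, size=1)
    y = H @ x_true + rng.normal(scale=1.0, size=1)
    x_est, P = kalman_step(x_est, P, y, F, H, Q, R)
    err_filt += float((x_est - x_true) ** 2)
    err_raw += float((y - x_true) ** 2)
```

When the model is exact, as here, this recursion is already optimal; the appeal of learning the gain is precisely in retaining this structure when `F`, `H`, `Q`, or `R` are partially unknown or the dynamics are non-linear.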
Dual function radar communications (DFRC) systems are attractive technologies for autonomous vehicles, which utilize electromagnetic waves to constantly sense the environment while simultaneously communicating with neighbouring devices. An emerging approach to implement DFRC systems is to embed information in radar waveforms via index modulation (IM). Implementation of DFRC schemes in vehicular systems gives rise to strict constraints in terms of cost, power efficiency, and hardware complexity. In this paper, we extend IM-based DFRC systems to utilize sparse arrays and frequency modulated continuous waveforms (FMCWs), which are popular in automotive radar for their simplicity and low hardware complexity. The proposed FMCW-based radar-communications system (FRaC) operates at reduced cost and complexity by transmitting with a reduced number of radio frequency modules, combined with narrowband FMCW signalling. This is achieved via array sparsification in transmission, formulating a virtual multiple-input multiple-output array by combining the signals in one coherent processing interval, in which the narrowband waveforms are transmitted in a randomized manner. Performance analysis and numerical results show that the proposed radar scheme achieves similar resolution performance compared with a wideband radar system operating with a large receive aperture, while requiring less hardware overhead. For the communications subsystem, FRaC achieves higher rates and improved error rates compared to dual-function signalling based on conventional phase modulation.
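The rate benefit of index modulation can be sanity-checked with a simple count: choosing which subset of antennas transmits conveys bits on top of any conventional phase symbols. The sketch below is a generic IM bit-count, with parameter names of my own choosing rather than quantities from the FRaC paper:

```python
from math import comb, log2

def im_bits_per_pulse(num_tx, active, psk_order=1):
    """Rough count of information bits embeddable in one pulse
    via index modulation: the choice of which `active` out of
    `num_tx` antennas transmit, plus an optional PSK symbol on
    each active antenna. Illustrative model, not the FRaC scheme."""
    subset_bits = int(log2(comb(num_tx, active)))          # antenna-subset selection
    phase_bits = active * int(log2(psk_order)) if psk_order > 1 else 0
    return subset_bits + phase_bits

# e.g. 4 of 8 antennas active: floor(log2(C(8,4))) = 6 index bits,
# and adding QPSK on each active antenna contributes 8 more.
bits_index_only = im_bits_per_pulse(8, 4)
bits_with_qpsk = im_bits_per_pulse(8, 4, psk_order=4)
```

The point of the count is that the index bits come "for free" in hardware: they require no extra RF chains, which is why sparsification and IM pair naturally in cost-constrained vehicular DFRC.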
Large antenna arrays and high-frequency bands are two key features of future wireless communication systems. The combination of large-scale antennas with high transmission frequencies often results in the communicating devices operating in the near-field (Fresnel) region. In this paper, we study the potential of beam focusing, feasible in near-field operation, in facilitating high-rate multi-user downlink multiple-input multiple-output (MIMO) systems. As the ability to achieve beam focusing is dictated by the transmit antenna, we study near-field signaling considering different antenna structures, including fully-digital architectures, hybrid phase shifter-based precoders, and the emerging dynamic metasurface antenna (DMA) architecture for massive MIMO arrays. We first provide a mathematical model to characterize near-field wireless channels as well as the transmission pattern for the considered antenna architectures. Then, we formulate the beam focusing problem for the goal of maximizing the achievable sum-rate in multi-user networks. We propose efficient solutions to the sum-rate maximization task for fully-digital, (phase shifter-based) hybrid, and DMA architectures. Simulation results show the feasibility of the proposed beam focusing scheme for both single- and multi-user scenarios. In particular, the designed focused beams are such that users residing at the same angular direction can communicate reliably without interfering with each other, which is not achievable using conventional far-field beam steering.
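The key distinction from far-field steering is that near-field focusing resolves users in range, not just angle. A minimal sketch, assuming a half-wavelength uniform linear array at an illustrative 30 GHz carrier: conjugate-phase weights matched to the exact spherical-wave distances concentrate energy at one focal point, so a second point at the same angle but a different range sees much lower gain:

```python
import numpy as np

wavelength = 0.01                 # 30 GHz carrier (illustrative)
d = wavelength / 2                # half-wavelength ULA spacing
N = 256                           # large aperture -> Fresnel region extends far
k = 2 * np.pi / wavelength
ant = (np.arange(N) - (N - 1) / 2) * d   # element x-positions

def gain(weights, point):
    """Array gain toward a 2-D point using exact spherical-wave phases."""
    dist = np.sqrt((point[0] - ant) ** 2 + point[1] ** 2)
    a = np.exp(-1j * k * dist)               # near-field steering vector
    return float(np.abs(weights.conj() @ a) ** 2)

# Focus on a user 3 m away at broadside; a second user sits at the
# SAME angle but 8 m away. Both are well inside the Fresnel region
# (Fraunhofer distance 2*D^2/lambda is ~327 m for this aperture).
focus = np.array([0.0, 3.0])
other = np.array([0.0, 8.0])
dist_f = np.sqrt((focus[0] - ant) ** 2 + focus[1] ** 2)
w_focus = np.exp(-1j * k * dist_f) / np.sqrt(N)   # conjugate-phase focusing
g_focus = gain(w_focus, focus)
g_other = gain(w_focus, other)
```

Under far-field steering, both points would receive essentially the same beamforming gain, since they share an angular direction; the range selectivity shown here is what enables interference-free multiplexing of co-angle users.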
Current discussions on the sixth Generation (6G) of wireless communications are envisioning future networks as a unified communication, sensing, and computing platform that intelligently enables diverse services, ranging from immersive to mission-critical applications. The recently conceived concept of the smart radio environment, enabled by Reconfigurable Intelligent Surfaces (RISs), contributes towards this intelligent networking trend, offering programmable propagation of information-bearing signals, which can be jointly optimized with transceiver operations. Typical RIS implementations include metasurfaces with nearly passive meta-atoms, which solely reflect the incident wave in an externally controllable way. However, this purely reflective nature induces significant challenges in orchestrating the RIS from the wireless network. For example, channel estimation, which is essential for coherent communications in RIS-empowered wireless networks, is quite challenging with the available RIS designs. This article introduces the concept of Hybrid reflecting and sensing RISs (HRISs), which enables metasurfaces to reflect the impinging signal in a controllable manner, while simultaneously sensing a portion of it. The sensing capability of HRISs facilitates various network management functionalities, including channel estimation and localization. We discuss a hardware design for HRISs and detail a full-wave proof-of-concept. We highlight their distinctive properties in comparison to reflective RISs and active relays, and present a simulation study evaluating the HRIS capability for performing channel estimation. Future research challenges and opportunities arising from the concept of HRISs are presented.
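The hybrid operation can be captured by a simple power-split signal model, in which each meta-atom reflects a fraction of the impinging field with a tunable phase and routes the remainder to reception hardware. The split factor `rho` and all dimensions below are illustrative assumptions, not parameters from the article's hardware design:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                       # number of meta-atoms
rho = 0.8                    # assumed fraction of impinging power reflected
phi = rng.uniform(0, 2 * np.pi, N)                     # tunable reflection phases
s_in = rng.normal(size=N) + 1j * rng.normal(size=N)    # impinging field per atom

# Hybrid operation per meta-atom: a controllable reflection plus a
# sensed copy routed to the reception hardware (with receiver noise).
reflected = np.sqrt(rho) * np.exp(1j * phi) * s_in
sensed = np.sqrt(1 - rho) * s_in \
    + 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Sanity check: the noiseless split conserves the impinging power.
power_in = np.sum(np.abs(s_in) ** 2)
power_out = np.sum(np.abs(reflected) ** 2) \
    + np.sum(np.abs(np.sqrt(1 - rho) * s_in) ** 2)
```

The sensed branch is what a purely reflective RIS lacks: it gives the surface direct observations of the impinging signal, which is what makes tasks such as channel estimation tractable at the surface itself.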
Deep neural networks (DNNs) based digital receivers can potentially operate in complex environments. However, the dynamic nature of communication channels implies that in some scenarios, DNN-based receivers should be periodically retrained in order to track temporal variations in the channel conditions. To this aim, frequent transmissions of lengthy pilot sequences are generally required, at the cost of substantial overhead. In this work we propose a DNN-aided symbol detector, Meta-ViterbiNet, that tracks channel variations with reduced overhead by integrating three complementary techniques: 1) We leverage domain knowledge to implement a model-based/data-driven equalizer, ViterbiNet, that operates with a relatively small number of trainable parameters; 2) We tailor a meta-learning procedure to the symbol detection problem, optimizing the hyperparameters of the learning algorithm to facilitate rapid online adaptation; and 3) We adopt a decision-directed approach based on coded communications to enable online training with short-length pilot blocks. Numerical results demonstrate that Meta-ViterbiNet operates accurately in rapidly-varying channels, outperforming the previous best approach, based on ViterbiNet or conventional recurrent neural networks without meta-learning, by a margin of up to 0.6 dB in bit error rate in various challenging scenarios.
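The decision-directed ingredient (technique 3) has a classical analogue that conveys the idea compactly: after a short pilot, an adaptive equalizer can keep training on its own hard decisions in place of pilots. The sketch below uses a linear LMS equalizer over a toy BPSK intersymbol-interference channel, a deliberate simplification of the neural equalizer in the paper; channel taps, step size, and pilot length are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
num_syms = 2000
symbols = rng.choice([-1.0, 1.0], size=num_syms)     # BPSK stream
h = np.array([1.0, 0.4])                             # toy ISI channel (static here)
rx = np.convolve(symbols, h)[:num_syms] + 0.05 * rng.normal(size=num_syms)

# Decision-directed LMS: after a short pilot block, the equalizer's
# own hard decisions serve as training labels, mimicking how coded
# decisions enable online adaptation with short pilots.
w = np.zeros(3)          # 3-tap linear equalizer
mu = 0.02                # LMS step size
pilot_len = 100
errors = 0
for t in range(2, num_syms):
    x = rx[t - 2:t + 1][::-1]                        # most recent samples first
    y_hat = w @ x
    d = symbols[t] if t < pilot_len else np.sign(y_hat)   # pilot vs decision label
    w += mu * (d - y_hat) * x                        # LMS update
    if t >= pilot_len:
        errors += int(np.sign(y_hat) != symbols[t])
ber = errors / (num_syms - pilot_len)
```

In the actual system, the labels would additionally pass through a channel decoder before being re-used for training, so that residual decision errors are largely corrected; here the raw decisions suffice for a static channel.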
Recent advances in the fabrication and experimentation of Reconfigurable Intelligent Surfaces (RISs) have motivated the concept of the smart radio environment, according to which the propagation of information-bearing waveforms in the wireless medium is amenable to programmability. Although the vast majority of recent experimental research on RIS-empowered wireless communications gravitates around narrowband beamforming in quasi-free space, RISs are foreseen to revolutionize wideband wireless connectivity in dense urban as well as indoor scenarios, which are usually characterized as strongly reverberant environments exhibiting severe multipath conditions. In this article, capitalizing on recent physics-driven experimental explorations of RIS-empowered wave propagation control in complex scattering cavities, we identify the potential of the spatiotemporal control offered by RISs to boost wireless communications in rich scattering channels via two case studies. First, an RIS is deployed to shape the multipath channel impulse response, which is shown to enable higher achievable communication rates. Second, the RIS-tunable propagation environment is leveraged as an analog multiplexer to localize non-cooperative objects using wave fingerprints, even when they are outside the line of sight. Future research challenges and opportunities in the algorithmic design and experimentation of smart rich scattering wireless environments enabled by RISs for sixth Generation (6G) wireless communications are discussed.
We present an introduction to model-based machine learning for communication systems. We begin by reviewing existing strategies for combining model-based algorithms and machine learning from a high-level perspective, and compare them to the conventional deep learning approach which utilizes established deep neural network (DNN) architectures trained in an end-to-end manner. Then, we focus on symbol detection, which is one of the fundamental tasks of communication receivers. We show how the different strategies of conventional deep architectures, deep unfolding, and DNN-aided hybrid algorithms, can be applied to this problem. The last two approaches constitute a middle ground between purely model-based and solely DNN-based receivers. By focusing on this specific task, we highlight the advantages and drawbacks of each strategy, and present guidelines to facilitate the design of future model-based deep learning systems for communications.
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques. Such model-based methods utilize mathematical formulations that represent the underlying physics, prior information and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies and may lead to poor performance when real systems display complex or dynamic behavior. On the other hand, purely data-driven approaches that are model-agnostic are becoming increasingly popular as datasets become abundant and the power of modern deep learning pipelines increases. Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance, especially for supervised problems. However, DNNs typically require massive amounts of data and immense computational resources, limiting their applicability for some signal processing scenarios. We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches. Such model-based deep learning methods exploit both partial domain knowledge, via mathematical structures designed for specific problems, as well as learning from limited data. In this article we survey the leading approaches for studying and designing model-based deep learning systems. We divide hybrid model-based/data-driven systems into categories based on their inference mechanism. We provide a comprehensive review of the leading approaches for combining model-based algorithms with deep learning in a systematic manner, along with concrete guidelines and detailed signal processing oriented examples from recent literature. Our aim is to facilitate the design and study of future systems at the intersection of signal processing and machine learning that incorporate the advantages of both domains.
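A canonical example of such hybrid design is deep unfolding: a classical iterative solver is truncated to a fixed number of iterations, treated as network layers whose parameters can then be trained. The sketch below unfolds ISTA for sparse recovery; the step size and threshold are fixed model-based defaults here, but they are exactly the quantities that would become trainable (per layer) in a LISTA-style unfolded network:

```python
import numpy as np

def unfolded_ista(A, y, num_layers=200, step=None, thresh=0.05):
    """ISTA truncated to a fixed number of iterations ("layers").
    In deep unfolding, `step` and `thresh` (and matrices derived
    from A) become trainable per-layer parameters; here they are
    fixed model-based defaults."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        r = x - step * A.T @ (A @ x - y)             # gradient step on ||Ax - y||^2
        x = np.sign(r) * np.maximum(np.abs(r) - step * thresh, 0.0)  # soft-threshold
    return x

# Toy compressed-sensing instance: 3-sparse signal, 40 measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.8, 0.6]
y = A @ x_true
x_hat = unfolded_ista(A, y)
err = np.linalg.norm(x_hat - x_true)
```

The appeal of this construction is exactly the trade-off discussed above: the iteration structure encodes the model (sparsity, linear measurements), so only a handful of scalars per layer need to be learned from limited data.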