
Hardware Accelerated SDR Platform for Adaptive Air Interfaces

Added by Tarik Kazaz
Publication date: 2017
Research language: English





The future 5G wireless infrastructure will support any-to-any connectivity between densely deployed smart objects that form the emerging paradigm known as the Internet of Everything (IoE). Compared to traditional wireless networks, which enable communication between devices using a single technology, 5G networks will need to support seamless connectivity between heterogeneous wireless objects and IoE networks. To tackle the complexity and versatility of future IoE networks, 5G will need to guarantee optimal usage of both spectrum and energy resources and, further, support technology-agnostic connectivity between objects. One way to realize this is to combine intelligent network control with adaptive software-defined air interfaces. In this paper, a flexible and compact platform is proposed for on-the-fly composition of low-power adaptive air interfaces, based on hardware/software co-processing. Unlike traditional Software Defined Radio (SDR) systems, which perform computationally intensive signal processing algorithms in software, consume significant power, and have a large form factor, the proposed platform uses modern hybrid FPGA technology combined with novel ideas such as RF Network-on-Chip (RFNoC) and partial reconfiguration. The resulting system enables the composition of reconfigurable air interfaces based on hardware/software co-processing on a single chip, delivering high processing throughput in a smaller form factor and at reduced power consumption.
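
To make the co-processing idea concrete, the following minimal Python sketch shows how an air interface might be composed from processing blocks mapped to either an FPGA region or a CPU core, with a single hardware block swapped at runtime in the spirit of partial reconfiguration. All class, block, and method names here are hypothetical illustrations, not the platform's actual API.

    class Block:
        def __init__(self, name, domain):
            self.name = name          # e.g. "fft", "demapper"
            self.domain = domain      # "hw" -> FPGA region, "sw" -> CPU core

    class AirInterface:
        def __init__(self, blocks):
            self.blocks = blocks

        def deploy(self):
            for b in self.blocks:
                target = "FPGA partial-reconfig region" if b.domain == "hw" else "CPU"
                print(f"placing {b.name} on {target}")

        def reconfigure(self, old_name, new_block):
            # swap one hardware block at runtime without rebuilding the
            # rest of the design (the partial-reconfiguration idea)
            self.blocks = [new_block if b.name == old_name else b
                           for b in self.blocks]
            print(f"partially reconfigured: {old_name} -> {new_block.name}")

    # Compose an OFDM-like receive chain, then adapt it on the fly.
    rx = AirInterface([Block("ddc", "hw"), Block("fft", "hw"),
                       Block("demapper", "hw"), Block("mac", "sw")])
    rx.deploy()
    rx.reconfigure("demapper", Block("qam64_demapper", "hw"))

The point of the sketch is the partitioning decision: throughput-critical blocks sit in hardware, control-heavy blocks in software, and only the changed region is reprogrammed when the air interface adapts.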



Related research

Ultra-Reliable Low-Latency Communication (URLLC) is a key feature of 5G systems. The quality of service (QoS) requirements imposed by URLLC are less than 10 ms delay and less than $10^{-5}$ packet loss rate (PLR). To satisfy such strict requirements with minimal channel resource consumption, devices need to accurately predict the channel quality and select an appropriate Modulation and Coding Scheme (MCS) for URLLC. This paper presents a novel real-time channel prediction system based on Software Defined Radio that uses a neural network. The paper also describes and shares an open channel measurement dataset that can be used to compare various channel prediction approaches in different mobility scenarios in future research on URLLC.
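
As a rough illustration of the predict-then-select loop (not the paper's neural network, dataset, or thresholds), the sketch below fits a linear predictor over a window of past SNR samples on synthetic data and maps the predicted SNR to an MCS through an invented threshold table.

    import numpy as np

    WINDOW = 8
    rng = np.random.default_rng(0)

    # synthetic SNR trace (dB) standing in for measured channel data
    snr = 15 + 3 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.5, 500)

    # build (history -> next sample) pairs and fit by least squares;
    # this linear model is a stand-in for the paper's neural network
    X = np.stack([snr[i:i + WINDOW] for i in range(len(snr) - WINDOW)])
    y = snr[WINDOW:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict_next_snr(history):
        return float(history @ w)

    def select_mcs(snr_db):
        # toy thresholds: pick the most efficient MCS whose SNR floor is met
        table = [(0.0, "QPSK 1/2"), (10.0, "16QAM 1/2"), (18.0, "64QAM 3/4")]
        chosen = table[0][1]
        for threshold, mcs in table:
            if snr_db >= threshold:
                chosen = mcs
        return chosen

    print(select_mcs(predict_next_snr(snr[-WINDOW:])))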
We introduce Air Learning, an open-source simulator, and a gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle avoidance tasks in three different environments and Deep Q Networks (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies' performance under various quality-of-flight (QoF) metrics, such as the energy consumed, endurance, and the average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that the trajectories on an embedded Raspberry Pi are vastly different from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses the hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (onboard compute on the aerial robot). A randomly sampled latency from the latency distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (discrepancy in the flight time metric reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes those differences and exposes how the onboard compute choice affects the aerial robot's performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: http://bit.ly/2JNAVb6.
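
The artificial-delay technique can be sketched as a thin wrapper around a training environment. The sketch below assumes a gym-style object with reset()/step(action) returning the classic 4-tuple, a caller-supplied no-op action, and a list of latencies (in environment steps) measured via hardware-in-the-loop; all of these are assumptions made for illustration, not the Air Learning code.

    import random

    class LatencyWrapper:
        """Delays each action by a latency sampled from measured data."""

        def __init__(self, env, latency_samples, noop_action):
            self.env = env                          # gym-style environment
            self.latency_samples = latency_samples  # e.g. [0, 1, 1, 2]
            self.noop = noop_action

        def reset(self):
            return self.env.reset()

        def step(self, action):
            # while the policy's action is "in flight", the environment
            # keeps evolving under a no-op, mimicking onboard compute lag
            delay = random.choice(self.latency_samples)
            total_reward = 0.0
            for _ in range(delay):
                obs, r, done, info = self.env.step(self.noop)
                total_reward += r
                if done:
                    return obs, total_reward, done, info
            obs, r, done, info = self.env.step(action)
            return obs, total_reward + r, done, info

    # usage (hypothetical): wrapped = LatencyWrapper(env, [0, 1, 1, 2], noop_action=0)

Training against the wrapped environment exposes the policy to the same action-to-effect lag it will see on the embedded target, which is the mechanism behind closing the reported hardware gap.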
Providing fault-tolerance for long-running GPU-intensive jobs requires application-specific solutions, and often involves saving the state of complex data structures spread among many graphics libraries. This work describes a mechanism for transparent GPU-independent checkpoint-restart of 3D graphics. The approach is based on a record-prune-replay paradigm: all OpenGL calls relevant to the graphics driver state are recorded; calls not relevant to the internal driver state as of the last graphics frame prior to checkpoint are discarded; and the remaining calls are replayed on restart. A previous approach for OpenGL 1.5, based on a shadow device driver, required more than 78,000 lines of OpenGL-specific code. In contrast, the new approach, based on record-prune-replay, implements the same functionality in just 4,500 lines of code. The speed of this approach varies between 80 per cent and nearly 100 per cent of the speed of the native hardware acceleration for OpenGL 1.5, as measured when running the ioquake3 game under Linux. This approach has also been extended to demonstrate checkpointing of OpenGL 3.0 for the first time, with a demonstration using PyMOL for molecular visualization.
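
The record-prune-replay paradigm reduces to a call log with three operations. The toy model below is illustrative only (the names and the persistent/transient tagging are invented for the example, not the authors' code): calls are recorded as they happen, calls that no longer affect driver state are pruned at checkpoint time, and the survivors are re-issued on restart.

    class CallLog:
        def __init__(self):
            self.calls = []   # (function_name, args, affects_persistent_state)

        def record(self, name, args, persistent):
            self.calls.append((name, args, persistent))

        def prune(self):
            # keep only calls that still shape driver state at checkpoint:
            # e.g. texture/shader uploads stay, per-frame draw calls go
            self.calls = [c for c in self.calls if c[2]]

        def replay(self, gl):
            for name, args, _ in self.calls:
                getattr(gl, name)(*args)   # re-issue each call on restart

    log = CallLog()
    log.record("glGenTextures", (1,), persistent=True)
    log.record("glDrawArrays", ("GL_TRIANGLES", 0, 3), persistent=False)
    log.prune()   # the draw call is discarded; the texture creation is kept

The pruning step is what keeps the replayed log short: only state-shaping calls survive, so restart cost is proportional to driver state rather than to the full call history.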
The field of transient astronomy has seen a revolution with the first gravitational-wave detections and the arrival of multi-messenger observations they enabled. Transformed by the first detection of binary black hole and binary neutron star mergers, computational demands in gravitational-wave astronomy are expected to grow by at least a factor of two over the next five years as the global network of kilometer-scale interferometers is brought to design sensitivity. With the increase in detector sensitivity, real-time delivery of gravitational-wave alerts will become increasingly important as an enabler of multi-messenger followup. In this work, we report a novel implementation and deployment of deep learning inference for real-time gravitational-wave data denoising and astrophysical source identification. This is accomplished using a generic Inference-as-a-Service model that is capable of adapting to the future needs of gravitational-wave data analysis. Our implementation allows seamless incorporation of hardware accelerators and also enables the use of commercial or private (dedicated) as-a-service computing. Based on our results, we propose a paradigm shift in low-latency and offline computing in gravitational-wave astronomy. Such a shift can address key challenges in peak usage, scalability, and reliability, and provide a data analysis platform particularly optimized for deep learning applications. The achieved sub-millisecond scale latency will also be relevant for any machine learning-based real-time control systems that may be invoked in the operation of near-future and next generation ground-based laser interferometers, as well as the front-end collection, distribution and processing of data from such instruments.
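
A generic Inference-as-a-Service client can be very small, which is part of the appeal: the detector-side code only serializes data and calls a remote model. In the sketch below the endpoint URL, model name, and JSON schema are placeholders invented for the example, not the deployment described in the paper.

    import numpy as np
    import requests

    # hypothetical endpoint for a remote denoising model
    SERVER = "http://inference.example.org/v1/models/gw-denoiser:predict"

    def denoise(strain, sample_rate=4096):
        """Send one second of detector strain to a remote denoising model."""
        payload = {"sample_rate": sample_rate, "strain": strain.tolist()}
        resp = requests.post(SERVER, json=payload, timeout=0.5)
        resp.raise_for_status()
        return np.asarray(resp.json()["denoised"])

    # synthetic whitened strain standing in for real detector data
    strain = np.random.default_rng(1).normal(0, 1, 4096)
    # clean = denoise(strain)   # would return the denoised time series

Because the model runs behind a service boundary, accelerators can be added or swapped server-side without touching client code, which is the scalability argument the paper makes.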
A cross-layer cognitive radio system is designed to support unicast and multicast traffic with integration of dynamic spectrum access (DSA), the backpressure algorithm, and network coding for multi-hop networking. The full protocol stack that operates with distributed coordination and local information exchange is implemented with software-defined radios (SDRs) and assessed in a realistic test and evaluation (T&E) system based on a network emulation testbed. Without a common control channel, each SDR performs neighborhood discovery, spectrum sensing and channel estimation, and executes a distributed extension of the backpressure algorithm that optimizes the spectrum utility (which represents link rates and traffic congestion) with joint DSA and routing. The backpressure algorithm is extended to support multicast traffic with network coding deployed over virtual queues (for multicast destinations). In addition to full rank decoding at destinations, rank deficient decoding is also considered to reduce the delay. Cognitive network functionalities are programmed with GNU Radio, and Python modules are developed for different layers. USRP radios are used as RF front ends. A wireless network T&E system is presented to execute emulation tests, where radios communicate with each other through a wireless network emulator that controls physical channels according to path loss, fading, and topology effects. Emulation tests are presented for different topologies to evaluate the throughput, backlog and energy consumption. Results verify the SDR implementation and the joint effect of DSA, backpressure routing and network coding under realistic channel and radio hardware effects.
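
The core backpressure rule, activating the link and flow with the largest product of queue differential and achievable link rate, can be stated in a few lines. The sketch below uses assumed data structures for queues and sensed-idle links, not the authors' GNU Radio implementation, and omits the multicast/network-coding extensions.

    def backpressure_schedule(queues, links):
        """
        queues: {node: {flow: backlog}}
        links:  {(u, v): rate} for currently usable (sensed-idle) channels
        Returns the (link, flow) to activate, or None if no positive pressure.
        """
        best, best_weight = None, 0.0
        for (u, v), rate in links.items():
            for flow, backlog in queues[u].items():
                differential = backlog - queues[v].get(flow, 0)
                weight = differential * rate   # backpressure weight
                if weight > best_weight:
                    best, best_weight = ((u, v), flow), weight
        return best

    queues = {"a": {"f1": 9, "f2": 2}, "b": {"f1": 3}, "c": {"f2": 0}}
    links = {("a", "b"): 2.0, ("a", "c"): 1.0}
    print(backpressure_schedule(queues, links))   # (('a', 'b'), 'f1'): (9-3)*2 = 12

Routing falls out of the same rule: traffic drains toward neighbors with smaller per-flow backlogs, so congestion information steers both the channel choice (DSA) and the next hop.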