
Electrosense: Open and Big Spectrum Data

Published by: Sreeraj Rajendran
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





While the radio spectrum allocation is well regulated, there is little knowledge about its actual utilization over time and space. This limitation hinders taking effective actions in various applications, including cognitive radios, electrosmog monitoring, and law enforcement. We introduce Electrosense, an initiative that seeks more efficient, safe, and reliable monitoring of the electromagnetic space by improving the accessibility of spectrum data for the general public. A collaborative spectrum monitoring network is designed that monitors the spectrum at large scale with low-cost spectrum sensing nodes. The large set of data is stored and processed in a big data architecture and provided back to the community with an open spectrum-data-as-a-service model that allows users to build diverse and novel applications with different requirements. We illustrate useful usage scenarios of the Electrosense data.
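The sensing pipeline described above can be sketched in miniature: a node captures samples, estimates the power spectrum, and flags occupied bins before reporting them upstream. This is an illustrative sketch only; the function names and the median-floor thresholding rule are assumptions for the example, not the Electrosense implementation.

```python
import cmath
import math

def power_spectrum(samples):
    """Naive DFT power spectrum (O(n^2)); fine for a short illustration."""
    n = len(samples)
    spectrum = []
    for k in range(n):
        acc = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
        spectrum.append(abs(acc) ** 2 / n)
    return spectrum

def occupied_bins(samples, threshold_db=10.0):
    """Flag bins whose power exceeds the median noise floor by threshold_db."""
    psd = power_spectrum(samples)
    floor = sorted(psd)[len(psd) // 2] or 1e-12
    return [k for k, p in enumerate(psd)
            if p > 0 and 10 * math.log10(p / floor) > threshold_db]

# Synthetic capture: a tone in bin 5 plus low-level deterministic "noise".
n = 64
samples = [cmath.exp(2j * math.pi * 5 * t / n) + 0.01 * math.cos(1.7 * t)
           for t in range(n)]
# Bin 5 (the tone) is flagged; noise sidelobes may also appear.
print(occupied_bins(samples))
```

A real node would replace the naive DFT with an FFT over streaming I/Q captures and push the flagged bins to the backend, but the detection logic is the same shape.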




Read also

Web spectrum monitoring systems based on crowdsourcing have recently gained popularity. These systems are however limited to applications of interest for governmental organizations or telecom providers, and only provide aggregated information about spectrum statistics. The result is that there is a lack of interest for layman users to participate, which limits its widespread deployment. We present Electrosense+, which addresses this challenge and creates a general-purpose and open platform for spectrum monitoring using low-cost, embedded, and software-defined spectrum IoT sensors. Electrosense+ allows users to remotely decode specific parts of the radio spectrum. It builds on the centralized architecture of its predecessor, Electrosense, for controlling and monitoring the spectrum IoT sensors, but implements a real-time and peer-to-peer communication system for scalable spectrum data decoding. We propose different mechanisms to incentivize the participation of users for deploying new sensors and keeping them operational in the Electrosense network. As a reward for the user, we propose an incentive accounting system based on virtual tokens to encourage participants to host IoT sensors. We present the new Electrosense+ system architecture and evaluate its performance at decoding various wireless signals, including FM radio, AM radio, ADS-B, AIS, LTE, and ACARS.
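The token-based incentive accounting described above can be illustrated with a toy ledger. The class, the earning rate, and the decoding cost below are invented for the example and do not reflect the actual Electrosense+ accounting rules.

```python
class TokenLedger:
    """Toy incentive accounting: users earn tokens for sensor uptime and
    spend them to request remote decoding. All names and rates here are
    illustrative, not the real Electrosense+ scheme."""

    def __init__(self, tokens_per_hour=1, decode_cost=5):
        self.tokens_per_hour = tokens_per_hour
        self.decode_cost = decode_cost
        self.balances = {}

    def report_uptime(self, user, hours):
        """Credit a user for hours their hosted sensor stayed online."""
        self.balances[user] = (self.balances.get(user, 0)
                               + hours * self.tokens_per_hour)

    def request_decode(self, user):
        """Deduct tokens for one decoding session; refuse if balance is short."""
        if self.balances.get(user, 0) < self.decode_cost:
            return False
        self.balances[user] -= self.decode_cost
        return True

ledger = TokenLedger()
ledger.report_uptime("alice", 8)       # hosting a sensor for 8 hours -> 8 tokens
print(ledger.request_decode("alice"))  # True; balance drops to 3
print(ledger.request_decode("alice"))  # False; only 3 tokens remain
```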
The explosion of 5G networks and the Internet of Things will result in an exceptionally crowded RF environment, where techniques such as spectrum sharing and dynamic spectrum access will become essential components of the wireless communication process. In this vision, wireless devices must be able to (i) learn to autonomously extract knowledge from the spectrum on-the-fly; and (ii) react in real time to the inferred spectrum knowledge by appropriately changing communication parameters, including frequency band, symbol modulation, coding rate, among others. Traditional CPU-based machine learning suffers from high latency, and requires application-specific and computationally-intensive feature extraction/selection algorithms. In this paper, we present RFLearn, the first system enabling spectrum knowledge extraction from unprocessed I/Q samples by deep learning directly in the RF loop. RFLearn provides (i) a complete hardware/software architecture where the CPU, radio transceiver and learning/actuation circuits are tightly connected for maximum performance; and (ii) a learning circuit design framework where the latency vs. hardware resource consumption trade-off can be explored. We implement and evaluate the performance of RFLearn on a custom software-defined radio built on a system-on-chip (SoC) ZYNQ-7000 device mounting AD9361 radio transceivers and VERT2450 antennas. We showcase the capabilities of RFLearn by applying it to solving the fundamental problems of modulation and OFDM parameter recognition. Experimental results reveal that RFLearn decreases latency and power by about 17x and 15x with respect to a software-based solution, with a comparatively low hardware resource consumption.
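For contrast with RFLearn's learned approach, classical modulation recognition relies on hand-crafted features. The sketch below discriminates BPSK from QPSK with a simple moment test on raw I/Q symbols: BPSK has a strong squared-signal component, while QPSK only shows a strong fourth-power component. This is a textbook feature shown purely to illustrate the kind of task RFLearn solves with deep learning instead; the threshold is an assumption for noiseless symbols.

```python
import cmath
import math
import random

def classify_psk(iq):
    """Moment-based BPSK/QPSK discrimination on a list of complex symbols.
    BPSK (+/-1): |E[s^2]| is near 1. QPSK: |E[s^2]| is near 0 but
    |E[s^4]| is near 1, since s^4 = -1 for every Gray-mapped symbol."""
    n = len(iq)
    m2 = abs(sum(s ** 2 for s in iq)) / n
    m4 = abs(sum(s ** 4 for s in iq)) / n
    return "BPSK" if m2 > 0.5 * m4 else "QPSK"

rng = random.Random(0)
bpsk = [complex(rng.choice([-1, 1]), 0) for _ in range(256)]
qpsk = [cmath.exp(1j * (math.pi / 4 + rng.choice([0, 1, 2, 3]) * math.pi / 2))
        for _ in range(256)]
print(classify_psk(bpsk), classify_psk(qpsk))  # BPSK QPSK
```

Features like these must be redesigned for every new signal class, which is exactly the burden a learned I/Q front end such as RFLearn is meant to remove.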
The Internet of Underwater Things (IoUT) is an emerging communication ecosystem developed for connecting underwater objects in maritime and underwater environments. The IoUT technology is intricately linked with intelligent boats and ships, smart shores and oceans, automatic marine transportations, positioning and navigation, underwater exploration, disaster prediction and prevention, as well as with intelligent monitoring and security. The IoUT has an influence at various scales ranging from a small scientific observatory, to a midsized harbor, and to covering global oceanic trade. The network architecture of IoUT is intrinsically heterogeneous and should be sufficiently resilient to operate in harsh environments. This creates major challenges in terms of underwater communications, whilst relying on limited energy resources. Additionally, the volume, velocity, and variety of data produced by sensors, hydrophones, and cameras in IoUT is enormous, giving rise to the concept of Big Marine Data (BMD), which has its own processing challenges. Hence, conventional data processing techniques will falter, and bespoke Machine Learning (ML) solutions have to be employed for automatically learning the specific BMD behavior and features facilitating knowledge extraction and decision support. The motivation of this paper is to comprehensively survey the IoUT, BMD, and their synthesis. It also aims for exploring the nexus of BMD with ML. We set out from underwater data collection and then discuss the family of IoUT data communication techniques with an emphasis on the state-of-the-art research challenges. We then review the suite of ML solutions suitable for BMD handling and analytics. We treat the subject deductively from an educational perspective, critically appraising the material surveyed.
The concept of multi-access edge computing (MEC) has been recently introduced to supplement cloud computing by deploying MEC servers to the network edge so as to reduce the network delay and alleviate the load on cloud data centers. However, compared to a resourceful cloud, an MEC server has limited resources. When each MEC server operates independently, it cannot handle all of the computational and big data demands stemming from the users' devices. Consequently, the MEC server cannot provide significant gains in overhead reduction due to data exchange between users' devices and the remote cloud. Therefore, joint computing, caching, communication, and control (4C) at the edge with MEC server collaboration is strongly needed for big data applications. In order to address these challenges, in this paper, the problem of joint 4C in big data MEC is formulated as an optimization problem whose goal is to maximize the bandwidth saving while minimizing delay, subject to the local computation capability of user devices, computation deadline, and MEC resource constraints. However, the formulated problem is shown to be non-convex. To make this problem convex, a proximal upper bound problem of the original formulated problem that guarantees descent to the original problem is proposed. To solve the proximal upper bound problem, a block successive upper bound minimization (BSUM) method is applied. Simulation results show that the proposed approach increases bandwidth saving and minimizes delay while satisfying the computation deadlines.
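The BSUM idea the abstract relies on (minimize a proximal upper bound over one block of variables at a time, which guarantees descent) can be shown on a toy strongly convex quadratic. The objective, blocks, and proximal weight below are illustrative stand-ins, not the paper's 4C formulation.

```python
def bsum_quadratic(rho=1.0, iters=200):
    """Toy BSUM: minimize f(x, y) = (x - 1)^2 + (y + 2)^2 + 0.5*x*y by
    alternately minimizing, in closed form, the proximal upper bound
    f(., y_k) + (rho/2)(x - x_k)^2 over x, then likewise over y."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        # argmin_x (x - 1)^2 + 0.5*x*y + (rho/2)(x - x_prev)^2
        x = (2 - 0.5 * y + rho * x) / (2 + rho)
        # argmin_y (y + 2)^2 + 0.5*x*y + (rho/2)(y - y_prev)^2
        y = (-4 - 0.5 * x + rho * y) / (2 + rho)
    return x, y

x, y = bsum_quadratic()
print(round(x, 3), round(y, 3))  # approaches the minimizer (1.6, -2.4)
```

Each block update only needs a scalar closed form here; in the paper's setting each block is a full caching/computation/communication decision, but the descent guarantee of the proximal upper bound is the same mechanism.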
Xiaohu Ge, Tao Han, Yan Zhang (2014)
A two-tier femtocell network is an efficient communication architecture that significantly improves throughput in indoor environments with low power consumption. Traditionally, a femtocell network is usually configured to be either completely open or completely closed, in that its channels are either made available to all users or used by its own users only. This may limit network flexibility and performance. It is desirable for owners of femtocell base stations if a femtocell can partially open its channels for external users' access. In such scenarios, spectrum and energy efficiency becomes a critical issue in the design of femtocell network protocols and structure. In this paper, we conduct performance analysis for two-tier femtocell networks with partially open channels. In particular, we build a Markov chain to model the channel access in the femtocell network and then derive the performance metrics in terms of the blocking probabilities. Based on stationary state probabilities derived by Markov chain models, spectrum and energy efficiency are modeled and analyzed under different scenarios characterized by critical parameters, including number of femtocells in a macrocell, average number of users, and number of open channels in a femtocell. Numerical and Monte-Carlo (MC) simulation results indicate that the number of open channels in a femtocell has an adverse impact on the spectrum and energy efficiency of two-tier femtocell networks. Results in this paper provide guidelines for trading off spectrum and energy efficiency of two-tier femtocell networks by configuring different numbers of open channels in a femtocell.
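The blocking probabilities that such Markov-chain models yield have a classical closed form in the simplest case: if a femtocell's c open channels are modeled as an M/M/c/c loss system, the stationary blocking probability follows the numerically stable Erlang-B recursion. A minimal sketch, with arbitrary parameter values and without the paper's two-tier structure:

```python
def erlang_b(offered_load, channels):
    """Blocking probability of an M/M/c/c loss system via the stable
    Erlang-B recursion B(c) = a*B(c-1) / (c + a*B(c-1)), B(0) = 1,
    where a is the offered load in Erlangs."""
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# More open channels at a fixed offered load -> lower blocking
# for external users (2.0 Erlangs is an arbitrary example load).
for c in (1, 2, 4):
    print(c, round(erlang_b(2.0, c), 4))
```

The paper's model is richer (two tiers, partially open channels, energy terms), but each blocking metric it derives is this same kind of stationary-state quantity read off a birth-death chain.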