
Efficient Detectors for Telegram Splitting based Transmission in Low Power Wide Area Networks with Bursty Interference

 Added by Steven Kisseleff
 Publication date 2020
Language: English





Low Power Wide Area (LPWA) networks are known to be highly vulnerable to external in-band interference, where packet collisions may substantially degrade system performance. To enhance performance in such cases, the telegram splitting (TS) method has been proposed recently. This approach exploits the typical burstiness of the interference via forward error correction (FEC) and offers a substantial performance improvement over other packet transmission methods in LPWA networks. While it has already been demonstrated that the TS method benefits from knowledge of the current interference state at the receiver side, corresponding practical receiver algorithms of high performance are still missing. Modeling the bursty interference via Markov chains leads to the optimal detector in terms of a-posteriori symbol error probability. However, this solution requires high computational complexity, assumes a-priori knowledge of the interference characteristics, and lacks flexibility. We propose a further developed scheme with increased flexibility and introduce an approach to reduce its complexity while maintaining close-to-optimum performance. In particular, the proposed low-complexity solution substantially outperforms existing practical methods in terms of packet error rate and is therefore highly beneficial for practical LPWA network scenarios.
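The burstiness exploited by telegram splitting is commonly captured by a two-state (Gilbert-Elliott-style) Markov chain that switches between a "good" and an interfered "bad" channel state. A minimal simulation sketch of this idea, with purely illustrative transition probabilities and sub-packet sizes (not the paper's parameters):

```python
import random

def gilbert_elliott(n_symbols, p_gb=0.05, p_bg=0.3, seed=1):
    """Simulate a two-state (Good/Bad) Markov interference process.

    p_gb: probability of switching Good -> Bad (a burst starts)
    p_bg: probability of switching Bad -> Good (the burst ends)
    Returns a list of booleans: True means the symbol is interfered.
    """
    rng = random.Random(seed)
    state_bad = False
    states = []
    for _ in range(n_symbols):
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
        states.append(state_bad)
    return states

# Telegram splitting: one packet split into 8 sub-packets of 36 symbols each;
# FEC across sub-packets can tolerate a few of them being hit by a burst.
states = gilbert_elliott(8 * 36)
sub_packets = [states[i * 36:(i + 1) * 36] for i in range(8)]
hit = sum(any(sp) for sp in sub_packets)
print(f"{hit}/8 sub-packets touched by interference")
```

Because bursts are correlated in time, they tend to concentrate in a few sub-packets rather than spreading over all of them, which is exactly what makes splitting plus FEC effective.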



Related research

Low-power wide-area (LPWA) networks are attracting extensive attention because of their ability to offer low-cost and massive connectivity to Internet of Things (IoT) devices distributed over wide geographical areas. This article provides a brief overview of the existing LPWA technologies and useful insights to aid the large-scale deployment of LPWA networks. In particular, we first review the currently competing candidates for LPWA networks, such as narrowband IoT (NB-IoT) and long range (LoRa), in terms of technical fundamentals and large-scale deployment potential. We then present two implementation examples of LPWA networks. By analyzing the field-test results, we identify several challenges that prevent LPWA technologies from moving from theory to widespread practice.
Recent years have witnessed the proliferation of Low-power Wide Area Networks (LPWANs) in the unlicensed band for various Internet-of-Things (IoT) applications. Due to the ultra-low transmission power and long transmission duration, LPWAN devices inevitably suffer from high-power Cross Technology Interference (CTI), such as interference from Wi-Fi, coexisting in the same spectrum. To alleviate this issue, this paper introduces the Partial Symbol Recovery (PSR) scheme for improving the CTI resilience of LPWAN. We verify our idea on LoRa, a widely adopted LPWAN technique, as a proof of concept. At the PHY layer, although CTI has much higher power, its duration is relatively short compared with LoRa symbols, leaving part of a LoRa symbol uncorrupted. Moreover, due to its high redundancy, LoRa chips within a symbol are highly correlated. This opens the possibility of detecting a LoRa symbol with only part of the chips. By examining the unique frequency patterns in LoRa symbols with time-frequency analysis, our design effectively detects the clean LoRa chips that are free of CTI. This enables PSR to rely only on clean LoRa chips for successfully recovering from communication failures. We evaluate our PSR design with real-world testbeds, including SX1280 LoRa chips and USRP B210, under Wi-Fi interference in various scenarios. Extensive experiments demonstrate that our design offers reliable packet recovery performance, successfully boosting the LoRa packet reception ratio from 45.2% to 82.2%, a performance gain of 1.8 times.
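The core observation above is that a short, high-power CTI burst corrupts only some chips of a symbol, and those chips stand out in power. A minimal sketch of a chip-classification step under that assumption (a simple power-threshold rule with made-up numbers; the paper's actual design uses time-frequency analysis of the chirp patterns):

```python
def detect_clean_chips(chip_powers, noise_floor=1.0, margin_db=6.0):
    """Flag chips whose received power stays within `margin_db` of the
    expected signal level as clean; much stronger chips are assumed to be
    hit by high-power CTI such as Wi-Fi. (Simplified threshold rule, not
    the paper's time-frequency detector.)"""
    limit = noise_floor * 10 ** (margin_db / 10)
    return [p <= limit for p in chip_powers]

# Hypothetical per-chip powers: chips 3-5 hit by a short Wi-Fi burst
powers = [1.1, 0.9, 1.3, 25.0, 30.0, 22.0, 1.0, 1.2]
clean = detect_clean_chips(powers)
print(clean)  # chips 3-5 flagged as corrupted, the rest as clean
```

Symbol detection would then correlate only the clean chips against the candidate symbols, exploiting the intra-symbol redundancy the abstract mentions.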
Large communication networks, e.g., the Internet of Things (IoT), are known to be vulnerable to co-channel interference. One possibility to address this issue is the use of orthogonal multiple access (OMA) techniques. However, due to a potentially very long duty cycle, OMA is not well suited for such networks. Instead, random medium access (RMA) appears more promising. An RMA scheme is based on the transmission of short data packets with random scheduling, which is typically unknown to the receiver. The received signal, which consists of the overlapping packets, can be used for energy harvesting and powering of a relay device. Such an energy harvesting relay may utilize the energy for further information processing and uplink transmission. In this paper, we address the design of a simultaneous information and power transfer scheme based on randomly scheduled packet transmissions and reliable symbol detection. We formulate a prediction problem with the goal of maximizing the harvested power in an RMA scenario. To solve this problem, we propose a new prediction method, which shows a significant performance improvement compared to the straightforward baseline scheme. Furthermore, we investigate the complexity of the proposed method and its vulnerability to imperfect channel state information.
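To make the prediction problem concrete: the relay observes the aggregate power of randomly overlapping packets and must predict its short-term evolution to schedule harvesting. A straightforward baseline of the kind the paper compares against could be one-step-ahead exponential smoothing (an illustrative stand-in; the proposed predictor itself is more elaborate, and all values below are hypothetical):

```python
def predict_power(history, alpha=0.3):
    """One-step-ahead exponential-smoothing prediction of received power.
    alpha weights recent observations; a simple baseline predictor, not
    the paper's proposed method."""
    est = history[0]
    for p in history[1:]:
        est = alpha * p + (1 - alpha) * est
    return est

# Received power rises as overlapping packets arrive (hypothetical values, mW)
history = [0.2, 0.2, 0.8, 1.1, 1.0]
print(round(predict_power(history), 4))
```

A more capable predictor would exploit the known packet-length and arrival statistics of the RMA scheme rather than smoothing alone.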
Recent advances in Low-Power Wide-Area Networks have mitigated interference by using cloud assistance. Those methods transmit the RSSI samples and corrupted packets to the cloud to restore the correct message. However, the effectiveness of those methods is challenged by the high amount of transmitted data. This paper presents a novel method for interference mitigation in an Edge-Cloud collaborative manner, namely ECCR. It no longer requires transmitting RSSI samples, whose length is eight times that of the packets. We demonstrate the disjointness of the bit errors of packets at the base stations via real-world experiments. ECCR leverages this to collaborate with multiple base stations for error recovery. Each base station detects and reports bit error locations to the cloud; then both error checking codes and interfered packets from other receivers are utilized to restore correct packets. ECCR takes advantage of both the global management ability of the cloud and the signal-perception capability of each base station, and it is applicable to deployed LP-WAN systems (e.g. sx1280) without any extra hardware requirement. Experimental results show that ECCR is able to accurately decode packets even when nearly 51.76% of a packet is corrupted.
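The recovery step described above hinges on the bit errors being disjoint across base stations: if at least one station received each bit position cleanly, the cloud can stitch a correct packet together from the per-station error-location reports. A minimal sketch of that combining logic (hypothetical function and data, not the paper's implementation):

```python
def combine_packets(copies, error_locs):
    """Restore a packet from several base stations' corrupted copies.

    copies: one bit list per base station.
    error_locs: per-station sets of bit positions flagged as erroneous.
    For every position, take the bit from any station that did not flag
    it. Returns None if some position is flagged at every station
    (unrecoverable without additional error checking codes)."""
    restored = []
    for i in range(len(copies[0])):
        for bits, errs in zip(copies, error_locs):
            if i not in errs:
                restored.append(bits[i])
                break
        else:
            return None
    return restored

# Two stations with disjoint error positions (hypothetical example)
original = [1, 0, 1, 1, 0, 0, 1, 0]
rx_a = [1, 0, 0, 1, 0, 0, 1, 0]   # bit 2 flipped at station A
rx_b = [1, 0, 1, 1, 0, 1, 1, 0]   # bit 5 flipped at station B
print(combine_packets([rx_a, rx_b], [{2}, {5}]) == original)  # True
```

In ECCR the error checking code then serves as a safety net for the positions where the disjointness assumption fails.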
In multicell massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) networks, base stations (BSs) with multiple antennas deliver their radio frequency energy in the downlink, and Internet-of-Things (IoT) devices use their harvested energy to support uplink data transmission. This paper investigates the energy efficiency (EE) problem for multicell massive MIMO NOMA networks with wireless power transfer (WPT). To maximize the EE of the network, we propose a novel joint power, time, antenna selection, and subcarrier resource allocation scheme, which can properly allocate the time for energy harvesting and data transmission. Both perfect and imperfect channel state information (CSI) are considered, and their corresponding EE performance is analyzed. Under quality-of-service (QoS) requirements, an EE maximization problem is formulated, which is non-trivial due to non-convexity. We first adopt nonlinear fractional programming methods to convert the problem into a convex one, and then develop a distributed alternating direction method of multipliers (ADMM)-based approach to solve it. Simulation results demonstrate that, compared to alternative methods, the proposed algorithm converges quickly within fewer iterations and achieves better EE performance.
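The nonlinear fractional programming step mentioned above is typically handled with Dinkelbach's method, which turns a ratio objective such as EE = rate/power into a sequence of parametric subproblems. A toy single-variable sketch under illustrative assumptions (grid search stands in for the paper's ADMM-based inner solver; the circuit-power value is made up):

```python
import math

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Dinkelbach's method for maximizing f(x)/g(x) with g(x) > 0.
    Each iteration solves the parametric problem max_x f(x) - lam*g(x),
    here by brute force over a candidate grid, then updates lam to the
    achieved ratio; lam converges to the optimal ratio."""
    lam = 0.0
    x = candidates[0]
    for _ in range(max_iter):
        x = max(candidates, key=lambda c: f(c) - lam * g(c))
        if abs(f(x) - lam * g(x)) < tol:
            break
        lam = f(x) / g(x)
    return x, lam

# Toy EE problem: rate log2(1 + p) over total power p + 0.5 (circuit power)
rate = lambda p: math.log2(1.0 + p)
power = lambda p: p + 0.5
grid = [0.01 * k for k in range(1, 1001)]   # transmit powers 0.01 .. 10.0
p_opt, ee_opt = dinkelbach(rate, power, grid)
print(round(p_opt, 2), round(ee_opt, 3))
```

In the paper's setting the inner parametric problem is the full (convexified) joint resource allocation, solved distributedly via ADMM instead of this brute-force search.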
