Low-power wide-area (LPWA) networks are attracting extensive attention because of their ability to offer low-cost and massive connectivity to Internet of Things (IoT) devices distributed over wide geographical areas. This article provides a brief overview of the existing LPWA technologies and offers useful insights to aid the large-scale deployment of LPWA networks. In particular, we first review the currently competing LPWA network candidates, such as narrowband IoT (NB-IoT) and long range (LoRa), in terms of technical fundamentals and large-scale deployment potential. Then we present two implementation examples of LPWA networks. By analyzing the field-test results, we identify several challenges that prevent LPWA technologies from moving from theory to widespread practice.
Low Power Wide Area (LPWA) networks are known to be highly vulnerable to external in-band interference in the form of packet collisions, which may substantially degrade system performance. To enhance the performance in such cases, the telegram splitting (TS) method has recently been proposed. This approach exploits the typical burstiness of the interference via forward error correction (FEC) and offers a substantial performance improvement over other methods for packet transmission in LPWA networks. While it has already been demonstrated that the TS method benefits from knowledge of the current interference state at the receiver side, corresponding practical receiver algorithms of high performance are still missing. Modeling the bursty interference via Markov chains leads to the optimal detector in terms of a-posteriori symbol error probability. However, this solution requires high computational complexity, assumes a-priori knowledge of the interference characteristics, and lacks flexibility. We propose a further-developed scheme with increased flexibility and introduce an approach that reduces its complexity while maintaining close-to-optimum performance. In particular, the proposed low-complexity solution substantially outperforms existing practical methods in terms of packet error rate and is therefore highly beneficial for practical LPWA network scenarios.
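The bursty interference that the TS method exploits is commonly modeled with a two-state Markov chain (a Gilbert-Elliott model): one "clean" state and one "burst" state. A minimal sketch of such a model is below; the transition probabilities are illustrative values chosen for demonstration, not parameters from the paper.

```python
import random

def simulate_interference(n_symbols, p01=0.05, p10=0.5, seed=0):
    """Two-state Gilbert-Elliott Markov chain for bursty interference.

    State 0 = channel clean, state 1 = interference burst.
    p01/p10 are illustrative transition probabilities, not the paper's values.
    """
    rng = random.Random(seed)
    state, states = 0, []
    for _ in range(n_symbols):
        states.append(state)
        if state == 0:
            state = 1 if rng.random() < p01 else 0
        else:
            state = 0 if rng.random() < p10 else 1
    return states

states = simulate_interference(10_000)
burst_fraction = sum(states) / len(states)
# Long-run fraction of interfered symbols approaches p01 / (p01 + p10) ~= 0.09,
# and bursts have a mean length of 1 / p10 = 2 symbols.
```

The mean burst length (1/p10) is what makes interleaving the FEC-protected data across several short telegrams effective: a single burst then erases only a fraction of any one codeword.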
Due to their flexibility and low operational cost, dispatching unmanned aerial vehicles (UAVs) to collect information from distributed sensors is expected to be a promising solution in the Internet of Things (IoT), especially for time-critical applications. How to maintain information freshness is a challenging issue. In this paper, we investigate the fresh data collection problem in UAV-assisted IoT networks. Specifically, the UAV flies towards the sensors to collect status update packets within a given duration while maintaining non-negative residual energy. We formulate a Markov Decision Process (MDP) to find the optimal flight trajectory of the UAV and transmission scheduling of the sensors that minimizes the weighted sum of the age of information (AoI). A UAV-assisted data collection algorithm based on deep reinforcement learning (DRL) is further proposed to overcome the curse of dimensionality. Extensive simulation results demonstrate that the proposed DRL-based algorithm can significantly reduce the weighted sum of the AoI compared to other baseline algorithms.
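The AoI metric that the MDP minimizes evolves in a simple sawtooth pattern: it grows by one each time slot and resets once a fresh status update is delivered. A minimal sketch of this evolution, assuming unit service time (a simplification for illustration, not the paper's exact model):

```python
def aoi_trace(update_slots, horizon):
    """Age of information at the destination over `horizon` slots.

    AoI grows by 1 each slot and drops to 1 in any slot where a fresh
    status update is delivered (unit service time assumed).
    """
    ages, age = [], 0
    updates = set(update_slots)
    for t in range(horizon):
        age = 1 if t in updates else age + 1
        ages.append(age)
    return ages

trace = aoi_trace({3, 7}, 10)
# trace == [1, 2, 3, 1, 2, 3, 4, 1, 2, 3]; average AoI = 2.2
```

The scheduling problem is then to pick the update slots (and, here, the UAV trajectory that makes them feasible) so that a weighted sum of such per-sensor traces is minimized.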
Internet of Things wireless networking with long range, low power, and low throughput is emerging as a new paradigm that enables trillions of devices to be connected efficiently. In such networks of low-power, low-bandwidth devices, localization becomes more challenging. In this work we take a closer look at the underlying aspects of received signal strength indicator (RSSI)-based localization in ultra-narrowband (UNB) long-range IoT networks such as Sigfox. First, RSSI has been used for fingerprinting localization, where RSSI measurements of GPS anchor nodes serve as landmarks to classify other nodes into one of the GPS node classes. Through measurements we show that a location classification accuracy of 100% is achieved when the node classes are isolated. When classes approach each other, our measurements show that we can still achieve an accuracy of 85%. Furthermore, as the density of GPS nodes increases, we can rely on peer-to-peer triangulation and thus raise the fraction of nodes localized with an error of less than 20 m from 20% to more than 60% in our measurement scenario. In our experiment with non-optimized anchor node locations, 90% of the nodes are localized with an error of less than 50 m.
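The fingerprinting step described above amounts to a nearest-neighbor classification: each anchor's vector of RSSI values (one per receiving base station) is a landmark, and an unknown node is assigned to the anchor whose fingerprint is closest. A minimal sketch, with entirely hypothetical anchor names and RSSI values:

```python
import math

def classify(rssi_vector, fingerprints):
    """Nearest-neighbor RSSI fingerprinting.

    Assigns the node to the GPS anchor class whose stored RSSI fingerprint
    (one dBm value per base station) is closest in Euclidean distance.
    """
    best, best_d = None, math.inf
    for label, fp in fingerprints.items():
        d = math.dist(rssi_vector, fp)
        if d < best_d:
            best, best_d = label, d
    return best

# Hypothetical fingerprints: RSSI (dBm) seen by 3 base stations.
fingerprints = {
    "anchor_A": [-95.0, -110.0, -120.0],
    "anchor_B": [-120.0, -98.0, -105.0],
}
label = classify([-97.0, -112.0, -118.0], fingerprints)  # -> "anchor_A"
```

The accuracy figures in the abstract reflect how well separated these fingerprint vectors are: isolated classes give clean decision boundaries, while overlapping classes make the nearest landmark ambiguous for some nodes.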
Caching has been regarded as a promising technique to alleviate the energy consumption of sensors in Internet of Things (IoT) networks by responding to users' requests with the data packets stored in the edge caching node (ECN). For real-time applications in caching-enabled IoT networks, it is essential to develop dynamic status update strategies that strike a balance between the information freshness experienced by users and the energy consumed by the sensor, an issue which has not been well addressed. In this paper, we first characterize the evolution of information freshness, in terms of age of information (AoI), at each user. Then, we formulate a dynamic status update optimization problem to minimize the expectation of a long-term accumulative cost, which jointly considers the users' AoI and the sensor's energy consumption. To solve this problem, we cast the status updating procedure as a Markov Decision Process (MDP) and propose a model-free reinforcement learning algorithm, which addresses the challenge posed by the unknown dynamics of the formulated MDP. Finally, simulations are conducted to validate the convergence of our proposed algorithm and its effectiveness compared with the zero-wait baseline policy.
Recent years have witnessed the proliferation of Low-power Wide Area Networks (LPWANs) in the unlicensed band for various Internet-of-Things (IoT) applications. Due to their ultra-low transmission power and long transmission duration, LPWAN devices inevitably suffer from high-power Cross Technology Interference (CTI), such as interference from Wi-Fi coexisting in the same spectrum. To alleviate this issue, this paper introduces the Partial Symbol Recovery (PSR) scheme for improving the CTI resilience of LPWAN. We verify our idea on LoRa, a widely adopted LPWAN technique, as a proof of concept. At the PHY layer, although CTI has much higher power, its duration is relatively short compared with LoRa symbols, leaving part of a LoRa symbol uncorrupted. Moreover, due to their high redundancy, LoRa chips within a symbol are highly correlated. This opens the possibility of detecting a LoRa symbol with only part of its chips. By examining the unique frequency patterns in LoRa symbols with time-frequency analysis, our design effectively detects the clean LoRa chips that are free of CTI. This enables PSR to rely only on clean LoRa chips to successfully recover from communication failures. We evaluate our PSR design with real-world testbeds, including SX1280 LoRa chips and a USRP B210, under Wi-Fi interference in various scenarios. Extensive experiments demonstrate that our design offers reliable packet recovery performance, successfully boosting the LoRa packet reception ratio from 45.2% to 82.2%, a performance gain of 1.8 times.
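The redundancy that PSR exploits can be seen in how LoRa demodulation works: a symbol is a cyclically shifted chirp, and after dechirping (multiplying by the conjugate base chirp), every chip contributes energy to the same FFT bin. Masking out the CTI-corrupted chips and running the FFT over the remaining clean ones still yields the correct bin. The sketch below demonstrates this principle under assumed parameters (spreading factor, burst length, noise level); it is a simplified illustration, not the paper's receiver.

```python
import numpy as np

SF = 8                       # spreading factor (assumed; LoRa supports up to 12)
N = 2 ** SF                  # chips per symbol
k = np.arange(N)
base_chirp = np.exp(1j * np.pi * k * k / N)   # unit-amplitude up-chirp

def demod(samples, clean_mask):
    """Dechirp, keep only chips flagged clean, and locate the FFT peak."""
    dechirped = samples * np.conj(base_chirp) * clean_mask
    return int(np.argmax(np.abs(np.fft.fft(dechirped))))

# Symbol value s shifts the chirp's starting frequency: after dechirping,
# every clean chip's energy lands in FFT bin s.
s = 42
tx = np.exp(2j * np.pi * s * k / N) * base_chirp

# A strong Wi-Fi-like burst corrupts the first 40% of the symbol.
rng = np.random.default_rng(1)
burst = N * 2 // 5
rx = tx.copy()
rx[:burst] += 10 * (rng.standard_normal(burst) + 1j * rng.standard_normal(burst))

mask = np.ones(N)
mask[:burst] = 0.0           # keep only the clean 60% of chips
recovered = demod(rx, mask)  # -> 42, despite 40% of chips being corrupted
```

Because the kept chips form a pure complex exponential at integer frequency s, the FFT peak stays at bin s even with a large fraction of chips discarded; the cost of discarding chips is only a reduced peak magnitude (i.e., less noise margin), not a shifted decision.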