To meet the requirements of high energy efficiency (EE) and large system capacity for the fifth-generation (5G) Internet of Things (IoT), massive multiple-input multiple-output (MIMO) technology has been introduced in the massive IoT (mIoT) network, where a large number of devices are connected and scheduled simultaneously. This paper considers the energy-efficient design of a multi-pair decode-and-forward relay-based IoT network, in which multiple sources simultaneously transmit their information to the corresponding destinations via a relay equipped with a large antenna array. To obtain an accurate yet tractable expression of the EE, a closed-form expression of the EE is first derived under an idealized simplifying assumption in which the location of each device is known to the network. Then, an exact integral-based expression of the EE is derived under the assumption that the devices are randomly scattered following a uniform distribution and the transmit power of the relay is equally shared among the destination devices. Furthermore, a simple yet efficient lower bound of the EE is obtained. Based on this, a low-complexity energy-efficient resource allocation strategy for the mIoT network is proposed under a specific quality-of-service (QoS) constraint. The proposed strategy determines the near-optimal number of relay antennas, the near-optimal transmit power at the relay, and the near-optimal density of active mIoT device pairs in a given coverage area. Numerical results demonstrate the accuracy of the performance analysis and the efficiency of the proposed algorithms.
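The trade-off described in this abstract (rate gains from more relay antennas and power versus the circuit power they consume) can be illustrated with a minimal numerical sketch. The two-hop decode-and-forward rate model below, with an (M - K) array-gain scaling and equal power sharing among the K destinations, and every constant (bandwidth, fading coefficient, circuit powers) are assumptions for illustration, not the paper's derived closed forms; the coarse grid search merely stands in for the proposed low-complexity allocation strategy.

```python
import numpy as np

# Illustrative EE sketch; the rate model and all constants are assumptions,
# not the closed-form expressions derived in the paper.
K = 20          # number of active mIoT device pairs
B = 1e6         # system bandwidth (Hz)
p_s = 0.1       # per-source transmit power (W)
beta = 1e-8     # average large-scale fading coefficient (assumed)
N0 = 1e-13      # noise power (W)
P_fix = 1.0     # fixed circuit power at the relay (W)
P_ant = 0.2     # circuit power per relay antenna (W)

def energy_efficiency(M, P_r):
    """EE (bit/Joule) for M relay antennas and relay transmit power P_r (W)."""
    sinr_up = (M - K) * p_s * beta / N0              # sources -> relay hop
    sinr_dn = (M - K) * (P_r / K) * beta / N0        # relay -> destinations hop
    rate = 0.5 * K * B * np.log2(1.0 + min(sinr_up, sinr_dn))   # DF: weaker hop limits
    power = P_r + K * p_s + M * P_ant + P_fix
    return rate / power

# Coarse grid search standing in for the proposed low-complexity strategy.
best = max((energy_efficiency(M, P_r), M, P_r)
           for M in range(K + 10, 400, 10)
           for P_r in np.linspace(0.5, 20.0, 40))
print("near-optimal EE %.3e bit/J at M = %d antennas, P_r = %.2f W" % best)
```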
A major challenge currently faced in the design of applications for the Internet of Things (IoT) concerns the optimal use of available energy resources given the battery lifetime of the IoT devices. The challenge stems from the heterogeneity of the devices, in terms of their hardware and the functionalities they provide (e.g., data processing/communication). In this paper, we propose a novel method for (i) characterizing the parameters that influence energy consumption and (ii) validating the energy consumption of IoT devices against the system's energy-efficiency requirements (e.g., lifetime). Our approach is based on energy-aware models of the IoT application's design in the BIP (Behavior, Interaction, Priority) component framework. This allows for a detailed formal representation of the system's behavior and its subsequent validation, thus providing feedback for enhancements in the pre-deployment or pre-production stages. We illustrate our approach through a Building Management System, using well-known IoT devices running the Contiki OS that communicate via diverse IoT protocols (e.g., CoAP, MQTT). The results allow us to derive tight bounds on the energy consumption of various device functionalities, as well as to validate lifetime requirements through Statistical Model Checking.
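To make the modeling idea concrete, the sketch below is a plain-Python analogue of such an energy-aware model rather than a BIP component model: device behavior is reduced to sleep/CPU/receive/transmit states with assumed power draws, and a Monte-Carlo run, in the spirit of statistical model checking, estimates how often an assumed one-year lifetime requirement would hold. All power figures, timings, duty cycles and the battery capacity are illustrative placeholders, not measurements from the paper.

```python
import random

# Plain-Python sketch of a per-state energy model for a Contiki-class device
# plus a Monte-Carlo lifetime check; all figures are assumed placeholders.
BATTERY_J = 2.6 * 3.0 * 3600          # ~2600 mAh at 3 V, in Joules (assumed)
LIFETIME_REQ_DAYS = 365               # assumed lifetime requirement

POWER_W = {"sleep": 60e-6, "cpu": 6e-3, "rx": 60e-3, "tx": 66e-3}

def daily_energy(msgs_per_hour, radio_duty_cycle):
    """Energy (J) spent per day for a given CoAP/MQTT message rate."""
    msgs = 24 * msgs_per_hour
    t_tx = msgs * 0.008                       # s/day transmitting (assumed)
    t_cpu = msgs * 0.005                      # s/day processing payloads (assumed)
    t_rx = radio_duty_cycle * 86400           # s/day with the radio listening
    t_sleep = 86400 - (t_tx + t_cpu + t_rx)
    return (POWER_W["tx"] * t_tx + POWER_W["cpu"] * t_cpu +
            POWER_W["rx"] * t_rx + POWER_W["sleep"] * t_sleep)

# Statistical check: probability that the lifetime requirement holds when the
# message rate and the radio duty cycle fluctuate around nominal values.
random.seed(0)
trials, ok = 10_000, 0
for _ in range(trials):
    rate = max(random.gauss(30, 10), 0.0)                   # messages per hour
    rdc = min(max(random.gauss(0.015, 0.005), 0.001), 1.0)  # radio duty cycle
    ok += BATTERY_J / daily_energy(rate, rdc) >= LIFETIME_REQ_DAYS
print(f"P(lifetime >= {LIFETIME_REQ_DAYS} days) ~ {ok / trials:.3f}")
```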
In this paper, we consider a light fidelity (LiFi)-enabled bidirectional Internet of Things (IoT) communication system, where visible light and infrared light are used in the downlink and uplink, respectively. To improve the energy efficiency (EE) of the bidirectional LiFi-IoT system, non-orthogonal multiple access (NOMA) with a quality-of-service (QoS)-guaranteed optimal power allocation (OPA) strategy is applied to maximize the EE of the system. We derive a closed-form OPA set based on the identification of the optimal decoding orders in both downlink and uplink channels, which enables low-complexity power allocation. Moreover, we propose an adaptive channel- and QoS-based user pairing approach that jointly considers users' channel gains and QoS requirements. We further analyze the EE of the bidirectional LiFi-IoT system and the user outage probabilities (UOPs) of both its downlink and uplink channels. Extensive analytical and simulation results demonstrate the superiority of NOMA with OPA over orthogonal multiple access (OMA) and NOMA with typical channel-based power allocation strategies. It is also shown that the proposed adaptive channel- and QoS-based user pairing approach greatly outperforms individual channel/QoS-based approaches, especially when users have diverse QoS requirements.
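As a concrete illustration of QoS-guaranteed power allocation under NOMA with successive interference cancellation (SIC), the sketch below uses the standard two-user downlink NOMA rate expressions to compute the minimum powers that meet given rate targets and the resulting EE. The channel gains, noise level and rate targets are assumed values, and the paper's closed-form OPA set and LiFi channel model may differ in detail.

```python
import numpy as np

# Two-user downlink NOMA sketch with a QoS-guaranteed power split. The rate
# expressions are the standard SIC-based NOMA forms; all numbers are assumed.
P_total = 1.0                         # total downlink power budget (normalized)
N0 = 1e-2                             # noise power (assumed)
g_weak, g_strong = 0.3, 2.0           # channel gains, g_weak < g_strong
r_weak, r_strong = 1.0, 2.0           # QoS rate targets (bit/s/Hz)

# Minimum power for the strong user (decoded after SIC, sees only noise):
p_strong_min = (2 ** r_strong - 1) * N0 / g_strong
# Minimum power for the weak user (decoded first, with the strong user's
# signal treated as interference):
p_weak_min = (2 ** r_weak - 1) * (p_strong_min * g_weak + N0) / g_weak

if p_strong_min + p_weak_min > P_total:
    print("QoS targets infeasible under this power budget")
else:
    # Spend only the QoS-minimal powers and report the resulting EE;
    # the paper's OPA searches the full feasible set for the EE optimum.
    p_s, p_w = p_strong_min, p_weak_min
    R_w = np.log2(1 + p_w * g_weak / (p_s * g_weak + N0))
    R_s = np.log2(1 + p_s * g_strong / N0)
    ee = (R_w + R_s) / (p_w + p_s)
    print(f"p_weak={p_w:.3f}, p_strong={p_s:.3f}, sum rate={R_w + R_s:.2f}, EE={ee:.2f}")
```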
This paper considers the problem of time-difference-of-arrival (TDOA) source localization using possibly unreliable data collected by Internet of Things (IoT) sensors in error-prone environments. The Welsch loss function is integrated into a hardware-realizable projection-type neural network (PNN) model in order to enhance the robustness of the location estimator to erroneous measurements. For statistical efficiency, the formulation is derived from the underlying time-of-arrival composition via joint estimation of the source position and onset time, instead of the TDOA counterpart generated by post-processing the sensor-collected timestamps. The local stability conditions and implementation complexity of the proposed PNN model are also analyzed in detail. Simulation results demonstrate that our neurodynamic TDOA localization solution outperforms several existing schemes in terms of localization accuracy and computational efficiency.
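The robust formulation can be mimicked in software as follows: an iteratively reweighted Gauss-Newton sketch that jointly estimates the source position and onset time from TOA measurements, with Welsch weights down-weighting a grossly erroneous sensor. This is only a numerical analogue for illustration, with an assumed kernel width and a gradually shrinking scale, not the hardware-realizable PNN model analyzed in the paper.

```python
import numpy as np

# Iteratively reweighted Gauss-Newton over (source position, onset time) with
# Welsch weights; scenario and kernel schedule are illustrative assumptions.
rng = np.random.default_rng(1)
c = 3e8                                          # propagation speed (m/s)
sensors = rng.uniform(-50, 50, size=(8, 2))      # IoT sensor positions (m)
src_true, t0_true = np.array([12.0, -7.0]), 1e-6 # unknown source and onset time

toa = t0_true + np.linalg.norm(sensors - src_true, axis=1) / c
toa += rng.normal(0.0, 2e-9, size=toa.shape)     # nominal timing noise
toa[0] += 5e-7                                   # one grossly unreliable sensor

x, t0 = sensors.mean(axis=0), 0.0                # initial guess
sigma = 1e-6                                     # Welsch scale, gradually shrunk
for _ in range(60):
    d = np.linalg.norm(sensors - x, axis=1)
    r = toa - t0 - d / c                         # TOA residuals
    w = np.exp(-(r / sigma) ** 2)                # Welsch weights suppress outliers
    J = np.hstack([-(x - sensors) / (c * d[:, None]),   # d r / d position
                   -np.ones((len(toa), 1))])            # d r / d onset time
    sw = np.sqrt(w)
    step, *_ = np.linalg.lstsq(sw[:, None] * J, -sw * r, rcond=None)
    x, t0 = x + step[:2], t0 + step[2]
    sigma = max(0.7 * sigma, 1e-8)               # graduated robustification

print("estimated position:", x, "  true position:", src_true)
```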
Current radio frequency (RF) sensors at the edge lack the computational resources to support practical, in-situ training for intelligent spectrum monitoring, and for sensor data classification in general. We propose a solution via Deep Delay Loop Reservoir Computing (DLR), a processing architecture that supports general machine learning algorithms on compact mobile devices by leveraging delay-loop reservoir computing in combination with innovative electro-optical hardware. With both digital and photonic realizations of our loop design, DLR delivers reductions in form factor, hardware complexity, and latency compared to the state of the art (SoA). The main role of the reservoir is to project the input data into a higher-dimensional space of reservoir state vectors in order to linearly separate the input classes. Once the classes are well separated, traditionally complex, power-hungry classification models are no longer needed for the learning process. Yet even with simple classifiers based on Ridge regression (RR), the complexity grows at least quadratically with the input size. Hence, the hardware reduction required for training on compact devices is at odds with the large dimension of the state vectors. DLR employs an RR-based classifier to exceed the SoA accuracy, while further reducing power consumption by leveraging an architecture of parallel (split) loops. We present DLR architectures composed of multiple smaller loops whose state vectors are linearly combined to create a lower-dimensional input to Ridge regression. We demonstrate the advantages of DLR for two distinct applications: RF Specific Emitter Identification (SEI) for IoT authentication, and wireless protocol recognition for IoT situational awareness.
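A toy software sketch of the split-loop idea follows: several small delay-loop reservoirs, each driven through its own random input mask, produce state vectors that are linearly combined (here simply summed) into a lower-dimensional feature fed to a Ridge regression classifier. The loop sizes, masks, nonlinearity and synthetic two-class sequences are purely illustrative assumptions standing in for the paper's photonic/digital loops and RF datasets.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

# Toy split-loop sketch: small delay-loop reservoirs + Ridge classifier.
rng = np.random.default_rng(0)

def delay_loop_states(u, mask, eta=0.5, gamma=0.8):
    """One small delay loop: virtual nodes on a ring driven by the masked input."""
    x = np.zeros(mask.size)
    for sample in u:
        x = np.tanh(eta * mask * sample + gamma * np.roll(x, 1))
    return x

def split_loop_features(u, masks):
    """Run the split loops and linearly combine (sum) their state vectors."""
    return np.sum([delay_loop_states(u, m) for m in masks], axis=0)

n_loops, n_nodes, seq_len = 4, 32, 100
masks = rng.uniform(-1.0, 1.0, size=(n_loops, n_nodes))

# Synthetic stand-in for RF emitter features: two classes of noisy tones.
X, y = [], []
for label in (0, 1):
    for _ in range(200):
        u = np.sin(np.linspace(0.0, (8 + 4 * label) * np.pi, seq_len))
        u += 0.3 * rng.standard_normal(seq_len)
        X.append(split_loop_features(u, masks))
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RidgeClassifier(alpha=1.0).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```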
In graph signal processing, data samples are associated with vertices of a graph, while edge weights represent similarities between those samples. We propose a convex optimization problem to learn sparse, well-connected graphs from data. We prove that each edge weight in our solution is upper bounded by the inverse of the distance between the data features of the corresponding nodes. We also show that the effective resistance distance between nodes is upper bounded by the distance between nodal data features. Thus, our proposed method learns a sparse, well-connected graph that encodes geometric properties of the data. We also propose a coordinate minimization algorithm that, at each iteration, updates an edge weight using exact minimization. The algorithm has a simple, low-complexity implementation based on closed-form expressions.
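To illustrate what such a coordinate minimization can look like, the sketch below learns a graph by exactly minimizing, one edge weight at a time, an assumed smoothness-plus-log-degree objective; this surrogate is only in the spirit of the paper's convex formulation, whose exact objective and closed-form update may differ. For this surrogate, every exact per-edge update satisfies w_ij <= alpha / z_ij, which mirrors the inverse-distance upper bound stated in the abstract.

```python
import numpy as np
from scipy.optimize import brentq

# Coordinate-minimization sketch for learning a sparse, well-connected graph.
# Assumed objective (a standard smoothness + log-degree surrogate):
#   min_{w >= 0}  2*sum_{i<j} w_ij*z_ij - alpha*sum_i log(d_i) + beta*sum_{i<j} w_ij**2,
# with d_i the node degrees and z_ij squared feature distances.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))                               # 20 nodes, 5 features
Z = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)      # pairwise squared distances
alpha, beta = 1.0, 0.5
n = X.shape[0]
W = np.full((n, n), 0.1)
np.fill_diagonal(W, 0.0)

for _ in range(30):                           # sweeps of exact per-edge minimization
    for i in range(n):
        for j in range(i + 1, n):
            a_i = W[i].sum() - W[i, j]        # degree of i excluding edge (i, j)
            a_j = W[j].sum() - W[i, j]
            # derivative of the convex scalar subproblem in w = W[i, j]
            g = lambda w: 2 * Z[i, j] + 2 * beta * w - alpha * (1 / (a_i + w) + 1 / (a_j + w))
            if min(a_i, a_j) > 1e-12 and g(0.0) >= 0.0:
                w_star = 0.0                  # minimizer sits on the boundary (sparse edge)
            else:
                lo = 0.0 if min(a_i, a_j) > 1e-12 else 1e-15
                hi = 1.0
                while g(hi) < 0.0:            # grow the bracket until the slope turns positive
                    hi *= 2.0
                w_star = brentq(g, lo, hi)    # exact minimization of the scalar subproblem
            W[i, j] = W[j, i] = w_star

# Each learned weight satisfies w_ij <= alpha / z_ij for this surrogate.
off_diag = ~np.eye(n, dtype=bool)
print("max w_ij * z_ij =", (W * Z)[off_diag].max(), "(<= alpha =", alpha, ")")
```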