We propose a decision-triggered data transmission and collection (DTDTC) protocol for condition monitoring and anomaly detection in the Industrial Internet of Things (IIoT). In the IIoT, the collection, processing, encoding, and transmission of sensor readings are usually not aimed at reconstructing the original data but at decision making at the fusion center. By moving the decision-making process to the local end devices, the amount of data transmission can be significantly reduced, especially when normal signals with positive decisions dominate over the whole life cycle and the fusion center is only interested in collecting the abnormal data. The proposed concept combines compressive sensing, machine learning, data transmission, and joint decision making. The sensor readings are encoded and transmitted to the fusion center only when abnormal signals with negative decisions are detected. All the abnormal signals from the end devices are gathered at the fusion center for a joint decision, with feedback messages forwarded to the local actuators. The advantage of such an approach is that it significantly reduces the volume of data to be transmitted over wireless links. Moreover, the introduction of compressive sensing further reduces the dimensionality of the transmitted data. An exemplary case, diesel engine condition monitoring, is provided to validate the effectiveness and efficiency of the proposed scheme compared to conventional ones.
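As an illustration of the transmit-only-on-anomaly idea combined with compressive sensing, the following minimal Python sketch uses an energy-threshold detector as a stand-in for the learned local classifier and a random Gaussian measurement matrix for the compressive encoding; the dimensions, threshold, and detector are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 256, 64                                    # original and compressed dimensions (M << N)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random Gaussian measurement matrix

def local_decision(x, threshold=1.5):
    """Stand-in anomaly detector: flag the frame if its RMS energy is high."""
    return np.sqrt(np.mean(x ** 2)) > threshold

def dtdtc_step(x):
    """Transmit only when the local decision is 'abnormal'; otherwise stay silent."""
    if local_decision(x):
        y = Phi @ x                  # compressive measurement: M values instead of N
        return y                     # forwarded to the fusion center
    return None                      # normal frame: nothing is transmitted

# Example: one normal frame and one abnormal (high-energy) frame
normal = rng.standard_normal(N) * 0.5
abnormal = rng.standard_normal(N) * 3.0
print(dtdtc_step(normal))            # None -> no transmission
print(dtdtc_step(abnormal).shape)    # (64,) -> compressed payload only
```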
Big data production in the Industrial Internet of Things (IIoT) is evident due to the massive deployment of sensors and Internet of Things (IoT) devices. However, big data processing is challenging due to the limited computational, networking, and storage resources at the IoT device end. Big data analytics (BDA) is expected to provide operational- and customer-level intelligence in IIoT systems. Although numerous studies on IIoT and BDA exist, only a few have explored the convergence of the two paradigms. In this study, we investigate the recent BDA technologies, algorithms, and techniques that can lead to the development of intelligent IIoT systems. We devise a taxonomy by classifying and categorising the literature on the basis of important parameters (e.g. data sources, analytics tools, analytics techniques, requirements, industrial analytics applications, and analytics types). We present the frameworks and case studies of various enterprises that have benefited from BDA. We also enumerate the considerable opportunities introduced by BDA in IIoT. We identify and discuss the indispensable challenges that remain to be addressed as future research directions as well.
The Industrial Internet of Things (IIoT) is revolutionizing future manufacturing facilities by integrating Internet of Things technologies into industrial settings. With the deployment of massive numbers of IIoT devices, it is difficult for the wireless network to support ubiquitous connections with diverse quality-of-service (QoS) requirements. Although machine learning is regarded as a powerful data-driven tool for optimizing wireless networks, how to apply machine learning to the massive IIoT problems with their unique characteristics remains unsolved. In this paper, we first summarize the QoS requirements of typical massive non-critical and critical IIoT use cases. We then identify the unique characteristics of the massive IIoT scenario and the corresponding machine learning solutions, together with their limitations and potential research directions. We further present the existing machine learning solutions for individual-layer and cross-layer problems in massive IIoT. Last but not least, we present a case study of the massive access problem based on deep neural network and deep reinforcement learning techniques, respectively, to validate the effectiveness of machine learning in the massive IIoT scenario.
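As a toy stand-in for the massive access case study (the paper itself uses deep neural network and deep reinforcement learning techniques), the sketch below shows the simpler bandit flavor of learning-based access: a single device estimates per-channel success rates with an epsilon-greedy rule. The channel count and collision probabilities are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# One device learns which of C random-access channels/preambles collides least.
C = 4
collision_prob = np.array([0.9, 0.6, 0.3, 0.8])   # unknown to the device
Q = np.zeros(C)                                   # estimated success rate per channel
counts = np.zeros(C)
eps = 0.1

for t in range(5000):
    if rng.random() < eps:
        a = rng.integers(C)                       # explore a random channel
    else:
        a = int(np.argmax(Q))                     # exploit the best estimate so far
    success = rng.random() > collision_prob[a]    # 1 if the access attempt went through
    counts[a] += 1
    Q[a] += (success - Q[a]) / counts[a]          # incremental mean update

print(int(np.argmax(Q)))                          # should identify channel 2, the least congested
```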
This paper investigates reinforcement learning for relay selection in delay-constrained buffer-aided networks. Buffer-aided relay selection significantly improves the outage performance, but often at the price of higher latency. On the other hand, modern communication systems such as the Internet of Things often have strict requirements on latency. It is thus necessary to find relay selection policies that achieve good throughput performance in the buffer-aided relay network while satisfying the delay constraint. With buffers employed at the relays and delay constraints imposed on the data transmission, obtaining the best relay selection becomes a complicated high-dimensional problem, making it hard for reinforcement learning to converge. In this paper, we propose a novel decision-assisted deep reinforcement learning approach to improve convergence. This is achieved by exploiting the a priori information available in the buffer-aided relay system. The proposed approaches can achieve high throughput subject to delay constraints. Extensive simulation results are provided to verify the proposed algorithms.
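A minimal sketch of how a priori buffer information can assist the learning is given below: a tabular Q-learning agent (standing in for the deep reinforcement learning of the paper) selects relay links while infeasible actions are pruned using buffer occupancy. The two-relay topology, link success probability, and reward model are illustrative assumptions, and the delay constraint is not modeled here.

```python
import itertools
import random

import numpy as np

rng = random.Random(0)

K = 2                 # number of relays (kept tiny so tabular Q-learning is feasible)
B = 3                 # buffer size at each relay
GOOD_LINK_P = 0.7     # assumed probability that the selected link is not in outage

# Actions: (relay k, 'rx') = source -> relay k,  (relay k, 'tx') = relay k -> destination
ACTIONS = [(k, d) for k in range(K) for d in ('rx', 'tx')]

def feasible(state, action):
    """Decision assistance: use a-priori buffer knowledge to prune infeasible actions."""
    k, d = action
    return state[k] < B if d == 'rx' else state[k] > 0

def step(state, action):
    """Toy link model: the packet moves with probability GOOD_LINK_P; reward 1 on delivery."""
    k, d = action
    s, reward = list(state), 0.0
    if rng.random() < GOOD_LINK_P:
        if d == 'rx':
            s[k] += 1
        else:
            s[k] -= 1
            reward = 1.0
    return tuple(s), reward

Q = {s: np.zeros(len(ACTIONS)) for s in itertools.product(range(B + 1), repeat=K)}
alpha, gamma, eps = 0.1, 0.9, 0.1
state = (0,) * K

for _ in range(20000):
    allowed = [i for i, a in enumerate(ACTIONS) if feasible(state, a)]
    if rng.random() < eps:
        a_idx = rng.choice(allowed)                        # explore among feasible actions
    else:
        a_idx = max(allowed, key=lambda i: Q[state][i])    # greedy over feasible actions only
    next_state, r = step(state, ACTIONS[a_idx])
    next_allowed = [i for i, a in enumerate(ACTIONS) if feasible(next_state, a)]
    Q[state][a_idx] += alpha * (r + gamma * max(Q[next_state][i] for i in next_allowed)
                                - Q[state][a_idx])
    state = next_state
```

Restricting exploration and the greedy choice to feasible actions is the "decision-assisted" ingredient: the agent never wastes samples on selections the buffers cannot support, which is what helps convergence in larger state spaces.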
The Industrial Internet of Things (IIoT) offers promising opportunities to transform the operation of industrial systems and has become a key enabler for future industries. Recently, artificial intelligence (AI) has been widely utilized for realizing intelligent IIoT applications, where AI techniques typically require centralized data collection and processing. However, this is not always feasible in realistic scenarios due to the large scale of modern IIoT networks and growing concerns over industrial data confidentiality. Federated Learning (FL), as an emerging collaborative AI approach, is particularly attractive for intelligent IIoT networks: it coordinates multiple IIoT devices and machines to perform AI training at the network edge while helping protect user privacy. In this article, we provide a detailed overview and discussion of the emerging applications of FL in key IIoT services and applications. A case study is also provided to demonstrate the feasibility of FL in IIoT. Finally, we highlight a range of interesting open research topics that need to be addressed for the full realization of FL-IIoT in industries.
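To make the FL workflow concrete, the following minimal FedAvg-style sketch has a few simulated IIoT clients train a linear model locally while a server averages the resulting weights; the model, data, and hyperparameters are illustrative assumptions rather than the article's case study.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)    # least-squares gradient
        w = w - lr * grad
    return w

# Three IIoT devices, each holding private data drawn from the same linear model
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.standard_normal((50, 2))
    y = X @ w_true + 0.1 * rng.standard_normal(50)
    clients.append((X, y))

w_global = np.zeros(2)
for rnd in range(20):                            # federated rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)    # FedAvg: the server averages the models

print(w_global)   # approaches w_true, although no client ever shared its raw data
```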
Industrial Fog computing deploys various industrial services, such as automatic monitoring/control and imminent failure detection, at the Fog Nodes (FNs) to improve the performance of industrial systems. Much effort has been made in the literature on the design of Fog network architecture and computation offloading. This paper studies the equally important but much less investigated problem of service hosting, where FNs are adaptively configured to host services for Sensor Nodes (SNs), thereby enabling the corresponding tasks to be executed by the FNs. The problem of service hosting emerges because of the limited computational and storage resources at FNs, which limit the number of different types of services that can be hosted by an FN at the same time. Considering the variability of service demand in both temporal and spatial dimensions, when, where, and which services to host have to be judiciously decided to maximize the utility of the Fog computing network. Our proposed Fog configuration strategies are tailored to battery-powered FNs. The limited battery capacity of FNs creates a long-term energy budget constraint that significantly complicates the Fog configuration problem, as it introduces temporal coupling of decision making across the timeline. To address all these challenges, we propose an online distributed algorithm, called Adaptive Fog Configuration (AFC), based on Lyapunov optimization and parallel Gibbs sampling. AFC jointly optimizes service hosting and task admission decisions, requiring only currently available system information while guaranteeing close-to-optimal performance compared to an oracle algorithm with full future information.
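The Lyapunov-optimization ingredient of AFC can be illustrated with a minimal drift-plus-penalty sketch for a single battery-powered FN choosing which service to host under a long-term energy budget; the configurations, utilities, energy costs, and budget below are illustrative assumptions, and the parallel Gibbs sampling part of AFC is not reproduced here.

```python
import numpy as np

# Candidate configurations for one FN: (label, expected utility, energy cost per slot)
CONFIGS = [("idle", 0.0, 0.1), ("host_service_A", 1.0, 1.0), ("host_service_B", 1.5, 2.0)]
ENERGY_BUDGET = 0.8    # long-term average energy per slot the battery can sustain
V = 10.0               # Lyapunov trade-off parameter: larger V favors utility over the budget
T = 10000              # number of time slots

Z = 0.0                # virtual queue tracking the accumulated energy deficit
total_utility, total_energy = 0.0, 0.0

for t in range(T):
    # Drift-plus-penalty rule: per slot, pick the configuration minimizing Z*energy - V*utility
    scores = [Z * e - V * u for (_, u, e) in CONFIGS]
    _, u, e = CONFIGS[int(np.argmin(scores))]
    total_utility += u
    total_energy += e
    Z = max(Z + e - ENERGY_BUDGET, 0.0)   # virtual queue update enforcing the budget

# Average energy consumption converges toward the budget while utility stays high
print(total_utility / T, total_energy / T)
```

The virtual queue Z turns the long-term energy constraint into a per-slot penalty, which is how the temporal coupling of decisions is handled without knowing the future.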