In an era when the Internet of Things (IoT) market segment tops the charts in various business reports, the field of medicine is expected to benefit greatly from the explosion of wearables and internet-connected sensors that surround us, acquiring and communicating unprecedented data on symptoms, medication, food intake, and daily-life activities that impact one's health and wellness. However, IoT-driven healthcare must overcome several barriers: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) communicating the continuously collected data is not only costly but also energy hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in IoT, providing the interfaces between sensors and cloud servers to facilitate connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors and offers an efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and communication of various medical data, such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based QRS detection.
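The kind of on-node analytics mentioned above (e.g., ECG-based QRS detection feeding a heart rate estimate) can be illustrated with a minimal sketch; the sampling rate, filter band, and peak thresholds below are illustrative assumptions, not the chapter's actual pipeline:

```python
# Minimal sketch of fog-node ECG analytics: R-peak detection on a short ECG
# segment, followed by a heart rate estimate. Sampling rate, filter band, and
# thresholds are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 250  # assumed sampling rate in Hz

def detect_r_peaks(ecg, fs=FS):
    """Band-pass filter the raw ECG, square it, and pick prominent peaks."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    energy = filtered ** 2
    # Peaks must be at least 0.3 s apart (200 bpm upper bound) and clearly
    # above the mean signal energy.
    peaks, _ = find_peaks(energy, distance=int(0.3 * fs),
                          height=energy.mean() + 2 * energy.std())
    return peaks

if __name__ == "__main__":
    # Synthetic test signal: 60 bpm "heartbeats" as narrow pulses plus noise.
    t = np.arange(0, 10, 1 / FS)
    ecg = np.random.normal(0, 0.05, t.size)
    ecg[::FS] += 1.0  # one pulse per second
    r_peaks = detect_r_peaks(ecg)
    heart_rate = 60 * FS / np.median(np.diff(r_peaks))
    print(f"Detected {len(r_peaks)} R-peaks, estimated HR ~ {heart_rate:.0f} bpm")
```

On a fog node, only the derived features (R-peak locations, heart rate) would typically be forwarded to the cloud, rather than the raw waveform.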
Industrial Fog computing deploys various industrial services, such as automatic monitoring/control and imminent failure detection, at the Fog Nodes (FNs) to improve the performance of industrial systems. Much effort has been made in the literature on the design of fog network architecture and computation offloading. This paper studies an equally important but much less investigated problem of service hosting where FNs are adaptively configured to host services for Sensor Nodes (SNs), thereby enabling corresponding tasks to be executed by the FNs. The problem of service hosting emerges because of the limited computational and storage resources at FNs, which limit the number of different types of services that can be hosted by an FN at the same time. Considering the variability of service demand in both temporal and spatial dimensions, when, where, and which services to host have to be judiciously decided to maximize the utility of the Fog computing network. Our proposed Fog configuration strategies are tailored to battery-powered FNs. The limited battery capacity of FNs creates a long-term energy budget constraint that significantly complicates the Fog configuration problem as it introduces temporal coupling of decision making across the timeline. To address all these challenges, we propose an online distributed algorithm, called Adaptive Fog Configuration (AFC), based on Lyapunov optimization and parallel Gibbs sampling. AFC jointly optimizes service hosting and task admission decisions, requiring only currently available system information while guaranteeing close-to-optimal performance compared to an oracle algorithm with full future information.
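A minimal sketch of the drift-plus-penalty idea underlying such Lyapunov-based configuration: a virtual queue tracks how far an FN has drifted from its long-term energy budget, and in each slot the hosted-service subset maximizing V*utility - queue*energy is chosen. The service parameters, capacity, budget, and V below are made-up illustrative values, and AFC's parallel Gibbs sampling across FNs is not reproduced here:

```python
# Toy sketch of the Lyapunov drift-plus-penalty step for one battery-powered FN.
from itertools import combinations

SERVICES = {            # service -> (utility if hosted, energy cost per slot)
    "monitoring": (4.0, 2.0),
    "failure_detect": (6.0, 3.5),
    "control": (3.0, 1.5),
}
CAPACITY = 2            # at most 2 services hosted per slot (storage limit)
ENERGY_BUDGET = 3.0     # long-term average energy allowed per slot
V = 5.0                 # utility/energy trade-off parameter

def choose_hosting(queue):
    """Pick the service subset maximizing V*utility - queue*energy."""
    best, best_score = (), 0.0
    for k in range(1, CAPACITY + 1):
        for subset in combinations(SERVICES, k):
            util = sum(SERVICES[s][0] for s in subset)
            energy = sum(SERVICES[s][1] for s in subset)
            score = V * util - queue * energy
            if score > best_score:
                best, best_score = subset, score
    return best

queue = 0.0
for slot in range(5):
    hosted = choose_hosting(queue)
    energy = sum(SERVICES[s][1] for s in hosted)
    # Virtual queue update: grows when the slot overspends the budget,
    # which discourages energy-hungry hosting in later slots.
    queue = max(queue + energy - ENERGY_BUDGET, 0.0)
    print(f"slot {slot}: host {hosted}, energy {energy:.1f}, queue {queue:.1f}")
```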
This is the first paper to address the topology structure of the job edge-fog interconnection network from the perspective of network creation games. A two-level network creation game model is given: the first level is similar to the traditional network creation game, with the objective of minimizing the total distance to the other nodes; the second level adopts two types of cost functions, one based on the Jackson-Wolinsky type of distance-based utility and the other based on the Network-Only Cost from the IoT literature. We characterize the performance of this two-level game in terms of the Price of Anarchy. This work discloses how the selfish strategies of individual devices can influence the global topology structure of the job edge-fog interconnection network and provides theoretical foundations for IoT infrastructure construction. A significant advantage of this framework is that it avoids solving the traditional, expensive, and impractical quadratic assignment problem, which has been the typical framework for studying this task. Furthermore, it can control the system-wide performance based only on one or two cost parameters of the job edge-fog networks, independently and in a distributed way.
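As a toy illustration of the first cost-function type, a Jackson-Wolinsky style distance-based utility gives a node a benefit of delta^d for every other node at hop distance d and charges a cost per incident link; the graph, delta, and link cost below are illustrative, and the paper's exact two-level model and Network-Only Cost variant are not reproduced:

```python
# Illustrative Jackson-Wolinsky style distance-based utility on a toy topology.
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src to all reachable nodes."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def jw_utility(adj, node, delta=0.5, link_cost=0.3):
    """Sum of delta^distance to every other node minus cost of own links."""
    dist = bfs_distances(adj, node)
    benefit = sum(delta ** d for n, d in dist.items() if n != node)
    return benefit - link_cost * len(adj[node])

# Toy edge-fog topology: a star with a fog node (0) at the center.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
for n in adj:
    print(f"node {n}: utility {jw_utility(adj, n):.2f}")
```

In a network creation game, each node would selfishly add or drop links to improve such a utility, and the Price of Anarchy compares the resulting equilibrium topology against the socially optimal one.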
Today, wearable Internet-of-Things (wIoT) devices continuously flood cloud data centers at an enormous rate. This increases the demand for deploying an edge infrastructure for computing, intelligence, and storage close to the users. The emerging paradigm of fog computing, often described as "the cloud on the ground," could play an important role in making wIoT more efficient and affordable. This paper presents an end-to-end architecture that performs data conditioning and intelligent filtering to generate smart analytics from wearable data. In wIoT, wearable sensor devices serve on one end while the cloud backend offers services on the other. We developed a prototype of a smart fog gateway (a middle layer) using the Intel Edison and Raspberry Pi. We discuss the role of the smart fog gateway in orchestrating data conditioning, intelligent filtering, smart analytics, and selective transfer to the cloud for long-term storage and temporal-variability monitoring. We benchmarked the performance of the developed prototypes on real-world data from smart e-textile gloves. The results demonstrate the usability and potential of the proposed architecture for converting real-world data into useful analytics while making use of knowledge-based models. In this way, the smart fog gateway enhances the end-to-end interaction between wearables (sensor devices) and the cloud.
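The conditioning, filtering, and selective-transfer flow described above can be sketched as follows; the window size, smoothing, and "interesting enough to upload" rule are illustrative assumptions rather than the prototype's actual knowledge-based models:

```python
# Minimal sketch of a smart fog gateway window: condition -> extract features
# -> decide whether to forward to the cloud or keep locally.
import statistics

def condition(window):
    """Data conditioning: drop obvious glitches and smooth with a 3-pt mean."""
    clean = [x for x in window if -5.0 < x < 5.0]          # crude glitch removal
    return [statistics.mean(clean[max(0, i - 2): i + 1])    # moving average
            for i in range(len(clean))]

def should_upload(features, baseline, tolerance=0.5):
    """Intelligent filtering: upload only if the window deviates from baseline."""
    return abs(features["mean"] - baseline) > tolerance

def process_window(window, baseline=0.0):
    conditioned = condition(window)
    features = {"mean": statistics.mean(conditioned),
                "max": max(conditioned)}
    if should_upload(features, baseline):
        return {"action": "send_to_cloud", "summary": features}
    return {"action": "store_locally", "summary": features}

# Example: a quiet window stays local, an active one is forwarded to the cloud.
print(process_window([0.1, -0.2, 0.05, 0.1, 9.9, 0.0]))   # glitch removed, local
print(process_window([1.2, 1.5, 1.1, 1.4, 1.3, 1.6]))     # deviates, uploaded
```

Only the compact feature summaries, not the raw sensor stream, need to cross the gateway-to-cloud link, which is what makes the selective transfer energy- and bandwidth-efficient.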
In the era of the Internet of Things, with the explosive worldwide growth of electronic data volume and the associated need for processing, analysis, and storage of such an enormous volume of data, it has become essential to exploit the power of massively parallel architectures for fast computation. Cloud computing provides an inexpensive computing framework of this kind for large volumes of data in real-time applications. It is therefore not surprising that cloud computing has become a buzzword in the computing fraternity over the last decade. This book presents some critical applications in cloud frameworks along with some innovative designs of algorithms and architectures for deployment in cloud environments. It is a valuable source of knowledge for researchers, engineers, practitioners, and graduate and doctoral students working in the field of cloud computing, and it will also be useful for faculty members of graduate schools and universities.
Federated learning (FL) brings collaborative intelligence into industries without centralizing training data, accelerating the process of Industry 4.0 at the edge computing level. FL resolves the dilemma in which enterprises wish to make use of data intelligence while facing security concerns. To accelerate the industrial Internet of Things by further leveraging FL, existing achievements on FL are examined from three aspects: 1) terminologies are defined and a general framework of FL is elaborated to accommodate various scenarios; 2) the state of the art of FL is discussed for fundamental research topics including data partitioning, privacy preservation, model optimization, local model transportation, personalization, motivation mechanisms, platforms and tools, and benchmarks; 3) the impacts of FL are discussed from an economic perspective. To attract more attention from industrial academia and practice, an FL-transformed manufacturing paradigm is presented, future research directions of FL are given, and possible immediate applications in the Industry 4.0 domain are proposed.
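As one concrete example of the model-optimization building blocks such a survey covers, federated averaging (FedAvg) aggregates locally trained models, weighted by local sample counts, so that raw data never leave each site. The toy clients and linear model below are illustrative stand-ins, not a specific industrial deployment:

```python
# Minimal FedAvg sketch: three sites train on private data; only model weights
# are exchanged and averaged, weighted by local sample counts.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Plain gradient descent for a linear model y ~ X @ w on local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Aggregate client models weighted by the number of local samples."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                      # three sites with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for rnd in range(10):                   # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
print("global model after FedAvg:", np.round(global_w, 2))
```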