
Job Edge-Fog Interconnection Network Creation Game in Internet of Things

Posted by: Rupei Xu
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





This is the first paper to address the topology structure of the job edge-fog interconnection network from the perspective of network creation games. A two-level network creation game model is given: the first level is similar to the traditional network creation game, with the objective of minimizing the total distance to the other nodes, while the second level adopts two types of cost functions, one based on the Jackson-Wolinsky distance-based utility and the other based on the Network-Only Cost from the IoT literature. We analyze the performance of this two-level game in terms of the Price of Anarchy. This work reveals how the selfish strategies of individual devices influence the global topology of the job edge-fog interconnection network and provides theoretical foundations for IoT infrastructure construction. A significant advantage of this framework is that it avoids solving the expensive and impractical quadratic assignment problem, which has been the typical framework for studying this task. Furthermore, it can control the system performance using only one or two cost parameters of the job edge-fog network, independently and in a distributed way.
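The abstract does not state either level's cost function explicitly. For orientation only, the standard forms from the literature that the two levels are described as resembling are the sum-distance objective of the classical network creation game and the Jackson-Wolinsky distance-based utility; the symbols alpha (per-edge creation price), delta (distance decay), and c (per-link cost) below are generic placeholders, not parameters taken from the paper.

```latex
% First level: classical network creation game with total-distance objective.
% Node i buys edge set E_i at per-edge price \alpha and pays its summed
% graph distance to all other nodes.
C_i(G) = \alpha \, |E_i| + \sum_{j \neq i} d_G(i,j)

% Second level, first cost type: Jackson--Wolinsky distance-based utility.
% Benefit decays with distance at rate \delta \in (0,1), minus link costs.
u_i(G) = \sum_{j \neq i} \delta^{\, d_G(i,j)} - c \, |E_i|
```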




Read also

Today, wearable internet-of-things (wIoT) devices continuously flood cloud data centers at an enormous rate. This increases the demand to deploy an edge infrastructure for computing, intelligence, and storage close to the users. The emerging paradigm of fog computing could play an important role in making wIoT more efficient and affordable. Fog computing is known as the cloud on the ground. This paper presents an end-to-end architecture that performs data conditioning and intelligent filtering to generate smart analytics from wearable data. In wIoT, wearable sensor devices serve on one end while the cloud backend offers services on the other end. We developed a prototype of a smart fog gateway (a middle layer) using the Intel Edison and Raspberry Pi. We discuss the role of the smart fog gateway in orchestrating the process of data conditioning, intelligent filtering, smart analytics, and selective transfer to the cloud for long-term storage and temporal variability monitoring. We benchmarked the performance of the developed prototypes on real-world data from smart e-textile gloves. The results demonstrate the usability and potential of the proposed architecture for converting real-world data into useful analytics while making use of knowledge-based models. In this way, the smart fog gateway enhances the end-to-end interaction between wearables (sensor devices) and the cloud.
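The paper's actual gateway pipeline is not spelled out in the abstract; the sketch below only illustrates the general pattern it describes (condition, filter, summarize, selectively upload). All function names, thresholds, and the upload hook are hypothetical.

```python
# Minimal sketch of a smart-fog-gateway loop: condition raw wearable samples,
# drop uninformative windows, and forward only compact analytics to the cloud.
# Names and thresholds are illustrative, not taken from the paper.
import statistics

def condition(window):
    """Placeholder data conditioning: clamp obvious outliers."""
    lo, hi = -5.0, 5.0
    return [min(max(x, lo), hi) for x in window]

def is_informative(window, variance_threshold=0.01):
    """Placeholder intelligent filtering: discard near-constant windows."""
    return statistics.pvariance(window) > variance_threshold

def summarize(window):
    """Placeholder smart analytics: ship only summary features upstream."""
    return {"mean": statistics.fmean(window), "max": max(window)}

def gateway_step(window, upload):
    cleaned = condition(window)
    if is_informative(cleaned):
        upload(summarize(cleaned))  # selective transfer to the cloud

# Example: only the second (non-constant) window is forwarded.
for w in ([0.0] * 50, [0.1 * i for i in range(50)]):
    gateway_step(w, upload=print)
```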
Industrial Fog computing deploys various industrial services, such as automatic monitoring/control and imminent failure detection, at the Fog Nodes (FNs) to improve the performance of industrial systems. Much effort has been made in the literature on the design of fog network architecture and computation offloading. This paper studies an equally important but much less investigated problem of service hosting where FNs are adaptively configured to host services for Sensor Nodes (SNs), thereby enabling corresponding tasks to be executed by the FNs. The problem of service hosting emerges because of the limited computational and storage resources at FNs, which limit the number of different types of services that can be hosted by an FN at the same time. Considering the variability of service demand in both temporal and spatial dimensions, when, where, and which services to host have to be judiciously decided to maximize the utility of the Fog computing network. Our proposed Fog configuration strategies are tailored to battery-powered FNs. The limited battery capacity of FNs creates a long-term energy budget constraint that significantly complicates the Fog configuration problem as it introduces temporal coupling of decision making across the timeline. To address all these challenges, we propose an online distributed algorithm, called Adaptive Fog Configuration (AFC), based on Lyapunov optimization and parallel Gibbs sampling. AFC jointly optimizes service hosting and task admission decisions, requiring only currently available system information while guaranteeing close-to-optimal performance compared to an oracle algorithm with full future information.
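The AFC algorithm itself (Lyapunov optimization plus parallel Gibbs sampling) is not given in the abstract; the sketch below only illustrates the virtual-queue idea behind such long-term energy budgets: each slot, a configuration is scored by a drift-plus-penalty trade-off, and a queue accumulates any energy overdraft. Utilities, energy costs, and parameters are made-up placeholders.

```python
# Illustrative Lyapunov virtual-queue sketch (not the AFC algorithm itself):
# per slot, pick the configuration maximizing V*utility - Q*energy, then let
# the virtual queue Q track violations of the per-slot energy budget.

def choose_configuration(candidates, Q, V):
    """Drift-plus-penalty: trade current utility against queued energy debt."""
    return max(candidates, key=lambda c: V * c["utility"] - Q * c["energy"])

def run(slots, candidates_per_slot, energy_budget_per_slot, V=2.0):
    Q = 0.0  # virtual energy queue (accumulated budget overdraft)
    for t in range(slots):
        cfg = choose_configuration(candidates_per_slot(t), Q, V)
        Q = max(Q + cfg["energy"] - energy_budget_per_slot, 0.0)
        yield t, cfg, Q

# Example: as Q grows, the choice eventually switches to the low-energy option.
cands = lambda t: [{"utility": 5.0, "energy": 3.0}, {"utility": 2.0, "energy": 1.0}]
for t, cfg, Q in run(slots=6, candidates_per_slot=cands, energy_budget_per_slot=1.5):
    print(t, cfg, round(Q, 2))
```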
In an era when the market segment of the Internet of Things (IoT) tops the charts in various business reports, it is widely envisioned that the field of medicine stands to gain a large benefit from the explosion of wearables and internet-connected sensors that surround us to acquire and communicate unprecedented data on symptoms, medication, food intake, and daily-life activities impacting one's health and wellness. However, IoT-driven healthcare has to overcome many barriers, such as: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) communicating the continuously collected data is not only costly but also energy hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in IoT, providing the interfaces between the sensors and cloud servers to facilitate connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors and offers efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and communication of various medical data, such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based Q, R, S detection.
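The abstract only names the analytics (heart rate estimation from PCG, Q-R-S detection from ECG) without describing the signal-processing chain; the toy sketch below shows the kind of on-node R-peak detection and heart-rate summarization a fog node might perform before uploading. The simple amplitude threshold and refractory period are purely illustrative, not the chapter's actual method.

```python
# Toy on-node ECG R-peak detection and heart-rate estimate; the threshold-and-
# refractory scheme here is illustrative only.

def detect_r_peaks(ecg, fs, threshold_ratio=0.6, refractory_s=0.25):
    """Return indices of local maxima above a relative amplitude threshold,
    spaced at least one refractory period apart."""
    threshold = threshold_ratio * max(ecg)
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] > threshold and ecg[i] >= ecg[i - 1]
                and ecg[i] >= ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

def heart_rate_bpm(peaks, fs):
    """Average heart rate from consecutive R-R intervals (None if too few peaks)."""
    if len(peaks) < 2:
        return None
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Example: a synthetic 1 Hz spike train sampled at 100 Hz gives roughly 60 bpm.
fs = 100
ecg = [1.0 if i % fs == 0 else 0.0 for i in range(5 * fs)]
print(heart_rate_bpm(detect_r_peaks(ecg, fs), fs))
```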
With the advances in mobile computing, the Internet of Things, and ubiquitous wireless connectivity, social sensing based edge computing (SSEC) has emerged as a new computation paradigm in which people and their personally owned devices collect sensor measurements from the physical world and process them at the edge of the network. This paper focuses on a privacy-aware task allocation problem whose goal is to optimize computation task allocation in SSEC systems while respecting the users' customized privacy settings. It introduces a novel Game-theoretic Privacy-aware Task Allocation (G-PATA) framework to achieve this goal. G-PATA includes (i) a bottom-up game-theoretic model to generate the maximum payoffs at end devices while satisfying the end users' privacy settings; and (ii) a top-down incentive scheme to adjust the rewards for the tasks so that the task allocation decisions made by end devices meet the Quality of Service (QoS) requirements of the applications. Furthermore, the framework incorporates an efficient load balancing and iteration reduction component to adapt to dynamic changes in the status and privacy configurations of end devices. The G-PATA framework was implemented on a real-world edge computing platform that consists of heterogeneous end devices (Jetson TX1 and TK1 boards, and Raspberry Pi 3). We compare G-PATA with state-of-the-art task allocation schemes through two real-world social sensing applications. The results show that G-PATA significantly outperforms existing approaches under various privacy settings (our scheme achieved as much as 47% improvement in delay reduction for the application and 15% more payoffs for end devices compared to the baselines).
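The abstract describes G-PATA only at a high level; the sketch below is a heavily simplified illustration of the two directions it names: devices pick the best-paying task their privacy policy allows (bottom-up), and the platform raises rewards for under-served tasks (top-down). Task names, cost functions, and the reward-update rule are hypothetical, not the paper's model.

```python
# Simplified two-direction sketch in the spirit of G-PATA (not its actual model).

def device_best_response(tasks, rewards, privacy_allowed, compute_cost):
    """Bottom-up step: pick the feasible task with the highest payoff."""
    feasible = [t for t in tasks if privacy_allowed(t)]
    if not feasible:
        return None
    return max(feasible, key=lambda t: rewards[t] - compute_cost(t))

def adjust_rewards(rewards, demand, assignment_counts, step=0.1):
    """Top-down step: raise rewards for tasks whose assignments miss QoS demand."""
    return {t: r + step * max(demand[t] - assignment_counts.get(t, 0), 0)
            for t, r in rewards.items()}

# One example round with two devices and two tasks.
tasks = ["image_task", "audio_task"]
rewards = {"image_task": 1.0, "audio_task": 1.0}
demand = {"image_task": 1, "audio_task": 1}
devices = [
    {"privacy_allowed": lambda t: t != "audio_task",  # this device blocks audio
     "compute_cost": lambda t: 0.2},
    {"privacy_allowed": lambda t: True,
     "compute_cost": lambda t: 0.5 if t == "audio_task" else 0.1},
]
counts = {}
for d in devices:
    choice = device_best_response(tasks, rewards, d["privacy_allowed"], d["compute_cost"])
    if choice:
        counts[choice] = counts.get(choice, 0) + 1
rewards = adjust_rewards(rewards, demand, counts)
print(counts, rewards)  # audio_task is under-served, so its reward increases
```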
Internet of Things (IoT) devices and applications can have significant vulnerabilities, which may be exploited by adversaries to cause considerable harm. An important approach for mitigating this threat is remote attestation, which enables the defender to remotely verify the integrity of devices and their software. There are a number of approaches for remote attestation, and each has its unique advantages and disadvantages in terms of detection accuracy and computational cost. Further, an attestation method may be applied in multiple ways, such as at various levels of software coverage. Therefore, to minimize both security risks and computational overhead, defenders need to decide strategically which attestation methods to apply and how to apply them, depending on the characteristics of the devices and the potential losses. To answer these questions, we first develop a testbed for remote attestation of IoT devices, which enables us to measure the detection accuracy and performance overhead of various attestation methods. Our testbed integrates two example IoT applications, memory-checksum based attestation, and a variety of software vulnerabilities that allow adversaries to inject arbitrary code into running applications. Second, we model the problem of finding an optimal strategy for applying remote attestation as a Stackelberg security game between a defender and an adversary. We characterize the defender's optimal attestation strategy in a variety of special cases. Finally, building on experimental results from our testbed, we evaluate our model and show that optimal strategic attestation can lead to significantly lower losses than naive baseline strategies.
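The abstract does not give the game's payoff structure; the sketch below is a coarse illustration of the Stackelberg pattern it describes: the defender commits to attestation probabilities under a budget, the adversary best-responds by attacking the device with the highest expected undetected loss, and the defender searches for the commitment that minimizes that loss. Devices, loss values, and detection probabilities are made up.

```python
# Coarse Stackelberg attestation sketch (not the paper's model): grid-search the
# defender's commitment, with the adversary best-responding to it.
from itertools import product

devices = ["camera", "thermostat"]
loss_if_undetected = {"camera": 10.0, "thermostat": 4.0}
detection_prob = {"camera": 0.9, "thermostat": 0.8}  # when the device is attested
budget = 1.0  # expected number of attestations per period

def attacker_best_response(p):
    """Adversary attacks the device with the largest expected undetected loss."""
    return max(devices, key=lambda d: (1 - p[d] * detection_prob[d]) * loss_if_undetected[d])

def defender_loss(p):
    d = attacker_best_response(p)
    return (1 - p[d] * detection_prob[d]) * loss_if_undetected[d]

grid = [i / 10 for i in range(11)]
best = min(
    (dict(zip(devices, probs))
     for probs in product(grid, repeat=len(devices)) if sum(probs) <= budget),
    key=defender_loss,
)
print(best, round(defender_loss(best), 3))  # attestation effort skews to the riskier device
```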