With the advance of mobile computing, the Internet of Things, and ubiquitous wireless connectivity, social sensing based edge computing (SSEC) has emerged as a new computation paradigm in which people and their personally owned devices collect sensor measurements from the physical world and process them at the edge of the network. This paper focuses on a privacy-aware task allocation problem whose goal is to optimize computation task allocation in SSEC systems while respecting the users' customized privacy settings. It introduces a novel Game-theoretic Privacy-aware Task Allocation (G-PATA) framework to achieve this goal. G-PATA includes (i) a bottom-up game-theoretic model that generates the maximum payoffs at end devices while satisfying the end users' privacy settings; and (ii) a top-down incentive scheme that adjusts the rewards for the tasks to ensure that the task allocation decisions made by end devices meet the Quality of Service (QoS) requirements of the applications. Furthermore, the framework incorporates an efficient load balancing and iteration reduction component to adapt to dynamic changes in the status and privacy configurations of end devices. The G-PATA framework was implemented on a real-world edge computing platform consisting of heterogeneous end devices (Jetson TX1 and TK1 boards, and Raspberry Pi 3). We compare G-PATA with state-of-the-art task allocation schemes through two real-world social sensing applications. The results show that G-PATA significantly outperforms existing approaches under various privacy settings (our scheme achieved up to 47% improvement in delay reduction for the application and 15% higher payoffs for end devices compared to the baselines).
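To make the bottom-up/top-down interaction concrete, the following is a minimal sketch of such a loop under assumed models: devices play best responses to per-task rewards (payoff taken as reward minus execution delay, constrained by a privacy predicate), and the application raises rewards for tasks whose delay bound is unmet. Names such as `privacy_ok`, `exec_delay`, and `delay_bound` are illustrative assumptions, not G-PATA's actual interface.

```python
# Hedged sketch: best-response task selection plus reward adjustment (not the paper's exact algorithm).

def best_response(device, tasks, rewards, exec_delay, privacy_ok):
    """Pick the task maximizing this device's payoff, honoring its privacy setting."""
    best, best_payoff = None, 0.0
    for t in tasks:
        if not privacy_ok(device, t):                 # skip tasks the privacy policy forbids
            continue
        payoff = rewards[t] - exec_delay(device, t)   # assumed payoff: reward minus delay cost
        if payoff > best_payoff:
            best, best_payoff = t, payoff
    return best

def allocate(devices, tasks, exec_delay, privacy_ok, delay_bound, max_rounds=50):
    rewards = {t: 1.0 for t in tasks}
    for _ in range(max_rounds):
        # Bottom-up: each device plays its best response given the current rewards.
        choice = {d: best_response(d, tasks, rewards, exec_delay, privacy_ok) for d in devices}
        # Top-down: raise the reward of tasks whose QoS (delay bound) is not yet met.
        unmet = [t for t in tasks
                 if min((exec_delay(d, t) for d, c in choice.items() if c == t),
                        default=float("inf")) > delay_bound[t]]
        if not unmet:
            break
        for t in unmet:
            rewards[t] *= 1.2                          # simple multiplicative incentive adjustment
    return choice, rewards
```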
In recent years, the blockchain-based Internet of Things (IoT) has been widely researched and applied, where each IoT device can act as a node in the blockchain. However, these lightweight nodes usually do not have enough computing power to complete consensus or other computation-intensive tasks. An edge computing network provides a platform that supplies computing power to IoT devices. A fundamental problem is how to allocate limited edge servers to IoT devices in a highly untrustworthy environment. For fair competition, the allocation mechanism should be online, truthful, and privacy-preserving. To address these three challenges, we propose an online multi-item double auction (MIDA) mechanism, in which IoT devices are buyers and edge servers are sellers. However, in order to achieve truthfulness, the participants' private information is put at risk of being exposed by inference attacks, which may lead to malicious manipulation of the market by adversaries. We therefore improve our MIDA mechanism based on differential privacy to protect sensitive information from being leaked; this perturbs the auction results only slightly but guarantees privacy protection with high confidence. In addition, we extend our privacy-preserving MIDA mechanism to adapt to more complex and realistic scenarios. Finally, the effectiveness and correctness of the algorithms are evaluated and verified through theoretical analysis and numerical simulations.
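As an illustration of how differential privacy can be layered onto an auction, the sketch below selects a clearing price with the exponential mechanism, one standard way to perturb auction outcomes; the actual MIDA construction may differ. The candidate price grid and the trade-count utility are assumptions made for the example.

```python
# Hedged sketch: differentially private clearing-price selection via the exponential mechanism.
import math, random

def dp_select_price(bids, asks, candidate_prices, eps):
    """Pick a price with probability proportional to exp(eps * utility / (2 * sensitivity))."""
    def utility(p):
        # assumed utility: number of trades that could clear at price p
        return min(sum(b >= p for b in bids), sum(a <= p for a in asks))
    sensitivity = 1.0                        # one changed bid/ask shifts the trade count by at most 1
    weights = [math.exp(eps * utility(p) / (2 * sensitivity)) for p in candidate_prices]
    r, acc = random.random() * sum(weights), 0.0
    for p, w in zip(candidate_prices, weights):
        acc += w
        if r <= acc:
            return p
    return candidate_prices[-1]

# Example: IoT devices (buyers) bid, edge servers (sellers) ask.
price = dp_select_price(bids=[5, 7, 9], asks=[4, 6, 8], candidate_prices=[4, 5, 6, 7, 8, 9], eps=0.5)
```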
Cloud computing is an emerging distributed computing paradigm that evolved from grid computing. Task scheduling is a core research problem in cloud computing: it studies how to allocate tasks among physical nodes so that the allocation is balanced, each task's execution cost is minimized, or the overall system performance is optimal. Unlike previous models, in which the slices of an independent task are executed sequentially and the objective is processing time, we build a model that targets response time and executes the task slices in parallel. We then solve the model using a method based on an improved adjusting entropy function and design a new task scheduling algorithm. Experimental results show that the response time of the proposed algorithm is much lower than that of the game-theoretic algorithm and the balanced scheduling algorithm; moreover, compared with the balanced scheduling algorithm, the game-theoretic algorithm is not necessarily superior in the parallel setting even though its objective function value is better.
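The parallel-slice response-time model can be illustrated with a minimal sketch: when the slices of one task run in parallel on several nodes, the response time is determined by the slowest slice, so splitting work in proportion to node speed equalizes and minimizes it. This is only a toy illustration of the model's objective, not the paper's entropy-function-based solution; node speeds and the proportional split are assumptions.

```python
# Hedged sketch: response time of parallel task slices under a proportional split.

def split_proportional(total_work, speeds):
    """Split `total_work` across nodes in proportion to their speeds."""
    s = sum(speeds)
    return [total_work * v / s for v in speeds]

def response_time(slices, speeds):
    """Parallel execution: response time is the maximum per-node slice time."""
    return max(w / v for w, v in zip(slices, speeds))

speeds = [2.0, 3.0, 5.0]                      # assumed relative processing speeds of three nodes
slices = split_proportional(100.0, speeds)    # -> [20.0, 30.0, 50.0]
print(response_time(slices, speeds))          # 10.0, versus ~16.7 for an equal split
```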
Neural networks (NNs) lack measures of reliability estimation that would enable reasoning over their predictions. Despite their vital importance, especially in areas concerning human well-being and health, state-of-the-art uncertainty estimation techniques are computationally expensive when applied to resource-constrained devices. We propose an efficient framework for predictive uncertainty estimation in NNs deployed on embedded edge systems, with no need for fine-tuning or re-training strategies. To meet the energy and latency requirements of these embedded platforms, the framework is built from the ground up to provide predictive uncertainty based on only one forward pass and a negligible number of additional matrix multiplications, with theoretically proven correctness. Our aim is to enable already trained deep learning models to generate uncertainty estimates on resource-limited devices at inference time, focusing on classification tasks. The framework is founded on theoretical developments casting dropout training as approximate inference in Bayesian NNs. Our layerwise distribution approximation to the convolution layer cascades through the network, providing uncertainty estimates in a single run; this ensures minimal overhead, especially compared with uncertainty techniques that require multiple forward passes and a correspondingly linear rise in energy and latency, making them unsuitable in practice. We demonstrate that it yields better performance and flexibility than previous work based on multilayer perceptrons for obtaining uncertainty estimates. Our evaluation on mobile application datasets shows that our approach not only obtains robust and accurate uncertainty estimates but also outperforms state-of-the-art methods in terms of system performance, reducing energy consumption (by up to 28x) and keeping the memory overhead at a minimum while still improving accuracy (by up to 16%).
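The general idea behind such single-pass, layerwise approximations can be sketched as moment propagation: instead of sampling many dropout masks, each layer analytically propagates a mean and a variance. The sketch below does this for dropout followed by a fully connected layer; it is an illustrative simplification (independence assumptions, a dense rather than convolutional layer), not the paper's exact formulation.

```python
# Hedged sketch: single-pass mean/variance propagation through dropout + a linear layer.
import numpy as np

def dropout_linear_moments(mean_in, var_in, W, b, p):
    """Propagate (mean, variance) through dropout (keep prob 1 - p) then a linear layer."""
    keep = 1.0 - p
    # Dropout: E[x'] = keep * mean, and the Bernoulli mask adds keep*(1-keep)*mean^2 variance.
    m = keep * mean_in
    v = keep * var_in + keep * (1 - keep) * mean_in ** 2
    # Linear layer: mean transforms by W, variance by W**2 (assuming independent inputs).
    return W @ m + b, (W ** 2) @ v

# Example: an already trained layer with 4 inputs and 3 outputs, dropout rate 0.2.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), np.zeros(3)
mean_out, var_out = dropout_linear_moments(np.ones(4), np.zeros(4), W, b, p=0.2)
```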
While mobile edge computing (MEC) alleviates the computation and power limitations of mobile devices, additional latency is incurred when offloading tasks to remote MEC servers. In this work, the power-delay tradeoff in the context of task offloading is studied in a multi-user MEC scenario. In contrast with current system designs relying on average metrics (e.g., the average queue length and average latency), a novel network design is proposed in which latency and reliability constraints are taken into account. This is done by imposing a probabilistic constraint on users' task queue lengths and invoking results from extreme value theory to characterize the occurrence of low-probability events in terms of queue length (or queuing delay) violation. The problem is formulated as a computation and transmit power minimization subject to latency and reliability constraints, and solved using tools from Lyapunov stochastic optimization. Simulation results demonstrate the effectiveness of the proposed approach, while examining the power-delay tradeoff and required computational resources for various computation intensities.
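The Lyapunov machinery can be illustrated with a basic drift-plus-penalty step for a single user: at each slot, pick the transmit power that trades off power cost against queue backlog. The rate model `log2(1 + power * gain)`, the parameter values, and the omission of the paper's extreme-value-theory reliability constraint are all simplifying assumptions for the example.

```python
# Hedged sketch: one-user Lyapunov drift-plus-penalty power control over a task queue.
import math

def choose_power(Q, gain, arrivals, V, power_grid):
    """Pick the power level minimizing V*p + Q*(arrivals - rate(p))."""
    def objective(p):
        rate = math.log2(1 + p * gain)           # assumed achievable service rate at power p
        return V * p + Q * (arrivals - rate)
    return min(power_grid, key=objective)

Q = 0.0                                           # task queue backlog
V = 10.0                                          # Lyapunov tradeoff parameter (power vs. delay)
power_grid = [i / 10 for i in range(0, 21)]       # candidate powers in [0, 2]
for slot in range(100):
    arrivals, gain = 1.0, 0.8                     # per-slot arrivals and channel gain (assumed constant)
    p = choose_power(Q, gain, arrivals, V, power_grid)
    Q = max(Q + arrivals - math.log2(1 + p * gain), 0.0)
```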
This is the first paper to address the topology structure of the job edge-fog interconnection network from the perspective of network creation games. A two-level network creation game model is given, in which the first level is similar to the traditional network creation game, with the objective of minimizing the total distance to the other nodes. The second level adopts two types of cost functions: one based on the Jackson-Wolinsky type of distance-based utility, and another based on the Network-Only Cost from the IoT literature. We analyze the performance of this two-level game in terms of the Price of Anarchy. This work discloses how the selfish strategies of individual devices can influence the global topology structure of the job edge-fog interconnection network and provides theoretical foundations for IoT infrastructure construction. A significant advantage of this framework is that it avoids solving the traditionally expensive and impractical quadratic assignment problem, which was the typical framework for studying this task. Furthermore, it can control the overall system performance based on only one or two cost parameters of the job edge-fog network, independently and in a distributed way.
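For readers unfamiliar with the first-level model, the sketch below shows a brute-force best-response step in a classic (sum) network creation game: a node chooses which edges to buy so as to minimize alpha times the number of edges it buys plus the sum of its hop distances to all other nodes. The edge price alpha, the unreachability penalty, and the tiny instance are assumptions for illustration, not the paper's two-level cost functions.

```python
# Hedged sketch: brute-force best response in a sum network creation game.
from itertools import combinations
from collections import deque

def distance_sum(n, edges, src):
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return sum(dist.get(i, n) for i in range(n))   # unreachable nodes charged distance n

def best_response(n, node, other_edges, alpha):
    """Choose the edge set `node` buys to minimize alpha*(#bought edges) + sum of distances."""
    targets = [i for i in range(n) if i != node]
    best_cost, best_set = float("inf"), ()
    for k in range(len(targets) + 1):
        for bought in combinations(targets, k):
            edges = list(other_edges) + [(node, t) for t in bought]
            cost = alpha * len(bought) + distance_sum(n, edges, node)
            if cost < best_cost:
                best_cost, best_set = cost, bought
    return best_set, best_cost

# Example: 4 nodes, the other players already form the path 1-2-3; node 0 responds with alpha = 1.5.
print(best_response(4, 0, [(1, 2), (2, 3)], alpha=1.5))   # buys the edge to node 2
```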