Edge computing is an emerging solution to support future Internet of Things (IoT) applications that are delay-sensitive, processing-intensive, or that require intelligence close to the data source. Machine intelligence and data-driven approaches are envisioned to build future Edge-IoT systems that satisfy IoT devices' demands for edge resources. However, significant challenges and technical barriers complicate resource management for such Edge-IoT systems. IoT devices running various applications can exhibit a wide range of resource-demand behaviors that are extremely difficult to manage. In addition, managing multidimensional edge resources fairly and efficiently in such a setting is a challenging task. In this paper, we develop a novel data-driven resource management framework named BEHAVE that intelligently and fairly allocates edge resources to heterogeneous IoT devices with consideration of their behavior of resource demand (BRD). BEHAVE aims to holistically address these technical barriers by: 1) building an efficient scheme for modeling and assessing the BRD of IoT devices based on their resource requests and resource usage; 2) introducing a new Rational, Fair, and Truthful Resource Allocation (RFTA) model that binds the devices' BRD and resource allocation to achieve fair allocation and encourage truthfulness in resource demand; and 3) developing an enhanced deep reinforcement learning (EDRL) scheme to achieve the RFTA goals. The evaluation results demonstrate BEHAVE's capability to analyze the IoT devices' BRD and adjust its resource management policy accordingly.
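As a purely illustrative companion to this abstract, the following minimal Python sketch shows one way a behavior-of-resource-demand score could be tied to allocation: devices whose past usage tracks their requests keep a high score, and granted shares are weighted by that score. All names, the scoring rule, and the weighting are hypothetical assumptions for illustration; they are not taken from the BEHAVE paper.

# Hypothetical sketch: score each device by how closely its past usage tracked
# its requests, then split a limited capacity in proportion to score-weighted demand.

def behavior_score(requested, used):
    """Return a score in (0, 1]; habitual over-requesting lowers the score."""
    ratios = [min(u / r, 1.0) for r, u in zip(requested, used) if r > 0]
    return sum(ratios) / len(ratios) if ratios else 1.0

def allocate(demands, scores, capacity):
    """Split `capacity` across devices in proportion to score-weighted demand."""
    weighted = {d: demands[d] * scores[d] for d in demands}
    total = sum(weighted.values())
    return {d: capacity * w / total for d, w in weighted.items()}

# Toy example: device "b" historically used far less than it requested,
# so its inflated demand earns it a smaller share of the 10 resource units.
history = {"a": ([4, 4], [3.8, 3.9]), "b": ([8, 8], [2.0, 2.5])}
scores = {d: behavior_score(req, use) for d, (req, use) in history.items()}
print(allocate({"a": 4, "b": 8}, scores, capacity=10))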
In Federated Learning (FL), a global statistical model is built by having mobile users train the model on their local data and by iteratively aggregating the resulting local model parameters. However, due to the limited energy and computation capability of mobile devices, the performance of model training is always at stake when the objective is local energy minimization. In this regard, Multi-access Edge Computing (MEC)-enabled FL addresses the tradeoff between model performance and the energy consumption of the mobile devices by allowing users to offload a portion of their local dataset to an edge server for model training. Since the edge server has high computation capability, the time consumed by model training at the edge server is insignificant; however, the time consumed by dataset offloading from mobile users to the edge server has a significant impact on the total time consumption. Thus, resource management in MEC-enabled FL is challenging, where the objective is to reduce the total time consumption while saving the energy of the mobile devices. In this paper, we formulate an energy-aware resource management problem for MEC-enabled FL in which the model training loss and the total time consumption are jointly minimized, while considering the energy limitation of mobile devices. In addition, we recast the formulated problem as a Generalized Nash Equilibrium Problem (GNEP) to capture the coupling constraints between radio resource management and dataset offloading. We then analyze the impact of dataset offloading and computing resource allocation on the model training loss, time, and energy consumption.
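To make the described coupling concrete, the kind of joint objective outlined above can be sketched as follows; the notation and weighting are illustrative rather than the paper's own:

\begin{align}
\min_{\mathbf{w},\,\boldsymbol{\delta},\,\mathbf{f},\,\mathbf{p}} \quad & \lambda\, L(\mathbf{w};\boldsymbol{\delta}) + (1-\lambda)\, T_{\mathrm{total}}(\boldsymbol{\delta},\mathbf{f},\mathbf{p}) \\
\text{s.t.} \quad & E_i^{\mathrm{cmp}}(\delta_i, f_i) + E_i^{\mathrm{tx}}(\delta_i, p_i) \le E_i^{\max}, \quad \forall i,
\end{align}

where L is the training loss, δ_i the fraction of user i's dataset offloaded to the edge server, f_i and p_i the local CPU frequency and transmit power, T_total the completion time, and λ a weighting factor. Because the users share the radio resources, each user's feasible set depends on the other users' offloading and power decisions, which is precisely the coupling that motivates the GNEP reformulation.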
Integrating the Internet of Things (IoT) and edge computing into Edge-IoT systems, converged with machine intelligence, has the potential to enable a wide range of applications in smart homes, factories, and cities. Edge-IoT can connect many diverse devices, and IoT asset owners can run heterogeneous IoT systems supported by various vendors or service providers (SPs), using cloud or local edge computing (or both) for resource assistance. Existing methods typically manage these systems as separate vertical silos, or in a vendor/SP-centric way, which suffers from significant challenges. In this paper, we present a novel owner-centric management paradigm named ORCA to address the gaps left by the vendor/SP-centric paradigm and empower IoT asset owners to effectively identify and mitigate potential issues in their own network premises, regardless of the vendors'/SPs' situations. ORCA aims to be scalable and extensible in assisting IoT owners to perform intelligent management through a behavior-oriented and data-driven approach. ORCA is an ongoing project, and the preliminary results indicate that it can significantly empower IoT system owners to better manage their IoT assets.
The Internet of Things (IoT) is an Internet-based environment of connected devices and applications. IoT creates an environment where physical devices and sensors are seamlessly integrated into information nodes to deliver innovative and smart services that make human life easier and more efficient. The main objective of the IoT device network is to generate data, which the data analysis process converts into useful information; it also provides useful resources to end users. IoT resource management is a key challenge in ensuring the quality of the end-user experience. Many smart devices and technologies, such as sensors, actuators, RFID, UMTS, 3G, and GSM, are used to build IoT networks. Cloud computing plays an important role in deploying these networks by providing physical resources as virtualized resources, consisting of memory, computation power, network bandwidth, virtualized systems, and device drivers, on a secure, pay-per-use basis. One of the major concerns in a Cloud-based IoT environment is resource management, which ensures efficient resource utilization and load balancing, reduces SLA violations, and improves system performance by cutting operational cost and energy consumption. Many researchers have proposed IoT-oriented resource management techniques. The focus of this paper is to survey these resource allocation techniques and identify which parameters must be considered to improve resource allocation for IoT networks. Further, this paper also uncovers the challenges and issues of Cloud-based resource allocation for IoT environments.
Age of Information (AoI) has gained importance as a Key Performance Indicator (KPI) for characterizing the freshness of information in information-update systems and time-critical applications. Recent theoretical research on the topic has generated significant understanding of how various algorithms perform with respect to this metric under various system models and networking scenarios. In this paper, guided by these theoretical results, we analyze AoI behavior on real-life networks using our two test-beds, which target IoT devices and regular computers. An extensive set of AoI measurements is provided for variants of transport protocols such as TCP, UDP, and WebSocket, over wired and wireless links. Practical issues, such as clock synchronization and the selection of hardware and transport protocol, and their effects on AoI are discussed. The results provide insight into application- and transport-layer mechanisms for optimizing AoI in real-life networks.
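For reference, the instantaneous AoI at the receiver is Δ(t) = t − u(t), where u(t) is the generation timestamp of the freshest update received so far. The minimal Python sketch below shows how such a measurement can be taken over UDP; the port number and the packet format (an 8-byte big-endian generation timestamp in nanoseconds) are assumptions for illustration, and sender and receiver clocks are assumed to be synchronized (e.g., via NTP or PTP), which is exactly the kind of practical issue the test-beds must address.

# Receiver-side AoI measurement over UDP (hypothetical packet format and port).
import socket, struct, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))  # hypothetical port

last_gen_ns = None
while True:
    data, _ = sock.recvfrom(1024)
    gen_ns = struct.unpack("!Q", data[:8])[0]  # sender's generation timestamp
    if last_gen_ns is None or gen_ns > last_gen_ns:
        last_gen_ns = gen_ns                   # keep the freshest update seen
    # AoI = current time minus generation time of the freshest received update.
    aoi_ms = (time.time_ns() - last_gen_ns) / 1e6
    print(f"AoI on packet arrival: {aoi_ms:.3f} ms")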
While mobile edge computing (MEC) alleviates the computation and power limitations of mobile devices, additional latency is incurred when offloading tasks to remote MEC servers. In this work, the power-delay tradeoff in the context of task offloading is studied in a multi-user MEC scenario. In contrast with current system designs that rely on average metrics (e.g., the average queue length and average latency), a novel network design is proposed in which latency and reliability constraints are taken into account. This is done by imposing a probabilistic constraint on users' task queue lengths and invoking results from extreme value theory to characterize the occurrence of low-probability events in terms of queue length (or queuing delay) violations. The problem is formulated as a computation and transmit power minimization subject to latency and reliability constraints, and is solved using tools from Lyapunov stochastic optimization. Simulation results demonstrate the effectiveness of the proposed approach, while examining the power-delay tradeoff and the computational resources required for various computation intensities.
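In illustrative notation (not the paper's own), the reliability requirement described above can be written as a tail constraint Pr{Q_i(t) ≥ d_i} ≤ ε_i on each user i's task queue length Q_i(t), for a threshold d_i and a tolerance ε_i, so the design problem takes roughly the form

\begin{align}
\min_{\mathbf{p}(t),\,\mathbf{f}(t)} \quad & \lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\!\left[\sum_i \big( p_i(t) + \kappa f_i(t)^3 \big)\right] \\
\text{s.t.} \quad & \Pr\{Q_i(t) \ge d_i\} \le \epsilon_i, \quad \forall i,\, t,
\end{align}

where p_i(t) and f_i(t) are user i's transmit power and CPU-cycle frequency and κ f_i^3 is a commonly used model of local computation power. The rare queue-length violations in the tail constraint are what extreme value theory characterizes, while the long-term average objective is what the Lyapunov drift-plus-penalty machinery handles.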