
Computational Resource Allocation for Edge Computing in Social Internet-of-Things

Published by: Abdullah Khanfor
Publication date: 2020
Research field: Informatics Engineering
Research language: English
Author: Abdullah Khanfor





The heterogeneity of the Internet-of-Things (IoT) network can be exploited as a dynamic computational resource environment for the many devices lacking computational capabilities. We develop a smart mechanism for allocating edge and mobile computers to match the needs of devices requesting external computational resources. In this paper, we employ the concept of the Social IoT (SIoT) and machine learning to reduce the complexity of allocating appropriate edge computers. We propose a framework that detects communities of devices in the SIoT comprising trustworthy peers with strong social relations. We then train a machine learning algorithm, using multiple computational and non-computational features of both the requester and the edge computers, to predict the total time needed by each candidate in the requester's community to process the required task. Applying the framework to a real-world dataset, we observe that it provides encouraging results for mobile computer allocation.
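
As a rough illustration of the two-stage idea (and not the authors' exact pipeline), the sketch below restricts the candidate edge/mobile computers to the requester's SIoT community and then ranks them with a regressor trained to predict task-processing time; the community-detection routine, feature layout, and model choice are assumptions.

# Hedged sketch: community-restricted candidate selection + learned time predictor.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.ensemble import RandomForestRegressor

def candidates_in_community(social_graph, requester_id):
    """Return the devices sharing an SIoT community with the requester."""
    for community in greedy_modularity_communities(social_graph):
        if requester_id in community:
            return [d for d in community if d != requester_id]
    return []

def train_time_predictor(X_train, y_train):
    """Fit a regressor mapping (requester, candidate) features to task time."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    return model

def pick_edge_computer(model, requester_feats, candidate_feats_by_id):
    """Allocate the candidate with the smallest predicted completion time."""
    predictions = {
        dev_id: model.predict([requester_feats + feats])[0]
        for dev_id, feats in candidate_feats_by_id.items()
    }
    return min(predictions, key=predictions.get)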




Read also

The Social Internet of Things is changing what social patterns can be and will bring unprecedented online and offline social experiences. The social cloud improves on the social network by cooperatively providing computing facilities through social interactions. Both fields need more research effort toward a generic or unified supporting architecture that integrates the various technologies involved. The two paradigms are both related to social networks, cloud computing, and the Internet of Things. We therefore have reason to believe that they have great potential to support each other, and we predict that the two will merge in one way or another.
Abdullah Khanfor, 2020
In this paper, we propose a machine learning process for clustering large-scale social Internet-of-Things (SIoT) devices into several groups of related devices sharing strong relations. To this end, we generate undirected weighted graphs based on a historical dataset of IoT devices and their social relations. Using the adjacency matrices of these graphs and the IoT device features, we embed the graph nodes using a Graph Neural Network (GNN) to obtain numerical vector representations of the IoT devices. The vector representation reflects not only the characteristics of a device but also its relations with its peers. The obtained node embeddings are then fed to a conventional unsupervised learning algorithm to determine the clusters. We showcase the obtained IoT groups using two well-known clustering algorithms, namely K-means and the density-based algorithm for discovering clusters (DBSCAN). Finally, we compare the performance of the proposed GNN-based clustering approach, in terms of coverage and modularity, to that of the deterministic Louvain community detection algorithm applied solely to the graphs created from the different relations. The framework is shown to achieve promising preliminary results in clustering large-scale IoT systems.
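
A minimal sketch of this pipeline is given below, with a spectral embedding standing in for the GNN encoder purely for brevity; the embedding dimension, clustering parameters, and the grouping of DBSCAN noise points are assumptions rather than the paper's settings.

# Hedged sketch: embed graph nodes, cluster the embeddings, and score the
# resulting groups with graph modularity against a Louvain baseline.
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity
from sklearn.cluster import KMeans, DBSCAN
from sklearn.manifold import SpectralEmbedding

def cluster_siot_graph(G, n_clusters=8):
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes, weight="weight")
    # Placeholder for the GNN: embed nodes from the weighted adjacency matrix.
    Z = SpectralEmbedding(n_components=16, affinity="precomputed").fit_transform(A)
    scores = {}
    for name, algo in (("kmeans", KMeans(n_clusters=n_clusters, n_init=10)),
                       ("dbscan", DBSCAN(eps=0.5, min_samples=5))):
        labels = algo.fit_predict(Z)
        groups = {}
        for node, label in zip(nodes, labels):
            groups.setdefault(label, set()).add(node)  # DBSCAN noise (-1) kept as one group
        scores[name] = modularity(G, list(groups.values()))
    scores["louvain"] = modularity(G, louvain_communities(G, weight="weight"))
    return scores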
In mobile edge computing (MEC), one of the important challenges is deciding how many resources of which mobile edge server (MES) should be allocated to which user equipment (UE). Existing resource allocation schemes only consider CPU as the requested resource and assume the utility of an MES to be either a random variable or dependent on the requested CPU only. This paper presents a novel comprehensive utility function for resource allocation in MEC. The utility function accounts for the heterogeneous nature of the applications that a UE offloads to an MES, and considers all important parameters, including CPU, RAM, hard disk space, required time, and distance, to calculate a more realistic utility value for MESs. Moreover, we improve upon some general algorithms used for resource allocation in MEC and cloud computing by incorporating the proposed utility function. We name the improv
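
A hedged sketch of what such a multi-parameter utility could look like follows; the weights, the normalisation, and the field names are illustrative assumptions, not the paper's formula.

# Hedged sketch: a scalar utility combining CPU, RAM, disk, time and distance.
def mes_utility(req, mes, w_cpu=0.3, w_ram=0.2, w_disk=0.1, w_time=0.2, w_dist=0.2):
    """Score how attractive a UE request is for a given mobile edge server.

    req: dict with the 'cpu', 'ram', 'disk' and 'time' demanded by the task.
    mes: dict with the MES's available 'cpu', 'ram', 'disk', a 'max_time'
         budget, and its 'distance' to the UE (hypothetical field names).
    """
    if any(req[k] > mes[k] for k in ("cpu", "ram", "disk")):
        return 0.0  # the MES cannot host this task at all
    return (w_cpu * req["cpu"] / mes["cpu"]
            + w_ram * req["ram"] / mes["ram"]
            + w_disk * req["disk"] / mes["disk"]
            + w_time * (1 - req["time"] / mes["max_time"])   # faster tasks score higher
            + w_dist * (1 / (1 + mes["distance"])))          # nearer servers score higher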
In this article, we consider the problem of relay assisted computation offloading (RACO), in which user $A$ aims to share the results of computational tasks with another user $B$ through wireless exchange over a relay platform equipped with mobile edge computing capabilities, referred to as a mobile edge relay server (MERS). To support the computation offloading, we propose a hybrid relaying (HR) approach employing two orthogonal frequency bands, where the amplify-and-forward scheme is used in one band to exchange computational results, while the decode-and-forward scheme is used in the other band to transfer the unprocessed tasks. The motivation behind the proposed HR scheme for RACO is to adapt the allocation of computing and communication resources both to dynamic user requirements and to diverse computational tasks. Within this framework, we seek to minimize the weighted sum of the execution delay and the energy consumption in the RACO system by jointly optimizing the computation offloading ratio, the bandwidth allocation, the processor speeds, as well as the transmit power levels of both user $A$ and the MERS, under practical constraints on the available computing and communication resources. The resultant problem is formulated as a non-differentiable and nonconvex optimization program with highly coupled constraints. By adopting a series of transformations and introducing auxiliary variables, we first convert this problem into a more tractable yet equivalent form. We then develop an efficient iterative algorithm for its solution based on the concave-convex procedure. By exploiting the special structure of this problem, we also propose a simplified algorithm based on the inexact block coordinate descent method, with reduced computational complexity. Finally, we present numerical results that illustrate the advantages of the proposed algorithms over state-of-the-art benchmark schemes.
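
Schematically, and with symbols that are our own shorthand rather than the paper's notation, the joint design can be read as a weighted-sum problem of the form

$$
\min_{\rho,\,\mathbf{b},\,\mathbf{f},\,\mathbf{p}} \; w_T\, T(\rho,\mathbf{b},\mathbf{f},\mathbf{p}) + w_E\, E(\rho,\mathbf{b},\mathbf{f},\mathbf{p})
\quad \text{s.t.} \quad 0 \le \rho \le 1,\;\; \textstyle\sum_k b_k \le B,\;\; 0 \le f_i \le f_i^{\max},\;\; 0 \le p_i \le p_i^{\max},
$$

where $\rho$ is the computation offloading ratio, $\mathbf{b}$ the bandwidth split across the two frequency bands, $\mathbf{f}$ the processor speeds, $\mathbf{p}$ the transmit powers of user $A$ and the MERS, and $T$, $E$ the execution delay and energy consumption weighted by $w_T$, $w_E$.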
The advances in deep neural networks (DNN) have significantly enhanced real-time detection of anomalous data in IoT applications. However, the complexity-accuracy-delay dilemma persists: complex DNN models offer higher accuracy, but typical IoT devices can barely afford the computation load, and the remedy of offloading the load to the cloud incurs long delay. In this paper, we address this challenge by proposing an adaptive anomaly detection scheme with hierarchical edge computing (HEC). Specifically, we first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer. Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network. We also incorporate a parallelism policy training method to accelerate the training process by taking advantage of distributed models. We build an HEC testbed using real IoT devices, implement and evaluate our contextual-bandit approach with both univariate and multivariate IoT datasets. In comparison with both baseline and state-of-the-art schemes, our adaptive approach strikes the best accuracy-delay tradeoff on the univariate dataset, and achieves the best accuracy and F1-score on the multivariate dataset with only negligibly longer delay than the best (but inflexible) scheme.
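
The adaptive selection step can be pictured as a contextual bandit that picks one of $K$ detection models (one per HEC layer) from the current context; the epsilon-greedy linear learner below is a simplified stand-in for the paper's reinforcement-learning policy network, and the reward shaping is an assumption.

# Hedged sketch: contextual-bandit selection of which HEC-layer model handles a sample.
import numpy as np

class EpsilonGreedyModelSelector:
    def __init__(self, n_models, context_dim, epsilon=0.1, lr=0.01):
        self.W = np.zeros((n_models, context_dim))  # one linear scorer per model
        self.epsilon, self.lr = epsilon, lr

    def select(self, context):
        """Pick a model index given a 1-D context vector (e.g., recent data statistics)."""
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.W))
        return int(np.argmax(self.W @ context))

    def update(self, model_idx, context, reward):
        """Reward trades off detection accuracy against the incurred delay."""
        predicted = self.W[model_idx] @ context
        self.W[model_idx] += self.lr * (reward - predicted) * context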