
Incentive Mechanism Design for Resource Sharing in Collaborative Edge Learning

Publication date: 2020
Language: English

In 5G and Beyond networks, Artificial Intelligence applications are expected to be increasingly ubiquitous. This necessitates a paradigm shift from the current cloud-centric model training approach to an Edge Computing-based collaborative learning scheme known as edge learning, in which model training is executed at the edge of the network. In this article, we first introduce the principles and technologies of collaborative edge learning. Then, we establish that a successful, scalable implementation of edge learning requires the communication, caching, computation, and learning resources (3C-L) of end devices and edge servers to be leveraged jointly and efficiently. However, users may not consent to contribute their resources without receiving adequate compensation. In consideration of the heterogeneity of edge nodes, e.g., in terms of available computation resources, we discuss the challenges of designing incentive mechanisms that facilitate resource sharing for edge learning. Furthermore, we present a case study on optimal auction design using Deep Learning to price fresh data contributed for edge learning. The performance evaluation shows the revenue-maximizing properties of our proposed auction over the benchmark schemes.
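As a loose, self-contained illustration of the revenue-maximization idea behind the case study (not the paper's deep-learning architecture), the sketch below fits a reserve price for a second-price auction from sampled device valuations; the helper names `revenue` and `fit_reserve` and the uniform-valuation assumption are ours.

```python
import numpy as np

def revenue(bids, reserve):
    """Mean revenue of a second-price auction with a reserve price.

    bids: (rounds, bidders) array of sampled valuations (truthful bids).
    """
    top = np.sort(bids, axis=1)[:, -2:]        # two highest bids per round
    second, first = top[:, 0], top[:, 1]
    sold = first >= reserve                    # item sells only above reserve
    price = np.maximum(second, reserve)        # winner pays max(2nd bid, reserve)
    return float(np.mean(np.where(sold, price, 0.0)))

def fit_reserve(bids, grid):
    """Pick the revenue-maximizing reserve price from a candidate grid."""
    return max(grid, key=lambda r: revenue(bids, r))

rng = np.random.default_rng(0)
bids = rng.uniform(0.0, 1.0, size=(100_000, 3))  # 3 devices, U[0,1] valuations
r_star = fit_reserve(bids, np.linspace(0.0, 1.0, 101))
print(f"fitted reserve  ~ {r_star:.2f}")         # theory: 0.5 for U[0,1]
print(f"revenue with reserve:    {revenue(bids, r_star):.3f}")
print(f"revenue without reserve: {revenue(bids, 0.0):.3f}")
```

For i.i.d. U[0,1] valuations, the theoretically optimal reserve is 0.5, which the grid search recovers; the paper replaces such hand-designed rules with a deep network trained to maximize revenue.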



Related research

Recent years have witnessed a rapid proliferation of smart Internet of Things (IoT) devices. Endowing IoT devices with intelligence requires effective machine learning paradigms, and federated learning is a promising solution for enabling IoT-based smart applications. In this paper, we present the primary design aspects of enabling federated learning at the network edge. We model the incentive-based interaction between a global server and participating devices via a Stackelberg game, in order to motivate the devices' participation in the federated learning process. We then present several open research challenges along with possible solutions. Finally, we provide an outlook on future research.
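To make the Stackelberg interaction concrete, here is a minimal sketch under assumed quadratic device costs: the server (leader) posts a reward rate, each device (follower) best-responds in closed form, and the server grid-searches its own utility. The cost coefficients and the logarithmic valuation are illustrative, not from the paper.

```python
import numpy as np

costs = np.array([0.8, 1.0, 1.5, 2.0])  # hypothetical per-device cost coefficients

def best_response(r, c):
    """Follower: maximize r*x - c*x^2, giving x = r / (2c)."""
    return r / (2.0 * c)

def server_utility(r):
    """Leader: concave value of total contribution minus total payment."""
    x = best_response(r, costs)
    total = x.sum()
    return 10.0 * np.log1p(total) - r * total

grid = np.linspace(0.01, 5.0, 500)
r_star = max(grid, key=server_utility)
print(f"Stackelberg reward rate ~ {r_star:.2f}, "
      f"device contributions = {np.round(best_response(r_star, costs), 3)}")
```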
A distributed machine learning platform needs to recruit many heterogeneous worker nodes to carry out computation simultaneously. As a result, the overall performance may be degraded by straggling workers. By introducing redundancy into the computation, coded machine learning can effectively improve runtime performance by recovering the final result from the first $k$ (out of $n$) workers to finish. While existing studies focus on designing efficient coding schemes, the design of proper incentives to encourage worker participation remains under-explored. This paper studies the platform's optimal incentive mechanism for motivating proper worker participation in coded machine learning, despite incomplete information about the heterogeneous workers' computation performance and costs. A key contribution of this work is to summarize workers' multi-dimensional heterogeneity in a one-dimensional metric, which guides the platform's efficient selection of workers under incomplete information with linear computation complexity. Moreover, we prove that the optimal recovery threshold $k$ is linearly proportional to the number of participants $n$ when the widely adopted MDS (Maximum Distance Separable) codes are used for data encoding. We also show that the platform's increased cost due to incomplete information vanishes when the number of workers is sufficiently large, although it does not decrease monotonically in the number of workers.
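The first-$k$-of-$n$ recovery property can be sanity-checked with a small simulation. The sketch below assumes shifted-exponential worker times (a common straggler model, not necessarily the paper's), with each worker processing a $1/k$ fraction of the data, and reports the mean completion time, i.e. the $k$-th order statistic of the worker finish times.

```python
import numpy as np

rng = np.random.default_rng(1)

def runtime(n, k, trials=20_000):
    """Mean completion time of (n, k) MDS-coded computation.

    Each worker's time is shifted-exponential, scaled by 1/k because
    the data is split into k parts; the job finishes when the k-th
    fastest of the n workers is done.
    """
    t = (1.0 + rng.exponential(scale=0.5, size=(trials, n))) / k
    return np.sort(t, axis=1)[:, k - 1].mean()

n = 12
for k in (12, 10, 8, 6):       # k = n is the uncoded baseline
    print(f"n={n}, k={k}: mean runtime = {runtime(n, k):.3f}")
```

Under this toy model, modest redundancy (k slightly below n) already cuts the impact of stragglers, at the price of more per-worker computation as k shrinks.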
Distributed machine learning (ML) at the network edge is a promising paradigm that can preserve both network bandwidth and the privacy of data providers. However, heterogeneous and limited computation and communication resources on edge servers (or edges) pose great challenges to distributed ML and give rise to a new paradigm of Edge Learning (i.e., edge-cloud collaborative machine learning). In this article, we propose a novel learning-to-learn framework for effective Edge Learning (EL) on heterogeneous edges with resource constraints. We first model the dynamic determination of the collaboration strategy (i.e., the allocation of local iterations at edge servers and global aggregations on the Cloud during the collaborative learning process) as an online optimization problem, so as to achieve a tradeoff between the performance of EL and the resource consumption of edge servers. Then, we propose an Online Learning for EL (OL4EL) framework based on the budget-limited multi-armed bandit model. OL4EL supports both synchronous and asynchronous learning patterns, and can be used for both supervised and unsupervised learning tasks. To evaluate the performance of OL4EL, we conducted both real-world testbed experiments and extensive simulations based on Docker containers, with Support Vector Machines and K-means as use cases. Experimental results demonstrate that OL4EL significantly outperforms state-of-the-art EL and other collaborative ML approaches in terms of the trade-off between learning performance and resource consumption.
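As a toy rendering of the budget-limited bandit idea (OL4EL's actual algorithm and reward model are in the paper), the sketch below runs UCB over candidate local-iteration counts, scaling the confidence index by per-round cost and stopping when the resource budget is exhausted; the arm costs and hidden gains are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Arms: candidate numbers of local iterations between global aggregations.
arms = [1, 2, 4, 8]
cost = np.array([1.0, 1.5, 2.5, 4.0])      # assumed resource cost per round
mean = np.array([0.30, 0.45, 0.55, 0.58])  # hidden learning gain per round

budget, spent = 200.0, 0.0
pulls = np.zeros(len(arms))
reward = np.zeros(len(arms))

t = 0
while spent + cost.min() <= budget:
    t += 1
    if t <= len(arms):                      # play each arm once first
        a = t - 1
    else:                                   # UCB index divided by cost, a
        ucb = reward / pulls + np.sqrt(2 * np.log(t) / pulls)
        a = int(np.argmax(ucb / cost))      # common budgeted-bandit heuristic
    if spent + cost[a] > budget:
        break
    r = rng.normal(mean[a], 0.05)           # noisy observed learning gain
    pulls[a] += 1; reward[a] += r; spent += cost[a]

print("pulls per arm:", pulls.astype(int))
```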
Federated learning (FL) is a privacy-preserving machine learning paradigm that realizes collaborative model training by distributed clients. To accomplish an FL task, the task publisher pays financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients. Designing proper incentives for the FL clients is challenging because the task is trained privately by the clients. This paper proposes a contract-theory-based FL task training model that minimizes the incentive budget subject to the clients being individually rational (IR) and incentive compatible (IC) in each FL training round. We design a two-dimensional contract model by formally defining two private types of clients, namely data quality and computation effort. To effectively aggregate the trained models, a contract-based aggregator is proposed. We analyze the feasible and optimal contract solutions to the proposed contract model. Experimental results show that the generalization accuracy of the FL tasks can be improved by the proposed incentive mechanism with contract-based aggregation.
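The IR and IC requirements are easy to state in code. The sketch below collapses the paper's two-dimensional types (data quality, computation effort) into a single cost coefficient for brevity and checks a hand-made three-item contract menu; all numbers are illustrative, not from the paper.

```python
# Hypothetical client types: per-unit training cost coefficients,
# sorted from least to most efficient.
costs = [1.0, 0.8, 0.6]
# Contract menu: one (required effort e, payment p) item per type (toy values).
menu = [(0.4, 0.50), (0.7, 0.78), (1.0, 1.00)]

def utility(c, e, p):
    """Client utility: payment minus training cost."""
    return p - c * e

def check_ir_ic(costs, menu):
    for i, c in enumerate(costs):
        own = utility(c, *menu[i])
        # IR: accepting the intended contract is no worse than opting out.
        assert own >= -1e-9, f"IR violated for type {i}"
        # IC: no type gains by picking another type's contract.
        for j, item in enumerate(menu):
            if j != i and utility(c, *item) > own + 1e-9:
                raise AssertionError(f"IC violated: type {i} prefers contract {j}")
    print("menu is IR and IC for all types")

check_ir_ic(costs, menu)
```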
Yana Qin, Danye Wu, Zhiwei Xu (2020)
To enhance the quality and speed of data processing and protect the privacy and security of data, edge computing has been extensively applied to support data-intensive intelligent processing services at the edge. Among these data-intensive services, ensemble-learning-based services can naturally leverage the distributed computation and storage resources at edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, so as to make the most of the limited resources at edge devices in support of high-performance ensemble learning. To achieve this goal, we propose an adaptive in-network collaborative caching scheme for ensemble learning at the edge. First, an efficient data representation structure is proposed to record cached data across different nodes. In addition, we design a collaboration scheme that helps edge nodes cache valuable data for local ensemble learning by scheduling local caching according to a summarization of data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces both the learning latency and the transmission overhead.
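One plausible reading of the collaboration scheme, sketched below under our own assumptions: each node publishes a compact summary (here simply a count of who caches what), and a node's local scheduler keeps the items that are valuable locally but rare among neighbours. The scoring rule and toy data are ours, not the paper's.

```python
from collections import Counter

# Hypothetical per-node caches and per-item local values.
caches = {
    "edge-1": {"a", "b", "c"},
    "edge-2": {"b", "c", "d"},
    "edge-3": {"c", "e"},
}
CAPACITY = 3

def summarize(caches):
    """Compact global summary: how many nodes hold each item."""
    return Counter(item for cache in caches.values() for item in cache)

def score(item, local_value, summary):
    """Prefer locally valuable items that few other nodes already cache."""
    return local_value[item] / (1 + summary[item])

def schedule(node, local_value, caches):
    """Re-fill one node's cache using the shared summary."""
    summary = summarize(caches)
    candidates = set(local_value) | caches[node]
    keep = sorted(candidates,
                  key=lambda it: score(it, local_value, summary),
                  reverse=True)[:CAPACITY]
    caches[node] = set(keep)

local_value = {"a": 0.9, "d": 0.8, "e": 0.4, "c": 0.3, "b": 0.2}
schedule("edge-1", local_value, caches)
print("edge-1 now caches:", sorted(caches["edge-1"]))  # drops widely held items
```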
