The explosive growth of smartphones with powerful built-in sensors such as GPS, accelerometers, gyroscopes, and cameras has made the design of crowdsensing applications possible, creating a new interface between human beings and their living environment. To date, various mobile crowdsensing applications have been designed in which crowdsourcers employ mobile users (MUs) to complete the required sensing tasks. In this paper, emerging learning-based techniques are leveraged to address the crowdsensing game under demand uncertainty while protecting the private information of MUs. First, a novel economic model for mobile crowdsensing is designed that takes the MUs' resource constraints and demand uncertainty into consideration. Second, an incentive mechanism based on a Stackelberg game is provided, in which the sensing platform (SP) is the leader and the MUs are the followers. The existence and uniqueness of the Stackelberg Equilibrium (SE) are then proven, and a procedure for computing the SE is given. Furthermore, a dynamic incentive mechanism (DIM) based on a deep reinforcement learning (DRL) approach is investigated that does not require knowledge of the MUs' private information; it enables the SP to learn the optimal pricing strategy directly from game experience without any prior knowledge about the MUs. Finally, numerical simulations are implemented to evaluate the performance and theoretical properties of the proposed mechanism and approach.
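The sketch below is a minimal, illustrative stand-in for the leader-follower interaction and the learning-based pricing idea described above; it is not the paper's exact model. The quadratic MU cost model, the logarithmic SP valuation, the two demand states, and all numerical parameters (MU_COSTS, PRICES, DEMAND_STATES) are assumptions made for illustration, and a simple contextual epsilon-greedy learner is used in place of the paper's DRL agent. The SP observes only its realized utility per round and never the MUs' private costs.

```python
"""
Illustrative sketch (assumptions, not the paper's model): a sensing platform (SP)
learns a per-demand-state reward price from repeated Stackelberg interactions with
mobile users (MUs) without observing the MUs' private sensing costs. A contextual
epsilon-greedy learner stands in for the paper's DRL-based dynamic incentive mechanism.
"""
import math
import random

random.seed(0)

# Hypothetical model parameters (illustrative assumptions)
MU_COSTS = [0.8, 1.2, 1.5, 2.0]                 # private unit costs c_i, unknown to the SP
PRICES = [0.5 + 0.25 * k for k in range(10)]    # discretized price actions for the SP
DEMAND_STATES = {"low": 4.0, "high": 8.0}       # valuation coefficient per demand level
EPS, ALPHA, ROUNDS = 0.1, 0.1, 20000

def mu_best_response(price, cost):
    """Follower: MU i maximizes r*x - c_i*x^2, giving x_i = r / (2*c_i)."""
    return price / (2.0 * cost)

def sp_utility(price, lam):
    """Leader: concave value of aggregate sensing effort minus total payment."""
    total = sum(mu_best_response(price, c) for c in MU_COSTS)
    return lam * math.log(1.0 + total) - price * total

# Estimated SP utility for each (demand state, price) pair
Q = {s: {p: 0.0 for p in PRICES} for s in DEMAND_STATES}

for _ in range(ROUNDS):
    state = random.choice(list(DEMAND_STATES))            # demand uncertainty each round
    if random.random() < EPS:
        price = random.choice(PRICES)                     # explore
    else:
        price = max(Q[state], key=Q[state].get)           # exploit current estimate
    reward = sp_utility(price, DEMAND_STATES[state])      # SP observes only its payoff
    Q[state][price] += ALPHA * (reward - Q[state][price]) # incremental value update

for state in DEMAND_STATES:
    best = max(Q[state], key=Q[state].get)
    print(f"demand={state}: learned price {best:.2f}, estimated utility {Q[state][best]:.3f}")
```

In this simplified setting the MUs' best responses are closed-form, so the learned prices can be checked against a direct grid search over PRICES; the point of the sketch is only to show how the leader can converge to a good pricing policy from observed payoffs alone.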