
Crowdsensing Game with Demand Uncertainties: A Deep Reinforcement Learning Approach

Posted by: Jiang Zhang
Publication date: 2018
Research field: Information engineering
Paper language: English


The explosive increase of smartphones with powerful built-in sensors such as GPS, accelerometers, gyroscopes, and cameras has made the design of crowdsensing applications possible, creating a new interface between human beings and their living environment. To date, various mobile crowdsensing applications have been designed in which crowdsourcers employ mobile users (MUs) to complete the required sensing tasks. In this paper, emerging learning-based techniques are leveraged to address the crowdsensing game with demand uncertainties while protecting the private information of MUs. First, a novel economic model for mobile crowdsensing is designed that takes the MUs' resource constraints and demand uncertainties into consideration. Second, an incentive mechanism based on a Stackelberg game is provided, in which the sensing platform (SP) is the leader and the MUs are the followers. The existence and uniqueness of the Stackelberg Equilibrium (SE) are then proven, and a procedure for computing the SE is given. Furthermore, a dynamic incentive mechanism (DIM) based on a deep reinforcement learning (DRL) approach is investigated that does not require knowledge of the MUs' private information. It enables the SP to learn the optimal pricing strategy directly from game experience, without any prior knowledge about the MUs. Finally, numerical simulations are implemented to evaluate the performance and theoretical properties of the proposed mechanism and approach.
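As a rough illustration of the leader-follower pricing loop described in the abstract, the sketch below lets a platform learn a posted price from reward feedback alone, with the users' private costs hidden from the learner. The utility functions, budgets, and the use of simple tabular Q-learning (rather than the paper's full DRL machinery) are all assumptions made for illustration.

```python
import numpy as np

# Illustrative leader-follower pricing loop: the sensing platform (SP) quotes
# a unit price, each mobile user (MU) plays a best response given its private
# cost, and the SP learns the price from reward feedback alone. Utility forms
# and all constants are assumptions, not the paper's model.

rng = np.random.default_rng(0)
prices = np.linspace(0.1, 2.0, 20)         # discretized SP pricing actions
mu_costs = rng.uniform(0.2, 1.0, size=10)  # MUs' private unit sensing costs

def mu_best_response(price, cost, budget=5.0):
    # An MU contributes effort only when the price covers its private cost;
    # effort grows with the price-cost margin, capped by a resource budget.
    return min(max(price - cost, 0.0) / cost, budget)

def sp_reward(price):
    efforts = np.array([mu_best_response(price, c) for c in mu_costs])
    data_value = 4.0 * np.log1p(efforts.sum())  # concave value of sensed data
    return data_value - price * efforts.sum()   # value minus total payments

# Single-state Q-learning over prices, standing in for the paper's DRL agent.
q = np.zeros(len(prices))
for step in range(5000):
    eps = max(0.05, 1.0 - step / 2500)           # decaying exploration
    a = rng.integers(len(prices)) if rng.random() < eps else int(q.argmax())
    q[a] += 0.1 * (sp_reward(prices[a]) - q[a])  # running reward estimate

print(f"learned price: {prices[q.argmax()]:.2f}, reward: {q.max():.2f}")
```

With the concave value function assumed here, the learned price settles near the profit-maximizing point of the followers' aggregate best response, which is the behavior the DIM is designed to reach without observing the MUs' costs.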


Read also

Sparse Mobile CrowdSensing (MCS) is a novel MCS paradigm in which data inference is incorporated into the MCS process to reduce sensing costs while guaranteeing data quality. Since the sensed data from different cells (sub-areas) of the target sensing area will probably lead to diverse levels of inference data quality, cell selection (i.e., choosing which cells of the target area to collect sensed data from participants) is a critical issue that impacts the total amount of data that needs to be collected (i.e., the data collection cost) for ensuring a certain level of quality. To address this issue, this paper proposes a Deep Reinforcement learning based Cell selection mechanism for Sparse MCS, called DR-Cell. First, we properly model the key concepts in reinforcement learning, including state, action, and reward, and then propose to use a deep recurrent Q-network to learn the Q-function that helps decide which cell is the better choice under a certain state during cell selection. Furthermore, we leverage transfer learning techniques to reduce the amount of data required for training the Q-function when multiple correlated MCS tasks need to be conducted in the same target area. Experiments on various real-life sensing datasets verify the effectiveness of DR-Cell over state-of-the-art cell selection mechanisms in Sparse MCS, reducing the number of sensed cells by up to 15% under the same data-inference quality guarantee.
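The following is a minimal sketch of the recurrent Q-network idea behind a DR-Cell-style selector, assuming a toy state encoding (a binary mask of already-sensed cells); the grid size, network dimensions, and state features are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# Toy recurrent Q-network for cell selection: the state (here, just a mask of
# already-sensed cells) passes through a GRU so Q-values can depend on the
# selection history; the highest-Q unsensed cell is chosen next. Dimensions
# and the state encoding are assumptions for illustration.

N_CELLS = 36  # e.g., a 6x6 grid of sub-areas

class DRQN(nn.Module):
    def __init__(self, n_cells, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_cells, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_cells)   # one Q-value per cell

    def forward(self, state_seq, h0=None):
        out, h = self.gru(state_seq, h0)
        return self.head(out[:, -1]), h          # Q-values at the last step

net = DRQN(N_CELLS)
sensed = torch.zeros(1, 1, N_CELLS)              # nothing sensed yet
with torch.no_grad():
    q_values, hidden = net(sensed)
    q_values[0, sensed[0, -1].bool()] = -1e9     # mask already-sensed cells
    print("next cell to sense:", int(q_values.argmax()))
```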
To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents' policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.
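Here is a small sketch of how a joint-policy correlation diagnostic can be computed from cross-play returns, assuming a proportional-loss form (diagonal mean minus off-diagonal mean, normalized by the diagonal mean); the returns matrix below is synthetic.

```python
import numpy as np

# Joint-policy correlation (JPC) from cross-play: train several independent
# instances of the same two-player game, then play player 1 from instance i
# against player 2 from instance j and record the mean return. The matrix
# below is synthetic; the proportional-loss form is one way to summarize it.

returns = np.array([
    [30.0, 12.0, 11.0],   # instance 0's player 1 vs. players 2 from 0..2
    [13.0, 29.0, 10.0],
    [11.0, 12.0, 31.0],
])

n = returns.shape[0]
d = np.trace(returns) / n                              # same-instance mean return
o = (returns.sum() - np.trace(returns)) / (n * n - n)  # cross-play mean return
jpc = (d - o) / d                                      # proportional loss from cross-play
print(f"diagonal={d:.1f}, off-diagonal={o:.1f}, JPC={jpc:.2f}")
# JPC near 0: policies generalize across co-players; a large value indicates
# overfitting to the specific partners seen during training.
```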
Yufei Ye, Xiaoqin Ren, Jin Wang (2018)
With the rapid development of deep learning, deep reinforcement learning (DRL) has begun to appear in the field of resource scheduling in recent years. Building on previous DRL research in the literature, we introduce the online resource scheduling algorithm DeepRM2 and the offline resource scheduling algorithm DeepRM_Off. Compared with the state-of-the-art DRL algorithm DeepRM and with heuristic algorithms, our proposed algorithms converge faster and schedule more efficiently with regard to average slowdown time, job completion time, and rewards.
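For context on the headline metric, here is a minimal sketch of how average slowdown is computed for a single-machine schedule; the job list and the first-come-first-served comparison are illustrative, not the DeepRM2 setup.

```python
# Average slowdown for a single-machine schedule: a job's slowdown is its
# time in the system (waiting + running) divided by its ideal run time, so
# short jobs stuck behind long ones are penalized heavily. Jobs are synthetic.

jobs = [   # (arrival_time, duration)
    (0.0, 2.0),
    (0.5, 8.0),
    (1.0, 1.0),
]

def avg_slowdown(jobs, order):
    """Simulate running jobs one at a time in the given order."""
    t, total = 0.0, 0.0
    for i in order:
        arrival, duration = jobs[i]
        t = max(t, arrival) + duration       # finish time of this job
        total += (t - arrival) / duration    # slowdown = time in system / run time
    return total / len(jobs)

print("arrival order:", round(avg_slowdown(jobs, [0, 1, 2]), 2))
print("short first:  ", round(avg_slowdown(jobs, [0, 2, 1]), 2))
```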
Mobile crowdsensing has shown great potential to address large-scale data sensing problems by allocating sensing tasks to pervasive mobile users. The mobile users will participate in a crowdsensing platform if they can receive a satisfactory reward. In this paper, to effectively and efficiently recruit a sufficient number of mobile users, i.e., participants, we investigate the optimal incentive mechanism of a crowdsensing service provider. We apply a two-stage Stackelberg game to analyze the participation level of the mobile users and the optimal incentive mechanism of the crowdsensing service provider using backward induction. To motivate the participants, the incentive is designed by taking into account the social network effects from the underlying mobile social domain. For example, in a crowdsensing-based road traffic information sharing application, a user obtains a better and more accurate traffic report if more users join and share their road information. We derive analytical expressions for both the discriminatory incentive and the uniform incentive mechanisms. To fit practical scenarios, we further formulate a Bayesian Stackelberg game with incomplete information to analyze the interaction between the crowdsensing service provider and the mobile users, where the social structure information (the social network effects) is uncertain. The existence and uniqueness of the Bayesian Stackelberg equilibrium are validated by identifying the best response strategies of the mobile users. Numerical results corroborate the fact that network effects tremendously stimulate a higher mobile participation level and greater revenue for the crowdsensing service provider. In addition, the social structure information helps the crowdsensing service provider achieve a greater revenue gain.
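A compact sketch of the backward-induction logic for a uniform incentive with network effects, under assumed quadratic effort costs and a linear network-effect term; none of the functional forms or constants come from the paper.

```python
import numpy as np

# Backward induction for a uniform incentive, with assumed forms. Stage 2:
# each of n MUs picks effort x to maximize u = (r + g*X_others)*x - c*x**2,
# where g > 0 models the network effect (participating is worth more when
# others participate). Stage 1: the provider picks the reward r. None of the
# functional forms or constants below come from the paper.

n, c, g = 20, 1.0, 0.05            # users, effort-cost weight, network effect

def stage2_effort(r):
    # Symmetric fixed point of x = (r + g*(n-1)*x) / (2*c); best responses
    # contract to it when 2*c > g*(n-1).
    return r / (2 * c - g * (n - 1))

def provider_profit(r):
    total = n * stage2_effort(r)
    return 10.0 * np.log1p(total) - r * total   # concave value minus payout

rewards = np.linspace(0.01, 3.0, 300)
best = rewards[np.argmax([provider_profit(r) for r in rewards])]
print(f"optimal uniform incentive r* = {best:.2f}, "
      f"per-user effort = {stage2_effort(best):.2f}")
```

Note how the network-effect term g shrinks the denominator in stage 2, so the same reward elicits more effort as g grows, matching the abstract's observation that network effects stimulate participation.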
We present a novel negotiation model that allows an agent to learn how to negotiate during concurrent bilateral negotiations in unknown and dynamic e-markets. The agent uses an actor-critic architecture with model-free reinforcement learning to learn a strategy expressed as a deep neural network. We pre-train the strategy by supervision from synthetic market data, thereby decreasing the exploration time required for learning during negotiation. As a result, we can build automated agents for concurrent negotiations that can adapt to different e-market settings without the need to be pre-programmed. Our experimental evaluation shows that our deep reinforcement learning-based agents outperform two existing well-known negotiation strategies in one-to-many concurrent bilateral negotiations for a range of e-market settings.
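As a rough sketch of the architecture described above, the snippet below builds a tiny actor-critic pair and pre-trains the actor by supervision on synthetic offer data; the state features, concession target, and network sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Tiny actor-critic pair for negotiation: the actor maps a state (time left,
# opponent's last offer) to a counter-offer in [0, 1]; the critic scores the
# state and would be trained during the RL phase. Only the supervised
# pre-training of the actor on synthetic market data is shown here. All
# features, targets, and sizes are assumptions for illustration.

actor = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
critic = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

t = torch.rand(512, 1)                       # normalized negotiation time
opp = torch.rand(512, 1)                     # opponent's last offer in [0, 1]
states = torch.cat([t, opp], dim=1)
target_offer = 1.0 - 0.5 * t                 # simple time-based concession curve

opt = torch.optim.Adam(actor.parameters(), lr=1e-2)
for _ in range(200):                         # supervised warm start of the actor
    loss = nn.functional.mse_loss(actor(states), target_offer)
    opt.zero_grad()
    loss.backward()
    opt.step()

state = torch.tensor([[0.9, 0.4]])           # late in the negotiation
print("offer:", float(actor(state)),
      "| critic value (untrained):", float(critic(state)))
```

The warm start mirrors the abstract's design choice: supervision from synthetic market data cuts the exploration the agent would otherwise need before RL fine-tuning during live negotiations.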