
A Mean Field Games Model for Cryptocurrency Mining

Posted by: A. Max Reppen
Publication date: 2019
Research field: Finance
Paper language: English

We propose a mean field game model to study how centralization of reward and computational power occurs in Bitcoin-like cryptocurrencies. Miners compete against each other for mining rewards by increasing their computational power. This leads to a novel mean field game of jump intensity control, which we solve explicitly for miners maximizing exponential utility, and handle numerically in the case of miners with power utilities. We show that the heterogeneity of their initial wealth distribution leads to greater imbalance of the reward distribution, a "rich get richer" effect. This concentration phenomenon is aggravated by a higher bitcoin price and reduced by competition. Additionally, an advanced miner with cost advantages, such as access to cheaper electricity, contributes a significant amount of computational power in equilibrium. Hence, cost efficiency can also result in the type of centralization seen among miners of cryptocurrencies.
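The abstract describes the miners' interaction as a mean field game of jump intensity control, but does not reproduce the model. The following is only a minimal simulation sketch of the standard mining mechanism such a model builds on: a miner's block rewards arrive as a Poisson process whose intensity is proportional to the miner's share of total network hash power, while running hash power incurs a cost. All names and numbers (`pop_hash`, `cost`, the BTC price, and so on) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Illustrative sketch (not the paper's model or calibration): one miner's wealth
# when block rewards arrive as a Poisson process whose intensity is proportional
# to the miner's share of total network hash power, while hash power incurs a
# running electricity cost. All parameter names and values are assumptions.

rng = np.random.default_rng(0)

T = 1.0              # horizon in years
dt = 1e-3            # simulation time step
D = 52_560.0         # blocks mined per year network-wide (~1 per 10 minutes)
P = 6.25             # block reward in BTC (assumed)
price = 30_000.0     # USD per BTC (assumed)
cost = 8_000.0       # USD per unit of hash power per year (assumed)
pop_hash = 1e6       # aggregate hash power of all other miners (assumed fixed)

def simulate(my_hash, x0=0.0):
    """Terminal wealth of a miner who runs `my_hash` units of hash power."""
    x = x0
    for _ in range(int(T / dt)):
        share = my_hash / (my_hash + pop_hash)   # chance the next block is ours
        lam = D * share                          # our block-arrival intensity
        blocks = rng.poisson(lam * dt)           # blocks we find in this step
        x += blocks * P * price - cost * my_hash * dt
    return x

# Profit first rises and then falls in hash power: the marginal reward shrinks
# as our share grows, while the marginal cost stays constant.
for h in (0.0, 1e4, 5e4, 2e5):
    samples = [simulate(h) for _ in range(200)]
    print(f"hash={h:>8.0f}  mean annual profit ~ {np.mean(samples):>14,.0f} USD")
```

In the mean field game itself, `pop_hash` is not exogenous: it is the aggregate of the intensity controls chosen by the population of miners, and the equilibrium is a fixed point of that interaction.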

Read also

In the context of simple finite-state discrete time systems, we introduce a generalization of mean field game solution, called correlated solution, which can be seen as the mean field game analogue of a correlated equilibrium. Our notion of solution is justified in two ways: We prove that correlated solutions arise as limits of exchangeable correlated equilibria in restricted (Markov open-loop) strategies for the underlying $N$-player games, and we show how to construct approximate $N$-player correlated equilibria starting from a correlated solution to the mean field game.
Mean field games are concerned with the limit of large-population stochastic differential games where the agents interact through their empirical distribution. In the classical setting, the number of players is large but fixed throughout the game. However, in various applications, such as population dynamics or economic growth, the number of players can vary across time, which may lead to different Nash equilibria. For this reason, we introduce a branching mechanism in the population of agents and obtain a variation on the mean field game problem. As a first step, we study a simple model using a PDE approach to illustrate the main differences with the classical setting. We prove existence of a solution and show that it provides an approximate Nash equilibrium for large population games. We also present a numerical example for a linear-quadratic model. Then we study the problem in a general setting by a probabilistic approach. It is based upon the relaxed formulation of stochastic control problems which allows us to obtain a general existence result.
We propose and investigate a general class of discrete time and finite state space mean field game (MFG) problems with potential structure. Our model incorporates interactions through a congestion term and a price variable. It also allows hard constraints on the distribution of the agents. We analyze the connection between the MFG problem and two optimal control problems in duality. We present two families of numerical methods and detail their implementation: (i) primal-dual proximal methods (and their extension with nonlinear proximity operators), (ii) the alternating direction method of multipliers (ADMM) and a variant called ADM-G. We give some convergence results. Numerical results are provided for two examples with hard constraints.
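The abstract lists the alternating direction method of multipliers (ADMM) among its numerical methods. The sketch below is a generic ADMM iteration for a small lasso problem, shown only to illustrate the splitting structure (quadratic subproblem, proximal step, dual update); it is not the paper's MFG discretization, and the problem data, penalty `rho`, and iteration count are assumptions.

```python
import numpy as np

# Generic ADMM sketch for a small lasso problem,
#   minimize 0.5*||A x - b||^2 + lam*||z||_1  subject to  x = z,
# shown only to illustrate the splitting structure. NOT the paper's MFG scheme;
# the data, penalty `rho`, and iteration count below are illustrative assumptions.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)    # u is the scaled dual
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # cache the factorization
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step of the l1 term (soft-thresholding)
        z = soft_threshold(x + u, lam / rho)
        # dual update: accumulate the consensus residual x - z
        u += x - z
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b), 2))
```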
Entropy regularization has been extensively adopted to improve the efficiency, the stability, and the convergence of algorithms in reinforcement learning. This paper analyzes both quantitatively and qualitatively the impact of entropy regularization for Mean Field Game (MFG) with learning in a finite time horizon. Our study provides a theoretical justification that entropy regularization yields time-dependent policies and, furthermore, helps stabilize and accelerate convergence to the game equilibrium. In addition, this study leads to a policy-gradient algorithm for exploration in MFG. Under this algorithm, agents are able to learn the optimal exploration scheduling, with stable and fast convergence to the game equilibrium.
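Since the abstract points to an entropy-regularized policy-gradient algorithm, here is a hedged one-state (three-armed bandit) sketch of entropy-regularized policy gradient with a softmax policy, showing how the entropy bonus keeps the learned policy stochastic. It is not the paper's MFG learning algorithm; the reward vector, regularization weight `tau`, and learning rate are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: entropy-regularized policy gradient on a toy one-state problem
# (a three-armed bandit). Not the paper's MFG learning algorithm; the reward
# vector, regularization weight `tau`, and learning rate are assumptions.

rng = np.random.default_rng(0)
mean_reward = np.array([1.0, 0.8, 0.2])  # assumed expected reward per action
theta = np.zeros(3)                      # softmax policy parameters
tau = 0.1                                # entropy regularization weight
lr = 0.05                                # learning rate

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(5000):
    pi = softmax(theta)
    a = rng.choice(3, p=pi)
    r = mean_reward[a] + 0.1 * rng.normal()   # noisy sampled reward
    # Objective: E[r] + tau * H(pi). Score-function gradient for the reward
    # term, plus the exact gradient of the entropy of a softmax policy.
    grad_logp = -pi
    grad_logp[a] += 1.0                       # d log pi(a) / d theta
    H = -np.sum(pi * np.log(pi))
    entropy_grad = -pi * (np.log(pi) + H)     # d H(pi) / d theta
    theta += lr * (r * grad_logp + tau * entropy_grad)

# With tau > 0 the learned policy stays strictly stochastic instead of
# collapsing onto the single best arm.
print("learned policy:", np.round(softmax(theta), 3))
```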
We study the asymptotic organization among many optimizing individuals interacting in a suitable moderate way. We justify this limiting game by proving that its solution provides approximate Nash equilibria for large but finite player games. This proof depends upon the derivation of a law of large numbers for the empirical processes in the limit as the number of players tends to infinity. Because it is of independent interest, we prove this result in full detail. We characterize the solutions of the limiting game via a verification argument.