Complex networks tend to display communities: groups of nodes that are cohesively connected among themselves and sparsely connected to the remainder of the network. Detecting such communities is an important computational problem, since it provides insight into the functionality of networks. Investigating community structure in a dynamic network, where the network is subject to change, is even more challenging. This paper presents a game-theoretic technique for detecting community structure in dynamic as well as static complex networks. In our method, each node takes the role of a player that attempts to gain a higher payoff by joining one or more communities or by switching between them. The goal of the game is to reveal the community structure formed by these players by finding a Nash equilibrium among them. To the best of our knowledge, this is the first game-theoretic algorithm able to extract overlapping communities from either static or dynamic networks. We present experimental results illustrating the effectiveness of the proposed method on both synthetic and real-world networks.
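The core mechanic, nodes repeatedly making payoff-improving community moves until no node can improve, can be sketched with a simple local-search loop. The payoff below (internal neighbors minus a size penalty), the penalty weight, and the toy graph are illustrative assumptions rather than the paper's payoff function, and this sketch forms disjoint rather than overlapping communities.

# A minimal sketch of best-response community formation on a toy graph.
# The payoff (internal neighbors minus a size penalty) is an illustrative
# stand-in, not the paper's payoff function.

def best_response_communities(adj, penalty=0.5, max_rounds=100):
    label = {v: v for v in adj}          # start with every node in its own community

    def payoff(v, c):
        internal = sum(1 for u in adj[v] if label[u] == c)
        size = sum(1 for u in label if label[u] == c and u != v)
        return internal - penalty * size

    for _ in range(max_rounds):
        changed = False
        for v in adj:
            candidates = {label[u] for u in adj[v]} | {label[v]}
            best = max(candidates, key=lambda c: payoff(v, c))
            if best != label[v]:
                label[v] = best
                changed = True
        if not changed:                  # no node can improve: a Nash-style fixed point
            break
    return label

# Two triangles joined by a single edge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(best_response_communities(adj))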
We present the design and analysis of a multi-level game-theoretic model of hierarchical policy-making, inspired by policy responses to the COVID-19 pandemic. Our model captures the potentially mismatched priorities among a hierarchy of policy-makers (e.g., federal, state, and local governments) with respect to two main cost components that depend on policy strength in opposite ways, such as post-intervention infection rates and the cost of policy implementation. Our model further includes a crucial third factor in decisions: a cost of non-compliance with the policy-maker immediately above in the hierarchy, such as a state's non-compliance with federal policies. Our first contribution is a closed-form approximation of a recently published agent-based model to compute the number of infections for any implemented policy. Second, we present a novel equilibrium selection criterion that addresses common issues with equilibrium multiplicity in our setting. Third, we propose a hierarchical algorithm based on best-response dynamics for computing an approximate equilibrium of the hierarchical policy-making game consistent with our solution concept. Finally, we present an empirical investigation of equilibrium policy strategies in this game in terms of the extent of free riding and the fairness of the cost distribution, as a function of game parameters such as the degree of centralization and disagreements about policy priorities among the agents.
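The best-response structure can be sketched for a two-level hierarchy (one parent policy-maker and several children), where each player repeatedly picks the policy strength minimizing its own cost given the others. The quadratic cost terms, the weights, and the grid search over policy strengths in [0, 1] are illustrative assumptions, not the paper's closed-form infection model or its hierarchical algorithm.

# A minimal sketch of best-response dynamics in a two-level policy hierarchy.

def child_cost(x, parent_x, w_inf, w_impl, w_comp):
    infection = w_inf * (1.0 - x) ** 2             # weaker policy -> more infections
    implementation = w_impl * x ** 2               # stronger policy -> higher cost
    non_compliance = w_comp * (x - parent_x) ** 2  # penalty for deviating from parent
    return infection + implementation + non_compliance

def parent_cost(parent_x, child_xs, w_inf, w_impl):
    infection = w_inf * sum((1.0 - x) ** 2 for x in child_xs)
    return infection + w_impl * parent_x ** 2

GRID = [i / 20 for i in range(21)]                 # candidate policy strengths in [0, 1]

def best_response_dynamics(n_children=3, rounds=50):
    parent_x, child_xs = 0.5, [0.5] * n_children
    for _ in range(rounds):
        parent_x = min(GRID, key=lambda p: parent_cost(p, child_xs, 2.0, 1.0))
        child_xs = [min(GRID, key=lambda x: child_cost(x, parent_x, 1.0, 2.0, 0.5))
                    for _ in range(n_children)]
    return parent_x, child_xs

print(best_response_dynamics())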
Advanced Persistent Threats (APTs) infiltrate cyber systems and compromise specifically targeted data and/or resources through a sequence of stealthy attacks consisting of multiple stages. Dynamic information flow tracking has been proposed to detect APTs. In this paper, we develop a multi-stage dynamic game of information flow tracking for resource-efficient detection of APTs. The game evolves on an information flow graph, whose nodes are processes and objects (e.g., files, network endpoints) in the system and whose edges capture the interactions between them. Each stage of the game has pre-specified targets, characterized by a set of nodes of the graph, and the goal of the APT is to reach a target node of that stage while evading detection. The goal of the defender is to maximize the detection probability while minimizing the performance overhead on the system. The resource costs of the players differ and the information structure is asymmetric, resulting in a nonzero-sum, imperfect-information game. We first calculate the best responses of the players and characterize the set of Nash equilibria for single-stage attacks. Subsequently, we provide a polynomial-time algorithm to compute a correlated equilibrium for the multi-stage attack case. Finally, we evaluate our model and algorithms on real-world nation-state attack data obtained from the Refinable Attack Investigation system.
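For intuition, a correlated equilibrium of a small defender-vs-APT matrix game can be computed with a standard linear program over joint action distributions. The 2x2 payoff matrices below are made up for illustration, and this generic LP is not the paper's polynomial-time multi-stage algorithm.

# A minimal sketch: correlated equilibrium of a toy bimatrix game via an LP.

import numpy as np
from scipy.optimize import linprog

U_def = np.array([[ 2.0, -1.0],
                  [ 0.0,  1.0]])   # defender payoffs: rows = defender actions
U_apt = np.array([[-2.0,  1.0],
                  [ 1.0, -1.0]])   # APT payoffs: columns = APT actions
n, m = U_def.shape

A_ub, b_ub = [], []
# Defender obedience constraints: for recommended action i and deviation k,
#   sum_j p(i, j) * (U_def[i, j] - U_def[k, j]) >= 0.
for i in range(n):
    for k in range(n):
        if i == k:
            continue
        row = np.zeros((n, m))
        row[i, :] = -(U_def[i, :] - U_def[k, :])
        A_ub.append(row.ravel())
        b_ub.append(0.0)
# APT obedience constraints, symmetric over columns.
for j in range(m):
    for k in range(m):
        if j == k:
            continue
        row = np.zeros((n, m))
        row[:, j] = -(U_apt[:, j] - U_apt[:, k])
        A_ub.append(row.ravel())
        b_ub.append(0.0)

# Pick the correlated equilibrium maximizing total payoff; p must be a distribution.
c = -(U_def + U_apt).ravel()
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.ones((1, n * m)), b_eq=[1.0], bounds=(0, 1))
print(res.x.reshape(n, m))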
Advanced Persistent Threats (APTs) are stealthy, customized attacks by intelligent adversaries. This paper deals with the detection of APTs that infiltrate cyber systems and compromise specifically targeted data and/or infrastructures. Dynamic information flow tracking is a trace-based detection mechanism against APTs that taints suspicious information flows in the system and performs security analysis when tainted data is used in an unauthorized manner. In this paper, we develop an analytical model for resource-efficient detection of APTs using an information flow tracking game. The game is a nonzero-sum, turn-based, stochastic game with asymmetric information, as the defender cannot distinguish whether an incoming flow is malicious or benign and hence has only partial state observation. We analyze the equilibria of the game and prove that a Nash equilibrium is given by a solution to the minimum-capacity cut-set problem on a flow network derived from the system, where the edge capacities are obtained from the cost of performing security analysis. Finally, we implement our algorithm on a real-world dataset for a data exfiltration attack, augmented with false-negative and false-positive rates, and compute an optimal defender strategy.
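The equilibrium characterization reduces to a standard minimum-cut computation: choose the cheapest set of flows to analyze so that every path from the attack entry point to the target is covered. The toy information flow graph and its capacities (analysis costs) below are made-up examples, not data from the paper.

# A minimal sketch of the min-cut computation the equilibrium reduces to.

import networkx as nx

G = nx.DiGraph()
# Process/object edges; capacity = cost of performing security analysis on that flow.
G.add_edge("entry", "proc_a", capacity=3.0)
G.add_edge("entry", "proc_b", capacity=2.0)
G.add_edge("proc_a", "file_x", capacity=1.0)
G.add_edge("proc_b", "file_x", capacity=4.0)
G.add_edge("file_x", "target", capacity=5.0)

cut_value, (src_side, sink_side) = nx.minimum_cut(G, "entry", "target")
cut_edges = [(u, v) for u, v in G.edges
             if u in src_side and v in sink_side]
print("total analysis cost:", cut_value)
print("flows to analyze:", cut_edges)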
Federated learning is a distributed learning paradigm in which multiple agents, each with access only to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy of federated learning, but also to provide guarantees on social-good properties such as total error. One branch of this research has taken a game-theoretic approach; in particular, prior work has viewed federated learning as a hedonic game, where error-minimizing players arrange themselves into federating coalitions. This past work proves the existence of stable coalition partitions, but leaves open a wide range of questions, including how far from optimal these stable solutions are. In this work, we motivate and define a notion of optimality given by the average error rates among federating agents (players). First, we provide, and prove the correctness of, an efficient algorithm to calculate an optimal (error-minimizing) arrangement of players. Next, we analyze the relationship between the stability and optimality of an arrangement. We show that for some regions of parameter space, all stable arrangements are optimal (Price of Anarchy equal to 1). However, this is not true for all settings: there exist examples of stable arrangements with higher cost than optimal (Price of Anarchy greater than 1). Finally, we give the first constant-factor bound on the performance gap between stability and optimality, proving that the total error of the worst stable solution can be no higher than 9 times the total error of an optimal solution (Price of Anarchy bound of 9).
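As a point of reference for what an "optimal arrangement" means, the sketch below brute-forces all coalition partitions of a handful of agents and picks the one with the lowest total error. The error function (noise shrinking with pooled sample size plus a small heterogeneity penalty per extra coalition member) and the local dataset sizes are stand-in assumptions, not the paper's error model or its efficient algorithm.

# A minimal sketch: brute-force search for an error-optimal coalition partition.

from itertools import combinations

samples = {"a": 10, "b": 40, "c": 5, "d": 25}   # local dataset sizes (made up)

def err(coalition):
    pooled = sum(samples[i] for i in coalition)
    per_member = 1.0 / pooled + 0.01 * (len(coalition) - 1)
    return per_member * len(coalition)           # total error of the coalition

def partitions(items):
    # Enumerate all set partitions: fix the block containing the first item,
    # then recursively partition whatever is left.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for k in range(len(rest) + 1):
        for others in combinations(rest, k):
            block = (first,) + others
            remaining = [i for i in rest if i not in others]
            for p in partitions(remaining):
                yield [block] + p

players = list(samples)
best = min(partitions(players), key=lambda p: sum(err(c) for c in p))
print(best, sum(err(c) for c in best))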
Coded distributed computing (CDC) has emerged as a promising approach because it enables computation tasks to be carried out in a distributed manner while mitigating straggler effects, which often account for long overall completion times. Specifically, by using polynomial codes, the computed results from only a subset of edge servers are sufficient to reconstruct the final result. However, incentive mechanisms that encourage the edge servers to complete CDC tasks have not been studied systematically. In this paper, we propose a tractable two-level game-theoretic approach to incentivize the edge servers to complete the CDC tasks. In the lower level, a hedonic coalition formation game is formulated in which the edge servers share their resources within their coalitions; by forming coalitions, the edge servers have more Central Processing Unit (CPU) power to complete the computation tasks. In the upper level, given the CPU power of the coalitions of edge servers, an all-pay auction is designed to incentivize the edge servers to participate in the CDC tasks. In the all-pay auction, the bids of the edge servers are represented by the allocation of their CPU power to the CDC tasks, and the rewards are allocated to the winners so as to maximize the utility of the cloud server. Simulation results show that the edge servers are incentivized to allocate more CPU power when multiple rewards are offered, i.e., when there are multiple winners, instead of rewarding only the edge server with the largest CPU power allocation. Moreover, the utility of the cloud server is maximized when it offers multiple homogeneous rewards rather than heterogeneous rewards.
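The payoff bookkeeping of the upper-level all-pay auction can be sketched directly: servers bid CPU power, the top bidders win the offered rewards, every bidder pays its bid, and the cloud server's utility is the value of the CPU it attracts minus the rewards it pays out. The linear valuation, the cost per unit of CPU, and the example bid profile are illustrative assumptions, not the paper's utility functions or simulation setup.

# A minimal sketch of payoffs in an all-pay auction with multiple rewards.

def auction_outcome(bids, rewards, cpu_value=1.5, cost_per_unit=1.0):
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    server_util = [-cost_per_unit * b for b in bids]   # all-pay: everyone pays its bid
    for rank, winner in enumerate(order[:len(rewards)]):
        server_util[winner] += rewards[rank]           # i-th highest bid wins i-th reward
    cloud_util = cpu_value * sum(bids) - sum(rewards)  # value of attracted CPU minus payouts
    return server_util, cloud_util

# Same total reward budget, split as one prize versus two homogeneous prizes.
bids = [0.8, 0.6, 0.4, 0.2]                            # CPU power allocations (made up)
print(auction_outcome(bids, rewards=[1.0]))
print(auction_outcome(bids, rewards=[0.5, 0.5]))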