
Green Security Game with Community Engagement

Added by Taoan Huang
Publication date: 2020
Research language: English





While game-theoretic models and algorithms have been developed to combat illegal activities, such as poaching and over-fishing, in green security domains, none of the existing work considers the crucial aspect of community engagement: community members are recruited by law enforcement as informants and can provide valuable tips, e.g., the location of ongoing illegal activities, to assist patrols. We fill this gap and (i) introduce a novel two-stage security game model for community engagement, with a bipartite graph representing the informant-attacker social network and a level-$\kappa$ response model for attackers inspired by cognitive hierarchy; (ii) provide complexity results and exact, approximate, and heuristic algorithms for selecting informants and allocating patrollers against level-$\kappa$ ($\kappa < \infty$) attackers; (iii) provide a novel algorithm to find the optimal defender strategy against level-$\infty$ attackers, which converts the problem of optimizing a parameterized fixed point into a bi-level optimization problem, where the inner level is just a linear program and the outer level has only a linear number of variables and a single linear constraint. We also evaluate the algorithms through extensive experiments.
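To make the bi-level structure concrete, here is a minimal, hypothetical sketch (not the paper's actual formulation): the inner level is a linear program parameterized by an outer variable, and the outer level optimizes that variable under a single bound constraint. All matrices and payoffs below are placeholder data, and the outer level is collapsed to one scalar for readability.

```python
# Minimal sketch of a bi-level problem with an LP inner level.
# All data here are placeholders, not the model from the paper.
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def inner_value(w):
    """Inner level: max c^T y s.t. A y <= b + d*w, y >= 0 (placeholder LP)."""
    c = np.array([1.0, 2.0])
    A = np.array([[1.0, 1.0], [2.0, 0.5]])
    b = np.array([4.0, 3.0])
    d = np.array([1.0, -0.5])
    res = linprog(-c, A_ub=A, b_ub=b + d * w, bounds=[(0, None)] * 2)
    return -res.fun  # optimal inner objective as a function of w

# Outer level: a single scalar variable with one box constraint, echoing the
# "linear number of variables and a single linear constraint" structure.
best = minimize_scalar(inner_value, bounds=(0.0, 1.0), method="bounded")
print(f"outer optimum w = {best.x:.3f}, inner value = {best.fun:.3f}")
```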



Related research

Zhaoyang Cheng, Guanpu Chen, 2021
In this paper, we employ a hypergame framework to analyze the single-leader-multiple-followers (SLMF) Stackelberg security game under two typical misinformed situations: misperception and deception. Using the hyper Nash equilibrium (HNE), we provide a stability criterion to analyze both the strategic stability and the cognitive stability of equilibria in SLMF games with misinformation. To this end, we find mild stability conditions under which equilibria with misperception and deception yield an HNE. Moreover, we analyze the robustness of the equilibria to reveal whether the players can maintain their profits.
Stackelberg security games are a critical tool for maximizing the utility of limited defense resources to protect important targets from an intelligent adversary. Motivated by green security, where the defender may only observe an adversary's response to defense on a limited set of targets, we study the problem of learning a defense that generalizes well to a new set of targets with novel feature values and combinations. Traditionally, this problem has been addressed via a two-stage approach where an adversary model is trained to maximize predictive accuracy without considering the defender's optimization problem. We develop an end-to-end game-focused approach, where the adversary model is trained to maximize a surrogate for the defender's expected utility. We show both in theory and in experimental results that our game-focused approach achieves higher defender expected utility than the two-stage alternative when data are limited.
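As a toy illustration of the contrast (purely hypothetical code, not the paper's architecture): a two-stage loss scores the adversary model by predictive accuracy, while a game-focused loss scores it by the defender utility obtained when the defender responds to the model's prediction. The softmax model, greedy coverage rule, and payoffs below are all assumptions for the sketch.

```python
# Toy contrast between two-stage and game-focused training objectives.
# Model, payoffs, and the greedy defender response are illustrative assumptions.
import numpy as np

def attacker_model(theta, features):
    """Softmax adversary model over targets given target features."""
    scores = features @ theta
    e = np.exp(scores - scores.max())
    return e / e.sum()

def two_stage_loss(theta, features, attacked):
    """Two-stage objective: negative log-likelihood of the observed attack."""
    return -np.log(attacker_model(theta, features)[attacked])

def game_focused_loss(theta, features, budget=1, u_cov=1.0, u_unc=-1.0):
    """Game-focused surrogate: negative defender expected utility when the
    defender greedily covers the targets the model predicts as most likely."""
    p = attacker_model(theta, features)
    coverage = np.zeros_like(p)
    coverage[np.argsort(p)[-budget:]] = 1.0
    return -float(np.sum(p * (coverage * u_cov + (1 - coverage) * u_unc)))
```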
A traditional assumption in game theory is that players are opaque to one another: if a player changes strategies, then this change does not affect the other players' choices of strategies. In many situations this is an unrealistic assumption. We develop a framework for reasoning about games where the players may be translucent to one another; in particular, a player may believe that if she were to change strategies, then the other players would also change strategies. Translucent players may achieve significantly more efficient outcomes than opaque ones. Our main result is a characterization of strategies consistent with appropriate analogues of common belief of rationality. Common Counterfactual Belief of Rationality (CCBR) holds if (1) everyone is rational, (2) everyone counterfactually believes that everyone else is rational (i.e., all players $i$ believe that everyone else would still be rational even if $i$ were to switch strategies), (3) everyone counterfactually believes that everyone else is rational and counterfactually believes that everyone else is rational, and so on. CCBR characterizes the set of strategies surviving iterated removal of minimax-dominated strategies: a strategy $\sigma_i$ is minimax dominated for $i$ if there exists a strategy $\sigma_i'$ for $i$ such that $\min_{\mu_{-i}} u_i(\sigma_i', \mu_{-i}) > \max_{\mu_{-i}} u_i(\sigma_i, \mu_{-i})$.
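A small sketch of the dominance notion above, restricted to pure strategies for simplicity (the definition quantifies over mixed strategies, so this is only a sufficient check):

```python
# Pure-strategy check for minimax dominance (illustrative simplification:
# the definition above also allows mixed dominating strategies).
import numpy as np

def minimax_dominated_rows(U):
    """U[s, t] = player i's payoff for strategy s against opponent strategy t.
    Row s is minimax dominated if some row s' has min_t U[s', t] > max_t U[s, t]."""
    row_min, row_max = U.min(axis=1), U.max(axis=1)
    return [s for s in range(U.shape[0]) if (row_min > row_max[s]).any()]

U = np.array([[3.0, 4.0],   # guarantees at least 3
              [1.0, 2.0]])  # yields at most 2 -> minimax dominated by row 0
print(minimax_dominated_rows(U))  # [1]
```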
We introduce another smoothing approach that treats each geometric element as a player in a game: a quest for the best element quality. In other words, each player has the goal of becoming as regular as possible. The set of strategies for each element is given by all translations of its vertices. Ideally, each player would like to quantify this regularity using a quality measure, which corresponds to the utility function in game theory. Each player is aware of the other players' utility functions as well as their sets of strategies, which are analogous to its own utility function and strategies. In the simplest case, the utility functions depend only on the regularity. In more complicated cases, the utility function depends on the element size, the curvature, or even the solution of a differential equation. This article sketches a possible game-theoretic approach to mesh smoothing and presents still-ongoing research.
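A minimal sketch of the idea for triangular elements, using an assumed mean-ratio quality measure and a crude discretized strategy set; this is not the authors' implementation.

```python
# Sketch: one best-response move in a mesh-smoothing game (assumed quality
# measure and strategy discretization; not the authors' implementation).
import numpy as np

def triangle_quality(a, b, c):
    """Mean-ratio quality: 1 for an equilateral triangle, near 0 when degenerate."""
    area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    l2 = np.sum((b - a) ** 2) + np.sum((c - b) ** 2) + np.sum((a - c) ** 2)
    return 2.0 * np.sqrt(3.0) * area2 / l2 if l2 > 0 else 0.0

def best_response(v, incident, step=0.05):
    """One player's move: try small translations of vertex v and keep the one
    that maximizes the worst quality among its incident triangles."""
    angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    moves = [np.zeros(2)] + [step * np.array([np.cos(t), np.sin(t)]) for t in angles]
    worst = lambda p: min(triangle_quality(p, b, c) for b, c in incident)
    return v + max(moves, key=lambda m: worst(v + m))
```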
We consider a facility location game in which $n$ agents reside at known locations on a path, and $k$ heterogeneous facilities are to be constructed on the path. Each agent is adversely affected by some subset of the facilities, and is unaffected by the others. We design two classes of mechanisms for choosing the facility locations given the reported agent preferences: utilitarian mechanisms that strive to maximize social welfare (i.e., to be efficient), and egalitarian mechanisms that strive to maximize the minimum welfare. For the utilitarian objective, we present a weakly group-strategyproof efficient mechanism for up to three facilities, we give a strongly group-strategyproof mechanism that guarantees at least half of the optimal social welfare for arbitrary $k$, and we prove that no strongly group-strategyproof mechanism achieves an approximation ratio of $5/4$ for one facility. For the egalitarian objective, we present a strategyproof egalitarian mechanism for arbitrary $k$, and we prove that no weakly group-strategyproof mechanism achieves a $o(\sqrt{n})$ approximation ratio for two facilities. We extend our egalitarian results to the case where the agents are located on a cycle, and we extend our first egalitarian result to the case where the agents are located in the unit square.
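For intuition only, here is a brute-force utilitarian benchmark for tiny instances. The utility model (each agent's welfare is its total distance to the facilities that adversely affect it) is an assumption of this sketch, and exhaustive search is not one of the paper's mechanisms and makes no strategyproofness guarantee.

```python
# Brute-force utilitarian benchmark for tiny instances (assumed utility model;
# not one of the paper's mechanisms, and not strategyproof).
from itertools import product

def optimal_locations(agents, affected_by, candidates, k):
    """agents: agent positions on the path; affected_by[i]: indices of the
    facilities that adversely affect agent i; candidates: allowed facility
    positions; k: number of facilities. Maximizes the total distance from
    each agent to the facilities that affect it."""
    def welfare(locs):
        return sum(abs(x - locs[j]) for i, x in enumerate(agents)
                   for j in affected_by[i])
    return max(product(candidates, repeat=k), key=welfare)

print(optimal_locations([0.2, 0.8], [{0}, {0, 1}], [0.0, 0.5, 1.0], k=2))
```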
