
Relative Strength of Strategy Elimination Procedures

 Added by Krzysztof R. Apt
Publication date: 2007
Language: English

We compare here the relative strength of four widely used procedures on finite strategic games: iterated elimination of weakly/strictly dominated strategies by a pure/mixed strategy. A complication is that none of these procedures is based on a monotonic operator. To deal with this problem we use global versions of these operators.
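As a concrete illustration of the simplest of these procedures, the following Python sketch performs iterated elimination of strategies that are strictly dominated by a pure strategy in a finite two-player game. It is not taken from the paper: the payoff dictionaries and the simultaneous elimination of both players' strategies in each round are illustrative assumptions, and weak dominance, dominance by mixed strategies and the paper's global operators are not modelled.

def strictly_dominated(payoff, own, other):
    # Strategies in 'own' that are strictly dominated by another pure strategy
    # of the same player, given that player's payoff dictionary.
    dominated = set()
    for r in own:
        if any(d != r and all(payoff[(d, c)] > payoff[(r, c)] for c in other)
               for d in own):
            dominated.add(r)
    return dominated

def iterated_elimination(payoff1, payoff2, rows, cols):
    # Repeatedly remove, for both players simultaneously, the pure strategies
    # strictly dominated by another pure strategy.
    rows, cols = set(rows), set(cols)
    while True:
        dead_rows = strictly_dominated(payoff1, rows, cols)
        flipped = {(c, r): payoff2[(r, c)] for r in rows for c in cols}
        dead_cols = strictly_dominated(flipped, cols, rows)
        if not dead_rows and not dead_cols:
            return rows, cols
        rows -= dead_rows
        cols -= dead_cols

# Prisoner's Dilemma: cooperation is strictly dominated for both players.
p1 = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
p2 = {('C', 'C'): 3, ('C', 'D'): 5, ('D', 'C'): 0, ('D', 'D'): 1}
print(iterated_elimination(p1, p2, {'C', 'D'}, {'C', 'D'}))  # ({'D'}, {'D'})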

Related research

We present a new insight into the systematic generation of minimal solvers in computer vision, which leads to smaller and faster solvers. Many minimal problem formulations are coupled sets of linear and polynomial equations where image measurements enter the linear equations only. We show that it is useful to solve such systems by first eliminating all the unknowns that do not appear in the linear equations and then extending solutions to the rest of unknowns. This can be generalized to fully non-linear systems by linearization via lifting. We demonstrate that this approach leads to more efficient solvers in three problems of partially calibrated relative camera pose computation with unknown focal length and/or radial distortion. Our approach also generates new interesting constraints on the fundamental matrices of partially calibrated cameras, which were not known before.
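A hedged toy sketch of this elimination idea (not the authors' solvers): with SymPy, a lexicographic Groebner basis eliminates the unknown x, which does not appear in the "linear" equation, the reduced system is solved in (y, z), and the solutions are then extended back to x. The equations and variable names are made up for illustration only.

import sympy as sp

x, y, z = sp.symbols('x y z')
eqs = [y + z - 3,          # "linear" equation: x does not appear here
       x**2 + y - z - 2,   # polynomial equations coupling x with (y, z)
       x*y - 2]

# A lex order that ranks x first makes the Groebner basis contain generators
# free of x (the elimination ideal in y and z).
G = sp.groebner(eqs, x, y, z, order='lex')
reduced = [g for g in G.exprs if x not in g.free_symbols]

for sol in sp.solve(reduced, [y, z], dict=True):
    # Extend each partial solution back to the eliminated unknown x.
    exts = sp.solve([e.subs(sol) for e in eqs if x in e.free_symbols], x)
    print(sol, '-> x =', exts)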
Jan van Eijck, 2013
Propositional Dynamic Logic or PDL was invented as a logic for reasoning about regular programming constructs. We propose a new perspective on PDL as a multi-agent strategic logic (MASL). This logic for strategic reasoning has group strategies as first class citizens, and brings game logic closer to standard modal logic. We demonstrate that MASL can express key notions of game theory, social choice theory and voting theory in a natural way, we give a sound and complete proof system for MASL, and we show that MASL encodes coalition logic. Next, we extend the language to epistemic multi-agent strategic logic (EMASL), we give examples of what it can express, we propose to use it for posing new questions in epistemic social choice theory, and we give a calculus for reasoning about a natural class of epistemic game models. We end by listing avenues for future research and by tracing connections to a number of other logics for reasoning about strategies.
Many two-sided matching markets, from labor markets to school choice programs, use a clearinghouse based on the applicant-proposing deferred acceptance algorithm, which is well known to be strategy-proof for the applicants. Nonetheless, a growing body of empirical evidence reveals that applicants misrepresent their preferences when this mechanism is used. This paper shows that no mechanism that implements a stable matching is obviously strategy-proof for any side of the market; obvious strategy-proofness is a stronger incentive property than strategy-proofness, introduced by Li (2017). A stable mechanism that is obviously strategy-proof for applicants is introduced for the case in which agents on the other side have acyclical preferences.
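For reference, a minimal sketch of the applicant-proposing deferred acceptance algorithm with one seat per school; the preference lists below are hypothetical, and the obviously strategy-proof mechanism constructed in the paper is not implemented here.

def deferred_acceptance(applicant_prefs, school_prefs):
    # applicant_prefs: {applicant: [schools, best first]}
    # school_prefs:    {school: [applicants, best first]}, one seat per school
    rank = {s: {a: i for i, a in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}   # next school to propose to
    held = {}                                       # school -> tentatively held applicant
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                                # applicant exhausted their list
        s = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        if s not in held:
            held[s] = a
        elif rank[s][a] < rank[s][held[s]]:
            free.append(held[s])                    # displace the held applicant
            held[s] = a
        else:
            free.append(a)                          # rejected; propose again later
    return {a: s for s, a in held.items()}

print(deferred_acceptance({'i1': ['s1', 's2'], 'i2': ['s1', 's2']},
                          {'s1': ['i2', 'i1'], 's2': ['i1', 'i2']}))
# {'i2': 's1', 'i1': 's2'}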
ASTRI SST-2M is one of the prototypes of the small size class of telescopes proposed for the Cherenkov Telescope Array. Its optical design is based on a dual-mirror Schwarzschild-Couder configuration, and the camera is composed of a matrix of monolithic multipixel silicon photomultipliers (SiPMs) managed by ad hoc front-end electronics. This paper describes the gain calibration procedures for the ASTRI SST-2M camera. Since the SiPM gain depends on the operating voltage and the temperature, we adjust the operating voltages of all sensors so that they have equal gains at a reference temperature. We then correct gain variations caused by temperature changes by adjusting the operating voltage of each sensor. For that purpose, the dependence of the SiPM gain on operating voltage and on temperature has been measured. In addition, we present the calibration procedures and the results of the experimental measurements used to evaluate, for each pixel, the parameters necessary to make the trigger uniform over the whole focal plane.
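The temperature correction described above can be illustrated with the standard linear-overvoltage model of SiPM gain, in which the gain is proportional to V_op - V_bd(T) and the breakdown voltage V_bd grows roughly linearly with temperature. The coefficients in the sketch below are hypothetical placeholders, not the ASTRI SST-2M calibration constants.

DVBD_DT  = 0.030   # hypothetical breakdown-voltage temperature slope [V/K]
T_REF    = 25.0    # reference temperature [deg C]
V_OP_REF = 31.0    # hypothetical operating voltage set at T_REF [V]
V_BD_REF = 28.0    # hypothetical breakdown voltage at T_REF [V]

def relative_gain(v_op, temperature):
    # Gain relative to the reference point in the linear-overvoltage model.
    v_bd = V_BD_REF + DVBD_DT * (temperature - T_REF)
    return (v_op - v_bd) / (V_OP_REF - V_BD_REF)

def corrected_voltage(temperature):
    # Shift the operating voltage by the breakdown-voltage drift so that the
    # overvoltage, and hence the gain, stays at its reference value.
    return V_OP_REF + DVBD_DT * (temperature - T_REF)

print(relative_gain(V_OP_REF, 35.0))                 # gain drops as T rises (0.9)
print(relative_gain(corrected_voltage(35.0), 35.0))  # correction restores 1.0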
This article extends the idea of solving parity games by strategy iteration to non-deterministic strategies: in a non-deterministic strategy a player restricts himself to some non-empty subset of the possible actions at a given node, instead of committing to exactly one action. We show that the strategy-improvement algorithm by Bjoerklund, Sandberg, and Vorobyov can easily be adapted to the more general setting of non-deterministic strategies. Further, we show that applying the heuristic of all profitable switches leads to choosing a locally optimal successor strategy in the setting of non-deterministic strategies, thereby obtaining an easy proof of an algorithm by Schewe. In contrast to the algorithm by Bjoerklund et al., we present our algorithm directly for parity games, which allows us to compare it to the algorithm by Jurdzinski and Voege: we show that the valuations used in both algorithms coincide on parity game arenas in which one player can surrender. Thus, our algorithm can also be seen as a generalization of the one by Jurdzinski and Voege to non-deterministic strategies. Finally, using non-deterministic strategies allows us to show that the number of improvement steps is bounded from above by O(1.724^n). For strategy-improvement algorithms, this bound was previously only known to be attainable by using randomization.
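A schematic sketch of the locally optimal improvement step for non-deterministic strategies that the abstract refers to: at every node of the improving player, keep all successors whose valuation is maximal, so the choice set may remain a non-singleton. The valuation is assumed here to be an externally computed, totally ordered score per node; real parity-game valuations in the Jurdzinski/Voege style carry more structure, so this illustrates only the shape of the step, not the full algorithm.

def locally_optimal_step(player_nodes, successors, valuation):
    # Non-deterministic strategy: map each node of the improving player to the
    # non-empty set of successors with the best (assumed totally ordered) valuation.
    improved = {}
    for v in player_nodes:
        best = max(valuation[w] for w in successors[v])
        improved[v] = {w for w in successors[v] if valuation[w] == best}
    return improved

# Toy arena: from 'a' the player may move to 'b', 'c' or 'd'; 'b' and 'd' tie.
print(locally_optimal_step({'a'}, {'a': {'b', 'c', 'd'}},
                           {'b': 2, 'c': 1, 'd': 2}))   # {'a': {'b', 'd'}}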
