
Reasoning about Norms Revision

Published by Davide Dell'Anna
Publication date: 2018
Research field: Informatics Engineering
Language: English





Norms with sanctions have been widely employed as a mechanism for controlling and coordinating the behavior of agents without limiting their autonomy. The norms enforced in a multi-agent system can be revised in order to increase the likelihood that desirable system properties are fulfilled or that system performance is sufficiently high. In this paper, we provide a preliminary analysis of some types of norm revision: relaxation and strengthening. Furthermore, with the help of some illustrative scenarios, we show the usefulness of norm revision for better satisfying the overall system objectives.
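To make the two revision types concrete, here is a minimal, hypothetical sketch in Python (not the paper's formalism): a norm is modelled as a conditional prohibition with a sanction, relaxation exempts more states from counting as violations, and strengthening prohibits additional behaviour. All identifiers and the speed-limit example are illustrative assumptions.

```python
# A minimal sketch (not the paper's formalism): a norm as a conditional
# prohibition with a sanction, plus relaxation/strengthening operations.
from dataclasses import dataclass, replace
from typing import Callable

State = dict  # hypothetical: a system state as a dict of observable features

@dataclass(frozen=True)
class Norm:
    applies: Callable[[State], bool]   # when the norm is in force
    violated: Callable[[State], bool]  # behaviour the norm prohibits
    sanction: float                    # penalty imposed on violators

def relax(norm: Norm, exemption: Callable[[State], bool]) -> Norm:
    """Relaxation: exempt more states, so fewer behaviours count as violations."""
    return replace(norm, violated=lambda s: norm.violated(s) and not exemption(s))

def strengthen(norm: Norm, extra_case: Callable[[State], bool]) -> Norm:
    """Strengthening: prohibit additional behaviours under the same sanction."""
    return replace(norm, violated=lambda s: norm.violated(s) or extra_case(s))

# Illustrative example: a speed-limit norm, relaxed for emergencies,
# strengthened near schools.
speed_limit = Norm(
    applies=lambda s: s.get("on_road", False),
    violated=lambda s: s.get("speed", 0) > 50,
    sanction=100.0,
)
relaxed = relax(speed_limit, exemption=lambda s: s.get("emergency", False))
stricter = strengthen(
    speed_limit,
    extra_case=lambda s: s.get("near_school", False) and s.get("speed", 0) > 30,
)
```

Under this reading, revising a norm means replacing it with a relaxed or strengthened variant whenever monitoring suggests the system objectives are not being met.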


Read also

Many cybersecurity breaches occur due to users not following good cybersecurity practices, chief among them being regulations for applying software patches to operating systems, updating applications, and maintaining strong passwords. We capture cybersecurity expectations on users as norms. We empirically investigate sanctioning mechanisms in promoting compliance with those norms, as well as the detrimental effect of sanctions on the ability of users to complete their work. We realize these ideas in a game that emulates the decision making of workers in a research lab. Through a human-subject study, we find that whereas individual sanctions are more effective than group sanctions in achieving compliance and less detrimental to the ability of users to complete their work, individual sanctions offer significantly lower resilience, especially for organizations comprising risk seekers. Our findings have implications for workforce training in cybersecurity.
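The contrast between the two sanctioning mechanisms can be illustrated with a toy payoff sketch (an assumed simplification, not the study's actual game or data): under individual sanctions only violators are punished, while under group sanctions the whole group is punished if any member fails to comply.

```python
# A toy sketch (not the study's actual game): penalties each worker incurs
# under individual vs. group sanctioning, given who complied.
import random

def sanction(compliant: list, mechanism: str, penalty: float = 1.0) -> list:
    if mechanism == "individual":
        # only non-compliant workers are punished
        return [0.0 if c else penalty for c in compliant]
    if mechanism == "group":
        # everyone shares the penalty if anyone in the group is non-compliant
        return [penalty if not all(compliant) else 0.0 for _ in compliant]
    raise ValueError(mechanism)

random.seed(0)
team = [random.random() < 0.7 for _ in range(5)]  # assumed 70% baseline compliance
print("individual:", sanction(team, "individual"))
print("group:     ", sanction(team, "group"))
```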
George Vouros (2015)
This paper explores the emergence of norms in agent societies when agents play multiple, even incompatible, roles in their social contexts simultaneously and have limited interaction ranges. Specifically, it proposes two reinforcement learning methods for agents to compute agreements on strategies for using common resources to perform joint tasks. The computation of norms by considering agents playing multiple roles in their social contexts has not been studied before. To make the problem even more realistic for open societies, we do not assume that agents share knowledge on their common resources, so they have to compute semantic agreements towards performing their joint actions. The paper reports on an empirical study of whether and how efficiently societies of agents converge to norms, exploring the proposed social learning processes w.r.t. different society sizes and the ways agents are connected. The results reported are very encouraging regarding the speed of the learning process as well as the convergence rate, even in quite complex settings.
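A generic illustration of such social learning (not the paper's specific methods, which also handle multiple roles and semantic agreements) is independent Q-learning in a repeated coordination game: agents rewarded only when their choices agree gradually converge on a shared convention.

```python
# A minimal sketch (not the paper's algorithms): independent Q-learners in a
# repeated coordination game converge on a shared "norm" (same joint action).
import random

ACTIONS = ["A", "B"]            # hypothetical ways to use a common resource
ALPHA, EPSILON, ROUNDS = 0.1, 0.1, 5000
random.seed(1)

q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one Q-table per agent

def choose(table):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(table, key=table.get)

for _ in range(ROUNDS):
    joint = [choose(q[i]) for i in range(2)]
    reward = 1.0 if joint[0] == joint[1] else 0.0  # coordination succeeds only on agreement
    for i in range(2):
        q[i][joint[i]] += ALPHA * (reward - q[i][joint[i]])

print("emergent convention:", [max(t, key=t.get) for t in q])
```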
Most modern (classical) programming languages support recursion. Recursion has also been successfully applied to the design of several quantum algorithms and introduced in a couple of quantum programming languages, so it can be expected that recursion will become one of the fundamental paradigms of quantum programming. Several program logics have been developed for verification of quantum while-programs. However, there are as yet no general methods for reasoning about (mutually) recursive procedures and ancilla quantum data structures in quantum computing (with measurement). We fill the gap in this paper by proposing a parameterized quantum assertion logic and, based on it, designing a quantum Hoare logic for verifying parameterized recursive quantum programs with ancilla data and probabilistic control. The quantum Hoare logic can be used to prove partial, total, and even probabilistic correctness (by reducing to total correctness) of those quantum programs. In particular, we construct two counterexamples illustrating the incompleteness of non-parameterized assertions in verifying recursive procedures, and one counterexample showing the failure of reasoning with exact probabilities based on partial correctness. The effectiveness of our logic is shown by three main examples -- recursive quantum Markov chain (with probabilistic control), fixed-point Grover's search, and recursive quantum Fourier sampling.
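For orientation only, the classical Hoare proof rule for a recursive procedure has the following shape; the paper's contribution is a parameterized quantum analogue with ancilla data and probabilistic control, which is not reproduced here.

```latex
% Classical Hoare rule for a recursive procedure p with body S_p
% (partial correctness); shown for orientation only.
\[
\frac{\{P\}\;\mathsf{call}\ p\;\{Q\} \;\vdash\; \{P\}\;S_p\;\{Q\}}
     {\{P\}\;\mathsf{call}\ p\;\{Q\}}
\]
```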
As a contribution to the challenge of building game-playing AI systems, we develop and analyse a formal language for representing and reasoning about strategies. Our logical language builds on the existing general Game Description Language (GDL) and extends it by a standard modality for linear time along with two dual connectives to express preferences when combining strategies. The semantics of the language is provided by a standard state-transition model. As such, problems that require reasoning about games can be solved by the standard methods for reasoning about actions and change. We also endow the language with a specific semantics by which strategy formulas are understood as move recommendations for a player. To illustrate how our formalism supports automated reasoning about strategies, we demonstrate two example methods of implementation: first, we formalise the semantic interpretation of our language in conjunction with game rules and strategy rules in the Situation Calculus; second, we show how the reasoning problem can be solved with Answer Set Programming.
R. Ramanujam (2021)
We offer a very simple model of how collective memory may form. Agents keep signalling within neighbourhoods, and depending on how many support each signal, some signals win in that neighbourhood. As agents interact across different neighbourhoods, influence spreads and, sometimes, a collective signal emerges. We propose a logic in which we can reason about such emergence of memory and present preliminary technical results on the logic.
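One assumed reading of such a model, sketched below purely for illustration (the paper's logic and definitions are not reproduced), is a ring of agents that repeatedly adopt the locally winning signal in their neighbourhood until, sometimes, a single collective signal dominates.

```python
# A minimal sketch (an assumed reading of the model, not the paper's definition):
# agents on a ring repeatedly adopt the signal most supported in their
# neighbourhood; a collective signal may emerge once one value dominates everywhere.
import random
from collections import Counter

random.seed(2)
N, RADIUS, STEPS = 30, 2, 50
signals = [random.choice(["x", "y"]) for _ in range(N)]

for _ in range(STEPS):
    new = []
    for i in range(N):
        neigh = [signals[(i + d) % N] for d in range(-RADIUS, RADIUS + 1)]
        new.append(Counter(neigh).most_common(1)[0][0])  # locally winning signal
    signals = new

print("collective signal emerged:", len(set(signals)) == 1, Counter(signals))
```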