
Reasoning in Systems with Elements that Randomly Switch Characteristics

Authors: Subhash Kak
Publication date: 2017
Language: English





We examine the issue of stability of probability in reasoning about complex systems with uncertainty in structure. Normally, propositions are viewed as probability functions on an abstract random graph, where it is implicitly assumed that the nodes of the graph have stable properties. But what if some of the nodes change their characteristics? This situation cannot be covered by abstractions of either static or dynamic sets when the changes take place at regular intervals. We propose the use of sets with elements that change, and we use modular forms to account for one type of such change. We derive an expression for the dependence of the mean on the probability of the switching elements. The system is also analyzed from the perspective of decision between different hypotheses. Such sets are likely to be of use in complex system queries and in the analysis of surveys.
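To make the switching-set idea concrete, here is a minimal simulation sketch. It is not the paper's construction: the binary characteristic, the deterministic alternation of switchers, and all names are illustrative assumptions. Each element is independently a switcher with probability p; stable elements hold characteristic 1, while switchers alternate between 1 and 0 at every regular interval, so the time-averaged mean works out to 1 - p/2 under this toy model.

```python
import random

def mean_with_switchers(p_switch, n_elements=10_000, n_intervals=200, seed=0):
    """Toy model: each element is independently a 'switcher' with
    probability p_switch.  Stable elements hold characteristic 1;
    switchers alternate between 1 and 0 at every regular interval.
    Returns the time-averaged mean of the characteristic."""
    rng = random.Random(seed)
    is_switcher = [rng.random() < p_switch for _ in range(n_elements)]
    total = 0.0
    for t in range(n_intervals):
        # A switcher shows 1 on even intervals and 0 on odd ones.
        total += sum(1 for sw in is_switcher if not sw or t % 2 == 0)
    return total / (n_elements * n_intervals)

for p in (0.0, 0.25, 0.5, 1.0):
    # Predicted time-averaged mean under this toy model: 1 - p/2.
    print(f"p={p:.2f}  simulated={mean_with_switchers(p):.3f}  predicted={1 - p/2:.3f}")
```

The point of the sketch is only the shape of the dependence: under this particular switching rule, the mean degrades linearly in the proportion of switching elements.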



Related research

Subhash Kak (2017)
Data-based judgments go into artificial intelligence applications, but they undergo paradoxical reversal when seemingly unnecessary additional data is provided. Examples of this are Simpson's reversal and the disjunction effect, where the beliefs about the data change once it is presented or aggregated differently. Sometimes the significance of the difference can be evaluated using statistical tests such as Pearson's chi-squared or Fisher's exact test, but this may not be helpful in threshold-based decision systems that operate with incomplete information. To mitigate risks in the use of algorithms in decision-making, we consider the question of modeling of beliefs. We argue that evidence supports that beliefs are not classical statistical variables and that they should, in the general case, be considered as superposition states of disjoint or polar outcomes. We analyze the disjunction effect from the perspective of the belief as a quantum vector.
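As a numerical illustration of the superposition view (a minimal sketch under assumed amplitudes and phase, not the paper's formal model), the snippet below compares the classical law of total probability with a computation in which the two disjoint paths carry complex amplitudes and a relative phase contributes an interference cross-term:

```python
import cmath
import math

def classical_total(p_a, p_b_given_a, p_b_given_not_a):
    # Classical law of total probability over the disjoint outcomes A and not-A.
    return p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

def superposed_total(p_a, p_b_given_a, p_b_given_not_a, phase):
    # Belief as a superposition: each disjoint path carries a complex
    # amplitude, and the relative phase adds an interference cross-term.
    amplitude = (math.sqrt(p_a * p_b_given_a)
                 + cmath.exp(1j * phase) * math.sqrt((1 - p_a) * p_b_given_not_a))
    return abs(amplitude) ** 2

classical = classical_total(0.5, 0.6, 0.4)
for phase in (0.0, math.pi / 2, math.pi):
    q = superposed_total(0.5, 0.6, 0.4, phase)
    print(f"phase={phase:.2f}  classical={classical:.3f}  superposed={q:.3f}")
```

At phase pi/2 the cross-term vanishes and the two computations agree; away from that phase, the superposed belief deviates from the classical total probability, which is the kind of deviation the disjunction effect exhibits.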
This paper is an appendix to the paper Reasoning with Justifiable Exceptions in Contextual Hierarchies by Bozzato, Serafini and Eiter, 2018. It provides further details on the language, the complexity results and the datalog translation introduced in the main paper.
It is clear that one of the primary tools we can use to mitigate the potential risk from a misbehaving AI system is the ability to turn the system off. As the capabilities of AI systems improve, it is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off. This is a challenge because many formulations of rational agents create strong incentives for self-preservation. This is not caused by a built-in instinct, but because a rational agent will maximize expected utility and cannot achieve whatever objective it has been given if it is dead. Our goal is to study the incentives an agent has to allow itself to be switched off. We analyze a simple game between a human H and a robot R, where H can press R's off switch but R can disable the off switch. A traditional agent takes its reward function for granted: we show that such agents have an incentive to disable the off switch, except in the special case where H is perfectly rational. Our key insight is that for R to want to preserve its off switch, it needs to be uncertain about the utility associated with the outcome, and to treat H's actions as important observations about that utility. (R also has no incentive to switch itself off in this setting.) We conclude that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and we argue that this setting is a useful generalization of the classical AI paradigm of rational agents.
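The incentive comparison at the heart of this setup can be sketched numerically (a Monte Carlo illustration under an assumed Gaussian prior; the prior and payoff names are not from the paper). R is uncertain about the true utility u of its action: acting immediately, with or without disabling the switch, is worth E[u], whereas deferring to a rational H, who permits the action only when u > 0, is worth E[max(u, 0)], which is never smaller:

```python
import random
import statistics

def off_switch_values(mu=0.0, sigma=1.0, n=100_000, seed=0):
    """Monte Carlo sketch of the off-switch incentives.  R holds a
    Normal(mu, sigma) prior over the true utility u of its action.
    'act/disable' collects E[u]; 'defer' lets a rational H permit the
    action only when u > 0, collecting E[max(u, 0)]."""
    rng = random.Random(seed)
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    act_or_disable = statistics.mean(samples)                        # E[u]
    defer_to_human = statistics.mean(max(u, 0.0) for u in samples)   # E[max(u, 0)]
    return act_or_disable, defer_to_human

act, defer = off_switch_values()
# Since E[max(u, 0)] >= max(E[u], 0), keeping the off switch enabled is
# never worse for R when H is rational, matching the key insight above.
print(f"act/disable: {act:+.3f}   defer: {defer:+.3f}")
```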
Toby Walsh (2020)
Mathematics is not a careful march down a well-cleared highway, but a journey into a strange wilderness, where the explorers often get lost. Rigour should be a signal to the historian that the maps have been made, and the real explorers have gone elsewhere. W.S. Anglin, The Mathematical Intelligencer, 4(4), 1982.
Subhash Kak (2018)
The paper analyzes the problem of judgments or preferences subsequent to initial analysis by autonomous agents in a hierarchical system where the higher-level agents do not have access to group-size information. We propose methods that reduce instances of preference reversal of the kind encountered in Simpson's paradox.
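To see the kind of reversal at issue (the counts below are invented purely for illustration), here is a small sketch in which option A has the higher success rate inside each group, yet the pooled, size-weighted rates rank B above A. An aggregator that sees only per-group rates, with no group sizes, averages them and gets the opposite ranking:

```python
def rate(successes, total):
    return successes / total

# Hypothetical counts: (successes, trials) per option within each group.
groups = {
    "group 1": {"A": (8, 10),   "B": (70, 100)},
    "group 2": {"A": (20, 100), "B": (1, 10)},
}

for name, options in groups.items():
    print(name, {k: round(rate(*v), 2) for k, v in options.items()})

# Pool the raw counts across groups (implicitly weighting by group size).
pooled = {k: (sum(groups[g][k][0] for g in groups),
              sum(groups[g][k][1] for g in groups)) for k in ("A", "B")}
print("pooled", {k: round(rate(*v), 2) for k, v in pooled.items()})
# Within each group A beats B (0.80 vs 0.70 and 0.20 vs 0.10), but the
# pooled rates reverse the ranking (A: 28/110 = 0.25, B: 71/110 = 0.65).
```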
