
A cold, technical decision-maker: Can AI provide explainability, negotiability, and humanity?

Published by: Patrick Kelley
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Algorithmic systems are increasingly deployed to make decisions in many areas of people's lives. The shift from human to algorithmic decision-making has been accompanied by concern about potentially opaque decisions that are not aligned with social values, as well as proposed remedies such as explainability. We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants in Finland, Germany, the United Kingdom, and the United States. We invited participants to reason about decision-making qualities such as explainability and accuracy in a variety of domains. Participants viewed AI as a decision-maker that follows rigid criteria and performs mechanical tasks well, but is largely incapable of subjective or morally complex judgments. We discuss participants' consideration of humanity in decision-making, and introduce the concept of negotiability: the ability to go beyond formal criteria and work flexibly around the system.




Read also

The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning. Deep learning methods are remarkably accurate, but also opaque, which limits their potential use in safety-critical applications. To achieve trust and accountability, designers and operators of machine learning algorithms must be able to explain the inner workings, the results, and the causes of failures of algorithms to users, regulators, and citizens. The originality of this paper is to combine technical, legal, and economic aspects of explainability to develop a framework for defining the right level of explainability in a given context. We propose three logical steps: first, define the main contextual factors, such as who the audience of the explanation is, the operational context, the level of harm the system could cause, and the legal/regulatory framework. This step helps characterize the operational and legal needs for explanation, and the corresponding social benefits. Second, examine the technical tools available, including post hoc approaches (input perturbation, saliency maps...) and hybrid AI approaches. Third, as a function of the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved. We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed costs.
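One rough way to see how the three steps interact is as a cost-benefit check. The sketch below is purely illustrative: the factor names, weights, tool scores, and cost figures are assumptions of ours, not values from the paper.

```python
# Hypothetical sketch of a three-step "right level of explainability" check.
# All names, weights, and numbers here are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Context:
    audience: str        # e.g. "regulator", "end_user", "operator"
    harm_level: int      # 1 (benign) .. 5 (safety-critical)
    regulated: bool      # is an explanation legally required?

def required_benefit(ctx: Context) -> float:
    """Step 1: characterize the operational and legal need for explanation."""
    return ctx.harm_level * 10.0 + (50.0 if ctx.regulated else 0.0)

# Step 2: candidate technical tools with an assumed "explanatory benefit" score.
TOOLS = {"input_perturbation": 30.0, "saliency_maps": 20.0, "hybrid_ai": 60.0}

def choose_explanation(ctx: Context, costs: dict) -> str | None:
    """Step 3: pick the cheapest tool whose benefit both meets the contextual
    need and exceeds its cost; return None when no explanation is worth it."""
    need = required_benefit(ctx)
    viable = [(costs[tool], tool) for tool, benefit in TOOLS.items()
              if benefit >= need * 0.5 and benefit > costs[tool]]
    return min(viable)[1] if viable else None

ctx = Context(audience="regulator", harm_level=4, regulated=True)
print(choose_explanation(ctx, {"input_perturbation": 10, "saliency_maps": 5, "hybrid_ai": 40}))
# -> "hybrid_ai": only the costliest tool meets a high-stakes, regulated need
```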
Decision making is critical in our daily lives and for society in general, and is finding ever more practical applications in information and communication technologies. Herein, we demonstrate experimentally that single photons can be used to make decisions in uncertain, dynamically changing environments. Using a nitrogen-vacancy center in a nanodiamond as a single-photon source, we demonstrate the decision-making capability by solving the multi-armed bandit problem. This capability is directly and immediately associated with single-photon detection in the proposed architecture, leading to adequate and adaptive autonomous decision making. This study makes it possible to create systems that benefit from the quantum nature of light to perform practical and vital intelligent functions.
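For readers unfamiliar with the underlying task, a minimal classical simulation of the multi-armed bandit problem is sketched below, using a standard epsilon-greedy strategy as a stand-in; it illustrates the exploration-exploitation trade-off the photonic system addresses, not the paper's single-photon method.

```python
# Minimal classical simulation of the multi-armed bandit problem, using a
# standard epsilon-greedy strategy as a stand-in for the photonic decision-maker.
import random

def epsilon_greedy(probs, steps=10_000, eps=0.1):
    """Find the most profitable arm among stochastic rewards by trial and error."""
    counts = [0] * len(probs)    # pulls per arm
    values = [0.0] * len(probs)  # running mean reward per arm
    total = 0
    for _ in range(steps):
        # explore with probability eps, otherwise exploit the current best estimate
        if random.random() < eps:
            arm = random.randrange(len(probs))
        else:
            arm = max(range(len(probs)), key=values.__getitem__)
        reward = 1 if random.random() < probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return values, total

values, total = epsilon_greedy([0.2, 0.5, 0.8])
print(f"estimated arm values: {values}, total reward: {total}")
```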
The competitive multi-armed bandit (CMAB) problem is related to social issues such as maximizing total social benefits while preserving equality among individuals, by overcoming conflicts between individual decisions that could seriously decrease social benefits. The study described herein provides experimental evidence that entangled photons physically resolve the CMAB in the 2-arm, 2-player case, maximizing the social rewards while ensuring equality. Moreover, we demonstrated that deception, i.e., outperforming the other player by receiving a greater reward, cannot be accomplished in a polarization-entangled-photon-based system, while deception is achievable in systems based on classical polarization-correlated photons with fixed polarizations. In addition, random polarization-correlated photons have been studied numerically and shown to ensure equality between players and prevent deception as well, although the maximum CMAB performance is reduced compared with the entangled-photon experiments. Autonomous alignment schemes for polarization bases were also demonstrated experimentally, based only on the decision-conflict information observed by an individual, without communication between players. This study paves the way for collective decision making in uncertain, dynamically changing environments based on entangled quantum states, a crucial step toward utilizing quantum systems for intelligent functionalities.
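The intuition for why correlation removes conflicts can be seen in a toy classical simulation: if the two players' choices are perfectly anti-correlated (a stand-in for the anti-correlated measurement outcomes of the entangled photons), decision conflicts vanish while equality is preserved. The sketch below is purely illustrative and does not model the quantum optics.

```python
# Toy comparison of decision-conflict rates for two players choosing between
# two arms: perfectly anti-correlated choices (a classical stand-in for the
# entangled-photon outcomes) versus independent random choices.
import random

def conflict_rate(anti_correlated: bool, trials: int = 100_000) -> float:
    conflicts = 0
    for _ in range(trials):
        a = random.randrange(2)                                # player 1's arm
        b = 1 - a if anti_correlated else random.randrange(2)  # player 2's arm
        conflicts += a == b                                    # same arm = conflict
    return conflicts / trials

print("anti-correlated choices:", conflict_rate(True))    # ~0.0: no conflicts
print("independent choices:    ", conflict_rate(False))   # ~0.5: frequent conflicts
```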
How to attribute responsibility for the actions of autonomous artificial intelligence (AI) systems has been widely debated across the humanities and social science disciplines. This work presents two experiments ($N$=200 each) that measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents in the context of bail decision-making. Using vignettes adapted from real life, our experiments show that AI agents are held causally responsible and blamed similarly to human agents for an identical task. However, there was a meaningful difference in how people perceived these agents' moral responsibility: human agents were ascribed a higher degree of present-looking and forward-looking notions of responsibility than AI agents. We also found that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature. We discuss policy and HCI implications of these findings, such as the need for explainable AI in high-stakes scenarios.
Song-Ju Kim, Masashi Aono (2015)
The multi-armed bandit problem (MBP) is the problem of finding, as accurately and quickly as possible, the most profitable option from a set of options that give stochastic rewards, by referring to past experiences. Inspired by the fluctuating movements of a rigid body in a tug-of-war game, we formulated a unique search algorithm, which we call "tug-of-war (TOW) dynamics," for solving the MBP efficiently. Cognitive medium access, which refers to multi-user channel allocation in cognitive radio, can be interpreted as the competitive multi-armed bandit problem (CMBP); the problem is to determine the optimal strategy for allocating channels to users so as to yield the maximum total reward gained by all users. Here we show that it is possible to construct a physical device for solving the CMBP, which we call the "TOW Bombe," by exploiting the TOW dynamics existing in coupled incompressible-fluid cylinders. This analog computing device achieves the socially maximum resource allocation, which maximizes the total reward in cognitive medium access, without paying a huge computational cost that grows exponentially as a function of the problem size.
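A minimal sketch of TOW-style dynamics for a two-armed bandit is given below, assuming a single displacement variable whose sign selects the arm; the learning weight and the Gaussian fluctuation term are illustrative choices of ours, not the constants of the TOW Bombe device.

```python
# Hedged sketch of tug-of-war (TOW) style dynamics for a two-armed bandit.
# A single displacement variable plays the role of the rigid body; the
# learning weight omega and the Gaussian fluctuation are illustrative choices.
import random

def tow_bandit(p_a, p_b, steps=10_000, omega=1.0, noise=1.0):
    x = 0.0      # displacement: positive side selects arm A, negative arm B
    total = 0
    for _ in range(steps):
        pull_a = (x + random.gauss(0.0, noise)) >= 0.0  # fluctuating choice
        reward = 1 if random.random() < (p_a if pull_a else p_b) else 0
        # a reward pulls the body toward the chosen arm; a failure pushes it away
        step = 1.0 if reward else -omega
        x += step if pull_a else -step
        total += reward
    return x, total

x, total = tow_bandit(p_a=0.8, p_b=0.3)
print(f"final displacement: {x:.1f}, total reward: {total}")
```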