
Soliciting Stakeholders' Fairness Notions in Child Maltreatment Predictive Systems

Published by: Hao-Fei Cheng
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Recent work in fair machine learning has proposed dozens of technical definitions of algorithmic fairness and methods for enforcing these definitions. However, we still lack an understanding of how to develop machine learning systems with fairness criteria that reflect relevant stakeholders' nuanced viewpoints in real-world contexts. To address this gap, we propose a framework for eliciting stakeholders' subjective fairness notions. Combining a user interface that allows stakeholders to examine the data and the algorithm's predictions with an interview protocol to probe stakeholders' thoughts while they are interacting with the interface, we can identify stakeholders' fairness beliefs and principles. We conduct a user study to evaluate our framework in the setting of a child maltreatment predictive system. Our evaluations show that the framework allows stakeholders to comprehensively convey their fairness viewpoints. We also discuss how our results can inform the design of predictive systems.
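The abstract contrasts technical fairness definitions with stakeholders' subjective notions. As a hedged illustration of what one such "technical definition" looks like in code (not the paper's own framework), here is a minimal sketch of the demographic parity difference, assuming binary predictions and a binary group attribute:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.
    Zero satisfies demographic parity; larger values indicate greater
    disparity. Assumes binary predictions and a binary group label."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy example: group 0 is flagged at a 0.75 rate, group 1 at 0.25.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```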




Read also

We propose a method to test for the presence of differential ascertainment in case-control studies, when data are collected by multiple sources. We show that, when differential ascertainment is present, the use of only the observed cases leads to severe bias in the computation of the odds ratio. We can alleviate the effect of such bias using the estimates that our method of testing for differential ascertainment naturally provides. We apply it to a dataset obtained from the National Violent Death Reporting System, with the goal of checking for the presence of differential ascertainment by race in the count of deaths caused by child maltreatment.
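The odds-ratio bias described here can be seen with a small numeric sketch. The 2x2 odds-ratio formula below is standard; the under-ascertainment scenario and the numbers are illustrative assumptions, and the paper's actual test and correction are not reproduced:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 case-control table:
    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    OR = (a/b) / (c/d) = (a*d) / (b*c).
    """
    return (a * d) / (b * c)

true_or = odds_ratio(50, 40, 100, 200)  # all cases observed -> 2.5
# If only 60% of exposed cases are ascertained, the naive estimate
# computed from observed cases alone is biased downward:
biased_or = odds_ratio(0.6 * 50, 40, 100, 200)  # -> 1.5
print(true_or, biased_or)
```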
ML-based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives, such as college admission, job hiring, child custody, criminal risk assessment, etc. As a result, fairness has emerged as an important requirement to guarantee that predictive systems do not discriminate against specific individuals or entire sub-populations, in particular minorities. Given the inherently subjective nature of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions that, unlike other surveys in the literature, addresses the question: which notion of fairness is most suited to a given real-world scenario, and why? Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements together to recommend the most suitable fairness notion in every specific setup. The results are summarized in a decision diagram that can be used by practitioners and policy makers to navigate the relatively large catalogue of fairness notions.
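The survey's recommendations live in its decision diagram; the sketch below is a purely hypothetical illustration of how such a diagram might be encoded as a function. The branching attributes and the recommended notions are invented for illustration and should not be read as the survey's actual guidance:

```python
def recommend_fairness_notion(labels_reliable: bool,
                              error_costs_matter: bool,
                              causal_model_available: bool) -> str:
    """Hypothetical fairness-notion recommender (illustrative only;
    consult the survey's actual decision diagram for real guidance)."""
    if causal_model_available:
        return "a causal-based notion, e.g. counterfactual fairness"
    if not labels_reliable:
        return "statistical parity (historical labels may encode bias)"
    if error_costs_matter:
        return "equalized odds (equalize error rates across groups)"
    return "calibration within groups"

print(recommend_fairness_notion(labels_reliable=True,
                                error_costs_matter=True,
                                causal_model_available=False))
```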
Addressing the problem of fairness is crucial to safely use machine learning algorithms to support decisions with a critical impact on people's lives, such as job hiring, child maltreatment, disease diagnosis, loan granting, etc. Several notions of fairness have been defined and examined in the past decade, such as statistical parity and equalized odds. The most recent fairness notions, however, are causal-based and reflect the now widely accepted idea that using causality is necessary to appropriately address the problem of fairness. This paper examines an exhaustive list of causal-based fairness notions, in particular their applicability in real-world scenarios. As the majority of causal-based fairness notions are defined in terms of non-observable quantities (e.g. interventions and counterfactuals), their applicability depends heavily on the identifiability of those quantities from observational data. In this paper, we compile the most relevant identifiability criteria for the problem of fairness from the extensive literature on identifiability theory. These criteria are then used to decide on the applicability of causal-based fairness notions in concrete discrimination scenarios.
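Because counterfactuals are not observable, checking a causal-based notion requires an assumed structural causal model. A minimal sketch under a toy SCM of our own (the model, coefficients, and predictor are illustrative assumptions, not from the paper): flip the protected attribute while holding the exogenous noise fixed, and see whether predictions change:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy SCM (illustrative assumption): A -> X, U -> X; predictor uses X.
A = rng.integers(0, 2, n)         # protected attribute
U = rng.normal(size=n)            # exogenous noise, held fixed below
X = 2.0 * A + U                   # mediator influenced by A

def predict(x):
    return (x > 1.0).astype(int)  # simple threshold predictor

# Counterfactual world: intervene do(A := 1 - A), keeping U fixed.
X_cf = 2.0 * (1 - A) + U
flip_rate = np.mean(predict(X) != predict(X_cf))
print(f"{flip_rate:.1%} of predictions change under the intervention")
```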
To ensure accountability and mitigate harm, it is critical that diverse stakeholders can interrogate black-box automated systems and find information that is understandable, relevant, and useful to them. In this paper, we eschew prior expertise- and role-based categorizations of interpretability stakeholders in favor of a more granular framework that decouples stakeholders' knowledge from their interpretability needs. We characterize stakeholders by their formal, instrumental, and personal knowledge and how it manifests in the contexts of machine learning, the data domain, and the general milieu. We additionally distill a hierarchical typology of stakeholder needs that distinguishes higher-level domain goals from lower-level interpretability tasks. In assessing the descriptive, evaluative, and generative powers of our framework, we find that our more nuanced treatment of stakeholders reveals gaps and opportunities in the interpretability literature, adds precision to the design and comparison of user studies, and facilitates a more reflexive approach to conducting this research.
Today, children are increasingly connected to the Internet and consume content and services through various means. It has been a challenge for less tech-savvy parents to protect children from harmful content and services. The Internet of Things (IoT) has made the situation much worse, as IoT devices allow children to connect to the Internet in novel ways (e.g., connected refrigerators, TVs, and so on). In this paper, we propose mySafeHome, an approach which utilises family dynamics to provide a more natural and intuitive access control mechanism to protect children from harmful content and services in the context of IoT. In mySafeHome, access control dynamically adapts based on the physical distance between family members. For example, a particular type of content can only be consumed through a TV by children if the parents are in the same room (or within hearing distance). mySafeHome allows parents to assess a given content by themselves. Our approach also aims to create granular levels of access control (e.g., blocking or limiting certain content, features, or services on certain devices when the parents are not in the vicinity). We developed a prototype using OpenHAB and several smart home devices to demonstrate the proposed approach. We believe that our approach also facilitates the creation of better relationships between family members. A demo can be viewed here: http://safehome.technology/demo.
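The distance-based rule described in this abstract lends itself to a small sketch. This is a hedged illustration of the idea only; the names (FamilyMember, is_allowed, HEARING_DISTANCE_M) and the 5 m threshold are invented, not the actual mySafeHome/OpenHAB implementation:

```python
from dataclasses import dataclass

HEARING_DISTANCE_M = 5.0  # assumed proximity threshold (illustrative)

@dataclass
class FamilyMember:
    name: str
    role: str   # "parent" or "child"
    x: float    # indoor position, e.g. from room-level sensing
    y: float

def distance(a: FamilyMember, b: FamilyMember) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def is_allowed(viewer: FamilyMember, rating: str,
               household: list[FamilyMember]) -> bool:
    """Allow restricted content for a child only if a parent is nearby."""
    if viewer.role == "parent" or rating == "all-ages":
        return True
    return any(m.role == "parent" and
               distance(viewer, m) <= HEARING_DISTANCE_M
               for m in household)

family = [FamilyMember("dad", "parent", 1.0, 1.0),
          FamilyMember("kid", "child", 2.0, 2.0)]
print(is_allowed(family[1], "restricted", family))  # True: parent ~1.4 m away
```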