
Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies

Posted by Briana Vecchione
Publication date: 2021
Research field: Informatics Engineering
Language: English





Algorithmic audits have been embraced as tools to investigate the functioning and consequences of sociotechnical systems. Though the term is used somewhat loosely in the algorithmic context and encompasses a variety of methods, it maintains a close connection to audit studies in the social sciences, which have, for decades, used experimental methods to measure the prevalence of discrimination across domains like housing and employment. In the social sciences, audit studies originated in a strong tradition of social justice and participatory action, often involving collaboration between researchers and communities; but scholars have argued that, over time, social science audits have become somewhat distanced from these original goals and priorities. We draw from this history in order to highlight difficult tensions that have shaped the development of social science audits, and to assess their implications in the context of algorithmic auditing. In doing so, we put forth considerations to assist in the development of robust and engaged assessments of sociotechnical systems that draw from auditing's roots in racial equity and social justice.


Read also

While digital social protection systems have been claimed to bring efficacy in user identification and entitlement assignation, their data justice implications have been questioned. In particular, the delivery of subsidies based on biometric identification has been found to magnify exclusions, imply informational asymmetries, and reproduce policy structures that negatively affect recipients. In this paper, we use a data justice lens to study Rythu Bharosa, a social welfare scheme targeting farmers in the Andhra Pradesh state of India. While coverage of the scheme in terms of number of recipients is reportedly high, our fieldwork revealed three forms of data justice issues affecting intended recipients. A first form is design-related, as mismatches between recipients and their registered biometric credentials and bank account details are associated with denial of subsidies. A second form is informational, as users who do not receive subsidies are often not informed of the reason why, or of the grievance redressal processes available to them. To these dimensions our data add a structural one, centred on the conditionality of subsidy to approval by landowners, which forces tenant farmers to request a type of landowner consent that reproduces existing patterns of class and caste subordination. Identifying such data justice issues, the paper adds to problematisations of digital social welfare systems, contributing a structural dimension to studies of data justice in digital social protection.
Boyan Durankev, 2019
The link between taxation and justice is a classic debate issue, while also being very relevant at a time of changing environmental factors and conditions of the social and economic system. Technically speaking, there are three types of taxes: progressive, proportional and regressive. Although justice, like freedom, is an element and manifestation of the imagined reality in citizens' minds, the state must comply with it. In particular, the tax system has to adapt to the mass imagined reality in order for it to appear fairer and more acceptable.
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development lifecycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization's values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
Although essential to revealing biased performance, well-intentioned attempts at algorithmic auditing can have effects that may harm the very populations these measures are meant to protect. This concern is even more salient while auditing biometric systems such as facial recognition, where the data is sensitive and the technology is often used in ethically questionable manners. We demonstrate a set of five ethical concerns in the particular case of auditing commercial facial processing technology, highlighting additional design considerations and ethical tensions the auditor needs to be aware of so as not to exacerbate or complement the harms propagated by the audited system. We go further to provide tangible illustrations of these concerns, and conclude by reflecting on what these concerns mean for the role of the algorithmic audit and the fundamental product limitations they reveal.
We explore the commonalities between methods for assuring the security of computer systems (cybersecurity) and the mechanisms that have evolved through natural selection to protect vertebrates against pathogens, and how insights derived from studying the evolution of natural defenses can inform the design of more effective cybersecurity systems. More generally, meeting security challenges is crucial for the maintenance of a wide range of complex adaptive systems, including financial systems, and here too lessons learned from the study of the evolution of natural defenses can provide guidance for the protection of such systems.