
Searching for Representation: A sociotechnical audit of googling for members of U.S. Congress

Published by Emma Lurie
Publication date: 2021
Research field: Informatics Engineering
Language: English





High-quality online civic infrastructure is increasingly critical for the success of democratic processes. There is a pervasive reliance on search engines to find facts and information necessary for political participation and oversight. We find that approximately 10% of the top Google search results are likely to mislead California information seekers who use search to identify their congressional representatives. 70% of the misleading results appear in featured snippets above the organic search results. We use both qualitative and quantitative methods to understand what aspects of the information ecosystem lead to this sociotechnical breakdown. Factors identified include Google's heavy reliance on Wikipedia, the lack of authoritative, machine-parsable, high-accuracy data about the identity of elected officials based on geographic location, and the search engine's treatment of under-specified queries. We recommend steps that Google can take to meet its stated commitment to providing high-quality civic information, and steps that information providers can take to improve the legibility and quality of information about congressional representatives available to search algorithms.


Read also

205 - Ian Sommerville 2014
This article discusses the difficulties that arose when attempting to specify and design a large-scale digital learning environment for Scottish schools. This had a potential user base of about 1 million users and was intended to replace an existing, under-used system. We found that the potential system users were not interested in engaging with the project and that there were immense problems with system governance. The only technique that we found to be useful was user stories, presenting scenarios of how the system might be used by students and their teachers. The designed architecture was based around a layered set of replaceable services.
There has been a surge of recent interest in sociocultural diversity in machine learning (ML) research, with researchers (i) examining the benefits of diversity as an organizational solution for alleviating problems with algorithmic bias, and (ii) proposing measures and methods for implementing diversity as a design desideratum in the construction of predictive algorithms. Currently, however, there is a gap between discussions of measures and benefits of diversity in ML, on the one hand, and the broader research on the underlying concepts of diversity and the precise mechanisms of its functional benefits, on the other. This gap is problematic because diversity is not a monolithic concept. Rather, different concepts of diversity are based on distinct rationales that should inform how we measure diversity in a given context. Similarly, the lack of specificity about the precise mechanisms underpinning diversity's potential benefits can result in uninformative generalities, invalid experimental designs, and illicit interpretations of findings. In this work, we draw on research in philosophy, psychology, and social and organizational sciences to make three contributions: First, we introduce a taxonomy of different diversity concepts from philosophy of science, and explicate the distinct epistemic and political rationales underlying these concepts. Second, we provide an overview of mechanisms by which diversity can benefit group performance. Third, we situate these taxonomies--of concepts and mechanisms--in the lifecycle of sociotechnical ML systems and make a case for their usefulness in fair and accountable ML. We do so by illustrating how they clarify the discourse around diversity in the context of ML systems, promote the formulation of more precise research questions about diversity's impact, and provide conceptual tools to further advance research and practice.
136 - Veljko Dubljevic 2021
The expansion of artificial intelligence (AI) and autonomous systems has shown the potential to generate enormous social good while also raising serious ethical and safety concerns. AI technology is increasingly adopted in transportation. A survey of various in-vehicle technologies found that approximately 64% of the respondents used a smartphone application to assist with their travel. The top-used applications were navigation and real-time traffic information systems. Among those who used smartphones during their commutes, the top-used applications were navigation and entertainment. There is a pressing need to address relevant social concerns to allow for the development of systems of intelligent agents that are informed and cognizant of ethical standards. Doing so will facilitate the responsible integration of these systems in society. To this end, we have applied Multi-Criteria Decision Analysis (MCDA) to develop a formal Multi-Attribute Impact Assessment (MAIA) questionnaire for examining the social and ethical issues associated with the uptake of AI. We have focused on the domain of autonomous vehicles (AVs) because of their imminent expansion. However, AVs could serve as a stand-in for any domain where intelligent, autonomous agents interact with humans, either on an individual level (e.g., pedestrians, passengers) or a societal level.
Algorithmic audits have been embraced as tools to investigate the functioning and consequences of sociotechnical systems. Though the term is used somewhat loosely in the algorithmic context and encompasses a variety of methods, it maintains a close connection to audit studies in the social sciences--which have, for decades, used experimental methods to measure the prevalence of discrimination across domains like housing and employment. In the social sciences, audit studies originated in a strong tradition of social justice and participatory action, often involving collaboration between researchers and communities; but scholars have argued that, over time, social science audits have become somewhat distanced from these original goals and priorities. We draw from this history in order to highlight difficult tensions that have shaped the development of social science audits, and to assess their implications in the context of algorithmic auditing. In doing so, we put forth considerations to assist in the development of robust and engaged assessments of sociotechnical systems that draw from auditing's roots in racial equity and social justice.
We describe and implement a policy language. In our system, agents can distribute data along with usage policies in a decentralized architecture. Our language supports the specification of conditions and obligations, and also the possibility to refine policies. In our framework, compliance with usage policies is not actively enforced. However, agents are accountable for their actions, and may be audited by an authority requiring justifications.