
Diversity in Sociotechnical Machine Learning Systems

Added by Maria De-Arteaga
Publication date: 2021
Language: English





There has been a surge of recent interest in sociocultural diversity in machine learning (ML) research, with researchers (i) examining the benefits of diversity as an organizational solution for alleviating problems with algorithmic bias, and (ii) proposing measures and methods for implementing diversity as a design desideratum in the construction of predictive algorithms. Currently, however, there is a gap between discussions of measures and benefits of diversity in ML, on the one hand, and the broader research on the underlying concepts of diversity and the precise mechanisms of its functional benefits, on the other. This gap is problematic because diversity is not a monolithic concept. Rather, different concepts of diversity are based on distinct rationales that should inform how we measure diversity in a given context. Similarly, the lack of specificity about the precise mechanisms underpinning diversity's potential benefits can result in uninformative generalities, invalid experimental designs, and illicit interpretations of findings. In this work, we draw on research in philosophy, psychology, and the social and organizational sciences to make three contributions. First, we introduce a taxonomy of different diversity concepts from the philosophy of science and explicate the distinct epistemic and political rationales underlying these concepts. Second, we provide an overview of mechanisms by which diversity can benefit group performance. Third, we situate these taxonomies of concepts and mechanisms in the lifecycle of sociotechnical ML systems and make a case for their usefulness in fair and accountable ML. We do so by illustrating how they clarify the discourse around diversity in the context of ML systems, promote the formulation of more precise research questions about diversity's impact, and provide conceptual tools to further advance research and practice.
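To make the notion of diversity as a measurable design desideratum concrete, the sketch below computes two standard diversity indices, Shannon entropy and Simpson's index, over a group-membership attribute. These particular indices and the toy team composition are illustrative assumptions, not measures proposed in the paper.

```python
import math
from collections import Counter

def shannon_entropy(groups):
    """Shannon entropy of group proportions (bits); higher = more even spread."""
    counts = Counter(groups)
    n = len(groups)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def simpson_index(groups):
    """Probability that two randomly chosen members belong to different groups."""
    counts = Counter(groups)
    n = len(groups)
    return 1 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical team composition by disciplinary background (illustrative only).
team = ["stats", "hci", "stats", "philosophy", "ml", "ml", "ml"]
print(f"Shannon entropy: {shannon_entropy(team):.3f}")
print(f"Simpson index:   {simpson_index(team):.3f}")
```

Which index is appropriate depends on the underlying diversity concept and its rationale, which is precisely the kind of choice the paper's taxonomy is meant to inform.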



Related Research

122 - Tad Hogg, Gabor Szabo 2008
Web sites where users create and rate content, as well as form networks with other users, display long-tailed distributions in many aspects of behavior. Using behavior on one such community site, Essembly, we propose and evaluate plausible mechanisms to explain these distributions. Unlike purely descriptive models, these mechanisms rely on user behaviors based on information available locally to each user. For Essembly, we find the long tails arise from large differences among user activity rates and the qualities of the rated content, as well as the extensive variability in the time users devote to the site. We show that the models not only explain overall behavior but also allow estimating the quality of content from its early behavior.
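
A minimal simulation can illustrate the kind of mechanism described: heterogeneous per-user activity rates alone can produce a heavy-tailed distribution of activity. The lognormal rate distribution and its parameters below are our assumptions for illustration, not the fitted model from the Essembly study.

```python
import math
import random

random.seed(0)

# Assumption: per-user activity rates drawn from a lognormal distribution,
# capturing large differences among users; the paper's model may differ.
NUM_USERS = 10_000
rates = [random.lognormvariate(0.0, 1.5) for _ in range(NUM_USERS)]

def poisson(lam):
    """Knuth's algorithm for sampling a Poisson random variate."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

# Each user's activity count over the observation window: Poisson(rate).
counts = sorted((poisson(r) for r in rates), reverse=True)

# A heavy tail shows up as counts spanning orders of magnitude across ranks.
for rank in (1, 10, 100, 1000, 10000):
    print(f"rank {rank:>5}: {counts[rank - 1]} actions")
```
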
Programming education is becoming important as demands for computer literacy and coding skills grow. Despite the increasing popularity of interactive online learning systems, many programming courses in schools have not changed their teaching format from the conventional classroom setting. We see two research opportunities here. First, students may have diverse expertise and experience in programming, so a fixed choice of content and teaching speed can be disengaging for experienced students or discouraging for novice learners. Second, in a large classroom, instructors cannot oversee the learning progress of each student and have difficulty matching teaching materials to the comprehension level of individual students. We present ClassCode, a web-based environment tailored to programming education in classrooms. Students can take online tutorials prepared by instructors at their own pace, then deepen their understanding by performing interactive coding exercises interleaved within the tutorials. ClassCode tracks all interactions by each student and summarizes them for instructors. This serves as a progress report, helping instructors provide additional explanations in situ or revise course materials. Our user evaluation through a small lecture, together with expert reviews by instructors and teaching assistants, confirms the potential of ClassCode by uncovering how it could address issues in existing programming courses at universities.
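
As a sketch of the kind of interaction tracking a ClassCode-style system performs, the snippet below logs student events and aggregates them into a per-student progress summary. The event schema and summary logic are hypothetical, not ClassCode's actual implementation.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Event:
    # Hypothetical event record; field names are our assumption.
    student: str
    tutorial: str
    kind: str  # e.g. "page_view", "code_run", "exercise_pass"

@dataclass
class ProgressTracker:
    events: list = field(default_factory=list)

    def log(self, student, tutorial, kind):
        self.events.append(Event(student, tutorial, kind))

    def summary(self):
        """Per-student count of tutorials with a passed exercise (instructor view)."""
        passed = defaultdict(set)
        for e in self.events:
            if e.kind == "exercise_pass":
                passed[e.student].add(e.tutorial)
        return {s: len(t) for s, t in passed.items()}

tracker = ProgressTracker()
tracker.log("alice", "loops-1", "code_run")
tracker.log("alice", "loops-1", "exercise_pass")
tracker.log("bob", "loops-1", "page_view")
print(tracker.summary())  # {'alice': 1}
```
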
Labelling user data is a central part of the design and evaluation of pervasive systems that aim to support the user through situation-aware reasoning. It is essential both in designing and training the system to recognise and reason about the situation, whether through the definition of a suitable situation model in knowledge-driven applications or through the preparation of training data for learning tasks in data-driven models. Hence, the quality of annotations can have a significant impact on the performance of the derived systems. Labelling is also vital for validating and quantifying the performance of applications. In particular, comparative evaluations require the production of benchmark datasets based on high-quality and consistent annotations. With pervasive systems relying increasingly on large datasets for designing and testing models of users' activities, the process of data labelling is becoming a major concern for the community. In this work we present a qualitative and quantitative analysis of the challenges associated with annotation of user data and possible strategies towards addressing them. The analysis is based on data gathered during the 1st International Workshop on Annotation of useR Data for UbiquitOUs Systems (ARDUOUS), consisting of brainstorming as well as annotation and questionnaire data gathered during the talks, poster session, live annotation session, and discussion session.
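
Annotation consistency of the kind discussed here is often quantified with inter-annotator agreement; below is a minimal sketch of Cohen's kappa for two annotators labelling activities. The statistic and the toy labels are our illustrative choices, not ones prescribed by the workshop analysis.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = Counter(a), Counter(b)
    expected = sum((pa[k] / n) * (pb[k] / n) for k in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Hypothetical activity labels from two annotators over the same six segments.
ann1 = ["walk", "walk", "sit", "stand", "sit", "walk"]
ann2 = ["walk", "sit",  "sit", "stand", "sit", "walk"]
print(f"kappa = {cohens_kappa(ann1, ann2):.3f}")  # ~0.74: substantial agreement
```
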
We present an automated method for measuring media bias. Inferring which newspaper published a given article, based only on the frequencies with which it uses different phrases, leads to a conditional probability distribution whose analysis lets us automatically map newspapers and phrases into a bias space. By analyzing roughly a million articles from roughly a hundred newspapers for bias in dozens of news topics, our method maps newspapers into a two-dimensional bias landscape that agrees well with previous bias classifications based on human judgement. One dimension can be interpreted as traditional left-right bias, the other as establishment bias. This means that although news bias is inherently political, its measurement need not be.
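
The shape of such a pipeline can be sketched as follows: form phrase counts per newspaper, convert them to the conditional distribution P(newspaper | phrase), and embed newspapers in two dimensions with a truncated SVD. The toy counts and the choice of SVD are illustrative assumptions; the paper's exact statistical analysis may differ.

```python
import numpy as np

# Toy phrase-count matrix: rows = newspapers, columns = phrases. The real
# method analyzes ~10^6 articles; these counts are invented for illustration.
papers = ["Paper A", "Paper B", "Paper C"]
counts = np.array([
    [30,  2, 10],   # Paper A
    [ 3, 25, 12],   # Paper B
    [15, 14, 11],   # Paper C
], dtype=float)

# Conditional probability of each newspaper given a phrase, P(paper | phrase).
p_paper_given_phrase = counts / counts.sum(axis=0, keepdims=True)

# Center each newspaper's profile and take a rank-2 SVD to place newspapers
# in a two-dimensional "bias space".
centered = p_paper_given_phrase - p_paper_given_phrase.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = U[:, :2] * S[:2]

for name, (x, y) in zip(papers, coords):
    print(f"{name}: ({x:+.3f}, {y:+.3f})")
```

In the paper's analysis, the recovered axes turn out to be interpretable (left-right and establishment bias); in this toy version the axes are only whatever directions best separate the invented counts.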
387 - Kyla Chasalow, Karen Levy 2021
Representativeness is a foundational yet slippery concept. Though familiar at first blush, it lacks a single precise meaning. Instead, meanings range from "typical or characteristic," to a proportionate match between sample and population, to a more general sense of accuracy, generalizability, coverage, or inclusiveness. Moreover, the concept has long been contested. In statistics, debates about the merits and methods of selecting a representative sample date back to the late 19th century; in politics, debates about the value of likeness as a logic of political representation are older still. Today, as the concept crops up in the study of fairness and accountability in machine learning, we need to carefully consider the term's meanings in order to communicate clearly and account for their normative implications. In this paper, we ask what representativeness means, how it is mobilized socially, and what values and ideals it communicates or confronts. We trace the concept's history in statistics and discuss normative tensions concerning its relationship to likeness, exclusion, authority, and aspiration. We draw on these analyses to think through how representativeness is used in FAccT debates, with emphasis on data, shift, participation, and power.
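
One of the meanings surveyed, representativeness as a proportionate match between sample and population, admits a simple quantitative check. The sketch below uses total variation distance between group proportions; the metric and the toy data are our illustrative choices, not the paper's.

```python
from collections import Counter

def proportions(items):
    counts = Counter(items)
    n = len(items)
    return {k: c / n for k, c in counts.items()}

def tv_distance(sample, population):
    """Total variation distance between group proportions:
    0 = perfect proportionate match, 1 = maximal mismatch."""
    p, q = proportions(sample), proportions(population)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

# Hypothetical population and sample with group labels (illustrative only).
population = ["a"] * 60 + ["b"] * 30 + ["c"] * 10
sample = ["a"] * 5 + ["b"] * 4 + ["c"] * 1
print(f"TV distance: {tv_distance(sample, population):.3f}")  # 0.100
```

A low distance certifies only proportionate match; the paper's point is that the other senses of representativeness (coverage, inclusiveness, aspiration) are not captured by any such single number.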