
Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture

Added by R. Stuart Geiger
Publication date: 2017
Language: English





Scholars and practitioners across domains are increasingly concerned with algorithmic transparency and opacity, interrogating the values and assumptions embedded in automated, black-boxed systems, particularly in user-generated content platforms. I report from an ethnography of infrastructure in Wikipedia to discuss an often understudied aspect of this topic: the local, contextual, learned expertise involved in participating in a highly automated socio-technical environment. Today, the organizational culture of Wikipedia is deeply intertwined with various data-driven algorithmic systems, which Wikipedians rely on to help manage and govern the "anyone can edit" encyclopedia at a massive scale. These bots, scripts, tools, plugins, and dashboards make Wikipedia more efficient for those who know how to work with them, but like all organizational culture, newcomers must learn them if they want to fully participate. I illustrate how cultural and organizational expertise is enacted around algorithmic agents by discussing two autoethnographic vignettes, which relate my personal experience as a veteran in Wikipedia. I present thick descriptions of how governance and gatekeeping practices are articulated through and in alignment with these automated infrastructures. Over the past 15 years, Wikipedian veterans and administrators have made specific decisions to support administrative and editorial workflows with automation in particular ways and not others. I use these cases of Wikipedia's bot-supported bureaucracy to discuss several issues in the fields of critical algorithm studies, critical data studies, and fairness, accountability, and transparency in machine learning -- most principally arguing that scholarship and practice must go beyond trying to open up the black box of such systems and also examine sociocultural processes like newcomer socialization.


Read More

Conventional algorithmic fairness is West-centric, as seen in its sub-groups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged. We find that in India, data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing in India, where the distance between models and oppressed communities is large. Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.
Shiwali Mohan (2019)
Our research aims to develop intelligent collaborative agents that are human-aware: they can model, learn, and reason about their human partners' physiological, cognitive, and affective states. In this paper, we study how adaptive coaching interactions can be designed to help people develop sustainable healthy behaviors. We leverage the common model of cognition (CMC) [26] as a framework for unifying several behavior change theories that are known to be useful in human-human coaching. We motivate a set of interactive system desiderata based on the CMC-based view of behavior change. Then, we propose PARCoach, an interactive system that addresses the desiderata. PARCoach helps a trainee pick a relevant health goal, set an implementation intention, and track their behavior. During this process, the trainee identifies a specific goal-directed behavior as well as the situational context in which they will perform it. PARCoach uses this information to send notifications to the trainee, reminding them of their chosen behavior and the context. We report the results from a 4-week deployment with 60 participants. Our results support the CMC-based view of behavior change and demonstrate that the desiderata for the proposed interactive system design are useful in producing behavior change.
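As a rough illustration of the reminder mechanism this abstract describes, the Python sketch below pairs a trainee's implementation intention ("if <context>, then <behavior>") with an observed situational context and fires a notification when they match. All class, field, and function names are hypothetical; the actual PARCoach implementation is not described in this text.

    # Hypothetical sketch of a context-triggered reminder, loosely modeled
    # on the implementation-intention flow described in the abstract.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ImplementationIntention:
        goal: str      # e.g. "be more physically active"
        behavior: str  # the specific goal-directed behavior
        context: str   # the situational cue in which to perform it

    def maybe_remind(plan: ImplementationIntention, observed_context: str) -> Optional[str]:
        """Return a reminder message when the trainee's chosen context occurs."""
        if observed_context == plan.context:
            return f"Reminder: {plan.behavior} (your plan for '{plan.context}')"
        return None

    plan = ImplementationIntention(
        goal="be more physically active",
        behavior="take a 15-minute walk",
        context="after lunch on weekdays",
    )
    print(maybe_remind(plan, "after lunch on weekdays"))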
Yuanbang Li (2021)
With the widespread use of mobile phones, users can share their location and activity anytime, anywhere, in the form of check-in data. These data reflect user features that are stable over the long term, and a set of shared user features can be abstracted as a user role. Roles are closely related to users' social backgrounds, occupations, and living habits. This study provides four main contributions. Firstly, user feature models are constructed for each user from different views through analysis of check-in data. Secondly, the K-Means algorithm is used to discover user roles from the user features. Thirdly, a reinforcement learning algorithm is proposed to strengthen the clustering effect of user roles and improve the stability of the clustering result. Finally, experiments are used to verify the validity of the method, and the results show its effectiveness.
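The role-discovery step lends itself to a short sketch. The Python below is a minimal illustration, not the paper's pipeline: it standardizes hypothetical per-user feature vectors aggregated from check-in data and clusters them into candidate roles with K-Means. The feature set, the cluster count, and the random data are all assumptions.

    # Minimal K-Means role discovery over synthetic per-user features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical features per user, e.g. [weekday check-ins,
    # weekend check-ins, venue diversity, average check-in hour].
    user_features = rng.random((500, 4))

    # Standardize so no single feature dominates the distance metric.
    X = StandardScaler().fit_transform(user_features)

    # Cluster users into k candidate roles; in practice k would be chosen
    # with a criterion such as the silhouette score.
    k = 5
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

    roles = kmeans.labels_                # role assignment per user
    prototypes = kmeans.cluster_centers_  # one prototype feature vector per role
    print(np.bincount(roles))             # users per discovered role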
Jun Liu, Kai Mei, Dongtang Ma (2021)
Deep Neural Network (DNN)-based physical layer techniques are attracting considerable interest due to their potential to enhance communication systems. However, most studies of the physical layer have tended to focus on applying DNN models to wireless communication problems rather than on theoretically understanding how a DNN works within a communication system. In this letter, we aim to quantitatively analyse why DNNs can achieve performance comparable to traditional techniques in the physical layer, and at what cost in terms of computational complexity. We further investigate, and experimentally validate, how information flows through a DNN-based communication system using information-theoretic concepts.
Society increasingly relies on machine learning models for automated decision making. Yet, efficiency gains from automation have come paired with concern for algorithmic discrimination that can systematize inequality. Recent work has proposed optimal post-processing methods that randomize classification decisions for a fraction of individuals, in order to achieve fairness measures related to parity in errors and calibration. These methods, however, have raised concern due to the information inefficiency, intra-group unfairness, and Pareto sub-optimality they entail. The present work proposes an alternative active framework for fair classification, where, in deployment, a decision-maker adaptively acquires information according to the needs of different groups or individuals, towards balancing disparities in classification performance. We propose two such methods, where information collection is adapted to group- and individual-level needs respectively. We show on real-world datasets that these can achieve: 1) calibration and single error parity (e.g., equal opportunity); and 2) parity in both false positive and false negative rates (i.e., equal odds). Moreover, we show that by leveraging their additional degree of freedom, active approaches can substantially outperform randomization-based classifiers previously considered optimal, while avoiding limitations such as intra-group unfairness.
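The group-level version of this active idea can be sketched compactly: classify everyone from a cheap feature subset, and acquire the remaining features only when the model's confidence falls below a group-specific threshold, so the group with worse baseline performance has more information collected on its behalf. The Python below is a hedged illustration on synthetic data; the thresholds, features, and models are assumptions, not the authors' exact procedure.

    # Sketch: group-adaptive information acquisition for fair classification.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 2000
    group = rng.integers(0, 2, n)          # protected group membership
    X_full = rng.normal(size=(n, 6))       # all features that could be acquired
    y = (X_full[:, :3].sum(axis=1) + 0.5 * group > 0).astype(int)

    cheap = [0, 1]                         # features available for free

    # One model on the cheap subset, one on the full feature set.
    clf_cheap = RandomForestClassifier(random_state=0).fit(X_full[:1000][:, cheap], y[:1000])
    clf_full = RandomForestClassifier(random_state=0).fit(X_full[:1000], y[:1000])

    X_test, y_test, g_test = X_full[1000:], y[1000:], group[1000:]
    p = clf_cheap.predict_proba(X_test[:, cheap])[:, 1]

    # Group-specific uncertainty bands: a wider band means more individuals
    # in that group get extra features collected. Tuning these bands is the
    # knob for balancing error rates across groups.
    tau = {0: 0.30, 1: 0.40}
    acquire = np.array([abs(pi - 0.5) < tau[int(g)] for pi, g in zip(p, g_test)])

    p_final = p.copy()
    p_final[acquire] = clf_full.predict_proba(X_test[acquire])[:, 1]
    y_hat = (p_final > 0.5).astype(int)

    for g in (0, 1):
        m = g_test == g
        fpr = ((y_hat == 1) & (y_test == 0) & m).sum() / max(((y_test == 0) & m).sum(), 1)
        print(f"group {g}: extra features for {acquire[m].mean():.0%} of cases, FPR={fpr:.2f}")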
