Human-Computer Symbiosis is similar to Human-Computer Interaction in that both concern how humans and computers interact with each other. For this interaction to take place, a symbiotic relationship between human and computer is needed. Humans can interact with computers in many ways, whether simply by typing on a keyboard or by browsing the web. The cyber-physical-socio space is an important aspect to consider when examining the interaction between humans and computers. This paper investigates various aspects of human-computer symbiosis. Alongside these aspects, the paper also examines the limitations of Human-Computer Symbiosis and evaluates some previously proposed solutions.
In this exploratory study, we examine the possibilities of non-invasive Brain-Computer Interfaces (BCI) in the context of Smart Home Technology (SHT) targeted at older adults. During two workshops, one stationary and one online via Zoom, we gathered end users' insights concerning the potential of BCI in the SHT setting. We explored its advantages and drawbacks, the features older adults see as vital, and the ones they would benefit from. Apart from evaluating the participants' perception of such devices during the two workshops, we also analyzed key considerations emerging from the gathered insights, such as potential barriers and ways to mitigate them, as well as strengths and opportunities connected to BCI. These may be useful for designing BCI interaction paradigms and pinpointing areas of interest to pursue in further studies.
As the use of machine learning (ML) models in product development and data-driven decision-making processes has become pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their model works. While scholarly interest in model interpretability has grown rapidly in research communities like HCI, ML, and beyond, little is known about how practitioners perceive and aim to provide interpretability in the context of their existing workflows. This lack of understanding of interpretability as practiced may prevent interpretability research from addressing important needs, or lead to unrealistic solutions. To bridge this gap, we conducted 22 semi-structured interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models. Based on a qualitative analysis of our results, we differentiate interpretability roles, processes, goals, and strategies as they exist within organizations making heavy use of ML models. The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles, often aimed at building trust not only between people and models but also between people within the organization. We present implications for design that discuss gaps between the interpretability challenges practitioners face in practice and the approaches proposed in the literature, highlighting possible research directions that can better address real-world needs.
Machine learning (ML) is increasingly being used in image retrieval systems for medical decision making. One application of ML is to retrieve visually similar medical images from past patients (e.g. tissue from biopsies) to reference when making a medical decision with a new patient. However, no algorithm can perfectly capture an expert's ideal notion of similarity for every case: an image that is algorithmically determined to be similar may not be medically relevant to a doctor's specific diagnostic needs. In this paper, we identified the needs of pathologists when searching for similar images retrieved using a deep learning algorithm, and developed tools that empower users to cope with the search algorithm on the fly, communicating what types of similarity are most important at different moments in time. In two evaluations with pathologists, we found that these refinement tools increased the diagnostic utility of the images found and increased user trust in the algorithm. The tools were preferred over a traditional interface, without a loss in diagnostic accuracy. We also observed that users adopted new strategies when using the refinement tools, re-purposing them to test and understand the underlying algorithm and to disambiguate ML errors from their own errors. Taken together, these findings inform future human-ML collaborative systems for expert decision-making.
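The abstract does not describe the retrieval mechanics, but the underlying idea of ranking images by a user-adjustable notion of similarity can be sketched as follows. This is a minimal illustration only: the weighted cosine similarity, the embedding layout, and the name weighted_similarity_search are assumptions made for the sketch, not the refinement tools evaluated in the study.

```python
import numpy as np

def weighted_similarity_search(query, gallery, weights, top_k=5):
    """Rank gallery embeddings by weighted cosine similarity to a query.

    query:   (d,) embedding of the query image (e.g. a biopsy patch).
    gallery: (n, d) embeddings of images from past patients.
    weights: (d,) non-negative per-dimension weights; raising the weights
             tied to one similarity concept (e.g. tissue structure rather
             than stain colour) emphasises that concept in the ranking.
    """
    w = np.asarray(weights, dtype=float)
    q = np.asarray(query, dtype=float) * w
    g = np.asarray(gallery, dtype=float) * w
    # Cosine similarity in the re-weighted embedding space.
    sims = (g @ q) / (np.linalg.norm(g, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

# Toy usage: 100 gallery images with 64-d embeddings; emphasise the first
# 32 dimensions (a hypothetical "tissue structure" concept) over the rest.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 64))
query = rng.normal(size=64)
weights = np.concatenate([np.full(32, 2.0), np.full(32, 0.5)])
indices, scores = weighted_similarity_search(query, gallery, weights)
print(indices, scores)
```

In a scheme like this, an on-the-fly refinement control would simply map a user-facing similarity concept onto a different weight vector and re-rank the same gallery, without retraining the underlying model.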
While decision makers have begun to employ machine learning, machine learning models may make predictions that are biased against certain demographic groups. Semi-automated bias detection tools often present reports of automatically detected biases using a recommendation list or visual cues. However, there is a lack of guidance concerning which presentation style to use in which scenarios. We conducted a small lab study with 16 participants to investigate how presentation style might affect user behaviors in reviewing bias reports. Participants used both a prototype with a recommendation list and a prototype with visual cues for bias detection. We found that participants often wanted to investigate performance measures that were not automatically detected as biases, yet, when using the prototype with a recommendation list, they tended to give such measures less consideration. Grounded in these findings, we propose information load and comprehensiveness as two axes for characterizing bias detection tasks and illustrate how the two axes can be adopted to reason about when to use a recommendation list or visual cues.
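To make concrete the kind of report such semi-automated tools work from, the sketch below computes a single performance measure (accuracy) per demographic group and flags groups that fall notably below the overall value; the full per-group table corresponds to a visually cued presentation, while the flagged subset corresponds to a recommendation list. The metric choice, the gap_threshold value, and the name disaggregated_bias_report are hypothetical and are not the prototypes used in the study.

```python
import numpy as np

def disaggregated_bias_report(y_true, y_pred, groups, gap_threshold=0.05):
    """Compute per-group accuracy and flag groups whose accuracy falls
    more than gap_threshold below the overall accuracy.

    Returns the overall accuracy, every group's accuracy (the full,
    visually-cued view), and the flagged subset (the recommendation list).
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    overall = float(np.mean(y_true == y_pred))
    per_group, flagged = {}, []
    for g in np.unique(groups):
        mask = groups == g
        acc = float(np.mean(y_true[mask] == y_pred[mask]))
        per_group[str(g)] = acc
        if overall - acc > gap_threshold:
            flagged.append((str(g), acc))
    # Worst-performing groups first, as a recommendation list would show them.
    return overall, per_group, sorted(flagged, key=lambda item: item[1])

# Toy example: two demographic groups with different error rates.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disaggregated_bias_report(y_true, y_pred, groups))
```

The trade-off studied in the paper then becomes whether to surface only the flagged entries (lower information load) or the complete per-group table with cues (higher comprehensiveness).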
Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals with limited rights to meaningful information about the logic behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no best approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.