
Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs

Published by Sungsoo (Ray) Hong
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





As the use of machine learning (ML) models in product development and data-driven decision-making processes has become pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their models work. While scholarly interest in model interpretability has grown rapidly in research communities like HCI, ML, and beyond, little is known about how practitioners perceive and aim to provide interpretability in the context of their existing workflows. This lack of understanding of interpretability as practiced may prevent interpretability research from addressing important needs, or lead to unrealistic solutions. To bridge this gap, we conducted 22 semi-structured interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models. Based on a qualitative analysis of our results, we differentiate interpretability roles, processes, goals, and strategies as they exist within organizations making heavy use of ML models. The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles, often aimed at building trust not only between people and models but also between people within the organization. We present implications for design that discuss gaps between the interpretability challenges that practitioners face in their practice and approaches proposed in the literature, highlighting possible research directions that can better address real-world needs.


Read also

To ensure accountability and mitigate harm, it is critical that diverse stakeholders can interrogate black-box automated systems and find information that is understandable, relevant, and useful to them. In this paper, we eschew prior expertise- and role-based categorizations of interpretability stakeholders in favor of a more granular framework that decouples stakeholders' knowledge from their interpretability needs. We characterize stakeholders by their formal, instrumental, and personal knowledge and how it manifests in the contexts of machine learning, the data domain, and the general milieu. We additionally distill a hierarchical typology of stakeholder needs that distinguishes higher-level domain goals from lower-level interpretability tasks. In assessing the descriptive, evaluative, and generative powers of our framework, we find that our more nuanced treatment of stakeholders reveals gaps and opportunities in the interpretability literature, adds precision to the design and comparison of user studies, and facilitates a more reflexive approach to conducting this research.
Learning behavior mechanisms are widely studied in managed settings through the formal syllabus; however, treating daily mobility through urban transit as a learning stimulus is a novel topic in the learning sciences. The theory of planned behavior (TPB), the technology acceptance model (TAM), and transit service quality are conceptualized to assess the learning behavioral intention (LBI) of commuters in Greater Kuala Lumpur. An online survey was conducted to understand the LBI of 117 travelers who use technology to engage in informal learning during daily commuting. The results showed that all the model variables, i.e., perceived ease of use, perceived usefulness, service quality, and subjective norms, are significant predictors of LBI. The perceived usefulness of learning during traveling and transit service quality have a particularly strong impact on LBI. The research supports the informal learning mechanism from the commuters' point of view. The study is a novel contribution to the transport and learning literature that opens new prospects for research in urban mobility and its connection with personal learning and development.
The automated analysis of digital human communication data often focuses on specific aspects like content or network structure in isolation, while classical communication research stresses the importance of a holistic analysis approach. This work aims to formalize digital communication analysis and investigate how classical results can be leveraged as part of visually interactive systems, which offer new analysis opportunities and allow for less biased, skewed, or incomplete results. For this, we construct a conceptual framework and design space based on the existing research landscape, technical considerations, and communication research, describing the properties, capabilities, and composition of such systems through 30 criteria in four analysis dimensions. We make the case that visual analytics principles are uniquely suited for a more holistic approach by tackling automation complexity and leveraging domain knowledge, paving the way to generate design guidelines for building such approaches. Our framework provides a common language and description of communication analysis systems to support existing research, and highlights relevant design areas while promoting and supporting mutual exchange between researchers. Additionally, our framework identifies existing gaps and highlights opportunities in research areas that are worth investigating further. With this contribution, we pave the way for the formalization of digital communication analysis through visual analytics.
Po-Ming Law, Sana Malik, Fan Du (2020)
While decision makers have begun to employ machine learning, machine learning models may make predictions that are biased against certain demographic groups. Semi-automated bias detection tools often present reports of automatically detected biases using a recommendation list or visual cues. However, there is a lack of guidance concerning which presentation style to use in which scenarios. We conducted a small lab study with 16 participants to investigate how presentation style might affect user behaviors in reviewing bias reports. Participants used both a prototype with a recommendation list and a prototype with visual cues for bias detection. We found that participants often wanted to investigate the performance measures that were not automatically detected as biases, yet when using the prototype with a recommendation list, they tended to give less consideration to such measures. Grounded in these findings, we propose information load and comprehensiveness as two axes for characterizing bias detection tasks and illustrate how the two axes can be used to reason about when to use a recommendation list or visual cues.
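The detection step that such tools automate can be thought of as comparing per-group performance measures and surfacing the largest gaps as a ranked recommendation list. The sketch below is only a hypothetical illustration of that idea, not the prototypes evaluated in the study; the function name, the example metrics, and the 0.05 gap threshold are all assumptions.

# Hypothetical sketch: rank performance gaps across demographic groups
# as a recommendation list (not the prototypes evaluated in the study).
from itertools import combinations

def bias_recommendations(metrics_by_group, threshold=0.05):
    # metrics_by_group: {group: {metric_name: value}}
    # Returns (metric, group_a, group_b, gap) tuples, largest gap first.
    findings = []
    for group_a, group_b in combinations(metrics_by_group, 2):
        shared_metrics = metrics_by_group[group_a].keys() & metrics_by_group[group_b].keys()
        for metric in shared_metrics:
            gap = abs(metrics_by_group[group_a][metric] - metrics_by_group[group_b][metric])
            if gap >= threshold:  # only surface gaps above the cutoff
                findings.append((metric, group_a, group_b, round(gap, 3)))
    return sorted(findings, key=lambda f: f[3], reverse=True)

report = bias_recommendations({
    "group_x": {"accuracy": 0.91, "false_positive_rate": 0.08},
    "group_y": {"accuracy": 0.82, "false_positive_rate": 0.19},
})
print(report)  # [('false_positive_rate', ..., 0.11), ('accuracy', ..., 0.09)]

A recommendation list would show this ranked output directly, while a visual-cue presentation would display all measures and merely highlight the flagged ones; that difference in what users see is the contrast the study examines.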
The ability to interpret machine learning models has become increasingly important now that machine learning is used to inform consequential decisions. We propose an approach called model extraction for interpreting complex, black-box models. Our approach approximates the complex model using a much more interpretable model; as long as the approximation quality is good, statistical properties of the complex model are reflected in the interpretable model. We show how model extraction can be used to understand and debug random forests and neural nets trained on several datasets from the UCI Machine Learning Repository, as well as control policies learned for several classical reinforcement learning problems.
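One concrete way to read this recipe is to train an interpretable surrogate on the black-box model's own predictions and then measure how often the two agree. The sketch below illustrates that general idea under my own assumptions, not the authors' implementation: the scikit-learn dataset, the shallow decision tree as the interpretable model, and the hyperparameters are all illustrative, and the paper's treatment of reinforcement learning policies is not covered here.

# Illustrative sketch of the surrogate-model idea behind model extraction.
# Dataset, surrogate choice, and hyperparameters are assumptions, not the paper's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

# The "black box" whose behavior we want to understand.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Train the surrogate on the black box's predictions, not the true labels,
# so it approximates the model's behavior rather than the data directly.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on held-out data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))

The printed tree is small enough to read directly, and the fidelity score indicates how far its explanation can be trusted as a proxy for the black box.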
