
Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

Posted by Umang Bhatt
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. Until now, most research into algorithmic transparency has predominantly focused on explainability. Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders. However, understanding a model's specific behavior alone might not be enough for stakeholders to gauge whether the model is wrong or lacks sufficient knowledge to solve the task at hand. In this paper, we argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions. First, we discuss methods for assessing uncertainty. Then, we characterize how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems. Finally, we outline methods for displaying uncertainty to stakeholders and recommend how to collect information required for incorporating uncertainty into existing ML pipelines. This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness. We aim to encourage researchers and practitioners to measure, communicate, and use uncertainty as a form of transparency.
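The abstract describes estimating uncertainty for model predictions and using it to augment decision-making. As a minimal sketch (not the paper's method), the Python example below estimates predictive uncertainty with a small bootstrapped ensemble and defers the most uncertain cases to a human reviewer; the dataset, models, deferral threshold, and variable names are all illustrative assumptions.

```python
# A minimal sketch (not the paper's method): estimate predictive uncertainty
# with a small bootstrap ensemble and defer the most uncertain predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an ensemble on bootstrap resamples; disagreement among members
# serves as a rough proxy for epistemic uncertainty.
members = [
    RandomForestClassifier(n_estimators=50, random_state=seed).fit(
        *resample(X_train, y_train, random_state=seed))
    for seed in range(5)
]

probs = np.stack([m.predict_proba(X_test)[:, 1] for m in members])  # (members, samples)
mean_p = probs.mean(axis=0)
epistemic = probs.std(axis=0)  # spread across members: model disagreement
entropy = -(mean_p * np.log(mean_p + 1e-12)
            + (1 - mean_p) * np.log(1 - mean_p + 1e-12))  # total predictive uncertainty

# Communicate uncertainty downstream: abstain (defer to a human) on the most uncertain cases.
defer = epistemic > np.quantile(epistemic, 0.9)
print(f"Deferred {defer.sum()} of {len(defer)} predictions to human review")
```

In this toy setup the uncertainty estimate is communicated as an explicit "defer" signal; in practice the same quantities could instead be surfaced to stakeholders through the visualization methods the paper surveys.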




Read also

In this paper, we describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models. The goal of this toolkit is twofold: first, to provide a broad range of capabilities to streamline as well as foster the common practices of quantifying, evaluating, improving, and communicating uncertainty in the AI application development lifecycle; second, to encourage further exploration of UQ's connections to other pillars of trustworthy AI such as fairness and transparency through the dissemination of the latest research and education materials. Beyond the Python package (https://github.com/IBM/UQ360), we have developed an interactive experience (http://uq360.mybluemix.net) and guidance materials as educational tools to aid researchers and developers in producing and communicating high-quality uncertainties in an effective manner.
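UQ360's own API is not reproduced in the abstract, so the sketch below deliberately does not use it; instead it illustrates, with scikit-learn, the kind of workflow such a toolkit supports: producing prediction intervals via quantile gradient boosting and evaluating their empirical coverage and width. The dataset, quantile levels, and metric names are assumptions for illustration only.

```python
# A hedged sketch of an uncertainty-quantification workflow (it does NOT use
# UQ360's actual API): fit lower/upper quantile models to form a prediction
# interval, then evaluate its empirical coverage and average width.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One model per quantile: the 5th and 95th percentiles give a nominal 90% interval.
lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X_train, y_train)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X_train, y_train)
lo, hi = lower.predict(X_test), upper.predict(X_test)

coverage = np.mean((y_test >= lo) & (y_test <= hi))  # how often the interval contains y
avg_width = np.mean(hi - lo)                          # narrower is better at equal coverage
print(f"empirical coverage: {coverage:.2f}, mean interval width: {avg_width:.1f}")
```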
AI models and services are used in a growing number of high-stakes areas, resulting in a need for increased transparency. Consistent with this, several proposals for higher quality and more consistent documentation of AI data, models, and systems have emerged. Little is known, however, about the needs of those who would produce or consume these new forms of documentation. Through semi-structured developer interviews and two document creation exercises, we have assembled a clearer picture of these needs and the various challenges faced in creating accurate and useful AI documentation. Based on the observations from this work, supplemented by feedback received during multiple design explorations and stakeholder conversations, we make recommendations for easing the collection and flexible presentation of AI facts to promote transparency.
Content moderation is often performed by a collaboration between humans and machine learning models. However, it is not well understood how to design the collaborative process so as to maximize the combined moderator-model system performance. This work presents a rigorous study of this problem, focusing on an approach that incorporates model uncertainty into the collaborative process. First, we introduce principled metrics to describe the performance of the collaborative system under capacity constraints on the human moderator, quantifying how efficiently the combined system utilizes human decisions. Using these metrics, we conduct a large benchmark study evaluating the performance of state-of-the-art uncertainty models under different collaborative review strategies. We find that an uncertainty-based strategy consistently outperforms the widely used strategy based on toxicity scores, and moreover that the choice of review strategy drastically changes the overall system performance. Our results demonstrate the importance of rigorous metrics for understanding and developing effective moderator-model systems for content moderation, as well as the utility of uncertainty estimation in this domain.
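As a toy illustration (not the paper's benchmark or metrics), the sketch below simulates a capacity-constrained review loop on synthetic data: items are routed to a hypothetical human reviewer either by toxicity score or by model uncertainty, and overall system accuracy is compared under the simplifying assumption that the human labels routed items correctly. All scores, distributions, and the capacity budget are made up for the example.

```python
# A toy illustration of capacity-constrained collaborative review (not the
# paper's benchmark): compare routing by toxicity score vs. by uncertainty.
import numpy as np

rng = np.random.default_rng(0)
n, capacity = 10_000, 500                       # items and human review budget

y_true = rng.integers(0, 2, size=n)             # hypothetical ground-truth toxicity labels
score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, n), 0, 1)  # simulated model toxicity score
uncertainty = 1 - np.abs(2 * score - 1)         # highest near the 0.5 decision boundary

def system_accuracy(route_priority):
    reviewed = np.argsort(route_priority)[-capacity:]  # send top-priority items to the human
    pred = (score >= 0.5).astype(int)                  # the model decides the rest
    pred[reviewed] = y_true[reviewed]                  # human assumed correct on reviewed items
    return (pred == y_true).mean()

print("toxicity-score routing:", system_accuracy(score))
print("uncertainty routing   :", system_accuracy(uncertainty))
```

In this simplified setting, routing by uncertainty spends the fixed human budget on borderline items the model is likely to get wrong, rather than on confidently toxic items it already classifies correctly, which mirrors the qualitative finding reported in the abstract.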
Making conjectures about future consequences of a technology is an exercise in trying to reduce various forms of uncertainty. Both producing and reasoning about these conjectures require understanding their potential limitations. In other words, we need systematic ways of considering the uncertainty associated with given conjectures about downstream consequences. In this work, we frame the task of considering future consequences as an anticipatory ethics problem, where the goal is to develop scenarios that reflect plausible outcomes and their ethical implications following a technology's introduction into society. In order to shed light on how various forms of uncertainty might inform how we reason about a resulting scenario, we provide a characterization of the types of uncertainty that arise in a potential scenario-building process.
Background: It is possible to find many different visual representations of data values in visualizations, but it is less common to see visual representations that include uncertainty, especially in visualizations intended for non-technical audiences. Objective: Our aim is to rigorously define and evaluate the novel use of visual entropy as a measure of shape that allows us to construct an ordered scale of glyphs for use in representing both uncertainty and value in 2D and 3D environments. Method: We use sample entropy as a numerical measure of visual entropy to construct a set of glyphs using R and Blender which vary in their complexity. Results: A Bradley-Terry analysis of a pairwise comparison of the glyphs shows participants (n=19) ordered the glyphs as predicted by the visual entropy score (linear regression R^2 > 0.97, p < 0.001). We also evaluate whether the glyphs can effectively represent uncertainty using a signal detection method; participants (n=15) were able to search for glyphs representing uncertainty with high sensitivity and low error rates. Conclusion: Visual entropy is a novel cue for representing ordered data and provides a channel that allows the uncertainty of a measure to be presented alongside its mean value.
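The glyph construction itself relies on R and Blender and is not reproduced here; the Python sketch below shows only a simplified form of the underlying sample-entropy computation, so the parameters (m, r), the scaling choices, and the test signals are illustrative assumptions rather than the authors' code.

```python
# A simplified sample-entropy sketch (illustrative, not the authors' R code):
# SampEn(m, r) = -ln(A / B), where B counts pairs of length-m templates that
# match within tolerance r (Chebyshev distance) and A counts the same for m+1.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)                      # tolerance scaled by the signal's spread

    def match_count(length):
        # Use the same number of templates (N - m) for both lengths m and m + 1.
        templates = np.array([x[i:i + length] for i in range(len(x) - m)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(dists <= tol) - len(templates)   # exclude self-matches

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

# A regular signal (sine) should score lower than white noise.
t = np.linspace(0, 8 * np.pi, 400)
print("sine :", sample_entropy(np.sin(t)))
print("noise:", sample_entropy(np.random.default_rng(0).normal(size=400)))
```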
