
Experiences with Improving the Transparency of AI Models and Services

Posted by: Michael Hind
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





AI models and services are used in a growing number of high-stakes areas, resulting in a need for increased transparency. Consistent with this, several proposals for higher quality and more consistent documentation of AI data, models, and systems have emerged. Little is known, however, about the needs of those who would produce or consume these new forms of documentation. Through semi-structured developer interviews and two document creation exercises, we have assembled a clearer picture of these needs and the various challenges faced in creating accurate and useful AI documentation. Based on the observations from this work, supplemented by feedback received during multiple design explorations and stakeholder conversations, we make recommendations for easing the collection and flexible presentation of AI facts to promote transparency.
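The abstract stops at recommendations and does not prescribe a concrete documentation format. Purely as an illustration of what collecting "AI facts" for later, audience-specific presentation could look like, the following Python sketch assembles a small fact record; the field names (intended_use, training_data, evaluation_metrics, known_limitations), the build_fact_sheet helper, and the example values are assumptions made for this example, not the paper's proposal.

import json

def build_fact_sheet(model_name, facts):
    # Collect a flat dictionary of facts into one JSON document that can
    # later be rendered differently for different stakeholder audiences.
    required = ["intended_use", "training_data", "evaluation_metrics", "known_limitations"]
    missing = [f for f in required if f not in facts]
    if missing:
        raise ValueError("Fact sheet for %s is missing: %s" % (model_name, missing))
    return json.dumps({"model": model_name, **facts}, indent=2)

print(build_fact_sheet("loan-risk-classifier-v2", {
    "intended_use": "Pre-screening of consumer loan applications",
    "training_data": "Internal applications dataset, 2015-2018",
    "evaluation_metrics": {"AUC": 0.87, "false_positive_rate": 0.08},
    "known_limitations": "Not validated for small-business loans",
}))

Keeping the facts in a single machine-readable record is what allows the same underlying information to be presented flexibly to different consumers of the documentation.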


Read also

The Common Object Request Broker Architecture (CORBA) is successfully used in many control systems (CS) for data transfer and device modeling. Communication rates below 1 millisecond, high reliability, scalability, language independence, and other features make it very attractive. For common types of applications like error logging, alarm messaging, or slow monitoring, one can benefit from standard CORBA services that are implemented by third parties and save a tremendous amount of development time. We started using a few CORBA services in our previous CORBA-based control system for the light source ANKA [1] and now use several CORBA services for the ALMA Common Software (ACS) [2], the core of the control system of the Atacama Large Millimeter Array. Our experiences with the interface repository (IFR), the implementation repository, the naming service, the property service, the telecom log service, and the notify service from different vendors are presented. Performance and scalability benchmarks have been performed.
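As a concrete illustration of one of the services discussed above, the sketch below resolves an object through the CORBA naming service using omniORBpy. It assumes omniORB is installed, a name service is reachable via resolve_initial_references, and that some object has been bound under the id/kind pair ("Example", "Object"); these specifics are assumptions for the example, not details from the abstract.

import sys
from omniORB import CORBA
import CosNaming

# Initialise the ORB and obtain the root naming context from the name service.
orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
root = orb.resolve_initial_references("NameService")._narrow(CosNaming.NamingContext)

# Look up an object bound under a hypothetical id/kind pair.
name = [CosNaming.NameComponent("Example", "Object")]
obj = root.resolve(name)
print("Resolved:", orb.object_to_string(obj))
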
Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. Until now, most research into algorithmic transparency has predominantly focused on explainability. Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders. However, understanding a model's specific behavior alone might not be enough for stakeholders to gauge whether the model is wrong or lacks sufficient knowledge to solve the task at hand. In this paper, we argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions. First, we discuss methods for assessing uncertainty. Then, we characterize how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems. Finally, we outline methods for displaying uncertainty to stakeholders and recommend how to collect the information required for incorporating uncertainty into existing ML pipelines. This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness. We aim to encourage researchers and practitioners to measure, communicate, and use uncertainty as a form of transparency.
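The paper surveys several ways to assess uncertainty; as one common, concrete instance, the sketch below uses disagreement among the trees of a random forest as a rough uncertainty signal and flags low-confidence predictions for stakeholders. The synthetic dataset, the choice of model, and the 0.2 threshold are illustrative assumptions rather than anything taken from the paper.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])

# Per-tree votes give a simple spread around the ensemble's mean prediction.
votes = np.stack([tree.predict(X[400:]) for tree in clf.estimators_])
p_mean = votes.mean(axis=0)          # fraction of trees voting for class 1
uncertainty = p_mean * (1 - p_mean)  # largest when the trees disagree most

# Surface uncertain predictions instead of reporting them as confident outputs.
for i in np.where(uncertainty > 0.2)[0][:5]:
    print("sample %d: p(class 1)=%.2f, uncertainty=%.2f" % (i, p_mean[i], uncertainty[i]))
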
Artificial intelligence shows promise for solving many practical societal problems in areas such as healthcare and transportation. However, the current mechanisms for AI model diffusion, such as GitHub code repositories, academic project webpages, and commercial AI marketplaces, have some limitations; for example, a lack of monetization methods, model traceability, and model auditability. In this work, we sketch guidelines for a new AI diffusion method based on a decentralized online marketplace. We consider the technical, economic, and regulatory aspects of such a marketplace, including a discussion of solutions for problems in these areas. Finally, we include a comparative analysis of several current AI marketplaces that are already available or in development. We find that most of these marketplaces are centralized commercial marketplaces with relatively few models.
In aerospace and defense, training is being carried out on the web by viewing PowerPoint presentations, manuals, and videos that are limited in their ability to convey information to the technician. Interactive 3D training is a more cost-effective approach than creating physical simulations and mockups. This paper demonstrates how training using interactive 3D simulations in e-learning reduces the time spent in training and improves the efficiency of a trainee performing an installation or removal.
An Intelligent Tutoring System (ITS) has been shown to improve students' learning outcomes by providing a personalized curriculum that addresses the individual needs of every student. However, despite the effectiveness and efficiency that an ITS brings to students' learning process, most studies in ITS research have devoted little effort to designing interfaces that promote students' interest in learning, motivation, and engagement by making better use of AI features. In this paper, we explore AI-driven design for ITS interfaces that provide diagnostic feedback on students' problem-solving processes and investigate its impact on their engagement. We propose several interface designs powered by different AI components and empirically evaluate their impact on student engagement through Santa, an active mobile ITS. Controlled A/B tests conducted on more than 20K students in the wild show that AI-driven interface design improves the factors of engagement by up to 25.13%.
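For context on how such a lift is usually quantified, the sketch below computes a relative improvement and a two-proportion z-test from hypothetical engagement counts chosen to yield a lift of roughly 25%; the metric, counts, and group sizes are invented for illustration and are not the paper's data.

import math
from scipy.stats import norm

# Hypothetical counts of "engaged" students per arm of an A/B test.
control_engaged, control_n = 4150, 10000
treatment_engaged, treatment_n = 5193, 10000

p_c = control_engaged / control_n
p_t = treatment_engaged / treatment_n
lift = (p_t - p_c) / p_c  # relative improvement of treatment over control

# Two-proportion z-test: is the observed difference plausibly real?
p_pool = (control_engaged + treatment_engaged) / (control_n + treatment_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treatment_n))
z = (p_t - p_c) / se
p_value = 2 * (1 - norm.cdf(abs(z)))

print("lift = %.2f%%, z = %.2f, p = %.4f" % (100 * lift, z, p_value))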