Ensuring trusted artificial intelligence (AI) in the real world is a critical challenge. A still largely unexplored task is determining the major real-world factors that affect the behavior and robustness of a given AI module (e.g., weather or illumination conditions). Specifically, we seek to discover the factors that cause AI systems to fail and to mitigate their influence. Identifying these factors usually relies heavily on data diverse enough to cover numerous combinations of them, but exhaustively collecting such data is onerous and sometimes impossible in complex environments. This paper investigates methods that discover and mitigate the effects of sensitive semantic factors within a given dataset. We also generalize the definition of fairness, which normally addresses only socially relevant factors, to cover, more broadly, the desensitization of AI systems to all possible aspects of variation in the domain. The proposed factor-discovery methods reduce the potentially onerous demand of collecting a sufficiently diverse dataset. In experiments on road-sign (GTSRB) and facial-imagery (CelebA) datasets, we demonstrate the promise of these new methods and show that they outperform state-of-the-art approaches.
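As a rough illustration of the kind of sensitivity probe this abstract describes (not the paper's actual discovery algorithm), the sketch below measures how much a classifier's accuracy varies across the values of a candidate semantic factor such as illumination; the inputs `predict`, `images`, `labels`, and `factor_values` are assumed, and the factor names are hypothetical.

```python
# Hedged sketch: estimate how sensitive a classifier is to a candidate
# semantic factor by comparing accuracy across the factor's values.
# This is an illustration of the idea, not the paper's method.
import numpy as np

def accuracy_by_factor(predict, images, labels, factor_values):
    """Return per-group accuracy for each value of a semantic factor."""
    preds = predict(images)
    groups = {}
    for value in np.unique(factor_values):
        mask = factor_values == value
        groups[str(value)] = float(np.mean(preds[mask] == labels[mask]))
    return groups

def sensitivity_gap(group_accuracies):
    """A simple sensitivity score: best-group minus worst-group accuracy."""
    accs = list(group_accuracies.values())
    return max(accs) - min(accs)

# Example usage with dummy data (a "model" that is worse at night):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)
    factor = rng.choice(["day", "night"], size=1000)   # e.g. illumination
    noisy = np.where(factor == "night", rng.integers(0, 2, 1000), labels)
    gaps = accuracy_by_factor(lambda x: x, noisy, labels, factor)
    print(gaps, "gap =", sensitivity_gap(gaps))
```

A large gap for a given factor flags it as one of the "major factors" the abstract is concerned with; the paper's contribution is to discover such factors and mitigate them without exhaustive data collection.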
We study the problem of learning fair prediction models for unseen test sets distributed differently from the train set. Stability against changes in data distribution is an important mandate for responsible deployment of models. The domain adaptation ...
Machine learning systems generally assume that the training and testing distributions are the same. To this end, a key requirement is to develop models that can generalize to unseen distributions. Domain generalization (DG), i.e., out-of-distribution generalization, ...
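A small sketch of the leave-one-domain-out protocol commonly used to evaluate domain generalization, assuming labelled data from several source domains is available; the domain names, synthetic data, and logistic-regression model here are illustrative only.

```python
# Hedged sketch: leave-one-domain-out evaluation, a standard way to test
# whether a model generalizes to an unseen distribution (domain).
import numpy as np
from sklearn.linear_model import LogisticRegression

def leave_one_domain_out(domains):
    """domains: dict mapping domain name -> (X, y). Returns held-out accuracy per domain."""
    scores = {}
    for held_out in domains:
        X_tr = np.vstack([X for d, (X, _) in domains.items() if d != held_out])
        y_tr = np.concatenate([y for d, (_, y) in domains.items() if d != held_out])
        X_te, y_te = domains[held_out]
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        scores[held_out] = clf.score(X_te, y_te)
    return scores

# Example with synthetic "domains" whose feature means are shifted:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    def make_domain(shift, n=300):
        X = rng.normal(shift, 1.0, (n, 5))
        y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
        return X, y
    domains = {"photo": make_domain(0.0), "sketch": make_domain(1.0), "cartoon": make_domain(2.0)}
    print(leave_one_domain_out(domains))
```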
We propose a new clustering algorithm, Extended Affinity Propagation, based on pairwise similarities. Extended Affinity Propagation is developed by modifying Affinity Propagation such that the desirable features of Affinity Propagation, e.g., exemplars, ...
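For context, here is a minimal sketch of the standard Affinity Propagation baseline that the proposed extension modifies, run on a precomputed pairwise-similarity matrix via scikit-learn; it shows the kind of input the algorithm consumes, not the Extended Affinity Propagation method itself.

```python
# Hedged sketch: standard Affinity Propagation on precomputed pairwise
# similarities (negative squared Euclidean distances), shown only as the
# baseline that Extended Affinity Propagation builds on.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Two loose blobs of 2-D points.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Pairwise similarities: s(i, j) = -||x_i - x_j||^2 (the usual choice).
diff = X[:, None, :] - X[None, :, :]
S = -np.sum(diff ** 2, axis=-1)

ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(S)

print("number of clusters:", len(ap.cluster_centers_indices_))
print("exemplar indices:", ap.cluster_centers_indices_)
```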
Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless ...
In this paper, we cast fair machine learning as invariant machine learning. We first formulate a version of individual fairness that enforces invariance on certain sensitive sets. We then design a transport-based regularizer that enforces this version of individual fairness ...
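As a rough illustration of the kind of invariance penalty this abstract describes (a simplified stand-in, not the paper's transport-based regularizer), the sketch below penalizes changes in a model's output when a sensitive feature of the input is flipped; the column index `sensitive_idx`, the flipping rule, and the toy linear scorer are hypothetical.

```python
# Hedged sketch: an invariance-style fairness penalty. For each input we
# build a counterpart in its "sensitive set" by flipping one sensitive
# feature, then penalize the change in the model's prediction.
import numpy as np

def invariance_penalty(predict, X, sensitive_idx):
    """Mean absolute change in prediction when the sensitive feature flips."""
    X_flipped = X.copy()
    X_flipped[:, sensitive_idx] = 1.0 - X_flipped[:, sensitive_idx]  # binary flip
    return float(np.mean(np.abs(predict(X) - predict(X_flipped))))

def fair_objective(predict, loss, X, y, sensitive_idx, lam=1.0):
    """Task loss plus the invariance penalty, weighted by lam."""
    return loss(predict(X), y) + lam * invariance_penalty(predict, X, sensitive_idx)

# Example usage with a toy linear scorer that (deliberately) uses the
# sensitive column, so the penalty is visibly nonzero:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    X[:, 0] = (X[:, 0] > 0.5).astype(float)   # column 0 plays the sensitive role
    y = (X[:, 1] > 0.5).astype(float)
    w = np.array([2.0, 1.0, 0.0, 0.0, 0.0])
    predict = lambda Z: Z @ w
    mse = lambda p, t: float(np.mean((p - t) ** 2))
    print("penalty:", invariance_penalty(predict, X, 0))
    print("objective:", fair_objective(predict, mse, X, y, 0, lam=0.5))
```

Driving this penalty to zero makes the prediction invariant on the sensitive set, which is the sense in which fair learning is cast as invariant learning in the abstract.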