
Detection of Racial Bias from Physiological Responses

Published by: Fateme Nikseresht
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Despite the evolution of norms and regulations to mitigate the harm from biases, harmful discrimination linked to an individual's unconscious biases persists. Our goal is to better understand and detect the physiological and behavioral indicators of implicit biases. This paper investigates whether we can reliably detect racial bias from physiological responses, including heart rate, skin conductance, skin temperature, and micro-body movements. We analyzed data from 46 subjects whose physiological data were collected with an Empatica E4 wristband while they took an Implicit Association Test (IAT). Our machine learning and statistical analysis show that implicit bias can be predicted from physiological signals with 76.1% accuracy. Our results also show that the electrodermal activity (EDA) signal associated with skin response has the strongest correlation with racial bias, and that there are significant differences between the values of EDA features for biased and unbiased participants.
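The pipeline the abstract describes — summary features extracted from wristband signals, then fed to a classifier — can be sketched in miniature. This is an illustrative sketch only, not the authors' code: the feature set, the toy trace, and the peak heuristic are all invented for the example.

```python
# Hypothetical sketch: summary features from an EDA (electrodermal
# activity) trace, of the kind one might compute before training a
# classifier. The trace values and feature names are invented.

def eda_features(signal):
    """Return basic summary features of an EDA trace."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    # Count local maxima as a rough proxy for skin-conductance responses.
    peaks = sum(
        1
        for i in range(1, n - 1)
        if signal[i - 1] < signal[i] > signal[i + 1]
    )
    return {"mean": mean, "std": var ** 0.5, "peaks": peaks}

# Toy trace with two clear local maxima.
trace = [0.1, 0.3, 0.2, 0.4, 0.6, 0.5, 0.5]
feats = eda_features(trace)
```

Per-subject feature vectors like `feats` would then be compared between biased and unbiased groups, or stacked into a matrix for a standard classifier.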


Read also

In current hate speech datasets, there exists a high correlation between annotators' perceptions of toxicity and signals of African American English (AAE). This bias in annotated training data, and the tendency of machine learning models to amplify it, cause AAE text to often be mislabeled as abusive/offensive/hate speech with a high false positive rate by current hate speech classifiers. In this paper, we use adversarial training to mitigate this bias, introducing a hate speech classifier that learns to detect toxic sentences while demoting confounds corresponding to AAE texts. Experimental results on a hate speech dataset and an AAE dataset suggest that our method is able to substantially reduce the false positive rate for AAE text while only minimally affecting the performance of hate speech classification.
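The "demoting confounds" idea above is commonly implemented with a gradient-reversal adversary: a shared encoder is trained to help a toxicity head while an adversarial head tries to predict the confound from the same representation, and the encoder receives that adversary's gradient with its sign flipped. The toy NumPy sketch below illustrates only this mechanism; the model, dimensions, and synthetic labels are invented and are not the paper's architecture or data.

```python
import numpy as np

# Toy gradient-reversal sketch (NOT the paper's model): feature 3 is a
# synthetic confound (a stand-in for dialect markers), feature 0 drives
# the actual "toxic" label.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
dialect = (X[:, 3] > 0).astype(float)
toxic = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

d_hidden = 3
W = rng.normal(scale=0.1, size=(4, d_hidden))  # shared linear encoder
w_task = np.zeros(d_hidden)                    # toxicity head
w_adv = np.zeros(d_hidden)                     # confound head (adversary)
lr, lam = 0.1, 0.5

for _ in range(300):
    H = X @ W
    p_task = sigmoid(H @ w_task)
    p_adv = sigmoid(H @ w_adv)
    # Logistic-loss gradients for each head and for the encoder.
    g_task = H.T @ (p_task - toxic) / n
    g_adv = H.T @ (p_adv - dialect) / n
    gW_task = X.T @ np.outer(p_task - toxic, w_task) / n
    gW_adv = X.T @ np.outer(p_adv - dialect, w_adv) / n
    w_task -= lr * g_task
    w_adv -= lr * g_adv                   # adversary improves on the confound,
    W -= lr * (gW_task - lam * gW_adv)    # while the encoder, via the reversed
                                          # gradient, unlearns it

task_loss = bce(sigmoid((X @ W) @ w_task), toxic)
```

The key line is the encoder update: it descends the task gradient but ascends the adversary's, so the shared representation stays useful for toxicity while becoming less predictive of the confound.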
To support the 2019 U.S. Supreme Court case Flowers v. Mississippi, APM Reports collated historical court records to assess whether the State exhibited a racial bias in striking potential jurors. This analysis used backward stepwise logistic regression to conclude that race was a significant factor; however, this method for selecting relevant features is only a heuristic, and additionally cannot consider interactions between features. We apply Optimal Feature Selection to identify the globally-optimal subset of features and affirm that there is significant evidence of racial bias in the strike decisions. We also use Optimal Classification Trees to segment the juror population into subgroups with similar characteristics and probability of being struck, and find that three of these subgroups exhibit significant racial disparity in strike rate, pinpointing specific areas of bias in the dataset.
Current computer graphics research practices contain racial biases that have resulted in investigations into skin and hair that focus on the hegemonic visual features of Europeans and East Asians. To broaden our research horizons to encompass all of humanity, we propose a variety of improvements to quantitative measures and qualitative practices, and pose novel, open research problems.
Community detection is a key task to further understand the function and the structure of complex networks. Therefore, a strategy used to assess this task must be able to avoid biased and incorrect results that might invalidate further analyses or applications that rely on such communities. Two widely used strategies to assess this task are generally known as structural and functional. The structural strategy basically consists in detecting and assessing such communities by using multiple methods and structural metrics. On the other hand, the functional strategy might be used when ground truth data are available to assess the detected communities. However, the evaluation of communities based on such strategies is usually done in experimental configurations that are largely susceptible to biases, a situation that is inherent to algorithms, metrics and network data used in this task. Furthermore, such strategies are not systematically combined in a way that allows for the identification and mitigation of bias in the algorithms, metrics or network data to converge into more consistent results. In this context, the main contribution of this article is an approach that supports a robust quality evaluation when detecting communities in real-world networks. In our approach, we measure the quality of a community by applying the structural and functional strategies, and the combination of both, to obtain different pieces of evidence. Then, we consider the divergences and the consensus among the pieces of evidence to identify and overcome possible sources of bias in community detection algorithms, evaluation metrics, and network data. Experiments conducted with several real and synthetic networks provided results that show the effectiveness of our approach to obtain more consistent conclusions about the quality of the detected communities.
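A concrete instance of the structural strategy mentioned above is scoring a candidate partition with a structural metric such as Newman–Girvan modularity. The minimal sketch below computes it for a toy graph (two triangles joined by a bridge edge); the graph and partition are invented for illustration and do not come from the paper.

```python
def modularity(edges, communities):
    """Newman-Girvan modularity of a partition of an undirected graph:
    fraction of intra-community edges minus the fraction expected if
    edges were placed at random with the same degree sequence."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    comm = {node: c for c, nodes in enumerate(communities) for node in nodes}
    q = sum(1 for u, v in edges if comm[u] == comm[v]) / m
    for nodes in communities:
        d = sum(deg[node] for node in nodes)
        q -= (d / (2 * m)) ** 2
    return q

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])  # the "natural" partition
```

Comparing such a structural score against a functional (ground-truth) score for the same partition is exactly the kind of evidence combination the article's approach builds on.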
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper "Critiquing the Reasons for Making Artificial Moral Agents" critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: it is inevitable that they will be developed; the prevention of harm; the necessity for public trust; the prevention of immoral use; such machines are better moral reasoners than humans; and building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
