Computer-mediated conversations (e.g., videoconferencing) are now a mainstream medium of communication. How would credibility be affected if speakers could change their apparent race on the fly in these environments? We propose an approach using Deepfakes and a supporting GAN architecture to isolate visual features and alter racial perception. We then crowd-sourced over 800 survey responses to measure how credibility was influenced by perceived race. We evaluate the effect of showing a still image of a Black person versus a still image of a White person paired with the same audio clip in each survey. We also test the effect of showing either an original video or an altered video in which the person in the original video is modified to appear more White. We measure credibility as the percentage of participants who believed the speaker was telling the truth. We find that changing the race of a person in a static image has a negligible impact on credibility. However, the same manipulation of race in a video increases credibility significantly (from 61% to 73%, p $<$ 0.05). Furthermore, a VADER sentiment analysis of the free-response survey questions reveals that more positive sentiment is used to justify the credibility of a White individual in a video.
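The significance claim above (credibility rising from 61% to 73%, p $<$ 0.05) is the kind of result a standard two-proportion z-test would produce. A minimal sketch follows; the even split of 200 responses per condition is a hypothetical assumption for illustration, since the abstract reports only the overall total of 800+ responses and the two rates.

```python
from math import erf, sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical sample sizes: ~200 responses per video condition.
z, p = two_proportion_z(0.61, 200, 0.73, 200)
```

With samples of this size, the 12-point gap is significant at the 0.05 level, consistent with the reported result; with much smaller per-condition samples it would not be, which is why the split matters.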
The increasing accessibility of data to researchers makes it possible to conduct massive amounts of statistical testing. Rather than following a carefully crafted set of scientific hypotheses with statistical analysis, researchers can now test many possible
At this moment, GAN-based image generation methods are still imperfect: their upsampling design leaves certain artifact patterns in the synthesized image. Such artifact patterns can be easily exploited (by recent methods) for
Gender bias, a systemic and unfair difference in how men and women are treated in a given domain, is widely studied across different academic fields. Yet, there are barely any studies of the phenomenon in the field of academic information systems (IS
From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have
We conduct a study of hiring bias on a simulation platform where we ask Amazon MTurk participants to make hiring decisions for a mathematically intensive task. Our findings suggest hiring biases against Black workers and less attractive workers and p