
Could you become more credible by being White? Assessing Impact of Race on Credibility with Deepfakes

Added by Kurtis Haut
Publication date: 2021
Research language: English





Computer-mediated conversation (e.g., videoconferencing) has become the new mainstream medium. How would credibility be impacted if one could change their race on the fly in these environments? We propose an approach using Deepfakes and a supporting GAN architecture to isolate visual features and alter racial perception. We then crowd-sourced over 800 survey responses to measure how credibility was influenced by changing the perceived race. We evaluate the effect of showing a still image of a Black person versus a still image of a White person using the same audio clip for each survey. We also test the effect of showing either an original video or an altered video in which the appearance of the person in the original video is modified to appear more White. We measure credibility as the percentage of participants who believed the speaker was telling the truth. We found that changing the race of a person in a static image has negligible impact on credibility. However, the same manipulation of race in a video increases credibility significantly (61% to 73%, p < 0.05). Furthermore, a VADER sentiment analysis of the free-response survey questions reveals that more positive sentiment is used to justify the credibility of a White individual in a video.
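The headline comparison and the sentiment step described above can be sketched with standard tooling. The snippet below is an illustrative outline, not the authors' code: the counts, sample sizes, and response strings are assumptions, and it pairs a two-proportion z-test from statsmodels with NLTK's VADER analyzer.

```python
# Illustrative sketch (assumed data, not the study's): test whether the credibility
# rate differs between the original-video and altered-video conditions, then score
# free-response justifications with VADER.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from statsmodels.stats.proportion import proportions_ztest

# Assumed counts: participants who judged the speaker truthful in each condition.
believed = [61, 73]   # e.g., 61/100 for the original video, 73/100 for the altered video
n_obs = [100, 100]

# Two-proportion z-test for the difference in credibility rates.
stat, p_value = proportions_ztest(count=believed, nobs=n_obs)
print(f"z = {stat:.2f}, p = {p_value:.3f}")  # a difference is significant if p < 0.05

# VADER compound sentiment over free-response justifications (placeholder strings).
nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()
responses = ["He seemed calm and confident.", "Something felt off about the story."]
print([analyzer.polarity_scores(r)["compound"] for r in responses])
```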


Read More

Increasing accessibility of data to researchers makes it possible to conduct massive amounts of statistical testing. Rather than follow a carefully crafted set of scientific hypotheses with statistical analysis, researchers can now test many possible relations and let P-values or other statistical summaries generate hypotheses for them. The field of genetic epidemiology is an illustrative case of this paradigm shift. Driven by technological advances, testing a handful of genetic variants in relation to a health outcome has been abandoned in favor of agnostic screening of the entire genome, followed by selection of top hits, e.g., the genetic variants with the smallest association P-values. At the same time, the nearly total lack of replication of claimed associations that had been shaming the field has turned into a flow of reports whose findings replicate robustly. Researchers may have adopted better statistical practices by learning from past failures, but we suggest that the steep increase in the amount of statistical testing itself is an important factor. Regardless of whether statistical significance has been reached, an increased number of tested hypotheses leads to enrichment of the smallest P-values with genuine associations. In this study, we quantify how the expected proportion of genuine signals (EPGS) among top hits changes with an increasing number of tests. When the rate of occurrence of genuine signals does not decrease too sharply to zero as more tests are performed, the smallest P-values are increasingly more likely to represent genuine associations in studies with more tests.
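The EPGS idea can be illustrated with a small simulation. The sketch below is not from the paper: the signal rate, effect size, and top-hit cutoff are assumptions chosen only to show how the proportion of genuine signals among the smallest P-values grows as the number of tests increases.

```python
# Minimal simulation sketch (assumed parameters): estimate the proportion of genuine
# signals among the top hits as the number of tests grows, holding the signal rate fixed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def epgs_among_top_hits(n_tests, signal_rate=0.01, effect=3.0, top_k=100):
    # Each test yields a z-statistic: N(0, 1) under the null, N(effect, 1) for a genuine signal.
    genuine = rng.random(n_tests) < signal_rate
    z = rng.normal(loc=np.where(genuine, effect, 0.0), scale=1.0)
    pvals = 2 * stats.norm.sf(np.abs(z))
    top = np.argsort(pvals)[:top_k]    # the top hits: smallest P-values
    return genuine[top].mean()         # empirical proportion of genuine signals among them

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} tests: EPGS among top 100 = {epgs_among_top_hits(n):.2f}")
```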
GAN-based image generation methods are still imperfect: their upsampling designs leave characteristic artifact patterns in the synthesized image. Such artifact patterns can be easily exploited by recent methods to distinguish real from GAN-synthesized images. However, the existing detection methods put much emphasis on these artifact patterns, which can become futile if the artifacts are reduced. Towards reducing the artifacts in synthesized images, in this paper we devise a simple yet powerful approach termed FakePolisher that performs shallow reconstruction of fake images through a learned linear dictionary, intending to effectively and efficiently reduce the artifacts introduced during image synthesis. A comprehensive evaluation on 3 state-of-the-art DeepFake detection methods and fake images generated by 16 popular GAN-based fake image generation techniques demonstrates the effectiveness of our technique. Overall, by reducing artifact patterns, our technique significantly reduces the accuracy of the 3 state-of-the-art fake image detection methods, i.e., by 47% on average and up to 93% in the worst case.
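The "shallow reconstruction through a learned linear dictionary" step can be pictured with a simple stand-in. The sketch below is not the FakePolisher implementation: it substitutes PCA for the learned dictionary and uses random arrays in place of real image data, only to show how projecting a fake image onto a basis learned from real images discards components outside that subspace.

```python
# Illustrative stand-in (not FakePolisher): shallow reconstruction of a synthesized
# image through a linear basis learned from real images.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder data: flattened 64x64 "real" images and one "fake" image to polish.
real_images = rng.random((500, 64 * 64))
fake_image = rng.random((1, 64 * 64))

# Learn a low-dimensional linear basis from the real images.
dictionary = PCA(n_components=128).fit(real_images)

# Shallow reconstruction: project the fake image onto the basis and back.
# Components outside the learned subspace (which would include upsampling
# artifacts in the paper's setting) are dropped by the projection.
codes = dictionary.transform(fake_image)
polished = dictionary.inverse_transform(codes)
print(polished.shape)  # (1, 4096)
```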
Gender bias, a systemic and unfair difference in how men and women are treated in a given domain, is widely studied across different academic fields. Yet, there are barely any studies of the phenomenon in the field of academic information systems (IS), which is surprising especially in light of the proliferation of such studies in the Science, Technology, Engineering and Mathematics (STEM) disciplines. To assess potential gender bias in the IS field, this paper outlines a study to estimate the scholarly citations that female IS academics accumulate vis-à-vis their male colleagues. Drawing on a scientometric study of the 7,260 papers published in the most prestigious IS journals (known as the AIS Basket of Eight), our analysis aims to unveil potential bias in the accumulation of citations between genders in the field. We use panel regression to estimate the gendered citation accumulation in the field. In doing so, we propose to contribute knowledge on a core dimension of gender bias in academia, which is, so far, almost completely unexplored in the IS field.
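For readers unfamiliar with the method named above, a panel regression on paper-year citation counts can be sketched as follows. This is a hypothetical outline, not the study's analysis: the column names and synthetic data are assumptions, year fixed effects enter as dummies, and standard errors are clustered by paper.

```python
# Hypothetical sketch of a panel-style regression of yearly citations on author gender.
# The data here are synthetic placeholders, not the AIS Basket of Eight corpus.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_papers, n_years = 200, 5

df = pd.DataFrame({
    "paper_id": np.repeat(np.arange(n_papers), n_years),
    "year": np.tile(np.arange(2015, 2015 + n_years), n_papers),
    "female_author": np.repeat(rng.integers(0, 2, n_papers), n_years),
    "citations": rng.poisson(5, n_papers * n_years),
})

# OLS with year fixed effects; the female_author coefficient estimates the gender
# gap in yearly citation accumulation under these illustrative assumptions.
model = smf.ols("citations ~ female_author + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["paper_id"]}  # cluster SEs by paper
)
print(model.summary().tables[1])
```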
Timnit Gebru (2019)
From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have serious consequences in people's lives. However, this rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial face recognition systems have much higher error rates for dark-skinned women while having minimal errors on light-skinned men. A 2016 ProPublica investigation uncovered that machine-learning-based tools that assess crime recidivism rates in the US are biased against African Americans. Other studies show that natural language processing tools trained on newspapers exhibit societal biases (e.g., completing the analogy "Man is to computer programmer as woman is to X" with "homemaker"). At the same time, books such as Weapons of Math Destruction and Automating Inequality detail how people in lower socioeconomic classes in the US are subjected to more automated decision-making tools than those in the upper class. Thus, these tools are most often used on the people towards whom they exhibit the most bias. While many technical solutions have been proposed to alleviate bias in machine learning systems, we have to take a holistic and multifaceted approach. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
We conduct a study of hiring bias on a simulation platform where we ask Amazon MTurk participants to make hiring decisions for a mathematically intensive task. Our findings suggest hiring biases against Black workers and less attractive workers, and preferences towards Asian workers, female workers, and more attractive workers. We also show that certain UI designs, including provision of candidates' information at the individual level and reducing the number of choices, can significantly reduce discrimination. However, provision of candidates' information at the subgroup level can increase discrimination. The results have practical implications for designing better online freelance marketplaces.
