
Robustness Disparities in Commercial Face Detection

Added by Samuel Dooley
Publication date: 2021
Language: English





Facial detection and analysis systems have been deployed by large companies and critiqued by scholars and activists for the past decade. Critiques that focus on system performance analyze disparities in the systems' output, i.e., how frequently a face is detected for different Fitzpatrick skin types or perceived genders. However, we focus on the robustness of these system outputs under noisy natural perturbations. We present a first-of-its-kind, detailed benchmark of the robustness of three such systems: Amazon Rekognition, Microsoft Azure, and Google Cloud Platform. We use both standard and recently released academic facial datasets to quantitatively analyze trends in robustness for each. Across all datasets and systems, we generally find that photos of individuals who are older, masculine presenting, of darker skin type, or photographed in dim lighting are more susceptible to errors than their counterparts with other identities.
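
As a minimal, hedged sketch of this kind of black-box robustness check (not the paper's actual protocol or perturbation suite), the snippet below dims a photo, adds Gaussian pixel noise, and compares how many faces Amazon Rekognition's DetectFaces API reports before and after the perturbation. The file name "photo.jpg", the brightness factor, and the noise level are placeholders, and only one of the three benchmarked services is shown.

# Illustrative sketch only: compare face detection on a clean photo vs. a
# dimmed + noisy version using Amazon Rekognition's DetectFaces API.
# "photo.jpg" and the perturbation parameters are placeholders.
import io

import boto3
import numpy as np
from PIL import Image, ImageEnhance

def perturb(img, brightness=0.4, noise_std=10.0):
    """Dim the image and add Gaussian pixel noise to mimic a natural perturbation."""
    dimmed = ImageEnhance.Brightness(img).enhance(brightness)
    arr = np.asarray(dimmed, dtype=np.float32)
    noisy = np.clip(arr + np.random.normal(0.0, noise_std, arr.shape), 0, 255)
    return Image.fromarray(noisy.astype(np.uint8))

def count_faces(client, img):
    """Return how many faces Rekognition detects in a PIL image."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    resp = client.detect_faces(Image={"Bytes": buf.getvalue()}, Attributes=["DEFAULT"])
    return len(resp["FaceDetails"])

rekognition = boto3.client("rekognition")  # assumes AWS credentials are configured
clean = Image.open("photo.jpg").convert("RGB")
print("faces in clean photo:    ", count_faces(rekognition, clean))
print("faces in perturbed photo:", count_faces(rekognition, perturb(clean)))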

Related research

Driven by massive amounts of data and important advances in computational resources, new deep learning systems have achieved outstanding results in a large spectrum of applications. Nevertheless, our current theoretical understanding of the mathematical foundations of deep learning lags far behind its empirical success. The field of adversarial robustness, which grew out of efforts to address the vulnerability of neural networks, has recently become one of the main sources of explanations of our deep models. In this article, we provide an in-depth review of the field of adversarial robustness in deep learning and give a self-contained introduction to its main notions. In contrast to the mainstream pessimistic perspective on adversarial robustness, however, we focus on the positive aspects that it entails. We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks, and eventually explore how the geometric study of adversarial examples can serve as a powerful tool to understand deep learning. Furthermore, we demonstrate the broad applicability of adversarial robustness, providing an overview of the main emerging applications of adversarial robustness beyond security. The goal of this article is to provide readers with a set of new perspectives for understanding deep learning, and to supply them with intuitive tools and insights on how to use adversarial robustness to improve it.
In response to the coronavirus disease 2019 (COVID-19) pandemic, governments have encouraged and ordered citizens to practice social distancing, particularly by working and studying at home. Intuitively, only a subset of people have the ability to work remotely. However, there has been little research on the disparity of mobility adaptation across different income groups in US cities during the pandemic. We worked to fill this gap by quantifying the impacts of the pandemic on human mobility by income in Greater Houston, Texas. In this paper, we determined human mobility using pseudonymized, spatially disaggregated cell phone location data. A longitudinal study across estimated income groups was conducted by measuring the total travel distance, radius of gyration, number of visited locations, and per-trip distance in April 2020 compared to data from a baseline period. An apparent disparity in mobility was found across estimated income groups. In particular, there was a strong negative correlation ($\rho = -0.90$) between a traveler's estimated income and travel distance in April. Disparities in mobility adaptability were further shown, since those in higher income brackets experienced larger percentage drops in the radius of gyration and the number of distinct visited locations than did those in lower income brackets. The findings of this study suggest a need to understand the reasons behind the mobility inflexibility among low-income populations during the pandemic. The study illuminates an equity issue which may be of interest to policy makers and researchers alike in the wake of an epidemic.
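
One of the mobility metrics above, the radius of gyration, is the root-mean-square distance of a traveler's visited locations from their center of mass. The sketch below computes it under the simplifying assumptions that coordinates are planar (x, y) points in meters and that every visit is weighted equally; the study's exact weighting and projection may differ.

# Minimal sketch of the radius-of-gyration metric (equal-weighted visits,
# planar coordinates in meters; not necessarily the study's exact definition).
import numpy as np

def radius_of_gyration(points):
    """points: array of shape (n, 2) with one row per visited location."""
    points = np.asarray(points, dtype=np.float64)
    center = points.mean(axis=0)                      # center of mass of the visits
    sq_dist = np.sum((points - center) ** 2, axis=1)  # squared distance to the center
    return float(np.sqrt(sq_dist.mean()))

# Example: a traveler whose visits cluster near home has a small radius of gyration.
visits = [[0.0, 0.0], [120.0, 40.0], [80.0, -30.0], [10.0, 5.0]]
print(radius_of_gyration(visits))
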
The current study uses a network analysis approach to explore the STEM pathways that students take through their final year of high school in Aotearoa New Zealand. By accessing individual-level microdata from New Zealand's Integrated Data Infrastructure, we are able to create a co-enrolment network comprising all STEM assessment standards taken by students in New Zealand between 2010 and 2016. We explore the structure of this co-enrolment network through the use of community detection and a novel measure of entropy. We then investigate how network structure differs across sub-populations based on students' sex, ethnicity, and the socio-economic status (SES) of the high school they attended. Results show that the structure of the STEM co-enrolment network differs across these sub-populations and also changes over time. We find that, while female students were more likely to have been enrolled in life science standards, they were less well represented in physics, calculus, and vocational (e.g., agriculture, practical technology) standards. Our results also show that the enrolment patterns of the Māori and Pacific Islands sub-populations had higher levels of entropy, an observation that may be explained by fewer enrolments in key science and mathematics standards. Through further investigation of this disparity, we find that ethnic group differences in entropy are moderated by high school SES, such that the difference in entropy between Māori and Pacific Islands students and European and Asian students is even greater. We discuss these findings in the context of the New Zealand education system and policy changes that occurred between 2010 and 2016.
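
For intuition about the entropy comparison above: the study's own, novel entropy measure on the co-enrolment network is not specified in the abstract, so the sketch below falls back to a plain Shannon entropy over a made-up distribution of enrolments across subject communities. Higher values correspond to enrolments spread across many communities rather than concentrated in a few key science and mathematics standards.

# For intuition only: Shannon entropy (in bits) of an enrolment distribution
# over subject communities. The paper's novel network-based measure is not
# reproduced here, and the community labels/counts below are made up.
import numpy as np

def shannon_entropy(counts):
    """counts: dict mapping community label -> number of enrolments."""
    p = np.array(list(counts.values()), dtype=np.float64)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(shannon_entropy({"life science": 40, "physics": 25, "calculus": 20, "vocational": 15}))
print(shannon_entropy({"life science": 80, "physics": 5, "calculus": 5, "vocational": 10}))
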
Haoliang Li, 2020
Deep neural networks (DNNs) have shown great success in many computer vision applications. However, they are also known to be susceptible to backdoor attacks. When conducting backdoor attacks, most existing approaches assume that the targeted DNN is always available and that an attacker can always inject a specific pattern into the training data to further fine-tune the DNN model. In practice, however, such an attack may not be feasible, as the DNN model is encrypted and available only within a secure enclave. In this paper, we propose a novel black-box backdoor attack technique on face recognition systems, which can be conducted without knowledge of the targeted DNN model. Specifically, we propose a backdoor attack with a novel color stripe pattern trigger, which can be generated by modulating an LED with a specialized waveform. We also use an evolutionary computing strategy to optimize the waveform for the backdoor attack. Our backdoor attack can be conducted under very mild conditions: 1) the adversary cannot manipulate the input in an unnatural way (e.g., by injecting adversarial noise); 2) the adversary cannot access the training database; and 3) the adversary has no knowledge of the training model or the training set used by the victim party. We show that the backdoor trigger can be quite effective, with an attack success rate of up to $88\%$ in our simulation study and up to $40\%$ in our physical-domain study, considering the task of face recognition and verification with at most three attempts during authentication. Finally, we evaluate several state-of-the-art potential defenses against backdoor attacks and find that our attack can still be effective. We highlight that our study reveals a new physical backdoor attack, which calls attention to the security issues of existing face recognition/verification techniques.
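
To make the trigger concrete, the sketch below digitally overlays a periodic color-stripe pattern on a face image. This is only an illustration: in the paper the stripes are produced physically by modulating an LED and the waveform is tuned with evolutionary search, whereas the stripe period, per-channel phases, strength, and the file name "face.jpg" here are arbitrary placeholders.

# Hedged illustration only: overlay a periodic color-stripe pattern on an
# image to mimic the kind of trigger described above. All parameters and the
# input file name are placeholders, not the paper's optimized waveform.
import numpy as np
from PIL import Image

def add_color_stripes(img, period_px=24, strength=30.0):
    arr = np.asarray(img.convert("RGB"), dtype=np.float32)
    rows = np.arange(arr.shape[0])[:, None, None]   # stripes vary along image rows
    phases = np.array([0.0, 2.1, 4.2])              # different phase per RGB channel
    stripes = strength * np.sin(2 * np.pi * rows / period_px + phases)
    return Image.fromarray(np.clip(arr + stripes, 0, 255).astype(np.uint8))

poisoned = add_color_stripes(Image.open("face.jpg"))  # "face.jpg" is a placeholder path
poisoned.save("face_triggered.jpg")
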
Our goal is to understand why robustness drops after conducting adversarial training for too long. Although this phenomenon is commonly explained as overfitting, our analysis suggests that its primary cause is perturbation underfitting. We observe that after training for too long, FGSM-generated perturbations deteriorate into random noise. Intuitively, since no parameter updates are made to strengthen the perturbation generator, once this process collapses it can become trapped in such local optima. Moreover, making this process more sophisticated largely avoids the robustness drop, which supports the view that the phenomenon is caused by underfitting rather than overfitting. In light of our analyses, we propose APART, an adaptive adversarial training framework that parameterizes perturbation generation and progressively strengthens it. Shielding perturbations from underfitting unleashes the potential of our framework. In our experiments, APART provides comparable or even better robustness than PGD-10, with only about 1/4 of its computational cost.
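
For reference, the baseline perturbation step whose collapse is described above is one-step FGSM. A minimal PyTorch sketch is shown below; APART's parameterized, progressively strengthened generator is not reproduced here, and the model, inputs, labels, and epsilon are assumed to be supplied by the caller.

# Background sketch: the standard one-step FGSM perturbation discussed above.
# This is only the baseline whose perturbations can degrade into noise during
# long training; it is not APART's adaptive generator.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Move each pixel by eps in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0, 1).detach()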


