
Detecting Racial Bias in Jury Selection

Published by Jack Dunn
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





To support the 2019 U.S. Supreme Court case Flowers v. Mississippi, APM Reports collated historical court records to assess whether the State exhibited a racial bias in striking potential jurors. This analysis used backward stepwise logistic regression to conclude that race was a significant factor; however, this method for selecting relevant features is only a heuristic and cannot consider interactions between features. We apply Optimal Feature Selection to identify the globally-optimal subset of features and affirm that there is significant evidence of racial bias in the strike decisions. We also use Optimal Classification Trees to segment the juror population into subgroups with similar characteristics and probability of being struck, and find that three of these subgroups exhibit significant racial disparity in strike rate, pinpointing specific areas of bias in the dataset.
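Because the contrast between a stepwise heuristic and a globally optimal subset search is the core methodological point, a minimal sketch is given below. It uses a plain scikit-learn logistic regression on synthetic data; the juror feature names (e.g. "is_black", "prior_conviction") and the labels are hypothetical, and this illustrates exhaustive subset search in general rather than the Optimal Feature Selection implementation used in the paper.

```python
# Sketch: exhaustive (globally optimal) feature-subset search for a logistic
# regression that predicts whether a potential juror is struck.
# All feature names and data below are synthetic placeholders.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
feature_names = ["is_black", "knows_defendant", "prior_conviction",
                 "hardship_claim", "age_over_60"]
X = rng.integers(0, 2, size=(500, len(feature_names))).astype(float)
# Synthetic strike labels, generated only so the example runs end to end.
logits = 1.5 * X[:, 0] + 0.8 * X[:, 2] - 1.0
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

def subset_score(subset):
    """Cross-validated accuracy of a logistic regression on a feature subset."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, list(subset)], y, cv=5).mean()

# Globally optimal subset of size k: score every combination, keep the best.
k = 2
best = max(combinations(range(len(feature_names)), k), key=subset_score)
print("best subset of size", k, ":", [feature_names[i] for i in best])
```

A backward stepwise procedure would instead start from all features and greedily drop one at a time, so it can miss subsets whose features are only jointly informative and, as noted above, it does not account for interactions unless they are added as explicit terms.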




Read also

378 - Wei Du, Xintao Wu (2021)
The underlying assumption of many machine learning algorithms is that the training data and test data are drawn from the same distributions. However, this assumption is often violated in the real world due to sample selection bias between the training and test data. Previous research has focused on reweighing biased training data to match the test data and then building classification models on the reweighed training data. However, how to achieve fairness in the built classification models is under-explored. In this paper, we propose a framework for robust and fair learning under sample selection bias. Our framework adopts the reweighing estimation approach for bias correction and the minimax robust estimation approach for achieving robustness in prediction accuracy. Moreover, during the minimax optimization, fairness is achieved under the worst case, which guarantees the model's fairness on test data. We further develop two algorithms to handle sample selection bias when test data is available and when it is unavailable. We conduct experiments on two real-world datasets, and the results demonstrate the framework's effectiveness in terms of both utility and fairness metrics.
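The reweighing step described above is essentially covariate-shift correction; the sketch below estimates importance weights with a domain classifier and passes them to the downstream model. It is a minimal illustration of that single step, with made-up data, and does not include the minimax robust estimation or the fairness components that the framework adds on top.

```python
# Sketch: estimate importance weights p_test(x) / p_train(x) with a domain
# classifier, then fit the downstream classifier with those weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_train, X_test):
    """Approximate p_test(x) / p_train(x) via a probabilistic domain classifier."""
    X_dom = np.vstack([X_train, X_test])
    domain = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X_dom, domain)
    p_test = clf.predict_proba(X_train)[:, 1]
    # Density ratio up to a constant factor n_train / n_test.
    return p_test / np.clip(1.0 - p_test, 1e-6, None)

# Synthetic example: the training sample is shifted relative to the test data.
rng = np.random.default_rng(0)
X_train = rng.normal(0.5, 1.0, size=(400, 3))   # biased (selected) sample
X_test = rng.normal(0.0, 1.0, size=(400, 3))    # target distribution
y_train = (X_train[:, 0] > 0).astype(int)

weights = importance_weights(X_train, X_test)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train, sample_weight=weights)
```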
In current hate speech datasets, there exists a high correlation between annotators' perceptions of toxicity and signals of African American English (AAE). This bias in annotated training data, and the tendency of machine learning models to amplify it, cause AAE text to often be mislabeled as abusive/offensive/hate speech with a high false positive rate by current hate speech classifiers. In this paper, we use adversarial training to mitigate this bias, introducing a hate speech classifier that learns to detect toxic sentences while demoting confounds corresponding to AAE texts. Experimental results on a hate speech dataset and an AAE dataset suggest that our method is able to substantially reduce the false positive rate for AAE text while only minimally affecting the performance of hate speech classification.
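A generic form of this adversarial setup uses a gradient reversal layer between a shared text encoder and an adversary that predicts the dialect signal; a minimal PyTorch sketch follows. The layer sizes, the input dimension of 300, and the weighting term `lambd` are placeholders, not the authors' architecture.

```python
# Sketch: adversarial demotion of a confound (e.g. an AAE dialect indicator)
# via a gradient reversal layer. Dimensions and hyperparameters are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing from the adversary into the encoder.
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # text features -> hidden
toxicity_head = nn.Linear(128, 2)                        # toxic / non-toxic
dialect_adversary = nn.Linear(128, 2)                    # AAE / non-AAE

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(toxicity_head.parameters())
    + list(dialect_adversary.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y_toxic, y_dialect, lambd=1.0):
    h = encoder(x)
    loss = loss_fn(toxicity_head(h), y_toxic)
    # The adversary learns to predict dialect, but its reversed gradients push
    # the encoder to discard that information.
    loss = loss + loss_fn(dialect_adversary(GradReverse.apply(h, lambd)), y_dialect)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Stripping dialect information from the shared representation is what reduces false positives on AAE text while leaving the toxicity head largely intact.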
Current computer graphics research practices contain racial biases that have resulted in investigations into skin and hair that focus on the hegemonic visual features of Europeans and East Asians. To broaden our research horizons to encompass all of humanity, we propose a variety of improvements to quantitative measures and qualitative practices, and pose novel, open research problems.
76 - Zo Ahmed, Bertie Vidgen (2021)
Online hate is a growing concern on many social media platforms and other sites. To combat it, technology companies are increasingly identifying and sanctioning 'hateful users' rather than simply moderating hateful content. Yet most research in online hate detection to date has focused on hateful content. This paper examines how fairer and more accurate hateful user detection systems can be developed by incorporating social network information through geometric deep learning. Geometric deep learning dynamically learns information-rich network representations and can generalise to unseen nodes. This is essential for moving beyond manually engineered network features, which lack scalability and produce information-sparse network representations. This paper compares the accuracy of geometric deep learning with other techniques which either exclude network information or incorporate it through manual feature engineering (e.g., node2vec). It also evaluates the fairness of these techniques using the 'predictive equality' criterion, comparing the false positive rates on a subset of 136 African-American users with 4836 other users. Geometric deep learning produces the most accurate and fairest classifier, with an AUC score of 90.8% on the entire dataset and a false positive rate of zero among the African-American subset for the best-performing model. This highlights the benefits of more effectively incorporating social network features in automated hateful user detection. Such an approach is also easily operationalized for real-world content moderation as it has an efficient and scalable design.
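The predictive equality criterion used here amounts to comparing false positive rates across user groups; a minimal sketch of that check, with hypothetical labels, predictions, and group indicators, is below.

```python
# Sketch: predictive equality check = compare false positive rates by group.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over the truly non-hateful users only."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean() if negatives.any() else float("nan")

def predictive_equality_gap(y_true, y_pred, group):
    """Largest FPR difference between groups (e.g. AAE users vs. other users)."""
    fprs = [false_positive_rate(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)]
    return max(fprs) - min(fprs)
```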
Despite the evolution of norms and regulations to mitigate the harm from biases, harmful discrimination linked to an individual's unconscious biases persists. Our goal is to better understand and detect the physiological and behavioral indicators of implicit biases. This paper investigates whether we can reliably detect racial bias from physiological responses, including heart rate, skin conductance response, skin temperature, and micro-body movements. We analyzed data from 46 subjects whose physiological data was collected with an Empatica E4 wristband while they took an Implicit Association Test (IAT). Our machine learning and statistical analysis show that implicit bias can be predicted from physiological signals with 76.1% accuracy. Our results also show that the EDA signal associated with skin response has the strongest correlation with racial bias and that there are significant differences between the values of EDA features for biased and unbiased participants.
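As a loose illustration of this kind of analysis (not the authors' exact pipeline), the sketch below runs a cross-validated classifier over a hypothetical table of per-subject physiological features and checks the association between a single EDA feature and the bias label; all data and column positions are made up.

```python
# Sketch: predict an IAT-derived bias label from per-subject physiological
# features and test the association between one EDA feature and that label.
import numpy as np
from scipy.stats import pointbiserialr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Rows = subjects; columns = summary features (heart rate, EDA, temperature, movement).
X = rng.normal(size=(46, 4))
y = rng.integers(0, 2, size=46)  # biased vs. unbiased per the IAT

accuracy = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
r, p = pointbiserialr(y, X[:, 1])  # correlation between the EDA column and the label
print(f"cross-validated accuracy: {accuracy:.3f}, EDA correlation r={r:.2f} (p={p:.3f})")
```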
