Explainable Machine Learning for Public Policy: Use Cases, Gaps, and Research Directions


Abstract

Explainability is a crucial requirement for the effectiveness and adoption of Machine Learning (ML) models supporting decisions in high-stakes public policy areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work has not taken real-world needs into account. A majority of proposed methods are evaluated on benchmark datasets with generic explainability goals, without clear use-cases or intended end-users. As a result, the applicability and effectiveness of this large body of theoretical and methodological work in real-world applications is unclear. This paper focuses on filling this void for the domain of public policy. First, we develop a taxonomy of explainability use-cases within public policy problems; second, for each use-case, we define the end-users of explanations and the specific goals explainability has to fulfill; third, we map existing work to these use-cases, identify gaps, and propose research directions to fill those gaps in order to have a practical societal impact through ML.
