An interesting development in automatic visual recognition has been the emergence of tasks where it is not possible to assign objective labels to images, yet it is still feasible to collect annotations that reflect human judgements about them. Machine learning-based predictors for these tasks rely on supervised training that models the behavior of the annotators, i.e., what would the average person's judgement be for an image? A key open question for this type of work, especially for applications where inconsistency with human behavior can lead to ethical lapses, is how to evaluate the epistemic uncertainty of trained predictors, i.e., the uncertainty that comes from the predictor's model. We propose a Bayesian framework for evaluating black-box predictors in this regime, agnostic to the predictor's internal structure. The framework specifies how to estimate the epistemic uncertainty of the predictor with respect to human labels by approximating a conditional distribution and producing a credible interval for the predictions and their measures of performance. The framework is successfully applied to four image classification tasks that use subjective human judgements: facial beauty assessment, social attribute assignment, apparent age estimation, and ambiguous scene labeling.
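To make the evaluation setup concrete, the sketch below illustrates one simple way a credible interval can be produced for a performance measure of a black-box predictor against human labels. It is not the framework proposed in the paper; it assumes a hypothetical `credible_interval_for_agreement` helper and a Beta-Binomial model over the predictor's agreement with majority human judgements, purely for illustration.

```python
# Hypothetical sketch (not the paper's method): a Beta-Binomial credible
# interval for a black-box predictor's agreement with human labels.
# The predictor is treated as opaque; only its outputs on annotated images
# are used, mirroring the "agnostic to internal structure" setting.
import numpy as np
from scipy import stats


def credible_interval_for_agreement(predictions, human_labels,
                                    alpha_prior=1.0, beta_prior=1.0,
                                    level=0.95):
    """Posterior mean and credible interval for the probability that the
    predictor matches the (majority) human judgement, under a Beta prior."""
    predictions = np.asarray(predictions)
    human_labels = np.asarray(human_labels)
    agreements = int(np.sum(predictions == human_labels))
    n = len(human_labels)
    posterior = stats.beta(alpha_prior + agreements,
                           beta_prior + n - agreements)
    lower, upper = posterior.interval(level)
    return posterior.mean(), (lower, upper)


# Usage: evaluate an opaque predictor's outputs against majority human labels.
rng = np.random.default_rng(0)
human = rng.integers(0, 2, size=200)                          # human judgements
preds = np.where(rng.random(200) < 0.85, human, 1 - human)    # noisy predictor
mean, (lo, hi) = credible_interval_for_agreement(preds, human)
print(f"posterior mean agreement {mean:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```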