Artificial intelligence is applied across a range of sectors and is relied upon for decisions requiring a high level of trust. For regression methods, trust is increased if they approximate the true input-output relationship and perform accurately outside the bounds of the training data. However, performance beyond the training data is often poor, especially when data is sparse. This is because conventional Minkowski-r error measures only model the conditional average, which in many scenarios is a good approximation of the ground truth, when the data set satisfies restrictive assumptions that many real data sets violate. Several existing methods address this by using prior knowledge to approximate the ground truth, but prior knowledge is not always available. This paper therefore investigates how error measures affect the ability of a regression method to model the ground truth in such scenarios. Current error measures are shown to introduce an unhelpful bias, and a new error measure is derived that does not exhibit this behaviour. The new measure is tested on 36 representative data sets with different characteristics, and is shown to be more consistent in determining the ground truth and in giving improved predictions in regions beyond the range of the training data.
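For reference, a standard formulation of the conventional Minkowski-r error over $N$ training pairs is the following sketch (the symbols $y_i$, $\hat{y}_i$, and $r$ are generic notation, not taken from the paper; $r = 2$ recovers the familiar sum-of-squares error and $r = 1$ the absolute error):

$$E_r = \frac{1}{r} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|^{r}$$

Minimising $E_2$ in the large-sample limit yields the conditional average of the target given the input, which is why the argument above hinges on the assumptions under which that minimiser is attainable.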