There is emerging interest in performing regression between distributions. In contrast to prediction on single instances, these machine learning methods can be useful for population-based studies or for problems that are inherently statistical in nature. The recently proposed distribution regression network (DRN) has shown superior performance on the distribution-to-distribution regression task compared to conventional neural networks. However, Kou et al. (2018) and other works on distribution regression lack a comprehensive comparative study of both the theoretical basis and the generalization ability of these methods. We derive mathematical properties of DRN and qualitatively compare it to conventional neural networks. We also perform comprehensive experiments to study the generalizability of distribution regression models by evaluating their robustness to limited training data, data sampling noise, and task difficulty. DRN consistently outperforms conventional neural networks, requiring less training data and remaining robust under sampling noise. Furthermore, the theoretical properties of DRN help explain why it achieves better generalization performance than conventional neural networks.
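To make the task setup concrete, the following is a minimal sketch (not the authors' code) of distribution-to-distribution regression with a conventional neural network, as referenced in the comparison above: each distribution is discretized into a fixed number of bins (the bin count and hidden size here are illustrative assumptions), and a fully connected network maps the input probability vector to an output probability vector via a softmax.

import torch
import torch.nn as nn

N_BINS = 100  # assumed discretization of each distribution's support

class MLPDistributionRegressor(nn.Module):
    """Conventional baseline: maps a discretized input PDF to an output PDF."""
    def __init__(self, n_bins=N_BINS, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_bins),
        )

    def forward(self, x):
        # Softmax keeps the output a valid probability vector
        # (non-negative entries summing to one).
        return torch.softmax(self.net(x), dim=-1)

# Toy usage: map a batch of input histograms to predicted output histograms.
model = MLPDistributionRegressor()
x = torch.rand(32, N_BINS)
x = x / x.sum(dim=-1, keepdim=True)  # normalize rows to probability vectors
y_pred = model(x)                    # shape (32, N_BINS), each row sums to 1

DRN differs from this baseline in that each node encodes an entire distribution rather than a single real value, which is what allows it to represent distributional inputs and outputs more compactly.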