Denoising Noisy Neural Networks: A Bayesian Approach with Compensation


Abstract

Noisy neural networks (NoisyNNs) refer to the inference and training of NNs in the presence of noise. Noise is inherent in most communication and storage systems; hence, NoisyNNs emerge in many new applications, including federated edge learning, where wireless devices collaboratively train an NN over a noisy wireless channel, or when NNs are implemented/stored in an analog storage medium. This paper studies a fundamental problem of NoisyNNs: how to estimate the uncontaminated NN weights from their noisy observations or manifestations. Whereas all prior works relied on maximum likelihood (ML) estimation to maximize the likelihood function of the estimated NN weights, this paper demonstrates that the ML estimator is in general suboptimal. To overcome the suboptimality of the conventional ML estimator, we put forth an $\text{MMSE}_{pb}$ estimator to minimize a compensated mean squared error (MSE) with a population compensator and a bias compensator. Our approach works well for NoisyNNs arising in both 1) noisy inference, where noise is introduced only in the inference phase on the already-trained NN weights; and 2) noisy training, where noise is introduced over the course of training. Extensive experiments on the CIFAR-10 and SST-2 datasets with different NN architectures verify the significant performance gains of the $\text{MMSE}_{pb}$ estimator over the ML estimator when used to denoise the NoisyNN. For noisy inference, the average gains are up to $156\%$ for a noisy ResNet34 model and $14.7\%$ for a noisy BERT model; for noisy training, the average gains are up to $18.1$ dB for a noisy ResNet18 model.
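To make the contrast between the two estimators concrete, below is a minimal Python sketch under simplifying assumptions: a layer's weights are modeled as i.i.d. with an empirical mean and variance, and are observed through additive Gaussian noise of known variance. Under this model, the ML estimate of a weight is simply its noisy observation, while a linear MMSE (shrinkage) estimator pulls the observation toward the empirical weight mean. This illustrates only the general Bayesian idea; it is not the paper's $\text{MMSE}_{pb}$ estimator, which further employs a population compensator and a bias compensator. All function names and the toy weight distribution here are illustrative assumptions.

    import numpy as np

    def ml_estimate(noisy_w):
        """ML estimate under additive Gaussian noise: the noisy observation itself."""
        return noisy_w

    def lmmse_denoise(noisy_w, noise_var):
        """Per-layer linear MMSE (shrinkage) estimate.

        Models the clean weights as i.i.d. with empirical mean mu and
        variance s2, observed through additive Gaussian noise of variance
        noise_var, and shrinks the noisy weights toward mu.
        """
        mu = noisy_w.mean()
        # The empirical variance of the noisy weights minus the noise
        # variance estimates the clean-weight variance (clipped at zero).
        s2 = max(noisy_w.var() - noise_var, 0.0)
        gain = s2 / (s2 + noise_var)  # shrinkage factor in [0, 1)
        return mu + gain * (noisy_w - mu)

    # Toy demo: denoising one layer's weights.
    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 0.1, size=10_000)   # "trained" weights (assumption)
    noise_var = 0.01
    noisy = clean + rng.normal(0.0, np.sqrt(noise_var), size=clean.shape)

    mse_ml = np.mean((ml_estimate(noisy) - clean) ** 2)
    mse_mmse = np.mean((lmmse_denoise(noisy, noise_var) - clean) ** 2)
    print(f"ML MSE:    {mse_ml:.5f}")
    print(f"LMMSE MSE: {mse_mmse:.5f}")  # smaller: shrinkage beats ML here

In this toy setting, where the clean-weight and noise variances are comparable, the shrinkage estimate attains roughly half the MSE of the ML estimate, mirroring in miniature the kind of gain over ML that the paper reports for its compensated estimator.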
