Detecting Operational Adversarial Examples for Reliable Deep Learning


Abstract

The utilisation of Deep Learning (DL) raises new challenges regarding its dependability in critical applications. Sound verification and validation methods are needed to assure the safe and reliable use of DL. However, state-of-the-art debug testing methods for DL that aim at detecting adversarial examples (AEs) ignore the operational profile, which statistically depicts the software's future operational use. This can severely limit their effectiveness in improving the software's delivered reliability, as the testing budget is likely to be wasted on detecting AEs that are unrealistic or rarely encountered in real-life operation. In this paper, we first present the novel notion of operational AEs, i.e., AEs that have a relatively high chance of being seen in future operation. We then provide an initial design of a new DL testing method to efficiently detect operational AEs, along with insights on our prospective research plan.
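
As a minimal sketch of the notion of operational AEs (not the paper's actual method), one could approximate the operational profile with a density model fitted on representative operational data, then keep only those candidate AEs whose estimated density exceeds a threshold. The function and parameter names below (filter_operational_aes, density_threshold) and the choice of a Gaussian kernel density estimate are illustrative assumptions.

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def filter_operational_aes(adv_examples, operational_data, density_threshold):
        """Keep only AEs that are plausible under the operational profile.

        Hypothetical illustration: the operational profile is approximated
        by a kernel density estimate fitted on operational inputs; an AE
        counts as 'operational' if its estimated log-density exceeds a
        chosen threshold.
        """
        # Fit a KDE on representative operational inputs (flattened features).
        kde = KernelDensity(kernel="gaussian", bandwidth=0.5)
        kde.fit(operational_data.reshape(len(operational_data), -1))

        # Score each candidate AE under the estimated operational profile.
        log_density = kde.score_samples(adv_examples.reshape(len(adv_examples), -1))

        # Retain AEs likely enough to be encountered in real operation.
        return adv_examples[log_density > density_threshold]

Any density model of the operational profile could stand in for the KDE here; the essential point is that candidate AEs are filtered or ranked by how likely they are to occur in operation, so the testing budget concentrates on realistic inputs.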
