Deep neural networks are widely used for nonlinear function approximation with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This article surveys methods that have emerged recently for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. We discuss fundamental differences and connections between existing algorithms. In addition, we provide pedagogical implementations of existing methods and compare them on a set of benchmark problems.
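To make the verification setting concrete, the sketch below (not taken from the article; the layer sizes, weights, and the output threshold are invented toy values) shows the simplest member of the reachability family the survey covers: propagating an interval input set through an affine/ReLU network and checking an output property against the resulting bounds.

import numpy as np

def interval_bound_propagation(weights, biases, lower, upper):
    # Soundly propagate the input box [lower, upper] through an
    # affine/ReLU network: every true output for an input in the box
    # is guaranteed to lie inside the returned bounds.
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Pick the worst-case corner of the box for each coordinate.
        lower, upper = (W_pos @ lower + W_neg @ upper + b,
                        W_pos @ upper + W_neg @ lower + b)
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Toy 2-2-1 network and a toy property: "the output stays below 2.0
# for every input in the box [-0.1, 0.1]^2".
weights = [np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([[1.0, 1.0]])]
biases = [np.zeros(2), np.zeros(1)]
lo, hi = interval_bound_propagation(weights, biases,
                                    np.array([-0.1, -0.1]),
                                    np.array([0.1, 0.1]))
print("output bounds:", lo, hi)  # here: [0.] and [0.3]
assert hi[0] < 2.0               # property holds on the whole box

Because interval propagation over-approximates the reachable set, a bound that crossed the threshold would not by itself prove a violation exists; reducing this conservativeness is precisely where the optimization- and search-based methods discussed in the article come in.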