
Training Quantized Neural Networks to Global Optimality via Semidefinite Programming

Posted by Burak Bartan
Publication date: 2021
Paper language: English





Neural networks (NNs) have been extremely successful across many tasks in machine learning. Quantization of NN weights has become an important topic due to its impact on their energy efficiency, inference time and deployment on hardware. Although post-training quantization is well-studied, training optimal quantized NNs involves combinatorial non-convex optimization problems which appear intractable. In this work, we introduce a convex optimization strategy to train quantized NNs with polynomial activations. Our method leverages hidden convexity in two-layer neural networks from the recent literature, semidefinite lifting, and Grothendieck's identity. Surprisingly, we show that certain quantized NN problems can be solved to global optimality in polynomial time in all relevant parameters via semidefinite relaxations. We present numerical examples to illustrate the effectiveness of our method.
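The two ingredients named above can be illustrated on a toy problem. The sketch below (an illustration under assumed data, not the paper's formulation; $Q$ and $n$ are arbitrary stand-ins) lifts a $\{-1,+1\}$-constrained quadratic maximization to a semidefinite program and then quantizes the solution by Gaussian sign rounding, the step whose analysis rests on Grothendieck's identity $\mathbb{E}[\mathrm{sign}(u)\,\mathrm{sign}(v)] = \frac{2}{\pi}\arcsin(\mathbb{E}[uv])$ for jointly Gaussian $u, v$:

```python
# Minimal sketch: SDP lifting of max x^T Q x over x in {-1,+1}^n,
# followed by Gaussian sign rounding. Q and n are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 8
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                     # symmetric objective matrix

# Lift x x^T to a PSD matrix X with unit diagonal (semidefinite lifting).
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()

# Factor X = V V^T and quantize by taking signs of a Gaussian projection;
# Grothendieck's identity relates the rounded value to the SDP value.
w, U = np.linalg.eigh(X.value)
V = U * np.sqrt(np.clip(w, 0, None))  # rows of V have Gram matrix X
x = np.sign(V @ rng.standard_normal(n))
print("SDP upper bound:", prob.value)
print("rounded {-1,+1} value:", x @ Q @ x)
```

The same lift-and-round pattern is what makes discrete (quantized) solutions recoverable from a continuous SDP solution with a provable quality guarantee.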




Read also

We show a new way to round vector solutions of semidefinite programming (SDP) hierarchies into integral solutions, based on a connection between these hierarchies and the spectrum of the input graph. We demonstrate the utility of our method by providing a new SDP-hierarchy based algorithm for constraint satisfaction problems with 2-variable constraints (2-CSPs). More concretely, we show for every 2-CSP instance $\mathcal{I}$ a rounding algorithm for $r$ rounds of the Lasserre SDP hierarchy for $\mathcal{I}$ that obtains an integral solution that is at most $\epsilon$ worse than the relaxation's value (normalized to lie in $[0,1]$), as long as $r > k \cdot \mathrm{rank}_{\geq \theta}(\mathcal{I}) / \mathrm{poly}(\epsilon)$, where $k$ is the alphabet size of $\mathcal{I}$, $\theta = \mathrm{poly}(\epsilon/k)$, and $\mathrm{rank}_{\geq \theta}(\mathcal{I})$ denotes the number of eigenvalues larger than $\theta$ in the normalized adjacency matrix of the constraint graph of $\mathcal{I}$. In the case that $\mathcal{I}$ is a unique games instance, the threshold $\theta$ is only a polynomial in $\epsilon$, and is independent of the alphabet size. Also in this case, we can give a non-trivial bound on the number of rounds for every instance. In particular our result yields an SDP-hierarchy based algorithm that matches the performance of the recent subexponential algorithm of Arora, Barak and Steurer (FOCS 2010) in the worst case, but runs faster on a natural family of instances, thus further restricting the set of possible hard instances for Khot's Unique Games Conjecture. Our algorithm actually requires less than the $n^{O(r)}$ constraints specified by the $r$-th level of the Lasserre hierarchy, and in some cases $r$ rounds of our program can be evaluated in time $2^{O(r)} \mathrm{poly}(n)$.
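The round bound above is driven by a purely spectral quantity. A minimal sketch of computing $\mathrm{rank}_{\geq \theta}(\mathcal{I})$ for a toy constraint graph (the graph, its density, and the threshold are illustrative assumptions, not the paper's data):

```python
# Minimal sketch: count eigenvalues above theta in the normalized
# adjacency matrix D^{-1/2} A D^{-1/2} of a toy constraint graph.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
A = A + A.T                              # random undirected graph
d = np.maximum(A.sum(axis=1), 1.0)       # degrees (guard isolated nodes)
A_norm = A / np.sqrt(np.outer(d, d))     # normalized adjacency matrix

theta = 0.25                             # illustrative threshold
eigvals = np.linalg.eigvalsh(A_norm)
print("rank_{>=theta}:", int((eigvals > theta).sum()))
```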
Convex relaxations have emerged as a promising approach for verifying desirable properties of neural networks like robustness to adversarial perturbations. Widely used Linear Programming (LP) relaxations only work well when networks are trained to facilitate verification. This precludes applications that involve verification-agnostic networks, i.e., networks not specially trained for verification. On the other hand, semidefinite programming (SDP) relaxations have successfully been applied to verification-agnostic networks, but do not currently scale beyond small networks due to poor time and space asymptotics. In this work, we propose a first-order dual SDP algorithm that (1) requires memory only linear in the total number of network activations, and (2) requires only a fixed number of forward/backward passes through the network per iteration. By exploiting iterative eigenvector methods, we express all solver operations in terms of forward and backward passes through the network, enabling efficient use of hardware like GPUs/TPUs. For two verification-agnostic networks on MNIST and CIFAR-10, we significantly improve $\ell_\infty$ verified robust accuracy from 1% to 88% and from 6% to 40%, respectively. We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
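The matrix-free primitive behind this design is an iterative eigenvector method that touches the operator only through matrix-vector products, mirroring how the solver touches the network only through forward and backward passes. A minimal power-iteration sketch, with a dense toy matrix standing in for those passes:

```python
# Minimal sketch: power iteration that accesses the operator only via
# a matvec callback, the way the verifier accesses the network only
# via forward/backward passes.
import numpy as np

def power_iteration(matvec, dim, iters=200, seed=0):
    """Estimate the dominant eigenpair of a symmetric operator."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = matvec(v)
        v = w / np.linalg.norm(w)
    return v @ matvec(v), v              # Rayleigh quotient, eigenvector

# Toy stand-in: in the verification setting this matvec would cost one
# forward and one backward pass through the network per call.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
M = (M + M.T) / 2
lam, v = power_iteration(lambda x: M @ x, 20)
print("dominant eigenvalue estimate:", lam)
```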
Semidefinite programs are an important class of optimization tasks, often used in time-sensitive applications. Though they are solvable in polynomial time, in practice they can be too slow for online, i.e., real-time, applications. Here we propose to solve feasibility semidefinite programs using artificial neural networks. Given the optimization constraints as an input, a neural network outputs values for the optimization parameters such that the constraints are satisfied, both for the primal and the dual formulations of the task. We train the network without having to exactly solve the semidefinite program even once, thus avoiding the possibly time-consuming task of generating many training samples with conventional solvers. The neural network method is inconclusive only if both the primal and dual models fail to provide feasible solutions. Otherwise we always obtain a certificate, which guarantees that false positives are excluded. We examine the performance of the method on a hierarchy of quantum information tasks, the Navascues-Pironio-Acin hierarchy applied to the Bell scenario. We demonstrate that the trained neural network gives decent accuracy, while showing orders of magnitude increase in speed compared to a traditional solver.
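The certificate logic is simple to state: a candidate solution is accepted only if it verifiably satisfies the feasibility constraints, so an accepted answer cannot be a false positive. A minimal sketch with illustrative problem data (the constraint and the candidate matrix stand in for a trained network's output):

```python
# Minimal sketch: accept a candidate SDP solution only with an explicit
# feasibility certificate, so "feasible" is never a false positive.
import numpy as np

def certify_feasible(X, constraints, tol=1e-8):
    """Certify X >> 0 and tr(A_i X) = b_i within tolerance.
    True is a certificate; False only means 'inconclusive'."""
    X_sym = (X + X.T) / 2
    if np.linalg.eigvalsh(X_sym).min() < -tol:
        return False
    return all(abs(np.trace(A @ X_sym) - b) <= tol for A, b in constraints)

n = 4
constraints = [(np.eye(n), float(n))]    # toy constraint: tr(X) = n
X_candidate = np.eye(n)                  # stand-in for a network output
print(certify_feasible(X_candidate, constraints))   # True -> certified
```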
Lately, post-training quantization methods have gained considerable attention, as they are simple to use and require only a small unlabeled calibration set. This small dataset cannot be used to fine-tune the model without significant over-fitting. Instead, these methods use the calibration set only to set the activations' dynamic ranges. However, such methods have always resulted in significant accuracy degradation when used below 8 bits (except on small datasets). Here we aim to break the 8-bit barrier. To this end, we minimize the quantization errors of each layer separately by optimizing its parameters over the calibration set. We empirically demonstrate that this approach is: (1) much less susceptible to over-fitting than the standard fine-tuning approaches, and can be used even on a very small calibration set; and (2) more powerful than previous methods, which only set the activations' dynamic ranges. Furthermore, we demonstrate how to optimally allocate the bit-widths for each layer, while constraining accuracy degradation or model compression, by proposing a novel integer programming formulation. Finally, we suggest model global statistics tuning to correct biases introduced during quantization. Together, these methods yield state-of-the-art results for both vision and text models. For instance, on ResNet50, we obtain less than 1% accuracy degradation with 4-bit weights and activations in all layers but the smallest two. We open-sourced our code.
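A minimal sketch of the per-layer idea, choosing a single quantization scale for one layer by minimizing that layer's output error on a calibration batch rather than only setting dynamic ranges (the shapes, the 4-bit width, and the grid search are assumptions for illustration, not the paper's optimizer):

```python
# Minimal sketch: per-layer scale calibration for weight quantization,
# scored by the layer's output MSE on a small calibration batch.
import numpy as np

def quantize(W, scale, bits=4):
    q = np.clip(np.round(W / scale),
                -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

def calibrate_layer(W, X_calib, bits=4, grid=200):
    """Pick the scale minimizing ||X W^T - X W_q^T||^2 on calibration data."""
    best_scale, best_err = None, np.inf
    max_abs = np.abs(W).max()
    for s in np.linspace(max_abs / grid, max_abs, grid):
        W_q = quantize(W, s, bits)
        err = ((X_calib @ W.T - X_calib @ W_q.T) ** 2).mean()
        if err < best_err:
            best_scale, best_err = s, err
    return best_scale, best_err

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))        # one layer's weight matrix
X_calib = rng.standard_normal((64, 32))  # small unlabeled calibration set
scale, err = calibrate_layer(W, X_calib)
print(f"chosen scale {scale:.4f}, calibration MSE {err:.5f}")
```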
Generalisation of a deep neural network (DNN) is one major concern when employing the deep learning approach for solving practical problems. In this paper we propose a new technique, named approximated orthonormal normalisation (AON), to improve the generalisation capacity of a DNN model. Considering a weight matrix $W$ from a particular neural layer in the model, our objective is to design a function $h(W)$ such that its row vectors are approximately orthogonal to each other while allowing the DNN model to fit the training data sufficiently accurately. Doing so avoids co-adaptation among neurons of the same layer and thereby improves the network's generalisation capacity. Specifically, at each iteration, we first approximate $(WW^T)^{-1/2}$ using its Taylor expansion and multiply it by the matrix $W$. After that, the matrix product is normalised by applying the spectral normalisation (SN) technique to obtain $h(W)$. Conceptually speaking, AON is designed to turn orthonormal regularisation into orthonormal normalisation, avoiding the manual balancing of the original and penalty functions. Experimental results show that AON yields promising validation performance compared to orthonormal regularisation.
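A minimal sketch of the AON pipeline as described above, assuming a first-order Taylor expansion of $(WW^T)^{-1/2}$ around the identity and a pre-scaling step to keep that expansion reasonable (both the expansion order and the pre-scaling are assumptions for illustration):

```python
# Minimal sketch of AON: Taylor-approximate (W W^T)^{-1/2}, multiply
# into W, then spectrally normalize the product.
import numpy as np

def aon(W):
    # Pre-scale so the eigenvalues of W W^T lie in (0, 1], keeping the
    # expansion of A^{-1/2} around the identity reasonable (assumption).
    W_s = W / np.linalg.norm(W, 2)       # divide by largest singular value
    A = W_s @ W_s.T
    # First-order Taylor: A^{-1/2} ~ I - (A - I)/2 = (3I - A)/2.
    T = 0.5 * (3.0 * np.eye(A.shape[0]) - A)
    H = T @ W_s
    # Spectral normalisation: divide by the largest singular value.
    return H / np.linalg.norm(H, 2)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))
H = aon(W)
G = H @ H.T   # row Gram matrix; closer to diagonal => more orthogonal rows
print("max off-diagonal |G_ij|:", np.abs(G - np.diag(np.diag(G))).max())
```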
