
Robustness via curvature regularization, and vice versa

Publication date: 2018
Language: English

State-of-the-art classifiers have been shown to be largely vulnerable to adversarial perturbations. One of the most effective strategies to improve robustness is adversarial training. In this paper, we investigate the effect of adversarial training on the geometry of the classification landscape and decision boundaries. We show in particular that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to inputs, leading to a drastically more linear behaviour of the network. Using a locally quadratic approximation, we provide theoretical evidence on the existence of a strong relation between large robustness and small curvature. To further show the importance of reduced curvature for improving the robustness, we propose a new regularizer that directly minimizes curvature of the loss surface, and leads to adversarial robustness that is on par with adversarial training. Besides being a more efficient and principled alternative to adversarial training, the proposed regularizer confirms our claims on the importance of exhibiting quasi-linear behavior in the vicinity of data points in order to achieve robustness.
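To make the idea concrete, the following is a minimal, single-example sketch in JAX of a curvature penalty of this kind: it estimates the directional second derivative of the loss with respect to the input by a finite difference of input gradients. The names `loss_fn`, the probe step `h`, and the weight `lam` are illustrative placeholders, not the authors' exact formulation.

```python
import jax
import jax.numpy as jnp

def curvature_penalty(loss_fn, params, x, y, h=1e-2):
    # Gradient of the loss with respect to the input x.
    grad_x = jax.grad(loss_fn, argnums=1)(params, x, y)
    # Probe along the normalized input-gradient direction (one common choice).
    z = grad_x / (jnp.linalg.norm(grad_x) + 1e-12)
    # Finite-difference estimate of the Hessian-vector product:
    # grad L(x + h z) - grad L(x) is approximately h * H z.
    grad_shift = jax.grad(loss_fn, argnums=1)(params, x + h * z, y)
    return jnp.sum((grad_shift - grad_x) ** 2)

def regularized_loss(loss_fn, params, x, y, lam=1.0):
    # Standard training loss plus the input-space curvature penalty.
    return loss_fn(params, x, y) + lam * curvature_penalty(loss_fn, params, x, y)
```

The penalty vanishes when the loss is locally linear in the input, which is exactly the quasi-linear behaviour the abstract links to robustness.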

Related research

Adversarial robustness has become a topic of growing interest in machine learning since it was observed that neural networks tend to be brittle. We propose an information-geometric formulation of adversarial defense and introduce FIRE, a new Fisher-Rao regularization for the categorical cross-entropy loss, which is based on the geodesic distance between natural and perturbed input features. Based on the information-geometric properties of the class of softmax distributions, we derive an explicit characterization of the Fisher-Rao Distance (FRD) for the binary and multiclass cases, and draw some interesting properties as well as connections with standard regularization metrics. Furthermore, for a simple linear and Gaussian model, we show that all Pareto-optimal points in the accuracy-robustness region can be reached by FIRE while other state-of-the-art methods fail. Empirically, we evaluate the performance of various classifiers trained with the proposed loss on standard datasets, showing up to a 2% improvement in robustness while reducing training time by 20% relative to the best-performing methods.
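For concreteness, the Fisher-Rao distance between two categorical distributions on the probability simplex has the standard closed form 2·arccos of their Bhattacharyya coefficient. The sketch below uses that formula as an illustrative penalty between the softmax outputs on a natural and a perturbed input; the function names are placeholders, not the FIRE implementation.

```python
import jax
import jax.numpy as jnp

def fisher_rao_distance(p, q, eps=1e-12):
    # Geodesic (Fisher-Rao) distance between categorical distributions p and q:
    # 2 * arccos( sum_i sqrt(p_i * q_i) ).
    bc = jnp.sum(jnp.sqrt(p * q + eps))            # Bhattacharyya coefficient
    return 2.0 * jnp.arccos(jnp.clip(bc, 0.0, 1.0))

def fire_style_penalty(logits_nat, logits_adv):
    # Illustrative regularizer: FRD between the softmax outputs on the natural
    # and the perturbed input; a full training loss would add this to cross-entropy.
    return fisher_rao_distance(jax.nn.softmax(logits_nat),
                               jax.nn.softmax(logits_adv))
```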
While great progress has been made at making neural networks effective across a wide range of visual tasks, most models are surprisingly vulnerable. This frailness takes the form of small, carefully chosen perturbations of their input, known as adversarial examples, which represent a security threat for learned vision models in the wild -- a threat which should be responsibly defended against in safety-critical applications of computer vision. In this paper, we advocate for and experimentally investigate the use of a family of logit regularization techniques as an adversarial defense, which can be used in conjunction with other methods for creating adversarial robustness at little to no marginal cost. We also demonstrate that much of the effectiveness of one recent adversarial defense mechanism can in fact be attributed to logit regularization, and show how to improve its defense against both white-box and black-box attacks, in the process creating a stronger black-box attack against PGD-based models. We validate our methods on three datasets and include results on both gradient-free attacks and strong gradient-based iterative attacks with as many as 1,000 steps.
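As one hedged illustration of what a member of this family can look like, the snippet below implements simple logit squeezing, i.e. a penalty on the squared magnitude of the logits added to the usual classification loss. The paper studies a broader family of logit regularizers, and the weight `beta` here is a placeholder value.

```python
import jax.numpy as jnp

def logit_squeezing_penalty(logits, beta=0.5):
    # Penalize large logits; combined with the standard classification loss,
    # this discourages overconfident, easily perturbed predictions.
    return beta * jnp.mean(jnp.sum(logits ** 2, axis=-1))
```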
L. Decin (2000)
We present a detailed spectroscopic study of a sample of bright, mostly cool, stars observed with the Short-Wavelength Spectrometer (SWS) on board the Infrared Space Observatory (ISO), which not only enables the accurate determination of the stellar parameters of the cool giants but also serves as a critical review of the ISO-SWS calibration.
Learning with noisy labels is an important and challenging task for training accurate deep neural networks. Some commonly-used loss functions, such as Cross Entropy (CE), suffer from severe overfitting to noisy labels. Robust loss functions that satisfy the symmetric condition were tailored to remedy this problem, which however encounter the underfitting effect. In this paper, we theoretically prove that any loss can be made robust to noisy labels by restricting the network output to the set of permutations over a fixed vector. When the fixed vector is one-hot, we only need to constrain the output to be one-hot, which however produces zero gradients almost everywhere and thus makes gradient-based optimization difficult. In this work, we introduce the sparse regularization strategy to approximate the one-hot constraint, which is composed of a network output sharpening operation that enforces the output distribution of a network to be sharp and an $\ell_p$-norm ($p \le 1$) regularization that promotes the network output to be sparse. This simple approach guarantees the robustness of arbitrary loss functions while not hindering the fitting ability. Experimental results demonstrate that our method can significantly improve the performance of commonly-used loss functions in the presence of noisy labels and class imbalance, and outperform the state-of-the-art methods. The code is available at https://github.com/hitcszx/lnl_sr.
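The sketch below illustrates, under stated assumptions, how a sharpening operation and an $\ell_p$ ($p \le 1$) output penalty can be combined in a single loss: temperature scaling is used here as one simple sharpening operation, and the hyperparameters `tau`, `p`, and `lam` are hypothetical, not the paper's values or its exact formulation.

```python
import jax
import jax.numpy as jnp

def sparse_regularized_loss(logits, label, tau=0.5, p=0.7, lam=1.0):
    # Output sharpening via a low softmax temperature (tau < 1).
    probs = jax.nn.softmax(logits / tau)
    # Cross-entropy on the sharpened distribution for the integer class label.
    ce = -jnp.log(probs[label] + 1e-12)
    # l_p-norm (p <= 1) regularizer that pushes probs toward a one-hot vector.
    lp = jnp.sum(probs ** p)
    return ce + lam * lp
```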
We consider finite temperature SU(2) gauge theory in the continuum formulation, which necessitates the choice of a gauge fixing. Choosing the Landau gauge, the existing gauge copies are taken into account by means of the Gribov-Zwanziger (GZ) quantization scheme, which entails the introduction of a dynamical mass scale (Gribov mass) directly influencing the Green functions of the theory. Here, we determine simultaneously the Polyakov loop (vacuum expectation value) and Gribov mass in terms of temperature, by minimizing the vacuum energy w.r.t. the Polyakov loop parameter and solving the Gribov gap equation. Inspired by the Casimir energy-style of computation, we illustrate the usage of Zeta function regularization in finite temperature calculations. Our main result is that the Gribov mass directly feels the deconfinement transition, visible from a cusp occurring at the same temperature where the Polyakov loop becomes nonzero. In this exploratory work we mainly restrict ourselves to the original Gribov-Zwanziger quantization procedure in order to illustrate the approach and the potential direct link between the vacuum structure of the theory (dynamical mass scales) and (de)confinement. We also present a first look at the critical temperature obtained from the Refined Gribov-Zwanziger approach. Finally, a particular problem for the pressure at low temperatures is reported.
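For orientation only, the zero-temperature Gribov gap equation (horizon condition) that fixes the Gribov mass $\lambda$ in SU(N) Landau-gauge Yang-Mills theory has the schematic one-loop form below; the finite-temperature version studied in the paper additionally depends on the Polyakov loop background and is solved jointly with the minimization of the vacuum energy.

```latex
% Schematic zero-temperature horizon condition fixing the Gribov mass \lambda;
% the paper's finite-temperature equation also involves the Polyakov loop.
\begin{equation}
  1 \;=\; \frac{3}{4}\, N g^{2} \int \frac{d^{4}q}{(2\pi)^{4}}\,
          \frac{1}{q^{4} + \lambda^{4}},
  \qquad N = 2 \ \text{for SU(2)}.
\end{equation}
```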
