
Manitest: Are classifiers really invariant?

Added by Alhussein Fawzi
Publication date: 2015
Language: English





Invariance to geometric transformations is a highly desirable property of automatic classifiers in many image recognition tasks. Nevertheless, it is unclear to what extent state-of-the-art classifiers are invariant to basic transformations such as rotations and translations. This is mainly due to the lack of general methods that properly measure such invariance. In this paper, we propose a rigorous and systematic approach for quantifying the invariance of any classifier to geometric transformations. Our key idea is to cast the problem of assessing a classifier's invariance as the computation of geodesics along the manifold of transformed images. We propose the Manitest method, built on the efficient Fast Marching algorithm, to compute the invariance of classifiers. Our method quantifies, in particular, the importance of data augmentation for learning invariance from data, and the increased invariance of convolutional neural networks with depth. We foresee that this generic tool for measuring the invariance of arbitrary classifiers to a large class of geometric transformations will have many applications for evaluating and comparing classifiers based on their invariance, and will help improve the invariance of existing classifiers.
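To make the idea concrete, here is a minimal sketch (not the authors' implementation) of measuring invariance as a geodesic distance on a discretized manifold of rotated and translated images. Dijkstra's algorithm is used as a simple stand-in for Fast Marching, and `classify`, `transform`, `angles`, and `shifts` are hypothetical placeholders:

```python
# Sketch of the Manitest idea: how far along the manifold of transformed
# images must one travel before the classifier's label flips?
import heapq
import numpy as np

def manitest_score(image, classify, transform, angles, shifts):
    """Geodesic distance (in image space) from `image` to the nearest
    transformed version that changes the predicted label."""
    label0 = classify(image)
    # Nodes are (angle index, shift index); cache the transformed images.
    imgs = {(i, j): transform(image, a, s)
            for i, a in enumerate(angles) for j, s in enumerate(shifts)}
    start = (0, 0)  # assumes angles[0] and shifts[0] are the identity
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, np.inf):
            continue
        if classify(imgs[node]) != label0:
            return d  # geodesic length to the decision boundary
        i, j = node
        for nbr in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]:
            if nbr not in imgs:
                continue
            # Edge weight: Euclidean distance between neighboring images,
            # a discrete stand-in for the metric on the image manifold.
            nd = d + np.linalg.norm(imgs[node] - imgs[nbr])
            if nd < dist.get(nbr, np.inf):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return np.inf  # classifier is invariant over the sampled grid
```

A larger returned distance means the classifier tolerates larger geometric transformations before changing its decision, which is the invariance score the paragraph above describes.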



Related research

We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3%-15% on CIFAR-10 and 11%-14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly harder images than those found in the original test sets.
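The evaluation protocol described here reduces to scoring the same models on both test sets and comparing accuracies. A minimal sketch, where `models`, `original_set`, `new_set`, and the `predict` method are hypothetical placeholders:

```python
# Score each model on the original and the newly collected test set
# and report the accuracy drop.
def accuracy(model, dataset):
    correct = sum(model.predict(x) == y for x, y in dataset)
    return correct / len(dataset)

def accuracy_drops(models, original_set, new_set):
    """Return (model, original accuracy, new accuracy, drop) per model."""
    rows = [(m, accuracy(m, original_set), accuracy(m, new_set))
            for m in models]
    # Sorting by original accuracy makes the reported trend visible:
    # gains on the original test set translate to gains on the new one.
    rows.sort(key=lambda r: r[1])
    return [(m, orig, new, orig - new) for m, orig, new in rows]
```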
Our field has recently witnessed an arms race of neural network-based trajectory predictors. While these predictors are at the core of many applications such as autonomous navigation and pedestrian flow simulation, their adversarial robustness has not been carefully studied. In this paper, we introduce a socially-attended attack to assess the social understanding of prediction models in terms of collision avoidance. An attack is a small yet carefully crafted perturbation designed to make predictors fail. Technically, we define collision as a failure mode of the output, and propose hard- and soft-attention mechanisms to guide our attack. Thanks to our attack, we shed light on the limitations of current models in terms of their social understanding. We demonstrate the strengths of our method on recent trajectory prediction models. Finally, we show that our attack can be employed to increase the social understanding of state-of-the-art models. The code is available online: https://s-attack.github.io/
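To illustrate the general recipe (not the paper's exact formulation), the sketch below perturbs an observed trajectory within a small budget and descends a soft-attention-weighted collision loss on the predicted futures. It assumes a differentiable `predictor` and PyTorch tensors; all names and the loss shape are illustrative:

```python
import torch

def collision_attack(predictor, obs, eps=0.05, steps=50, lr=0.01, radius=0.2):
    """obs: (num_agents, obs_len, 2) observed positions.
    Returns a perturbed observation that drives predicted futures to collide."""
    delta = torch.zeros_like(obs, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred = predictor(obs + delta)        # (num_agents, pred_len, 2)
        # Pairwise distances between agent 0 and every other agent, per step.
        d = torch.norm(pred[0:1] - pred[1:], dim=-1)  # (num_agents-1, pred_len)
        # Soft attention: concentrate the attack on the closest agent/timestep.
        attn = torch.softmax(-d.flatten(), dim=0).reshape(d.shape)
        # Collision = distance below `radius`; push distances down to it.
        loss = (attn * torch.relu(d - radius)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)          # keep the perturbation small
    return (obs + delta).detach()
```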
We propose a method to estimate the uncertainty of the outcome of an image classifier on a given input datum. Deep neural networks commonly used for image classification are deterministic maps from an input image to an output class. As such, their outcome on a given datum involves no uncertainty, so we must specify what variability we are referring to when defining, measuring and interpreting confidence. To this end, we introduce the Wellington Posterior, which is the distribution of outcomes that would have been obtained in response to data that could have been generated by the same scene that produced the given image. Since there are infinitely many scenes that could have generated the given image, the Wellington Posterior requires induction from scenes other than the one portrayed. We explore alternate methods using data augmentation, ensembling, and model linearization. Additional alternatives include generative adversarial networks, conditional prior networks, and supervised single-view reconstruction. We test these alternatives against the empirical posterior obtained by inferring the class of temporally adjacent frames in a video. These developments are only a small step towards assessing the reliability of deep network classifiers in a manner that is compatible with safety-critical applications.
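One of the approximations mentioned above, data augmentation, admits a particularly simple Monte Carlo sketch: classify many augmented views of the input, treating them as stand-ins for other images the same scene could have produced, and read off the empirical distribution of predicted classes. `classify` and the augmentation list are assumed placeholders, not the paper's exact setup:

```python
import random
from collections import Counter

def wellington_posterior_mc(image, classify, augmentations, n_samples=100):
    """Monte Carlo estimate: empirical distribution of predicted classes
    over randomly augmented versions of `image`."""
    counts = Counter()
    for _ in range(n_samples):
        aug = random.choice(augmentations)  # e.g. small crop, flip, jitter
        counts[classify(aug(image))] += 1
    total = sum(counts.values())
    return {label: c / total for label, c in counts.items()}
```

A sharply peaked distribution signals a prediction that is stable across plausible re-imagings of the scene; a spread-out one flags the input as unreliable, which is the kind of confidence measure the paragraph above is after.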
Triplet loss is an extremely common approach to distance metric learning. Representations of images from the same class are optimized to be mapped closer together in an embedding space than representations of images from different classes. Much work on triplet losses focuses on selecting the most useful triplets of images to consider, with strategies that select dissimilar examples from the same class or similar examples from different classes. The consensus of previous research is that optimizing with the hardest negative examples leads to bad training behavior. That's a problem: these hardest negatives are precisely the cases where the distance metric fails to capture semantic similarity. In this paper, we characterize the space of triplets and derive why hard negatives make triplet loss training fail. We offer a simple fix to the loss function and show that, with this fix, optimizing with hard negative examples becomes feasible. This leads to more generalizable features and image retrieval results that outperform the state of the art for datasets with high intra-class variance.
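For context, the baseline this work starts from is triplet loss with batch-hard negative mining, sketched below (this shows the standard loss whose failure mode the paper analyzes, not the paper's proposed fix). Embeddings are assumed L2-normalized and labels are integers:

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """embeddings: (B, D), labels: (B,). For each anchor, pick the farthest
    positive and the closest negative in the batch."""
    d = torch.cdist(embeddings, embeddings)            # (B, B) pairwise dists
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye
    neg_mask = ~same
    # Hardest positive: largest distance among same-class pairs.
    hardest_pos = (d * pos_mask).max(dim=1).values
    # Hardest negative: smallest distance among different-class pairs.
    hardest_neg = d.masked_fill(~neg_mask, float("inf")).min(dim=1).values
    # Standard margin formulation; the paper argues this produces bad
    # gradients for hard negatives and modifies the loss to repair it.
    return torch.relu(hardest_pos - hardest_neg + margin).mean()
```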
S. Feng, H. Beuther, Q. Zhang (2016)
The dense, cold regions where high-mass stars form are poorly characterised, yet they represent an ideal opportunity to learn more about the initial conditions of high-mass star formation (HMSF), since high-mass starless cores (HMSCs) lack the violent feedback seen at later evolutionary stages. We present continuum maps obtained from Submillimeter Array (SMA) interferometry at 1.1 mm for four infrared dark clouds (IRDCs: G28.34S, IRDC 18530, IRDC 18306, and IRDC 18308). We also present 1 mm/3 mm line surveys using IRAM 30 m single-dish observations. Our results are: (1) At a spatial resolution of 10^4 AU, the 1.1 mm SMA observations resolve each source into several fragments. The mass of each fragment is on average >10 Msun, which exceeds the predicted thermal Jeans mass of the whole clump by a factor of up to 30, indicating that thermal pressure does not dominate the fragmentation process. Our measured velocity dispersions in the 30 m lines imply that non-thermal motions provide the extra support against gravity in the fragments. (2) Both the non-detection of high-J transitions and the hyperfine multiplet fits of N2H+(1-0), C2H(1-0), HCN(1-0), and H13CN(1-0) indicate that our sources are cold and young. However, the clear detection of SiO and the asymmetric line profile of HCO+(1-0) in G28.34S indicate a potential protostellar object and probable infall motion. (3) With a large number of N-bearing species, the existence of carbon rings and molecular ions, and the anti-correlated spatial distributions of N2H+/NH2D and CO, our large-scale high-mass clumps exhibit chemical features similar to those of small-scale low-mass prestellar objects. This study of a small sample of IRDCs illustrates that thermal Jeans instability alone cannot explain the fragmentation of the clumps into cold (~15 K), dense (>10^5 cm-3) cores, and that these IRDCs are not completely quiescent.
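For reference, the thermal Jeans mass against which the fragment masses are compared takes the standard form below (a textbook expression in one common convention; the authors' exact prefactor and adopted mean molecular weight may differ):

```latex
% Thermal Jeans mass: the maximum mass a clump of temperature T and mass
% density rho can support against self-gravity by thermal pressure alone.
% k_B: Boltzmann constant, G: gravitational constant,
% mu: mean molecular weight, m_H: mass of the hydrogen atom.
M_J = \left( \frac{5 k_B T}{G \mu m_H} \right)^{3/2}
      \left( \frac{3}{4 \pi \rho} \right)^{1/2}
```

Since M_J scales as T^{3/2} rho^{-1/2}, the cold (~15 K), dense (>10^5 cm-3) conditions quoted above keep the thermal Jeans mass small, which is why fragments of >10 Msun can exceed it by the quoted factor of up to 30.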
