This paper presents new features recently implemented in the theorem prover Vampire, namely support for first-order logic with a first-class boolean sort (FOOL) and polymorphic arrays. In addition to having a first-class boolean sort, FOOL also contains if-then-else and let-in expressions. We argue that the presented extensions facilitate reasoning-based program analysis, both by increasing the expressivity of first-order reasoners and by gains in efficiency.
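As an illustration (our own, not taken from the paper), a FOOL formula can treat boolean-valued terms as ordinary first-order terms and combine them with if-then-else and let-in, e.g.
\[
\mathsf{let}\; b := (x \geq 0)\; \mathsf{in}\; \big(\mathsf{if}\; b\; \mathsf{then}\; f(x)\; \mathsf{else}\; f(-x)\big) \geq 0,
\]
where the bound variable $b$ ranges over the first-class boolean sort and the conditional occurs inside an ordinary term.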
Many applications of formal methods require automated reasoning about system properties, such as system safety and security. To improve the performance of automated reasoning engines, such as SAT/SMT solvers and first-order theorem provers, it is necessary to understand both the successful and failing attempts of these engines towards producing formal certificates, such as logical proofs and/or models. Such an analysis is challenging due to the large number of logical formulas generated during proof/model search. In this paper we focus on saturation-based first-order theorem proving and introduce the SATVIS tool for interactively visualizing saturation-based proof attempts. We build SATVIS on top of the world-leading theorem prover VAMPIRE to interactively visualize its saturation attempts. Our work combines the automatic layout and visualization of the derivation graph induced by the saturation attempt with interactive transformations and search functionality. As a result, we are able to analyze and debug (failed) proof attempts of VAMPIRE. Thanks to its interactive visualization, we believe SATVIS helps both experts and non-experts in theorem proving to understand first-order proofs and to analyze and refine failing proof attempts of first-order provers.
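As a rough sketch of the underlying data structure (not the SATVIS implementation, and assuming a simplified, hypothetical proof-log format "id. clause [rule parents]"), the derivation graph can be built as a DAG whose edges point from premises to conclusions:
\begin{verbatim}
# Minimal sketch: build a derivation graph from a simplified,
# hypothetical Vampire-style log "id. clause [rule parent,parent]".
import re
import networkx as nx

LINE = re.compile(r"(\d+)\.\s+(.*)\s+\[(\w+)(?:\s+([\d,]+))?\]")

def derivation_graph(lines):
    g = nx.DiGraph()
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue
        node, clause, rule, parents = m.groups()
        g.add_node(node, clause=clause, rule=rule)
        for p in (parents or "").split(","):
            if p:
                g.add_edge(p, node)  # edge: premise -> conclusion
    return g

log = [
    "1. p(a) [input]",
    "2. ~p(X) | q(X) [input]",
    "3. q(a) [resolution 1,2]",
]
g = derivation_graph(log)
print(list(nx.topological_sort(g)))  # e.g. ['1', '2', '3']
\end{verbatim}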
Due to the widespread deployment of fingerprint/face/speaker recognition systems, attacking deep learning based biometric systems has drawn more and more attention. Previous research mainly studied attacks on vision-based systems, such as fingerprint and face recognition, while attacks on speaker recognition have not yet been investigated, even though such systems are widely used in daily life. In this paper, we attempt to fool a state-of-the-art speaker recognition model and present the \textit{speaker recognition attacker}, a lightweight model that fools a deep speaker recognition model by adding imperceptible perturbations to the raw speech waveform. We find that the speaker recognition system is also vulnerable to this attack, and we achieve a high success rate in the non-targeted setting. In addition, we present an effective method to optimize the speaker recognition attacker so as to trade off attack success rate against perceptual quality. Experiments on the TIMIT dataset show that we achieve a sentence error rate of $99.2\%$ with an average SNR of $57.2$ dB and a PESQ of 4.2, while running faster than real time.
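A minimal sketch of this kind of waveform attack (hypothetical model and hyper-parameters, not the paper's method), where an additive perturbation is optimized against the speaker classifier together with an SNR-style distortion penalty:
\begin{verbatim}
import torch
import torch.nn.functional as F

def attack(model, waveform, speaker_id, steps=100, lr=1e-3, alpha=0.1):
    # waveform: (1, num_samples); speaker_id: (1,) true-speaker label
    delta = torch.zeros_like(waveform, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(waveform + delta)
        # non-targeted: push the prediction away from the true speaker
        attack_loss = -F.cross_entropy(logits, speaker_id)
        # distortion penalty: perturbation energy relative to signal energy
        snr_penalty = delta.pow(2).sum() / waveform.pow(2).sum()
        loss = attack_loss + alpha * snr_penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (waveform + delta).detach()
\end{verbatim}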
We revisit the mechanism for violating the weak cosmic-censorship conjecture (WCCC) by overspinning a nearly extreme charged black hole. The mechanism consists of an incoming massless neutral scalar particle, with low energy and large angular momentum, tunneling into the hole. We investigate the effect of the large angular momentum of the incoming particle on the background geometry and address recent claims that such a back-reaction would invalidate the mechanism. We show that the large angular momentum of the incident particle does not constitute an obvious impediment to the success of the overspinning quantum mechanism, although the induced back-reaction turns out to be essential to restoring the validity of the WCCC in the classical regime. These results seem to endorse the view that the cosmic censor may be oblivious to processes involving quantum effects.
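For context (the standard Kerr--Newman horizon condition, not a result of the paper): in geometric units a charged rotating black hole possesses an event horizon only if
\[
M^2 \geq Q^2 + a^2, \qquad a = J/M,
\]
so overspinning means driving the final parameters to $J^2 + Q^2 M^2 > M^4$, which would leave a naked singularity if the WCCC failed.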
Some deep neural networks are invariant to certain input transformations; for example, PointNet is invariant to permutations of the input point cloud. In this paper, we demonstrate that this property can be powerful in defending against gradient-based attacks. Specifically, we apply random input transformations under which the networks we want to defend are invariant. Extensive experiments demonstrate that the proposed scheme defeats various gradient-based attackers in the targeted attack setting, reducing the attack accuracy to nearly zero. Our code is available at \url{https://github.com/cuge1995/IT-Defense}.
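A minimal sketch of this idea (assumed PointNet-like classifier interface, not the repository code): randomly permute the points at every forward pass, so a permutation-invariant model keeps its prediction while the randomized input breaks the fixed input-to-gradient correspondence that gradient-based attacks rely on.
\begin{verbatim}
import torch

class RandomPermutationDefense(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model  # assumed permutation-invariant, e.g. PointNet

    def forward(self, points):
        # points: (batch, num_points, 3)
        perm = torch.randperm(points.shape[1], device=points.device)
        return self.model(points[:, perm, :])
\end{verbatim}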
Deep visual models are susceptible to adversarial perturbations to inputs. Although these signals are carefully crafted, they still appear as noise-like patterns to humans. This observation has led to the argument that deep visual representations are misaligned with human perception. We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations. We first propose an attack that fools a network into confusing a whole category of objects (source class) with a target label. Our attack also limits unintended fooling by samples from non-source classes, thereby circumscribing human-defined semantic notions for network fooling. We show that the proposed attack not only leads to the emergence of regular geometric patterns in the perturbations, but also reveals insightful information about the decision boundaries of deep models. Exploring this phenomenon further, we alter the `adversarial' objective of our attack to use it as a tool to `explain' deep visual representations. We show that by careful channeling and projection of the perturbations computed by our method, we can visualize a model's understanding of human-defined semantic notions. Finally, we exploit the explainability properties of our perturbations to perform image generation, inpainting and interactive image manipulation by attacking adversarially robust classifiers. In all, our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret visual models. The article also makes secondary contributions in terms of establishing the utility of our attack beyond the adversarial objective with multiple interesting applications.
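A minimal sketch of such a class-targeted objective (hypothetical names and input sizes, not the paper's exact algorithm): a single shared perturbation is optimized to push source-class images toward the target label while keeping non-source images correctly classified.
\begin{verbatim}
import torch
import torch.nn.functional as F

def class_targeted_perturbation(model, src_batch, other_batch, other_labels,
                                target_label, eps=0.05, steps=200, lr=1e-2):
    # src_batch: images from the source class; other_batch/other_labels:
    # images and labels from non-source classes (shapes assumed (N, 3, H, W)).
    delta = torch.zeros_like(src_batch[:1], requires_grad=True)  # one shared perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    tgt = torch.full((src_batch.shape[0],), target_label, dtype=torch.long)
    for _ in range(steps):
        fool = F.cross_entropy(model(src_batch + delta), tgt)             # source -> target
        keep = F.cross_entropy(model(other_batch + delta), other_labels)  # others stay put
        loss = fool + keep
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small
    return delta.detach()
\end{verbatim}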