109 - Anh Bui, Trung Le, He Zhao 2021
Contrastive learning (CL) has recently emerged as an effective approach to learning representations for a range of downstream tasks. Central to this approach is the selection of positive (similar) and negative (dissimilar) sets, which give the model the opportunity to contrast between data and class representations in the latent space. In this paper, we investigate CL for improving model robustness using adversarial samples. We first design and perform a comprehensive study to understand how adversarial vulnerability behaves in the latent space. Based on this empirical evidence, we propose an effective and efficient supervised contrastive learning approach to achieve model robustness against adversarial attacks. Moreover, we propose a new sample selection strategy that optimizes the positive/negative sets by removing redundancy and improving correlation with the anchor. Experiments conducted on benchmark datasets show that our Adversarial Supervised Contrastive Learning (ASCL) approach outperforms the state-of-the-art defenses by $2.6\%$ in robust accuracy, whilst ASCL with the proposed selection strategy gains a further $1.4\%$ improvement while using only $42.8\%$ of the positives and $6.3\%$ of the negatives compared with ASCL without a selection strategy.
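The supervised contrastive objective underlying this line of work can be sketched as follows. This is a minimal NumPy illustration of a standard supervised contrastive loss (same-label samples as positives, all other samples in the softmax denominator), not the paper's actual ASCL implementation or selection strategy; function and parameter names are illustrative.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over embeddings z of shape (n, d).

    For each anchor, positives are the other samples with the same label;
    every non-anchor sample contributes to the softmax denominator.
    """
    labels = np.asarray(labels)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize
    sim = z @ z.T / tau                               # scaled similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                 # exclude the anchor
    losses = []
    for i in range(n):
        pos = (labels == labels[i]) & not_self[i]
        if not pos.any():
            continue                                  # anchor with no positives
        log_denom = np.log(np.exp(sim[i][not_self[i]]).sum())
        losses.append(-(sim[i][pos] - log_denom).mean())
    return float(np.mean(losses))
```

With tightly clustered classes the loss approaches zero; mixing classes in the latent space drives it up, which is exactly the signal the contrastive objective exploits.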
146 - Anh Bui, Trung Le, He Zhao 2020
Ensemble-based adversarial training is a principled approach to achieving robustness against adversarial attacks. An important technique in this approach is to control the transferability of adversarial examples among ensemble members. We propose in this work a simple yet effective strategy for collaboration among the committee models of an ensemble. This is achieved via secure and insecure sets defined for each member model on a given sample, which help us to quantify and regularize the transferability. Consequently, our proposed framework provides the flexibility to reduce adversarial transferability as well as to promote the diversity of ensemble members, which are two crucial factors for better robustness in our ensemble approach. We conduct extensive and comprehensive experiments to demonstrate that our proposed method outperforms the state-of-the-art ensemble baselines, while at the same time detecting a wide range of adversarial examples with nearly perfect accuracy.
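The transferability that this work regularizes can be measured empirically. The sketch below (illustrative only, not the paper's formulation of secure/insecure sets) estimates a pairwise transferability matrix: entry $T_{ij}$ is the fraction of adversarial examples crafted against member $i$ that also fool member $j$.

```python
import numpy as np

def transferability_matrix(preds_adv, y_true):
    """Estimate pairwise adversarial transferability in an ensemble.

    preds_adv[i][j][k] is model j's prediction on the k-th adversarial
    example crafted against model i.  T[i, j] is the fraction of those
    examples that also fool model j; the diagonal is each model's own
    fooling rate, and off-diagonal entries measure transferability.
    """
    m = len(preds_adv)
    T = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            fooled = np.asarray(preds_adv[i][j]) != np.asarray(y_true)
            T[i, j] = fooled.mean()
    return T
```

Driving the off-diagonal entries down while keeping members accurate is the intuition behind reducing transferability and promoting diversity.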
Let $\displaystyle L = -\frac{1}{w}\,\mathrm{div}(A\,\nabla u) + \mu$ be the generalized degenerate Schrödinger operator in $L^2_w(\mathbb{R}^d)$, $d\ge 3$, with a suitable weight $w$ and measure $\mu$. The main aim of this paper is threefold. First, we obtain an upper bound for the fundamental solution of the operator $L$. Secondly, we prove some estimates for the heat kernel of $L$, including an upper bound, Hölder continuity and a comparison estimate. Finally, we apply the results to study the maximal function characterization of the Hardy spaces associated to the critical function generated by the operator $L$.
149 - Anh Bui, Trung Le, He Zhao 2020
The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application. Among the many defense models developed against such attacks, adversarial training emerges as the most successful method, consistently resisting a wide range of attacks. In this work, based on the observation from a previous study that the representations of a clean data example and its adversarial examples become more divergent in the higher layers of a deep neural network, we propose the Adversary Divergence Reduction Network, which enforces local/global compactness and the clustering assumption over an intermediate layer of a deep neural network. We conduct comprehensive experiments to understand the behavior of each component in isolation (i.e., local/global compactness and the clustering assumption) and compare our proposed model with state-of-the-art adversarial training methods. The experimental results demonstrate that augmenting adversarial training with our proposed components can further improve the robustness of the network, leading to higher unperturbed and adversarial predictive performance.
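The local-compactness idea can be sketched as a simple penalty on intermediate representations. This is a hedged illustration of the general principle (pull a clean example's hidden representation toward that of its adversarial counterpart), not the paper's exact loss; names are illustrative.

```python
import numpy as np

def local_compactness(h_clean, h_adv):
    """Local-compactness penalty: mean squared distance between the
    intermediate representation of each clean example (rows of h_clean)
    and that of its adversarial counterpart (matching rows of h_adv).

    Minimizing this term counteracts the divergence of clean/adversarial
    representations observed in higher layers.
    """
    diff = np.asarray(h_clean) - np.asarray(h_adv)
    return float((diff ** 2).sum(axis=1).mean())
```

In training, such a term would be added to the usual adversarial-training objective with a weighting coefficient.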
171 - The Anh Bui, Ji Li, Fu Ken Ly 2020
We study weighted Besov and Triebel--Lizorkin spaces associated with Hermite expansions and obtain (i) frame decompositions, and (ii) characterizations of continuous Sobolev-type embeddings. The weights we consider generalize the Muckenhoupt weights.
Internet users increasingly rely on commercial virtual private network (VPN) services to protect their security and privacy. The VPN services route the clients' traffic over an encrypted tunnel to a VPN gateway in the cloud. Thus, they hide the client's real IP address from online services, and they also shield the users' connections from perceived threats in the access networks. In this paper, we study the security of such commercial VPN services. The focus is on how the client applications set up VPN tunnels, and how the service providers instruct users to configure generic client software. We analyze common VPN protocols and implementations on Windows, macOS and Ubuntu. We find that the VPN clients have various configuration flaws, which an attacker can exploit to strip off traffic encryption or to bypass authentication of the VPN gateway. In some cases, the attacker can also steal the VPN user's username and password. We suggest ways to mitigate each of the discovered vulnerabilities.
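Gateway-authentication flaws of the kind described often stem from the client configuration rather than the protocol itself. The sketch below is a minimal, illustrative lint for an OpenVPN client config (not the paper's tooling): a config that omits the real OpenVPN directive `remote-cert-tls server` will accept any certificate signed by the CA, including another client's, which allows impersonation of the gateway; the second check is a simplified assumption-laden heuristic.

```python
def check_openvpn_config(text):
    """Flag weaknesses in an OpenVPN client config (illustrative checks)."""
    # Collect the first word (directive name) of every non-comment line.
    directives = {line.split()[0] for line in text.splitlines()
                  if line.strip() and not line.lstrip().startswith('#')}
    issues = []
    if 'remote-cert-tls' not in directives:
        issues.append('gateway certificate role not verified '
                      '(missing remote-cert-tls server)')
    if 'auth-user-pass' in directives and 'ca' not in directives:
        issues.append('username/password used without an explicit CA')
    return issues
```

A config missing these directives is exactly the kind of flaw that lets an attacker bypass gateway authentication or harvest credentials.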
Cloud-application add-ons are microservices that extend the functionality of the core applications. Many application vendors have opened their APIs to third-party developers and created marketplaces for add-ons (also called add-ins or apps). This is a relatively new phenomenon, and its effects on application security have not been widely studied. It seems likely that some of the add-ons have lower code quality than the core applications themselves and, thus, may bring in security vulnerabilities. We found that many such add-ons are vulnerable to cross-site scripting (XSS). The attacker can take advantage of the document-sharing and messaging features of the cloud applications to send malicious input to them. The vulnerable add-ons then execute client-side JavaScript from the carefully crafted malicious input. In a major analysis effort, we systematically studied 300 add-ons for three popular application suites, namely Microsoft Office Online, G Suite and Shopify, and discovered a significant percentage of vulnerable add-ons in each marketplace. We present the results of this study, as well as analyze the add-on architectures to understand how the XSS vulnerabilities can be exploited and how the threat can be mitigated.
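The core mitigation for this class of XSS is simple: treat all content arriving through the application's sharing and messaging channels as text, never as markup. The following sketch (illustrative, using Python's standard-library `html.escape`; the function name is hypothetical) shows the escaping step an add-on should apply before rendering shared content.

```python
import html

def render_addon_message(user_input):
    """Escape shared-document/message content before it reaches the DOM.

    The cloud application's sharing features can deliver attacker-
    controlled strings, so the add-on must HTML-escape them; quote=True
    also escapes quotes so the result is safe inside attribute values.
    """
    return html.escape(user_input, quote=True)
```

Server-side escaping like this (or a strict Content-Security-Policy) prevents the crafted input from being executed as client-side JavaScript.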
Let $X$ be a space of homogeneous type and let $L$ be a nonnegative self-adjoint operator on $L^2(X)$ which satisfies a Gaussian estimate on its heat kernel. In this paper we prove a Hörmander-type spectral multiplier theorem for $L$ on the Besov and Triebel--Lizorkin spaces associated to $L$. Our work not only recovers the boundedness of the spectral multipliers on $L^p$ spaces and Hardy spaces associated to $L$, but also is the first to prove the boundedness of general spectral multipliers on Besov and Triebel--Lizorkin spaces.
We propose two new techniques for training Generative Adversarial Networks (GANs). Our objectives are to alleviate mode collapse in GANs and improve the quality of the generated samples. First, we propose neighbor embedding, a manifold learning-based regularization to explicitly retain local structures of latent samples in the generated samples. This prevents the generator from producing nearly identical data samples from different latent samples, and reduces mode collapse. We propose an inverse t-SNE regularizer to achieve this. Second, we propose a new technique, gradient matching, to align the distributions of the generated samples and the real samples. As it is challenging to work with high-dimensional sample distributions, we propose to align these distributions through the scalar discriminator scores. We constrain the difference between the discriminator scores of the real samples and generated ones. We further constrain the difference between the gradients of these discriminator scores. We derive these constraints from Taylor approximations of the discriminator function. We perform experiments to demonstrate that our proposed techniques are computationally simple and easy to incorporate into existing systems. When gradient matching and neighbor embedding are applied together, our GN-GAN achieves outstanding results on 1D/2D synthetic, CIFAR-10 and STL-10 datasets, e.g. an FID score of $30.80$ on the STL-10 dataset. Our code is available at: https://github.com/tntrung/gan
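The gradient-matching constraints can be sketched as a scalar penalty. The NumPy illustration below shows one plausible form (match the scalar discriminator scores and their input-gradients between real and generated batches); the exact weighting and derivation in the paper may differ, and all names here are illustrative.

```python
import numpy as np

def gradient_matching_penalty(d_real, d_fake, grad_real, grad_fake):
    """Penalty aligning real and generated samples through the discriminator.

    d_real, d_fake: scalar discriminator scores per sample, shape (n,).
    grad_real, grad_fake: gradients of those scores w.r.t. the inputs,
    shape (n, d).  Both terms follow from a first-order Taylor view of
    the discriminator function.
    """
    score_term = np.mean((d_real - d_fake) ** 2)
    grad_term = np.mean(np.sum((grad_real - grad_fake) ** 2, axis=1))
    return float(score_term + grad_term)
```

The penalty vanishes when the generated batch is indistinguishable from the real batch through the discriminator's scores and their gradients, which is the alignment the method aims for.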
Let $X$ be a space of homogeneous type and $L$ be a nonnegative self-adjoint operator on $L^2(X)$ satisfying Gaussian upper bounds on its heat kernels. In this paper we develop the theory of weighted Besov spaces $\dot{B}^{\alpha,L}_{p,q,w}(X)$ and weighted Triebel--Lizorkin spaces $\dot{F}^{\alpha,L}_{p,q,w}(X)$ associated to the operator $L$ for the full range $0<p,q\le \infty$, $\alpha\in \mathbb{R}$ and $w$ in the Muckenhoupt weight class $A_\infty$. Similarly to the classical case in the Euclidean setting, we prove that our new spaces satisfy important features such as continuous characterizations in terms of square functions, atomic decompositions and identifications with some well-known function spaces such as Hardy-type spaces and Sobolev-type spaces. Moreover, with extra assumptions on the operator $L$, we prove that the new function spaces associated to $L$ coincide with the classical function spaces. Finally we apply our results to prove the boundedness of the fractional power of $L$ and the spectral multipliers of $L$ on our new function spaces.