The parameters of a $q$-ary MDS Euclidean self-dual code are completely determined by its length, and the construction of MDS Euclidean self-dual codes with new lengths has been widely investigated in recent years. In this paper, we give a further study of the construction of MDS Euclidean self-dual codes via generalized Reed-Solomon (GRS) codes and their extended codes. The main idea of our construction is to choose suitable evaluation points such that the corresponding (extended) GRS codes are Euclidean self-dual. Firstly, we consider an evaluation set consisting of two disjoint subsets, one of which is based on the trace function, while the other is a union of a subspace and its cosets. Four new families of MDS Euclidean self-dual codes are then constructed. Secondly, we give a simple but useful lemma showing that the symmetric difference of two intersecting subsets of a finite field can be taken as the desired evaluation set. Based on this lemma, we generalize our first construction and provide two new families of MDS Euclidean self-dual codes. Finally, by using two multiplicative subgroups and their cosets which have nonempty intersection, we present three generic constructions of MDS Euclidean self-dual codes with flexible parameters. Several new families of MDS Euclidean self-dual codes are explicitly constructed.
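
For context, a brief sketch of the standard GRS framework underlying such constructions may help (the notation and the self-duality criterion below are the ones commonly used in this literature, not quoted from the paper). Given distinct evaluation points $a_1, \dots, a_n \in \mathbb{F}_q$ and nonzero scalars $v_1, \dots, v_n \in \mathbb{F}_q^{*}$,
\[
\mathrm{GRS}_k(\mathbf{a}, \mathbf{v}) = \bigl\{ \bigl(v_1 f(a_1), \dots, v_n f(a_n)\bigr) : f \in \mathbb{F}_q[x],\ \deg f < k \bigr\}
\]
is an $[n, k, n-k+1]$ MDS code. Writing $u_i = \prod_{j \neq i} (a_i - a_j)^{-1}$, the code $\mathrm{GRS}_{n/2}(\mathbf{a}, \mathbf{v})$ is Euclidean self-dual precisely when $v_i^2 = \lambda u_i$ for all $i$ and some fixed $\lambda \in \mathbb{F}_q^{*}$. Such $v_i$ exist exactly when the $\lambda u_i$ are all squares in $\mathbb{F}_q$, which is why the careful choice of the evaluation set is the crux of these constructions.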
A linear code is called an MDS self-dual code if it is both an MDS code and a self-dual code with respect to the Euclidean inner product. The parameters of such codes are completely determined by the code length. In this paper, we consider new constructions of MDS self-dual codes via generalized Reed-Solomon (GRS) codes and their extended codes. The critical idea of our constructions is to choose suitable evaluation points such that the corresponding (extended) GRS codes are self-dual. The evaluation set of our constructions consists of a subgroup of a finite field and its cosets in a bigger subgroup. Four new families of MDS self-dual codes are obtained, and they have better parameters than previous results in certain regions. Moreover, via the Möbius action over finite fields, we give a systematic way to construct self-dual GRS codes with different evaluation points from any known self-dual GRS code. In particular, we prove that all self-dual extended GRS codes over $\mathbb{F}_{q}$ with length $n < q+1$ can be constructed from GRS codes with the same parameters.
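
As a point of reference, the Möbius action mentioned here is, in the standard sense, the fractional linear map
\[
\phi(x) = \frac{ax+b}{cx+d}, \qquad a, b, c, d \in \mathbb{F}_q,\ ad - bc \neq 0,
\]
acting on the projective line $\mathbb{F}_q \cup \{\infty\}$. Such maps are known to send (extended) GRS codes to (extended) GRS codes with the same parameters, so transporting self-duality along the orbit of an evaluation set is a natural mechanism for the systematic construction described above (the display is the standard definition, not an excerpt from the paper).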
The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years. Among these efforts, adversarial training is the most promising; it flattens the input loss landscape (the change of loss with respect to the input) by training on adversarially perturbed examples. However, how the widely studied weight loss landscape (the change of loss with respect to the weights) behaves in adversarial training has rarely been explored. In this paper, we investigate the weight loss landscape from a new perspective and identify a clear correlation between the flatness of the weight loss landscape and the robust generalization gap. Several well-recognized adversarial training improvements, such as early stopping, designing new objective functions, or leveraging unlabeled data, all implicitly flatten the weight loss landscape. Based on these observations, we propose a simple yet effective method, Adversarial Weight Perturbation (AWP), to explicitly regularize the flatness of the weight loss landscape, forming a double-perturbation mechanism in the adversarial training framework that adversarially perturbs both inputs and weights. Extensive experiments demonstrate that AWP indeed yields a flatter weight loss landscape and can be easily incorporated into various existing adversarial training methods to further boost their adversarial robustness.
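
A minimal PyTorch-style sketch of the double-perturbation idea follows. The perturbation size gamma, the per-parameter rescaling, and the single ascent step are illustrative assumptions for exposition, not the authors' exact recipe; `attack` is a placeholder for any inner input attack such as PGD.

import torch
import torch.nn.functional as F

def awp_training_step(model, x, y, attack, optimizer, gamma=0.005):
    """One sketched AWP step: perturb inputs, then weights, then update."""
    # 1) Input perturbation: craft adversarial examples with any inner attack.
    x_adv = attack(model, x, y)

    # 2) Weight perturbation: a single gradient-ascent step on the weights,
    #    rescaled per parameter so the step is relative to the weight norm.
    params = [p for p in model.parameters() if p.requires_grad]
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, params)
    deltas = []
    for p, g in zip(params, grads):
        delta = gamma * p.detach().norm() * g / (g.norm() + 1e-12)
        p.data.add_(delta)              # ascend: move weights toward higher loss
        deltas.append(delta)

    # 3) Train on the doubly perturbed loss, then undo the weight step so the
    #    optimizer applies the AWP gradient to the original weights.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    for p, delta in zip(params, deltas):
        p.data.sub_(delta)
    optimizer.step()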
Training deep neural networks (DNNs) in the presence of noisy labels is an important and challenging task. Probabilistic modeling, which consists of a classifier and a transition matrix, depicts the transformation from true labels to noisy labels and is a promising approach. However, recent probabilistic methods directly apply the transition matrix to the DNN, neglecting the DNN's susceptibility to overfitting, and achieve unsatisfactory performance, especially under uniform noise. In this paper, inspired by label smoothing, we propose a novel method, termed Matrix Smoothing, in which a smoothed transition matrix is used for updating the DNN, to restrict the overfitting of the DNN in probabilistic modeling. We empirically demonstrate that our method not only significantly improves the robustness of probabilistic modeling, but also obtains a better estimation of the transition matrix.
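
The sketch below illustrates the label-smoothing analogy in code. The convex combination of the transition matrix with the uniform matrix is an assumption made for illustration, not the paper's exact formula; T is assumed to be a row-stochastic C x C matrix with T[i][j] = p(noisy = j | clean = i).

import torch
import torch.nn.functional as F

def smoothed_transition(T, beta=0.1):
    """Mix the estimated transition matrix T (C x C, rows sum to 1)
    with the uniform matrix, analogously to label smoothing."""
    C = T.size(0)
    return (1.0 - beta) * T + beta * torch.full_like(T, 1.0 / C)

def noisy_label_loss(logits, noisy_targets, T, beta=0.1):
    # p(clean | x) from the classifier, pushed through the smoothed matrix
    # to get p(noisy | x); train with NLL against the observed noisy labels.
    T_s = smoothed_transition(T, beta)
    clean_probs = F.softmax(logits, dim=1)    # shape (N, C)
    noisy_probs = clean_probs @ T_s           # shape (N, C)
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_targets)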
Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt. Despite their huge success in building deeper and more powerful DNNs, we identify a surprising security weakness of skip connections in this paper: they allow easier generation of highly transferable adversarial examples. Specifically, in ResNet-like neural networks (with skip connections), gradients can backpropagate through either the skip connections or the residual modules. We find that using more gradient from the skip connections than from the residual modules, according to a decay factor, allows one to craft adversarial examples with high transferability. Our method is termed the Skip Gradient Method (SGM). We conduct comprehensive transfer attacks against state-of-the-art DNNs, including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Networks (SENet), and robustly trained DNNs. We show that employing SGM on the gradient flow can greatly improve the transferability of crafted attacks in almost all cases. Furthermore, SGM can be easily combined with existing black-box attack techniques and yields large improvements over state-of-the-art transferability methods. Our findings not only motivate new research into the architectural vulnerability of DNNs, but also open up further challenges for the design of secure DNN architectures.
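
A toy sketch of the gradient-decay idea on a single residual block follows. The detach trick keeps the forward pass exact (skip plus residual) while scaling the backward gradient through the residual branch by the decay factor; the block layout and where gamma is applied are illustrative assumptions rather than the authors' exact implementation.

import torch
import torch.nn as nn

class SGMResidualBlock(nn.Module):
    def __init__(self, channels, gamma=0.5):
        super().__init__()
        self.gamma = gamma  # decay on the residual-branch gradient
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        r = self.body(x)
        # Forward value equals x + r exactly; in the backward pass the
        # gradient through r is scaled by gamma, so relatively more gradient
        # flows through the skip connection, as SGM prescribes.
        return x + self.gamma * r + (1.0 - self.gamma) * r.detach()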