131 - Bo Lan, Xue-xiang Xu, 2021
Based on N different coherent states with equal weights and phase-space rotational symmetry, we introduce N-headed incoherent superposition states (NHICSSs) and N-headed coherent superposition states (NHCSSs). The N coherent states are associated with the N-th roots of the same complex number. We study and compare the properties of NHICSSs and NHCSSs, including the average photon number, the Mandel Q parameter, quadrature squeezing, Fock-basis matrix elements, and the Wigner function. Among all these states, only the 2HCSS (i.e., the Schrödinger cat state) exhibits a quadrature-squeezing effect. Our theoretical results can serve as a reference for researchers in this field.
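As a quick numerical illustration of the construction described in this abstract, the sketch below builds an equal-weight N-headed coherent superposition from the N-th roots of a complex number in a truncated Fock basis and evaluates its Mandel Q parameter. This is only a hedged sketch of the stated definitions: the helper names `coherent`, `nhcss`, and `mandel_q`, the truncation dimension, and the example amplitude are illustrative choices, not taken from the paper.

```python
import numpy as np

def coherent(alpha, dim):
    """Fock-basis amplitudes <n|alpha> of a coherent state, truncated at dim."""
    c = np.zeros(dim, dtype=complex)
    c[0] = np.exp(-abs(alpha) ** 2 / 2)
    for n in range(1, dim):
        c[n] = c[n - 1] * alpha / np.sqrt(n)   # <n|alpha> = e^{-|a|^2/2} a^n / sqrt(n!)
    return c

def nhcss(z, N, dim=60):
    """Equal-weight N-headed coherent superposition built from the N distinct
    N-th roots alpha_k of the complex number z (illustrative construction)."""
    r, theta = abs(z), np.angle(z)
    alphas = [r ** (1.0 / N) * np.exp(1j * (theta + 2 * np.pi * k) / N) for k in range(N)]
    psi = sum(coherent(a, dim) for a in alphas)
    return psi / np.linalg.norm(psi), alphas

def mandel_q(probs):
    """Mandel Q = (<n^2> - <n>^2 - <n>) / <n> from a photon-number distribution."""
    n = np.arange(len(probs))
    mean = np.sum(n * probs)
    var = np.sum(n ** 2 * probs) - mean ** 2
    return (var - mean) / mean

psi, alphas = nhcss(4.0 + 0j, N=2)                     # 2HCSS: an even Schroedinger cat (alpha = +/-2)
print(mandel_q(np.abs(coherent(alphas[0], 60)) ** 2))  # single coherent state: Q ~ 0 (Poissonian)
print(mandel_q(np.abs(psi) ** 2))                      # even cat: Q > 0 (super-Poissonian)
```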
A recapturing attack can be employed as a simple but effective anti-forensic tool against digital document images. Inspired by the document inspection process that compares a questioned document against a reference sample, we propose a document recapture detection scheme that employs a Siamese network to compare document images and extract the distinctive features of a recaptured document image. The proposed algorithm takes advantage of both metric learning and image forensic techniques. Instead of adopting a Euclidean distance-based loss function, we integrate a forensic similarity function with a triplet loss and a normalized softmax loss. After training with the proposed triplet selection strategy, the resulting feature embedding clusters the genuine samples near the reference while pushing the recaptured samples apart. In our experiments, we consider practical domain generalization problems, such as variations in printing/imaging devices, substrates, recapturing channels, and document types. To evaluate the robustness of different approaches, we benchmark several popular off-the-shelf machine-learning-based approaches, a state-of-the-art document image detection scheme, and the proposed schemes with different network backbones under various experimental protocols. Experimental results show that the proposed schemes with different network backbones consistently outperform the state-of-the-art approaches under different experimental settings. Specifically, under the most challenging scenario in our experiments, i.e., evaluation across different types of documents produced by different devices, the proposed network with a ResNeXt101 backbone achieves less than 5.00% APCER (Attack Presentation Classification Error Rate) and 5.56% BPCER (Bona Fide Presentation Classification Error Rate) at a 5% BPCER decision threshold.
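The metric-learning idea described above, a Siamese embedding trained so that genuine samples cluster near a reference while recaptured samples are pushed away, can be sketched with a standard triplet objective. The PyTorch snippet below is a minimal, hedged illustration only: the ResNet-18 backbone, embedding size, margin, and tensor shapes are assumptions for the example, and it omits the paper's forensic similarity function, normalized softmax loss, and triplet selection strategy.

```python
import torch
import torch.nn as nn
from torchvision import models

class EmbeddingNet(nn.Module):
    """Shared (Siamese) backbone mapping a document patch to an L2-normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # small stand-in; the paper also reports ResNeXt101
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x):
        return nn.functional.normalize(self.backbone(x), dim=1)  # unit-length embeddings

# Triplet objective: pull genuine captures (positives) toward the reference (anchor),
# push recaptured images (negatives) at least `margin` farther away.
net = EmbeddingNet()
criterion = nn.TripletMarginLoss(margin=0.5)

anchor = torch.randn(4, 3, 224, 224)    # reference document patches
genuine = torch.randn(4, 3, 224, 224)   # genuine captures (positives)
recaptured = torch.randn(4, 3, 224, 224)  # recaptured images (negatives)

loss = criterion(net(anchor), net(genuine), net(recaptured))
loss.backward()
```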
131 - Bo Wu, Bo Lang, 2020
To enhance the ability of neural networks to extract local point cloud features and to improve the quality of those features, we propose a multiscale graph generation method and a self-adaptive graph convolution method. First, the multiscale graph generation method transforms a point cloud into a structured multiscale graph that supports multiscale analysis in scale space and captures the features of the point cloud data at different scales, making it easier to obtain the best point cloud features. Because traditional convolutional neural networks are not applicable to graph data with irregular vertex neighborhoods, we then present a self-adaptive graph convolution kernel that uses Chebyshev polynomials to fit an irregular convolution filter based on the theory of optimal approximation. We adopt max pooling to aggregate the features from the graphs at different scales and generate the final point cloud features. In experiments conducted on three widely used public datasets, the proposed method significantly outperforms other state-of-the-art models, demonstrating its effectiveness and generalizability.
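The Chebyshev-polynomial filtering mentioned in this abstract follows the general ChebNet-style construction, which the hedged PyTorch sketch below illustrates on a toy graph. The layer, the random adjacency, and the rough Laplacian rescaling (assuming lambda_max is close to 2) are illustrative assumptions and do not reproduce the paper's multiscale graph generation or its self-adaptive kernel.

```python
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    """Chebyshev graph convolution: y = sum_k T_k(L_hat) x W_k, where L_hat is a
    rescaled graph Laplacian and T_k are Chebyshev polynomials of the first kind."""
    def __init__(self, in_dim, out_dim, K=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(K, in_dim, out_dim) * 0.1)

    def forward(self, x, lap):
        # x: (num_points, in_dim); lap: rescaled Laplacian, shape (num_points, num_points)
        t_prev, t_cur = x, lap @ x                   # T_0(L)x = x, T_1(L)x = Lx
        out = t_prev @ self.weight[0] + t_cur @ self.weight[1]
        for k in range(2, self.weight.shape[0]):
            t_next = 2 * lap @ t_cur - t_prev        # recurrence T_k = 2 L T_{k-1} - T_{k-2}
            out = out + t_next @ self.weight[k]
            t_prev, t_cur = t_cur, t_next
        return out

# Toy usage on a random symmetric graph over 6 "points" with 3-D input features.
num_pts = 6
adj = (torch.rand(num_pts, num_pts) > 0.5).float()
adj = ((adj + adj.T) > 0).float()
adj.fill_diagonal_(0)
d_inv_sqrt = torch.diag(adj.sum(1).clamp(min=1e-6).rsqrt())
lap = torch.eye(num_pts) - d_inv_sqrt @ adj @ d_inv_sqrt   # normalized Laplacian
lap_hat = lap - torch.eye(num_pts)                         # rough rescaling, assuming lambda_max ~ 2
conv = ChebConv(in_dim=3, out_dim=16, K=3)
print(conv(torch.randn(num_pts, 3), lap_hat).shape)        # torch.Size([6, 16])
```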
Orthogonal matrices have shown advantages in training Recurrent Neural Networks (RNNs), but such matrices are limited to being square for the hidden-to-hidden transformation in RNNs. In this paper, we generalize the square orthogonal matrix to an orthogonal rectangular matrix and formulate this problem in feed-forward Neural Networks (FNNs) as Optimization over Multiple Dependent Stiefel Manifolds (OMDSM). We show that the rectangular orthogonal matrix can stabilize the distribution of network activations and regularize FNNs. We also propose a novel orthogonal weight normalization method to solve OMDSM. In particular, it constructs an orthogonal transformation over proxy parameters to ensure that the weight matrix is orthogonal and back-propagates gradient information through the transformation during training. To guarantee stability, we minimize the distortion between the proxy parameters and the canonical weights over all tractable orthogonal transformations. In addition, we design an orthogonal linear module (OLM) to learn orthogonal filter banks in practice, which can be used as an alternative to the standard linear module. Extensive experiments demonstrate that by simply substituting OLM for the standard linear module, without revising any experimental protocols, our method largely improves the performance of state-of-the-art networks, including Inception and residual networks, on the CIFAR and ImageNet datasets. In particular, we reduce the test error of a wide residual network on CIFAR-100 from 20.04% to 18.61% with this simple substitution. Our code is available online for result reproduction.
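One standard way to realize the proxy-parameter idea described above is to re-parameterize a rectangular weight as W = (V V^T)^(-1/2) V, which makes W row-orthogonal while gradients flow back to the proxy V through the transformation. The PyTorch sketch below is a hedged illustration of that idea, not the authors' OLM implementation; the class name, eigendecomposition route, epsilon, and initialization are assumptions for the example.

```python
import torch
import torch.nn as nn

class OrthogonalLinear(nn.Module):
    """Linear layer whose rectangular weight is kept row-orthogonal by
    re-parameterizing it through a proxy matrix V: W = (V V^T)^(-1/2) V,
    so that W W^T = I. Gradients are back-propagated to V."""
    def __init__(self, in_features, out_features, eps=1e-5):
        super().__init__()
        assert out_features <= in_features, "row orthogonality needs out <= in"
        self.proxy = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.eps = eps

    def orthogonal_weight(self):
        V = self.proxy
        gram = V @ V.T + self.eps * torch.eye(V.shape[0], device=V.device)
        evals, evecs = torch.linalg.eigh(gram)               # symmetric eigendecomposition
        inv_sqrt = evecs @ torch.diag(evals.clamp(min=self.eps).rsqrt()) @ evecs.T
        return inv_sqrt @ V                                  # W with W @ W.T ~ I

    def forward(self, x):
        return nn.functional.linear(x, self.orthogonal_weight(), self.bias)

layer = OrthogonalLinear(256, 64)
y = layer(torch.randn(8, 256))
W = layer.orthogonal_weight()
print(torch.allclose(W @ W.T, torch.eye(64), atol=1e-4))     # True up to numerical tolerance
```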