Blockchain brings various advantages to online transactions. However, the total transparency of these transactions may leak users' sensitive information. Requirements for both cooperation and anonymity among companies and organizations therefore become necessary. In this paper, we propose a Multi-center Anonymous Blockchain-based (MAB) system, with joint management for the consortium and privacy protection for the participants. To achieve this, we formalize the syntax used by the MAB system and present a general construction based on a modular design. By applying cryptographic primitives to each module, we instantiate our scheme with anonymity and decentralization. Furthermore, we carry out a comprehensive formal analysis of the proposed solution. The results demonstrate that the constructed scheme is secure and efficient.
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples, the detection of which is crucial for the wide application of DNN models. Recently, a number of deep testing methods from software engineering were proposed to find vulnerabilities in DNN systems, and one of them, Model Mutation Testing (MMT), was used to successfully detect various adversarial samples generated by different kinds of adversarial attacks. However, the mutated models in MMT are always huge in number (e.g., over 100 models) and lack diversity (e.g., they can be easily circumvented by high-confidence adversarial samples), which makes the method less efficient in real applications and less effective in detecting high-confidence adversarial samples. In this study, we propose Graph-Guided Testing (GGT) for adversarial sample detection to overcome these challenges. GGT generates pruned models under the guidance of graph characteristics; each pruned model has only about 5% of the parameters of a mutated model in MMT, and the graph-guided models have higher diversity. Experiments on CIFAR10 and SVHN validate that GGT performs much better than MMT with respect to both effectiveness and efficiency.
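
To make the detection rule concrete, here is a minimal sketch of label-change-rate detection in the spirit of MMT/GGT: an input is flagged as adversarial when many slightly perturbed copies of the classifier disagree with the original model's label. The model interface (`predict`), the set of pruned models, and the threshold value are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of label-change-rate detection across a set of
# mutated/pruned models; helpers and threshold are illustrative.
import numpy as np

def label_change_rate(models, original_model, x):
    """Fraction of auxiliary models whose top-1 label on x differs
    from the original model's label; x is a batch of shape (1, ...)."""
    base = int(np.argmax(original_model.predict(x), axis=-1)[0])
    flips = [int(np.argmax(m.predict(x), axis=-1)[0]) != base
             for m in models]
    return sum(flips) / len(models)

def is_adversarial(models, original_model, x, threshold=0.2):
    # Adversarial inputs sit near decision boundaries, so small model
    # perturbations (mutation, or graph-guided pruning as above) flip
    # their labels far more often than for clean inputs.
    return label_change_rate(models, original_model, x) > threshold
```
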
Defect detection and classification technology has shifted from traditional manual visual inspection to intelligent automated inspection. However, most current defect detection methods train detection models using a data-driven approach, which is limited by the difficulty of collecting sample data in industrial settings. We therefore apply zero-shot learning technology to the industrial field. The existing Latent Feature Guided Attribute Attention (LFGAA) zero-shot image classification network suffers from a mismatch between its output latent attributes and the artificially defined attributes in the semantic space, which degrades model performance. To address this, we propose an LFGAA network based on semantic feedback and improve model performance by constructing semantic embedding modules and feedback mechanisms. At the same time, to tackle the common domain shift problem in zero-shot learning, and building on the co-training idea of exploiting the complementary information between different views of the data, we propose an Ensemble Co-training algorithm, which adaptively reduces the prediction error in image tag embedding from multiple perspectives. Various experiments conducted on zero-shot datasets and a cylinder liner dataset from the industrial field yield competitive results.
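
As a loose illustration of the multi-view idea, the sketch below combines per-view class scores with weights derived from agreement across views; the score format and the adaptive weighting rule are illustrative stand-ins for the paper's error-reduction scheme, not its actual algorithm.

```python
# Hedged sketch of a co-training-style ensemble over multiple views
# (e.g., a latent-attribute view and a semantic-embedding view).
import numpy as np

def ensemble_cotraining_predict(view_scores, agreement_weighting=True):
    """view_scores: list of (n_samples, n_classes) score arrays,
    one per view. Views agreeing more with the consensus get larger
    weights before the final weighted vote."""
    stacked = np.stack(view_scores)        # (n_views, n_samples, n_classes)
    weights = np.ones(len(view_scores)) / len(view_scores)
    if agreement_weighting:
        consensus = stacked.mean(axis=0).argmax(axis=1)
        agree = np.array([(s.argmax(axis=1) == consensus).mean()
                          for s in view_scores])
        weights = agree / agree.sum()
    combined = np.tensordot(weights, stacked, axes=1)  # (n_samples, n_classes)
    return combined.argmax(axis=1)
```
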
Yahong Yang, Tao Luo, Yang Xiang (2021)
In this paper, we prove the convergence from the atomistic model to the Peierls--Nabarro (PN) model for a two-dimensional bilayer system with a complex lattice. We show that the displacement field of the dislocation solution of the PN model converges to the dislocation solution of the atomistic model with second-order accuracy. The consistency of the PN model and the stability of the atomistic model are essential in our proof. The main idea of our approach is to use several low-degree polynomials to approximate the energy due to atomistic interactions of different groups of atoms of the complex lattice.
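
To make the accuracy claim concrete, here is a hedged restatement of the error bound in generic notation; the symbols, the norm, and the choice of small parameter are assumptions for illustration, not the paper's exact statement.

```latex
% Generic second-order convergence statement (illustrative notation):
% let u^a and u^PN denote the dislocation solutions of the atomistic
% and PN models, and let \varepsilon be the small parameter (e.g., the
% ratio of lattice constant to dislocation core width). Then
\[
  \bigl\| u^{\mathrm{a}} - u^{\mathrm{PN}} \bigr\| \le C\,\varepsilon^{2},
\]
% with C independent of \varepsilon, which is what "second-order
% accuracy" means above; the proof combines consistency of the PN
% model with stability of the atomistic model.
```
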
Deep learning methods achieve great success in many areas due to their powerful feature extraction capabilities and end-to-end training mechanism, and they have recently been introduced for radio signal modulation classification. In this paper, we propose a novel deep learning framework called SigNet, where a signal-to-matrix (S2M) operator first converts the original signal into a square matrix and is co-trained with a follow-up CNN architecture for classification. This model is further accelerated by integrating 1D convolution operators, leading to the upgraded model SigNet2.0. Experiments on two signal datasets show that both SigNet and SigNet2.0 outperform a number of well-known baselines, achieving state-of-the-art performance. Notably, they obtain significantly higher accuracy than 1D-ResNet and 2D-CNN (improving accuracy by up to 70.5%), while being much faster than LSTM (saving up to 88.0% of training time). More interestingly, our proposed models behave extremely well in few-shot learning, when only a small training data set is provided. They can achieve relatively high accuracy even when only 1% of the training data is kept, while other baseline models lose their effectiveness much more quickly as the dataset shrinks. These results suggest that SigNet/SigNet2.0 could be extremely useful in situations where labeled signal data are difficult to obtain.
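
As a rough illustration of the S2M idea, the sketch below folds a 1D signal into a square matrix that a 2D CNN can consume; the fixed reshape-and-pad rule is an assumption for illustration, whereas the paper co-trains a learnable S2M operator with the CNN.

```python
# Minimal fixed signal-to-matrix (S2M) reshape, a stand-in for the
# learnable, co-trained S2M operator described above.
import numpy as np

def s2m_reshape(signal, side):
    """Zero-pad (or truncate) a 1D signal to side*side samples and
    fold it into a (side, side) matrix for a 2D CNN."""
    flat = np.zeros(side * side, dtype=float)
    n = min(len(signal), side * side)
    flat[:n] = signal[:n]
    return flat.reshape(side, side)

# Example: a length-1024 sample sequence becomes a 32x32 "image".
x = np.random.randn(1024)
img = s2m_reshape(x, 32)   # shape (32, 32), usable as a CNN input channel
```
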
Tianpeng Jiang, Yang Xiang (2020)
The optical resonance problem is similar to, but distinct from, the time-steady Schrödinger equation. One big challenge is that the eigenfunctions in the resonance problem grow exponentially. We give a physical explanation of this boundary condition and introduce the perfectly matched layer (PML) method to transform the eigenfunctions from exponential growth to exponential decay. Based on the complex stretching technique, we construct a non-Hermitian Hamiltonian for the optical resonance problem. We validate the effectiveness of this Hamiltonian by calculating its eigenvalues in the circular cavity and comparing them with the analytical results. We also use the proposed Hamiltonian to investigate the mode evolution around exceptional points in the quad-cosine cavity.
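
For context, the complex stretching behind a PML can be summarized as follows; this is the textbook form, and the paper's specific absorption profile and normalization are not assumed.

```latex
% Standard PML complex coordinate stretching (textbook form):
\[
  \tilde{x} = x + \frac{i}{k} \int_{0}^{x} \sigma(s)\, \mathrm{d}s,
  \qquad \sigma \ge 0, \quad \sigma \equiv 0 \ \text{inside the cavity},
\]
% so an outgoing wave e^{ikx}, which grows when the resonance
% frequency k is complex, becomes e^{ikx} e^{-\int_0^x \sigma(s)\,ds}
% in the layer and decays instead, making the eigenfunctions amenable
% to standard discretization.
```
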
It is well known that elastic effects can cause surface instability. In this paper, we analyze a one-dimensional discrete system that reveals a pattern formation mechanism resembling the step-bunching phenomena in epitaxial growth on vicinal surfaces. The surface steps are subject to long-range pairwise interactions taking the form of a general Lennard--Jones (LJ) type potential, characterized by two exponents $m$ and $n$ describing the singular and decaying behaviors of the interaction at small and large distances; we henceforth call it the generalized LJ $(m,n)$ potential. We provide a systematic analysis of the asymptotic properties of the step configurations and of the minimum energy, in particular their dependence on $m$, $n$, and an additional parameter $\alpha$ indicating the interaction range. Our results show that there is a phase transition between the bunching and non-bunching regimes. Moreover, some of our statements apply to any critical point of the energy, not only minimizers. This work extends the technique and results of [Luo et al., SIAM MMS, 2016], which concentrates on the LJ (0,2) potential (originating from the elastic force monopole and dipole interactions between the steps). As a by-product, our result also recovers the well-known fact that the classical LJ (6,12) potential does not exhibit step-bunching type phenomena.
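
As a point of reference, a two-power potential consistent with this description can be written as below; the signs, constants, and the pairing of exponents to terms are assumptions for illustration, not the paper's definition.

```latex
% A generic two-exponent LJ-type pair potential (illustrative form):
\[
  V_{m,n}(r) = \frac{c_1}{r^{n}} - \frac{c_2}{r^{m}},
  \qquad c_1, c_2 > 0, \quad n > m \ge 0,
\]
% repulsive and singular like r^{-n} as r -> 0, attractive and
% decaying like r^{-m} as r -> infinity. (m,n) = (6,12) gives the
% classical LJ potential, and (0,2) corresponds to the step
% monopole/dipole case, with the m = 0 power read as a logarithmic
% attraction.
```
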
Biomedical data are widely used in developing prediction models for identifying specific tumors, for drug discovery, and for the classification of human cancers. However, previous studies usually focused on different classifiers and overlooked the class imbalance problem in real-world biomedical datasets. There is a lack of studies evaluating data pre-processing techniques, such as resampling and feature selection, for imbalanced biomedical data learning, and the relationship between pre-processing techniques and data distributions has not been analysed in previous studies. This article focuses on reviewing and evaluating some popular and recently developed resampling and feature selection methods for class imbalance learning. We analyse the effectiveness of each technique from the data distribution perspective. Extensive experiments were conducted with five classifiers, four performance measures, and eight learning techniques across twenty real-world datasets. The experimental results show that: (1) resampling and feature selection techniques perform better with a support vector machine (SVM) classifier, but poorly with C4.5 decision tree and linear discriminant analysis classifiers; (2) for datasets with different distributions, techniques such as random undersampling and feature selection outperform other pre-processing methods on the t location-scale distribution when using SVM and KNN (k-nearest neighbours) classifiers, while random oversampling outperforms the other methods on the negative binomial distribution when using a random forest classifier at lower imbalance ratios; (3) feature selection outperforms the other pre-processing methods in most cases; thus, feature selection with an SVM classifier is the best choice for imbalanced biomedical data learning.
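
A minimal sketch of the best-performing recipe reported above (feature selection with an SVM, here combined with random undersampling), using scikit-learn and imbalanced-learn; the dataset, the number of selected features, and the hyperparameters are placeholders, not the paper's experimental setup.

```python
# Feature selection + undersampling + SVM in one leak-free pipeline;
# k, C, and the scoring choice below are illustrative defaults.
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),          # keep 20 features
    ("resample", RandomUnderSampler(random_state=0)),  # balance classes
    ("clf", SVC(kernel="rbf", C=1.0)),
])

# With X (features) and y (labels) as your imbalanced dataset:
# scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
```

Putting the sampler inside the pipeline matters: resampling then happens only on each training fold, so the cross-validated scores are not inflated by resampled test data.
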
When observations are organized into groups with commonalities among them, dependent random measures can be an ideal modeling choice. One property of dependent random measures is that the atoms of the posterior distribution are shared among groups, so groups can borrow information from each other. When a normalized dependent random measure prior with independent increments is applied, we can derive the appropriate exchangeable partition probability function (EPPF) and subsequently deduce its inference algorithm for any mixture model likelihood. We provide all the necessary derivations and solutions for this framework. For demonstration, we use a mixture-of-Gaussians likelihood in combination with a dependent structure constructed from linear combinations of completely random measures (CRMs). Our experiments show superior performance under this framework, where the inferred values, including the mixing weights and the number of clusters, both respond appropriately to the number of completely random measures used.
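
Concretely, the demonstrated dependent structure can be sketched as follows; the notation is assumed for illustration rather than taken from the paper.

```latex
% Dependent measures as linear combinations of shared CRMs
% (illustrative notation): given completely random measures
% mu_1, ..., mu_K and nonnegative weights a_{jk},
\[
  G_j = \sum_{k=1}^{K} a_{jk}\, \mu_k,
  \qquad
  p_j = \frac{G_j}{G_j(\Theta)},
\]
% so any two groups j, j' with overlapping weights share the atoms of
% the common mu_k, which is how groups borrow information from each
% other after normalization.
```
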
Time-varying mixture densities occur in many scenarios; for example, the distribution of keywords appearing in publications may evolve from year to year, and video frame features associated with multiple targets may evolve within a sequence. Any model that realistically captures this phenomenon must exhibit two important properties: the underlying mixture densities must have an unknown number of components, and there must be smoothness constraints between adjacent mixture densities. The traditional Hierarchical Dirichlet Process (HDP) is suited to the first property, but certainly not the second, because each random measure in the lower hierarchies is sampled independently of the others and hence does not capture any temporal correlation. To overcome these shortcomings, we propose a new Smoothed Hierarchical Dirichlet Process (sHDP). The key novelty of this model is that we place a temporal constraint among the nearby discrete measures $\{G_j\}$ in the form of a symmetric Kullback-Leibler (KL) divergence with a fixed bound $B$. Although the constraint involves only a single scalar value, it nonetheless allows flexibility in the corresponding successive measures. Remarkably, it also lets us infer the model within the stick-breaking process, where the traditional Beta distribution used in stick-breaking is replaced by a new constraint calculated from $B$. We present the inference algorithm and elaborate on its solutions. Our experiment using NIPS keywords demonstrates the desired smoothing effect of the model.
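
For orientation, the standard stick-breaking representation that the sHDP modifies is recalled below; the KL-constrained replacement for the Beta draws is the paper's contribution and is only indicated schematically.

```latex
% Standard stick-breaking construction of a discrete random measure:
\[
  v_k \sim \mathrm{Beta}(1, \gamma), \qquad
  \pi_k = v_k \prod_{l<k} (1 - v_l), \qquad
  G = \sum_{k=1}^{\infty} \pi_k\, \delta_{\theta_k},
\]
% the sHDP replaces the free Beta draws with draws constrained so
% that successive group measures satisfy (schematically)
\[
  D_{\mathrm{KL}}^{\mathrm{sym}}\!\left(G_j \,\middle\|\, G_{j+1}\right) \le B.
\]
```
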