Existing deepfake-detection methods focus on passive detection, i.e., they detect fake face images via exploiting the artifacts produced during deepfake manipulation. A key limitation of passive detection is that it cannot detect fake faces that are generated by new deepfake generation methods. In this work, we propose FaceGuard, a proactive deepfake-detection framework. FaceGuard embeds a watermark into a real face image before it is published on social media. Given a face image that claims to be of an individual (e.g., Nicolas Cage), FaceGuard extracts a watermark from it and predicts the face image to be fake if the extracted watermark does not match well with the individual's ground-truth one. A key component of FaceGuard is a new deep-learning-based watermarking method, which is 1) robust to normal image post-processing such as JPEG compression, Gaussian blurring, cropping, and resizing, but 2) fragile to deepfake manipulation. Our evaluation on multiple datasets shows that FaceGuard can detect deepfakes accurately and outperforms existing methods.
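The verification step can be sketched as a simple bit-agreement check. This is a minimal sketch, not the paper's implementation: the function names, the 0.75 threshold, and the 8-bit watermarks are illustrative assumptions, and the embedding/extraction networks are not reproduced here.

```python
def bit_accuracy(extracted, reference):
    """Fraction of watermark bits that agree."""
    matches = sum(a == b for a, b in zip(extracted, reference))
    return matches / len(reference)

def is_fake(extracted_bits, ground_truth_bits, threshold=0.75):
    """Flag an image as fake when the extracted watermark no longer matches
    the individual's ground-truth watermark well enough."""
    return bit_accuracy(extracted_bits, ground_truth_bits) < threshold

truth = [1, 0, 1, 1, 0, 0, 1, 0]
# Benign post-processing (JPEG, blur, crop) leaves most bits intact ...
post_processed = [1, 0, 1, 1, 0, 1, 1, 0]   # 7/8 bits survive
# ... while deepfake manipulation destroys the fragile watermark.
manipulated = [0, 1, 0, 0, 1, 1, 0, 1]      # 0/8 bits survive
```

The robust-yet-fragile design is what makes this threshold test meaningful: post-processing keeps the accuracy above the threshold, while manipulation pushes it below.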
Yingbin Bai, Erkun Yang, Bo Han (2021)
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods. To exploit this property, the early stopping trick, which stops the optimization at the early stage of training, is usually adopted. Current methods generally decide the early stopping point by considering a DNN as a whole. However, a DNN can be considered as a composition of a series of layers, and we find that the latter layers in a DNN are much more sensitive to label noise, while their former counterparts are quite robust. Therefore, selecting a stopping point for the whole network may cause different DNN layers to affect each other antagonistically, thus degrading the final performance. In this paper, we propose to separate a DNN into different parts and progressively train them to address this problem. Instead of early stopping, which trains a whole DNN all at once, we initially train the former DNN layers by optimizing the DNN with a relatively large number of epochs. During training, we progressively train the latter DNN layers using a smaller number of epochs with the preceding layers fixed to counteract the impact of noisy labels. We term the proposed method progressive early stopping (PES). Despite its simplicity, compared with early stopping, PES helps obtain more promising and stable results. Furthermore, by combining PES with existing approaches to noisy-label training, we achieve state-of-the-art performance on image classification benchmarks.
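The progressive schedule can be sketched as follows. The layer names, split sizes, and epoch counts are illustrative assumptions, and `train_stage` is a stand-in for a real SGD loop, not the paper's training code.

```python
def train_stage(trainable_layers, epochs, log):
    """Stand-in for an optimization loop: records which layers are trained
    for how many epochs; a real implementation would update their weights."""
    for layer in trainable_layers:
        log.append((layer, epochs))

def progressive_early_stopping(layers, schedule):
    """Train the network part by part: the former layers get many epochs;
    each later part gets fewer, with all preceding parts kept fixed."""
    log, start = [], 0
    for part_size, epochs in schedule:
        train_stage(layers[start:start + part_size], epochs, log)
        start += part_size  # freeze this part for the remaining stages
    return log

# A 4-layer net: early layers trained long, later layers progressively less.
log = progressive_early_stopping(["conv1", "conv2", "fc1", "fc2"],
                                 [(2, 50), (1, 10), (1, 5)])
```

The key structural point survives even in this toy: noise-sensitive later layers see far fewer optimization epochs than the robust early layers.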
Federated Learning (FL) is a promising framework with great potential for privacy preservation and for lowering the computation load at the cloud. FedAvg and FedProx are two widely adopted algorithms. However, recent work raised concerns about these two methods: (1) their fixed points do not correspond to the stationary points of the original optimization problem, and (2) the common model found might not generalize well locally. In this paper, we alleviate these concerns. Towards this, we adopt the statistical learning perspective yet allow the distributions to be heterogeneous and the local data to be unbalanced. We show, in the general kernel regression setting, that both FedAvg and FedProx converge to the minimax-optimal error rates. Moreover, when the kernel function has a finite rank, the convergence is exponentially fast. Our results further analytically quantify the impact of model heterogeneity and characterize the federation gain, i.e., the reduction in estimation error a worker obtains by joining federated learning compared with the best local estimator. To the best of our knowledge, we are the first to show the achievability of minimax error rates under FedAvg and FedProx, and the first to characterize the gains in joining FL. Numerical experiments further corroborate our theoretical findings on the statistical optimality of FedAvg and FedProx and the federation gains.
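A minimal FedAvg sketch on a toy scalar problem may clarify the weighting by unbalanced local sample counts. The squared-loss model, step counts, and learning rate are illustrative assumptions and are far simpler than the kernel-regression setting analyzed in the paper.

```python
# Toy FedAvg: each worker holds unbalanced scalar data and fits the mean
# under squared loss; the server averages models weighted by sample counts.

def local_update(theta, data, steps=10, lr=0.1):
    """A few local gradient steps on the loss sum_x (theta - x)^2 / n."""
    for _ in range(steps):
        grad = sum(2.0 * (theta - x) for x in data) / len(data)
        theta -= lr * grad
    return theta

def fedavg(local_datasets, rounds=20):
    """Average local models, weighting each by its local sample count."""
    theta = 0.0
    total = sum(len(d) for d in local_datasets)
    for _ in range(rounds):
        models = [local_update(theta, d) for d in local_datasets]
        theta = sum(len(d) * m for m, d in zip(models, local_datasets)) / total
    return theta

# Two heterogeneous workers: four samples at 1.0, one sample at 3.0.
theta = fedavg([[1.0, 1.0, 1.0, 1.0], [3.0]])  # converges to weighted mean 1.4
```

In this toy the fixed point is the sample-weighted global mean, illustrating how unbalanced local data enters the aggregation; it does not, by itself, exhibit the fixed-point discrepancy discussed in the abstract.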
While backpropagation (BP) has been applied to spiking neural networks (SNNs) with encouraging results, a key challenge is to backpropagate a continuous-valued loss over layers of spiking neurons exhibiting discontinuous all-or-none firing activities. Existing methods deal with this difficulty by introducing compromises that come with their own limitations, leading to potential performance degradation. We propose a novel BP-like method, called neighborhood aggregation (NA), which computes accurate error gradients guiding weight updates that may lead to discontinuous modifications of firing activities. NA achieves this goal by aggregating finite differences of the loss over multiple perturbed membrane potential waveforms in the neighborhood of the present membrane potential of each neuron while utilizing a new membrane potential distance function. Our experiments show that the proposed NA algorithm delivers state-of-the-art performance for SNN training on several datasets.
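The aggregation of finite differences can be illustrated on a scalar toy variable. The perturbation radius, neighborhood size, and quadratic loss are assumptions made for the sketch; the actual method perturbs membrane potential waveforms and uses a learned distance function, which is not reproduced here.

```python
import random

def na_gradient(loss, u, n_neighbors=8, radius=0.5, seed=0):
    """Aggregate finite differences (loss(u + s) - loss(u)) / s over a
    neighborhood of random perturbations s of the variable u."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_neighbors):
        step = rng.uniform(-radius, radius) or radius  # avoid step == 0
        diffs.append((loss(u + step) - loss(u)) / step)
    return sum(diffs) / len(diffs)

# For the smooth loss (u - 2)^2 at u = 0, each finite difference equals
# step - 4, so the aggregate lies near the true derivative -4.
g = na_gradient(lambda u: (u - 2.0) ** 2, 0.0)
```

The point of aggregating over a neighborhood, rather than taking one infinitesimal limit, is that the estimate remains informative even when the loss responds discontinuously to the perturbations, as spiking activities do.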
Ken K. W. Ma, Kun Yang (2021)
The black hole information paradox has been hotly debated for the last few decades, without full resolution. This makes it desirable to find analogs of this paradox in simple and experimentally accessible systems, whose resolutions may shed light on this long-standing and fundamental problem. Here we identify and resolve an apparent information paradox in a quantum Hall interface between the Halperin-331 and Pfaffian states. Information carried by the pseudospin degree of freedom of the Abelian 331 quasiparticles gets scrambled when they cross the interface to enter the non-Abelian Pfaffian state, and becomes inaccessible to local measurements; in this sense the Pfaffian region is an analog of the black hole interior, while the interface plays a role similar to its horizon. We demonstrate that the lost information gets recovered once the black hole evaporates and the quasiparticles return to the 331 region, albeit in a highly entangled form. Such recovery is quantified by the Page curve of the entropy carried by these quasiparticles, which are analogs of Hawking radiation.
Chen He, Xie Xie, Kun Yang (2021)
This paper considers an intelligent reflecting surface (IRS) assisted multi-input multi-output (MIMO) power splitting (PS) based simultaneous wireless information and power transfer (SWIPT) system with multiple PS receivers (PSRs). The objective is to maximize the achievable data rate of the system by jointly optimizing the PS ratios at the PSRs, the active transmit beamforming (ATB) at the access point (AP), and the passive reflective beamforming (PRB) at the IRS, while satisfying the constraints on maximum transmission power at the AP, the reflective phase shift of each element at the IRS, the individual minimum harvested energy requirement of each PSR, and the domain of the PS ratio of each PSR. However, since the optimization variables are intricately coupled and the constraints are conflicting, the formulated problem is non-convex and cannot be addressed directly by existing approaches. To this end, we propose a joint optimization framework to solve this problem. In particular, we reformulate it into an equivalent form by employing the Lagrangian dual transform and the fractional programming transform, and decompose the transformed problem into several sub-problems. Then, we propose an alternating optimization algorithm, capitalizing on the dual sub-gradient method, the successive convex approximation method, and the penalty-based majorization-minimization approach, to solve the sub-problems iteratively and obtain the optimal solutions in nearly closed form. Numerical simulation results verify the effectiveness of the IRS in the SWIPT system and indicate that the proposed algorithm offers a substantial performance gain.
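The alternating structure can be illustrated on a toy biconvex problem in which each block has a closed-form update with the other block fixed. The objective below is purely illustrative and unrelated to the actual beamforming sub-problems, which require the dual sub-gradient, SCA, and MM machinery described above.

```python
# Toy block-coordinate ascent: maximize f(x, y) = -(x - y)**2 - (x - 3)**2
# by solving each block in closed form while the other block is held fixed.

def alternate_optimize(rounds=40):
    x, y = 0.0, 0.0
    for _ in range(rounds):
        x = (y + 3.0) / 2.0  # argmax over x with y fixed (set df/dx = 0)
        y = x                # argmax over y with x fixed (set df/dy = 0)
    return x, y

x_opt, y_opt = alternate_optimize()  # both blocks converge to 3.0
```

Each block update never decreases the objective, which is the same monotonicity argument that underpins convergence of the alternating algorithm for the coupled PS-ratio/ATB/PRB variables.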
Shuo Yang, Erkun Yang, Bo Han (2021)
In label-noise learning, estimating the transition matrix is a hot topic, as the matrix plays an important role in building statistically consistent classifiers. Traditionally, the transition from the clean distribution to the noisy distribution (i.e., the clean label transition matrix) has been widely exploited to learn a clean label classifier from noisy data. Motivated by the fact that classifiers mostly output Bayes optimal labels for prediction, in this paper we propose to directly model the transition from the Bayes optimal distribution to the noisy distribution (i.e., the Bayes label transition matrix) and learn a Bayes optimal label classifier. Note that given only noisy data, it is ill-posed to estimate either the clean label transition matrix or the Bayes label transition matrix. But favorably, Bayes optimal labels are less uncertain than clean labels, i.e., the class posteriors of Bayes optimal labels are one-hot vectors while those of clean labels are not. This yields two advantages for estimating the Bayes label transition matrix: (a) we can theoretically recover a set of Bayes optimal labels under mild conditions; (b) the feasible solution space is much smaller. By exploiting these advantages, we estimate the Bayes label transition matrix with a deep neural network in a parameterized way, leading to better generalization and superior classification performance.
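The role of a label transition matrix can be illustrated in a few lines. This is a hypothetical 2-class example where T is assumed known; estimating T (and doing so for the Bayes rather than the clean transition) is the paper's actual contribution and is not sketched here.

```python
# T[i][j]: probability that true class i is observed as noisy class j.

def noisy_posterior(T, p):
    """Map a clean/Bayes class posterior p to the noisy posterior q = T^T p,
    which is what a classifier fitted on noisy labels should output."""
    k = len(p)
    return [sum(T[i][j] * p[i] for i in range(k)) for j in range(k)]

# The abstract's key observation: Bayes optimal posteriors are one-hot,
# which constrains the estimation problem far more than general clean
# posteriors, whose entries can be arbitrary probabilities.
T = [[0.8, 0.2],
     [0.3, 0.7]]
q = noisy_posterior(T, [1.0, 0.0])  # one-hot Bayes posterior in, [0.8, 0.2] out
```

With a one-hot input, q is simply a row of T, which is why recovering a set of Bayes optimal labels makes the matrix identifiable under mild conditions.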
Ke-Qi Ding, Kun Yang, Xiang Yang (2021)
The self-similar Richardson cascade admits two logically possible scenarios of small-scale turbulence at high Reynolds numbers. In the first scenario, eddy population densities vary as a function of eddy scale. As a result, one or a few eddy types dominate at small scales, and small-scale turbulence lacks diversity. In the second scenario, eddy population densities are scale-invariant across the inertial range, resulting in small-scale diversity. That is, there are as many types of eddies at the small scales as at the large scales. In this letter, we measure eddy population densities in three-dimensional isotropic turbulence and determine the nature of small-scale turbulence. The result shows that eddy population densities are scale-invariant.
Yihong Wu, Pengkun Yang (2021)
This survey provides an exposition of a suite of techniques based on the theory of polynomials, collectively referred to as polynomial methods, which have recently been successfully applied to several challenging problems in statistical inference. Topics including polynomial approximation, polynomial interpolation and majorization, moment spaces and positive polynomials, orthogonal polynomials, and Gaussian quadrature are discussed, along with their major probabilistic and statistical applications in property estimation on large domains and learning mixture models. These techniques provide useful tools not only for the design of highly practical algorithms with provable optimality, but also for establishing the fundamental limits of inference problems through the method of moment matching. The effectiveness of the polynomial method is demonstrated in concrete problems such as entropy and support size estimation, the distinct elements problem, and learning Gaussian mixture models.
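One of the listed ingredients, polynomial interpolation at Chebyshev nodes, can be sketched as follows. The node count and target function are illustrative choices, not taken from the survey.

```python
import math

def chebyshev_nodes(n):
    """Chebyshev nodes on [-1, 1], which keep interpolation well-conditioned
    compared with equispaced nodes (avoiding the Runge phenomenon)."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Interpolate exp(x) at 12 Chebyshev nodes; the degree-11 polynomial
# already matches the function to high accuracy on [-1, 1].
xs = chebyshev_nodes(12)
ys = [math.exp(x) for x in xs]
approx = lagrange_interpolate(xs, ys, 0.3)
```

This rapid convergence for smooth functions is the elementary fact that the survey's estimators build on when replacing a hard-to-estimate functional by a low-degree polynomial surrogate.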
Chushun Tian, Kun Yang (2021)
The exchange interaction arising from particle indistinguishability is of central importance to the physics of many-particle quantum systems. Here we study analytically the dynamical generation of quantum entanglement induced by this interaction in an isolated system, namely, an ideal Fermi gas confined in a chaotic cavity, which evolves unitarily from a non-Gaussian pure state. We find that the breakdown of the quantum-classical correspondence of particle motion, via dramatically changing the spatial structure of the many-body wavefunction, leads to profound changes of the entanglement structure. Furthermore, for a class of initial states, such change leads to the approach to thermal equilibrium everywhere in the cavity, with the well-known Ehrenfest time in quantum chaos as the thermalization time. Specifically, the quantum expectation values of various correlation functions at different spatial scales are all determined by the Fermi-Dirac distribution. In addition, by using the reduced density matrix (RDM) and the entanglement entropy (EE) as local probes, we find that the gas inside a subsystem is at equilibrium with that outside, and its thermal entropy is the EE, even though the whole system is in a pure state. As a by-product of this work, we provide an analytical solution supporting an important conjecture on thermalization, made and numerically studied by Garrison and Grover in Phys. Rev. X 8, 021026 (2018), and strengthen its statement.