
Jia-Cheng He, Jie Hou, Yan Chen (2021)
We develop the Gutzwiller approximation method to obtain the renormalized Hamiltonian of the SU(4) $t$-$J$ model, with the corresponding renormalization factors. Subsequently, a mean-field theory is applied to the renormalized Hamiltonian of the model on the honeycomb lattice, under the scenario of a cooperative condensation of carriers moving in the resonating-valence-bond state of flavors. In particular, we find that extended $s$-wave superconductivity is much more favorable than $d+id$ superconductivity in the doping range close to quarter filling. The pairing states of the SU(4) case reveal that spin-singlet and spin-triplet pairing can exist simultaneously. Our results may provide new insights into the twisted bilayer graphene system.
Jia-Cheng He, Yan Chen (2021)
A theoretical formalism of Andreev reflection is proposed to provide theoretical support for distinguishing singlet pairing from triplet pairing in point-contact Andreev reflection (PCAR) experiments, in contrast to previous models designed only for the singlet-pairing case. We use our theoretical curves to fit data from the PCAR experiment on unconventional superconductivity in the Bi/Ni bilayer [arXiv:1810.10403], and find that the Anderson-Brinkman-Morel state reproduces the main characteristics of the experimental data. Moreover, the Andreev reflection spectra of the Balian-Werthamer state and the chiral $p$-wave state are also presented.
Error entropy is an important nonlinear similarity measure that has received increasing attention in many practical applications. The default kernel of the error entropy criterion is the Gaussian kernel, which, however, is not always the best choice. In our study, we propose a novel concept, called generalized error entropy, which uses the generalized Gaussian density (GGD) function as the kernel. We further derive the generalized minimum error entropy (GMEE) criterion, and a novel adaptive filtering algorithm, the GMEE algorithm, is derived from this criterion. The stability, steady-state performance, and computational complexity of the proposed algorithm are investigated. Simulations indicate that the GMEE algorithm performs well in Gaussian, sub-Gaussian, and super-Gaussian noise environments. Finally, the GMEE algorithm is applied to acoustic echo cancellation, where it also performs well.
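The GGD-kernel information potential underlying a generalized error entropy criterion can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation; the function names and defaults are ours.

```python
import numpy as np
from math import gamma

def ggd_kernel(x, alpha=2.0, beta=1.0):
    # Generalized Gaussian density used as a Parzen kernel; alpha=2 recovers
    # the usual Gaussian kernel, alpha<2 gives heavier tails.
    c = alpha / (2.0 * beta * gamma(1.0 / alpha))
    return c * np.exp(-(np.abs(x) / beta) ** alpha)

def information_potential(errors, alpha=2.0, beta=1.0):
    # Empirical information potential over a window of filter errors;
    # maximizing it amounts to minimizing the (generalized) error entropy.
    diffs = errors[:, None] - errors[None, :]
    return ggd_kernel(diffs, alpha, beta).mean()
```

An adaptive filter built on such a criterion would ascend the gradient of the information potential over a sliding window of errors: tightly concentrated errors yield a higher potential than dispersed ones.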
Zecheng He, Ruby B. Lee (2021)
In cloud computing, it is desirable for suspicious activities to be detected by automatic anomaly detection systems. Although anomaly detection has been investigated in the past, it remains unsolved in cloud computing. The challenges are: characterizing the normal behavior of a cloud server, distinguishing between benign and malicious anomalies (attacks), and preventing alert fatigue due to false alarms. We propose CloudShield, a practical and generalizable real-time anomaly and attack detection system for cloud computing. CloudShield uses a general, pretrained deep learning model with different cloud workloads to predict normal behavior, and provides real-time and continuous detection by examining the model's reconstruction error distributions. Once an anomaly is detected, to reduce alert fatigue, CloudShield automatically distinguishes between benign programs, known attacks, and zero-day attacks by examining the prediction error distributions. We evaluate CloudShield on representative cloud benchmarks. Our evaluation shows that CloudShield, using model pretraining, applies to a wide range of cloud workloads. In particular, we observe that CloudShield can detect the recently proposed speculative execution attacks, e.g., the Spectre and Meltdown attacks, in milliseconds. Furthermore, we show that CloudShield accurately differentiates and prioritizes known attacks and potential zero-day attacks from benign programs, significantly reducing false alarms by up to 99.0%.
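The core detection step, calibrating a threshold on the reconstruction-error distribution of benign workloads and flagging samples that exceed it, can be sketched as follows. The class and parameter names are hypothetical, not CloudShield's actual interface.

```python
import numpy as np

class ReconstructionAnomalyDetector:
    """Minimal sketch of error-distribution-based anomaly detection
    (hypothetical API, not the CloudShield implementation)."""

    def __init__(self, quantile=0.99):
        self.quantile = quantile
        self.threshold = None

    def fit(self, benign_errors):
        # Calibrate the alert threshold on reconstruction errors collected
        # from known-benign cloud workloads.
        self.threshold = np.quantile(benign_errors, self.quantile)
        return self

    def predict(self, errors):
        # Flag samples whose reconstruction error exceeds the benign quantile.
        return errors > self.threshold
```

Distinguishing benign anomalies, known attacks, and zero-day attacks would then compare the flagged samples' error distributions against per-class references, as the abstract describes.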
The combination of the linear size from reverberation mapping (RM) and the angular size of the broad-line region (BLR) from spectroastrometry (SA) in active galactic nuclei (AGNs) can be used to measure the Hubble constant $H_0$. Recently, Wang et al. (2020) successfully employed this approach and estimated $H_0$ from 3C 273. However, there may be a systematic deviation between the response-weighted radius (the RM measurement) and the luminosity-weighted radius (the SA measurement), especially when different broad lines are adopted as size indicators (e.g., H$\beta$ for RM and Pa$\alpha$ for SA). Here we evaluate the size deviations measured by six pairs of hydrogen lines (H$\beta$, H$\alpha$, and Pa$\alpha$) via locally optimally emitting cloud (LOC) models of the BLR. We find that the radius ratios $K$ ($=R_{\rm SA}/R_{\rm RM}$) of the same line deviate systematically from 1 (0.85-0.88), with dispersions between 0.063 and 0.083. Surprisingly, the $K$ values from the Pa$\alpha$(SA)/H$\beta$(RM) and H$\alpha$(SA)/H$\beta$(RM) pairs are not only closest to 1 but also have considerably smaller uncertainty. Considering current infrared interferometry technology, the Pa$\alpha$(SA)/H$\beta$(RM) pair is the ideal choice for low-redshift objects in the SARM project. In the future, the H$\alpha$(SA)/H$\beta$(RM) pair could be used for high-redshift luminous quasars. These theoretical estimates of the SA/RM radius ratio pave the way for future SARM measurements to further constrain the standard cosmological model.
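The SARM geometry reduces to dividing a linear size by an angular size. A toy low-redshift estimate, with the $K$ correction applied and relativistic and cosmological terms neglected (illustrative only; the function and units are ours, not the paper's analysis):

```python
C_KM_S = 299792.458  # speed of light in km/s

def hubble_constant(r_rm_pc, theta_rad, z, k_ratio=1.0):
    # Angular-diameter distance in Mpc from the K-corrected RM linear size
    # (pc) divided by the SA angular size (rad); 1 pc = 1e-6 Mpc.
    d_a_mpc = (k_ratio * r_rm_pc * 1e-6) / theta_rad
    # Low-z Hubble law H0 ~ c*z / D, giving H0 in km/s/Mpc.
    return C_KM_S * z / d_a_mpc
```

A systematic $K < 1$ shrinks the inferred distance and raises $H_0$ in proportion to $1/K$, which is why calibrating $K$ with BLR models matters for the SARM project.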
We present an analysis of the variability of broad absorption lines (BALs) in the quasar SDSS J141955.26+522741.1 at $z=2.145$, using 72 observations from the Sloan Digital Sky Survey Data Release 16 (SDSS DR16). The strong correlation between the equivalent widths of the BALs and the continuum luminosity reveals that the variation of the BAL trough is dominated by photoionization. The photoionization model predicts that BAL variations can be detected when the time interval $\Delta T$ between two observations is longer than the recombination timescale $t_{\rm rec}$. This manifests as a sharp rise in the detection rate of BAL variations at $\Delta T = t_{\rm rec}$. For the first time, we detect such a sharp-rise signature in the detection rate of BAL variations. As a result, we propose that $t_{\rm rec}$ can be obtained from the sharp rise of the detection rate of BAL variations. It is worth mentioning that, in two individual troughs, BAL variations are detected at time intervals up to half an order of magnitude shorter than $t_{\rm rec}$. This result indicates that an individual trough may contain multiple components with different $t_{\rm rec}$ but the same velocity.
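Reading $t_{\rm rec}$ off the jump in the detection rate can be illustrated with a synthetic example, under the idealized assumption that a variation is detected exactly when $\Delta T > t_{\rm rec}$ (a sketch; the function names and binning are ours):

```python
import numpy as np

def detection_rate(dt_pairs, detected, bins):
    # Fraction of observation pairs with a detected BAL variation per
    # time-interval bin; NaN for empty bins.
    idx = np.digitize(dt_pairs, bins)
    return np.array([detected[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])

def t_rec_from_jump(bins, rates):
    # Read the recombination timescale off the sharpest rise in the
    # binned detection rate.
    jumps = np.diff(rates)
    return bins[1:-1][np.nanargmax(jumps)]
```

In real data the rise is smoothed by measurement noise and by a mixture of components, so the jump locates $t_{\rm rec}$ only up to the bin width.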
Understanding the origin of Fe II emission is important because it is crucial for constructing the main sequence of active galactic nuclei (AGNs). Despite several decades of observational and theoretical effort, the location of the optical iron-emitting region and the mechanism responsible for the positive correlation between the Fe II strength and the black hole accretion rate remain open questions. In this letter, we report the optical Fe II response to the central outburst in PS1-10adi, a candidate tidal disruption event (TDE) in an AGN at $z = 0.203$ that has attracted extensive attention. For the first time, we observe that the Fe II response in the rising phase of the central luminosity is significantly more prominent than that in the declining phase, showing a hysteresis effect. We interpret this hysteresis as a consequence of the gradual sublimation into gas of dust grains situated at the inner surface of the torus as the luminosity of the central engine increases. It is the iron released from the sublimated dust that contributes appreciably to the observed Fe II emission. This interpretation, together with the weak response of the H$\beta$ emission that we observe, naturally explains the applicability of the relative Fe II strength as a tracer of the Eddington ratio. In addition, optical iron emission of this origin makes the Fe II time lag a potential standard candle with cosmological implications.
Unsupervised skill discovery drives intelligent agents to explore an unknown environment without a task-specific reward signal, and the agents acquire various skills that may be useful when adapting to new tasks. In this paper, we propose Multi-agent Skill Discovery (MASD), a method for discovering skills as coordination patterns of multiple agents. The proposed method maximizes the mutual information between a latent code $Z$ representing skills and the combination of the states of all agents, while suppressing the empowerment of $Z$ on the state of any single agent via adversarial training. In other words, it sets an information bottleneck to avoid empowerment degeneracy. First, we show the emergence of various skills at the level of coordination in a general particle multi-agent environment. Second, we reveal that the bottleneck prevents skills from collapsing onto a single agent and enhances the diversity of learned skills. Finally, we show that the pretrained policies achieve better performance on supervised RL tasks.
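The objective can be summarized as a per-step pseudo-reward in the spirit of variational mutual-information lower bounds. This is a schematic of the assumed form, not the authors' exact estimator; the log-probabilities would come from learned discriminators.

```python
from math import log

def masd_pseudo_reward(logq_joint, logq_singles, logp_z):
    # Reward the discriminability of skill z from the joint state of all
    # agents: a variational lower bound on I(Z; S_joint).
    joint_term = logq_joint - logp_z
    # Penalize its discriminability from any single agent's state: the
    # adversarial information bottleneck that prevents collapse onto one agent.
    single_term = max(logq_singles) - logp_z
    return joint_term - single_term
```

A skill that is identifiable only from the joint configuration, not from any individual agent, scores highest, which matches the stated goal of discovering coordination patterns rather than single-agent behaviors.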
Recently, an increasing number of works have proposed driving evolutionary algorithms with machine learning models. Usually, the performance of such model-based evolutionary algorithms depends strongly on the training quality of the adopted models. Since a certain amount of data (i.e., the candidate solutions generated by the algorithm) is usually required for model training, performance deteriorates rapidly as the problem scale increases, due to the curse of dimensionality. To address this issue, we propose a multi-objective evolutionary algorithm driven by generative adversarial networks (GANs). At each generation of the proposed algorithm, the parent solutions are first classified into real and fake samples to train the GANs; the offspring solutions are then sampled from the trained GANs. Thanks to the powerful generative ability of GANs, the proposed algorithm is capable of generating promising offspring solutions in a high-dimensional decision space with limited training data. The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables, and experimental results on these test problems demonstrate its effectiveness.
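The per-generation loop can be sketched as follows. This is schematic: `train_gan` and `sample_gan` are hypothetical callables standing in for the actual GAN training and sampling, and a scalar quality score (lower is better) stands in for the multi-objective ranking.

```python
import numpy as np

def gan_driven_generation(parents, fitness, train_gan, sample_gan, n_offspring):
    # Split parents: the better half serve as "real" samples for the GAN's
    # discriminator, the worse half as "fake" samples.
    order = np.argsort(fitness)
    half = len(parents) // 2
    real, fake = parents[order[:half]], parents[order[half:]]
    # Train the GAN on this generation's classification, then let the
    # generator propose the next batch of candidate solutions.
    gan = train_gan(real, fake)
    return sample_gan(gan, n_offspring)
```

The appeal of this scheme is that the generator learns the distribution of good solutions from only one population's worth of data per generation, which is what makes it viable in high-dimensional decision spaces.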