The LHCb Collaboration recently gave an update on testing lepton flavour universality with $B^+ \to K^+ \ell^+ \ell^-$ decays, in which a deviation of 3.1 standard deviations from the Standard Model prediction was observed. The muon g-2 experiment also reports a 3.3-standard-deviation discrepancy from the Standard Model in its measurement of the muon anomalous magnetic moment. These deviations could be explained by introducing new particles, including leptoquarks. In this paper, we show the possibility of searching for heavy spin-1 leptoquarks at a future TeV-scale muon collider through three channels: 1) same-flavour final states with either two bottom or two light quarks, 2) different-flavour quark final states, and 3) a so-called VXS process, representing the scattering between a vector boson and a leptoquark, to probe the coupling between the leptoquark and the tau lepton. We conclude that a 3 TeV muon collider with $3~\mathrm{ab}^{-1}$ of integrated luminosity is already sufficient to cover the leptoquark parameter space needed to explain the LHCb lepton flavour universality anomaly.
Hang Li, Yu Kang, Tianqiao Liu (2021)
Existing audio-language task-specific predictive approaches focus on building complicated late-fusion mechanisms. However, these models face the challenges of overfitting with limited labels and low model generalization ability. In this paper, we present a Cross-modal Transformer for Audio-and-Language, i.e., CTAL, which aims to learn the intra-modality and inter-modality connections between audio and language through two proxy tasks on a large amount of audio-and-language pairs: masked language modeling and masked cross-modal acoustic modeling. After fine-tuning our pre-trained model on multiple downstream audio-and-language tasks, we observe significant improvements across various tasks, such as emotion classification, sentiment analysis, and speaker verification. On this basis, we further propose a specially designed fusion mechanism that can be used in the fine-tuning phase, which allows our pre-trained model to achieve better performance. Lastly, we present detailed ablation studies showing that both our novel cross-modality fusion component and our audio-language pre-training methods significantly contribute to the promising results.
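The two proxy tasks above both rely on masking: some input tokens (words, or acoustic frames) are hidden, and the model is trained to reconstruct them. A minimal sketch of the masking step for the text side, with the masking probability and `[MASK]` symbol as the conventional choices (the paper's exact masking scheme is not specified in the abstract):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Replace each token with the mask symbol with probability mask_prob.
    Returns the masked sequence and a dict mapping masked positions to
    their original tokens, which become the prediction targets."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets[i] = tok  # the model must recover this token
        else:
            masked.append(tok)
    return masked, targets
```

The cross-modal acoustic analogue masks spans of audio features instead of word tokens, so the model must use the unmasked text to reconstruct them.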
Neural architecture search (NAS), which automatically designs the architectures of deep neural networks, has achieved breakthrough success in many applications over the past few years. Among the different classes of NAS methods, evolutionary computation based NAS (ENAS) methods have recently gained much attention. Unfortunately, the issues of fair comparison and efficient evaluation have hindered the development of ENAS. The current benchmark architecture datasets designed for fair comparison only provide the datasets, not the ENAS algorithms or a platform to run the algorithms. The existing efficient evaluation methods are either not suitable for population-based ENAS algorithms or too complex to use. This paper develops a platform named BenchENAS to address these issues. BenchENAS aims to achieve fair comparisons by running different algorithms in the same environment and with the same settings. To achieve efficient evaluation in a common lab environment, BenchENAS designs a parallel component and a cache component with high maintainability. Furthermore, BenchENAS is easy to install and highly configurable and modular, which brings benefits in good usability and easy extensibility. The paper conducts efficient comparison experiments on eight ENAS algorithms with high GPU utilization on this platform. The experiments validate that the fair comparison issue does exist and that BenchENAS can alleviate it. A website has been built to promote BenchENAS at https://benchenas.com, where interested researchers can obtain the source code and documentation of BenchENAS for free.
Semantic segmentation is a crucial image understanding task, in which each pixel of an image is assigned a corresponding label. Since pixel-wise ground-truth labeling is tedious and labor intensive, in practical applications many works exploit synthetic images to train models for real-world image semantic segmentation, i.e., Synthetic-to-Real Semantic Segmentation (SRSS). However, Deep Convolutional Neural Networks (CNNs) trained on the source synthetic data may not generalize well to the target real-world data. In this work, we propose two simple yet effective texture randomization mechanisms, Global Texture Randomization (GTR) and Local Texture Randomization (LTR), for Domain Generalization based SRSS. GTR randomizes the texture of source images into diverse unreal texture styles. It aims to alleviate the network's reliance on texture while promoting the learning of domain-invariant cues. In addition, we find that texture differences do not always occur across the entire image and may appear only in some local areas. Therefore, we further propose an LTR mechanism to generate diverse local regions for partially stylizing the source images. Finally, we implement a regularization of Consistency between GTR and LTR (CGL), aiming to harmonize the two proposed mechanisms during training. Extensive experiments on five publicly available datasets (i.e., GTA5, SYNTHIA, Cityscapes, BDDS and Mapillary) with various SRSS settings (i.e., GTA5/SYNTHIA to Cityscapes/BDDS/Mapillary) demonstrate that the proposed method is superior to the state-of-the-art methods for domain generalization based SRSS.
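The LTR idea of stylizing only a local region can be illustrated with a toy example. The sketch below blends a texture into one randomly placed rectangular region of a 2D grayscale grid; the paper operates on RGB images with painting-style textures, and the region size, blend weight, and function names here are illustrative assumptions, not the paper's implementation:

```python
import random

def local_texture_randomization(image, texture, region_frac=0.5, alpha=0.5, seed=0):
    """Blend `texture` into one random rectangular region of `image`.
    image, texture: 2D lists of floats (toy grayscale stand-ins).
    region_frac: side length of the region as a fraction of the image.
    alpha: blend weight of the texture inside the region."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    rh, rw = max(1, int(h * region_frac)), max(1, int(w * region_frac))
    top, left = rng.randrange(h - rh + 1), rng.randrange(w - rw + 1)
    out = [row[:] for row in image]  # copy; pixels outside the region are kept
    for i in range(rh):
        for j in range(rw):
            out[top + i][left + j] = (
                (1 - alpha) * image[top + i][left + j]
                + alpha * texture[i % len(texture)][j % len(texture[0])]
            )
    return out
```

Sampling a different region and texture per training image is what produces the diverse partially-stylized sources described above.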
Neural Architecture Search (NAS) can automatically design well-performing architectures of Deep Neural Networks (DNNs) for the tasks at hand. However, one bottleneck of NAS is the prohibitive computational cost, largely due to expensive performance evaluation. Neural predictors can directly estimate the performance without any training of the DNNs to be evaluated, and have thus drawn increasing attention from researchers. Despite their popularity, they also suffer a severe limitation: the shortage of annotated DNN architectures for effectively training the neural predictors. In this paper, we propose Homogeneous Architecture Augmentation for Neural Predictor (HAAP) of DNN architectures to address the aforementioned issue. Specifically, a homogeneous architecture augmentation algorithm is proposed in HAAP to generate sufficient training data, making use of the homogeneous representation. Furthermore, a one-hot encoding strategy is introduced into HAAP to make the representation of DNN architectures more effective. Experiments have been conducted on both the NAS-Bench-101 and NAS-Bench-201 datasets. The experimental results demonstrate that the proposed HAAP algorithm outperforms the compared state-of-the-art methods, yet with much less training data. In addition, ablation studies on both benchmark datasets have also shown the universality of the homogeneous architecture augmentation.
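To make the one-hot encoding idea concrete, an architecture can be represented as a list of layer operations, each mapped to a one-hot row and flattened into a single vector that a neural predictor can consume. The operation vocabulary below is illustrative (a small NAS-Bench-101-style set), not HAAP's exact encoding:

```python
# Assumed operation vocabulary; real benchmarks define their own op sets.
OPS = ["conv1x1", "conv3x3", "maxpool3x3"]

def one_hot_encode(arch_ops):
    """Encode a list of layer operations as a flat one-hot vector:
    one row of len(OPS) per layer, concatenated in layer order."""
    vec = []
    for op in arch_ops:
        row = [0] * len(OPS)
        row[OPS.index(op)] = 1  # mark this layer's operation
        vec.extend(row)
    return vec

# A two-layer architecture becomes a 2 * len(OPS) = 6 dimensional vector.
encoding = one_hot_encode(["conv3x3", "maxpool3x3"])
```

Compared with integer labels, this representation avoids imposing a spurious ordering on the operations, which is one plausible reason it makes the architecture representation more effective for the predictor.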
We investigate steady-state thermal transport and photon statistics in a nonequilibrium hybrid quantum system, in which a qubit is longitudinally and quadratically coupled to an optical resonator. Our calculations are conducted with the method of the quantum dressed master equation combined with full counting statistics. The effect of negative differential thermal conductance is unravelled at finite temperature bias; it stems from the suppression of cyclic heat transitions and the large mismatch of the two squeezed photon modes at weak and strong qubit-resonator hybridization, respectively. Giant thermal rectification is also exhibited at large temperature bias. It is found that the intrinsically asymmetric structure of the hybrid system and the negative differential thermal conductance contribute cooperatively. Noise power and skewness, as typical current fluctuations, exhibit global maxima with strong hybridization in the small and large temperature bias limits, respectively. Moreover, the effect of photon quadrature squeezing is discovered in the strong-hybridization and low-temperature regime, which shows an asymmetric response to the two bath temperatures. These results should provide some insight into thermal functional design and photon manipulation in qubit-resonator hybrid quantum systems.
Sentence completion (SC) questions present a sentence with one or more blanks that need to be filled in, along with three to five candidate words or phrases as options. SC questions are widely used for students learning English as a Second Language (ESL), and building computational approaches to automatically solve such questions is beneficial to language learners. In this work, we propose a neural framework to solve SC questions in English examinations by utilizing pre-trained language models. We conduct extensive experiments on a real-world K-12 ESL SC question dataset, and the results demonstrate the superiority of our model in terms of prediction accuracy. Furthermore, we run a precision-recall trade-off analysis to discuss the practical issues of deploying it in real-life scenarios. To encourage reproducible results, we make our code publicly available at https://github.com/AIED2021/ESL-SentenceCompletion.
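The core idea of using a language model to solve SC questions can be sketched as follows: fill the blank with each option, score the resulting sentence under the model, and pick the highest-scoring option. The sketch below substitutes a toy unigram frequency table for the pre-trained model the paper actually uses (e.g. a BERT-style model); the frequencies and the `___` blank marker are illustrative assumptions:

```python
import math

# Toy word-frequency "language model"; a real system would score the
# filled-in sentence with a pre-trained neural language model instead.
TOY_FREQS = {"went": 50, "gone": 10, "going": 30, "go": 60}
TOTAL = sum(TOY_FREQS.values())

def score_option(sentence_template, option):
    """Log-probability of the sentence with the blank filled by `option`,
    under the toy unigram model (unknown words get a count of 1)."""
    filled = sentence_template.replace("___", option)
    return sum(math.log(TOY_FREQS.get(w, 1) / TOTAL) for w in filled.split())

def solve_sc(sentence_template, options):
    """Return the option whose filled-in sentence scores highest."""
    return max(options, key=lambda o: score_option(sentence_template, o))
```

With a contextual model in place of the unigram table, the same argmax-over-options loop directly yields the prediction accuracy evaluated above.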
We derive a series of quantitative bulk-boundary correspondences for 3D bosonic and fermionic symmetry-protected topological (SPT) phases under the assumption that the surface is gapped, symmetric and topologically ordered, i.e., a symmetry-enriched topological (SET) state. We consider those SPT phases that are protected by mirror symmetry and continuous symmetries forming a group of $U(1)$, $SU(2)$ or $SO(3)$. In particular, the fermionic cases correspond to a crystalline version of 3D topological insulators and topological superconductors in the famous ten-fold-way classification, with the time-reversal symmetry replaced by mirror symmetry and with strong interactions taken into account. For surface SETs, the most general interplay between symmetries and anyon excitations is considered. Based on the previously proposed dimension reduction and folding approaches, we re-derive the classification of bulk SPT phases and define a \emph{complete} set of bulk topological invariants for every symmetry group under consideration, and then derive explicit expressions of the bulk invariants in terms of surface topological properties (such as topological spin and quantum dimension) and symmetry properties (such as mirror fractionalization and fractional charge or spin). These expressions are our quantitative bulk-boundary correspondences. Meanwhile, the bulk topological invariants can be interpreted as \emph{anomaly indicators} for the surface SETs, which carry 't Hooft anomalies of the associated symmetries whenever the bulk is topologically non-trivial. Hence, the quantitative bulk-boundary correspondences provide an easy way to compute the 't Hooft anomalies of the surface SETs. Moreover, our anomaly indicators are complete. Our derivations of the bulk-boundary correspondences and anomaly indicators are explicit and physically transparent.
Van der Waals magnets provide an ideal platform to explore quantum magnetism both theoretically and experimentally. We study a classical J1-J2 model with distinct magnetic degrees of freedom on a honeycomb lattice that can be realized in some van der Waals magnets. We find that the model develops a spiral spin liquid (SSL), a massively degenerate state with spiral contours in reciprocal space, not only for continuous spin vectors (XY and Heisenberg spins) but also for Ising spin moments. Surprisingly, the SSL is more robust in the Ising case, and the shape of the spiral contours is pinned to an emergent kagome structure at low temperatures for different J2. The spin-chirality order for the continuous spins at finite temperatures is further connected to the electric polarization via the inverse Dzyaloshinskii-Moriya mechanism. These results provide guidance for the experimental realization of 2D SSLs, and the SSL can further serve as the mother state for generating skyrmions, which are promising candidates for future memory devices.
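For reference, the classical J1-J2 Hamiltonian on the honeycomb lattice takes the standard form (our notation; the abstract does not fix sign conventions), with $J_1$ coupling nearest-neighbour and $J_2$ next-nearest-neighbour spins:

```latex
H = J_1 \sum_{\langle i j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j
  + J_2 \sum_{\langle\langle i j \rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j ,
```

where $\mathbf{S}_i$ is an $O(3)$ (Heisenberg), $O(2)$ (XY), or Ising spin depending on the case considered, and the spiral-contour degeneracy arises from the competition between the two couplings.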
The training loss function that enforces certain training-sample distribution patterns plays a critical role in building a re-identification (ReID) system. Besides the basic requirement of discrimination, i.e., that the features corresponding to different identities should not be mixed, additional intra-class distribution constraints, such as requiring features from the same identity to be close to their center, have been adopted to construct losses. Despite the advances of various new loss functions, it is still challenging to strike a balance between reducing intra-class variation and allowing a certain freedom of distribution. In this paper, we propose a new loss based on center predictivity: a sample must be positioned in a location of the feature space from which we can roughly predict the location of the center of same-class samples. The prediction error is then regarded as a loss, called the Center Prediction Loss (CPL). We show that, without introducing additional hyper-parameters, this new loss leads to a more flexible intra-class distribution constraint while ensuring that samples from different classes remain well separated. Extensive experiments on various real-world ReID datasets show that the proposed loss achieves superior performance and can also be complementary to existing losses.
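The center-predictivity idea can be sketched numerically: suppose a predictor head maps each sample's feature to a predicted class center, and the loss is the squared distance between that prediction and the actual (mean) center of the sample's class. The squared-error form and function names below are illustrative assumptions, not the paper's exact formulation, and the predictor head itself is not shown:

```python
def class_center(features):
    """Mean of a list of equal-length feature vectors."""
    n = len(features)
    return [sum(x[d] for x in features) / n for d in range(len(features[0]))]

def center_prediction_loss(features, predicted_centers, labels):
    """Mean squared distance between each sample's predicted class center
    and the actual center of its class.
    features, predicted_centers: lists of feature vectors (one per sample).
    labels: class label per sample."""
    total = 0.0
    for c in set(labels):
        idx = [i for i, y in enumerate(labels) if y == c]
        center = class_center([features[i] for i in idx])  # actual center
        for i in idx:
            total += sum((p - t) ** 2
                         for p, t in zip(predicted_centers[i], center))
    return total / len(features)
```

Note that the loss vanishes whenever every sample predicts its class center exactly, regardless of how the samples themselves are spread around that center, which is the flexibility in the intra-class distribution described above.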