Extending the Standard Model (SM) by a $U(1)_{L_\mu - L_\tau}$ group gives potentially significant new contributions to $g_\mu - 2$, allows the construction of realistic neutrino mass matrices, incorporates lepton universality violation, and offers an anomaly-free mediator for a Dark Matter (DM) sector. In a recent analysis we showed that published LHC searches are not very sensitive to this model. Here we apply several Machine Learning (ML) algorithms in order to distinguish this model from the SM using simulated LHC data. In particular, we optimize the $3\mu$ signal, which has a considerably larger cross section than the $4\mu$ signal. Furthermore, since the two-muon plus missing $E_T$ final state receives contributions from diagrams involving DM particles, we optimize it as well. We find greatly improved sensitivity, which already for $36$ fb$^{-1}$ of data exceeds the combination of published LHC and non-LHC results. We also emphasize the usefulness of Boosted Decision Trees which, unlike Neural Networks, make it easy to extract additional information from the data that connects directly to the theoretical model. The same scheme could be used to analyze other models.
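The signal-optimization step described above can be illustrated with a toy example: scanning a cut on a single kinematic variable to maximise the expected significance $S/\sqrt{S+B}$. All numbers below are made up for illustration; the actual analysis uses multivariate ML classifiers rather than a one-dimensional cut.

```python
# Toy sketch: scan a threshold on one kinematic variable (values are
# invented, loosely "a mass-like observable in GeV") and pick the cut
# that maximises the expected significance S / sqrt(S + B).
import math

def best_cut(signal, background, thresholds):
    """Return the threshold maximising S/sqrt(S+B) for events with value >= threshold."""
    best_t, best_sig = None, -1.0
    for t in thresholds:
        s = sum(1 for v in signal if v >= t)
        b = sum(1 for v in background if v >= t)
        if s + b == 0:
            continue
        sig = s / math.sqrt(s + b)
        if sig > best_sig:
            best_t, best_sig = t, sig
    return best_t, best_sig

# Toy samples: signal events cluster at high values, background at low values.
signal = [40, 45, 50, 55, 60, 62, 65, 70]
background = [5, 10, 12, 15, 20, 25, 30, 42, 48, 55]

cut, significance = best_cut(signal, background, thresholds=range(0, 80, 5))
print(cut, round(significance, 3))
```

A BDT generalises this idea to many variables at once, and its feature importances provide the kind of physically interpretable output the abstract alludes to.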
In the fourth paper of this series, we present the metallicity-dependent Sloan Digital Sky Survey (SDSS) stellar color loci of red giant stars, using a spectroscopic sample of red giants in the SDSS Stripe 82 region. The stars span the range 0.55 -- 1.2 mag in color g-i and $-0.3$ -- $-2.5$ in metallicity [Fe/H], and have surface gravities log g smaller than 3.5 dex. As in the case of main-sequence (MS) stars, the intrinsic widths of the red-giant loci are found to be quite narrow, a few mmag at maximum. There are, however, systematic differences between the metallicity-dependent stellar loci of red giants and MS stars. The colors of red giants are less sensitive to metallicity than those of MS stars. With good photometry, photometric metallicities of red giants can be reliably determined by fitting the u-g, g-r, r-i, and i-z colors simultaneously to an accuracy of 0.2 -- 0.25 dex, comparable to the precision achievable with low-resolution spectroscopy at a signal-to-noise ratio of 10. By comparing fitting results against the stellar loci of red giants and MS stars, we propose a new technique to discriminate between red giants and MS stars based on SDSS photometry. The technique achieves a completeness of ~70 per cent and an efficiency of ~80 per cent in selecting metal-poor red giant stars of [Fe/H] $\le -1.2$. It thus provides an important tool to probe the structure and assembly history of the Galactic halo using red giant stars.
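The simultaneous-fit idea can be sketched as a chi-square grid search over [Fe/H]: compare the four observed colors with metallicity-dependent loci and keep the metallicity that fits best. The linear locus coefficients below are invented placeholders; the real loci are calibrated from the Stripe 82 sample.

```python
# Hypothetical sketch of photometric-metallicity estimation: grid-search
# [Fe/H], minimising chi^2 over the u-g, g-r, r-i and i-z colors at once.
# The locus coefficients are made up for illustration only.

def model_colors(feh):
    """Toy loci: each color as a linear function of [Fe/H] (illustrative)."""
    return {
        "u-g": 1.10 + 0.20 * feh,
        "g-r": 0.55 + 0.05 * feh,
        "r-i": 0.20 + 0.02 * feh,
        "i-z": 0.10 + 0.01 * feh,
    }

def fit_feh(observed, errors, grid):
    """Return the grid value of [Fe/H] minimising chi^2 over all colors."""
    def chi2(feh):
        m = model_colors(feh)
        return sum(((observed[c] - m[c]) / errors[c]) ** 2 for c in observed)
    return min(grid, key=chi2)

obs = {"u-g": 0.90, "g-r": 0.50, "r-i": 0.18, "i-z": 0.09}
err = {c: 0.02 for c in obs}                     # ~20 mmag photometric errors
grid = [i / 100 - 2.5 for i in range(0, 251)]    # [Fe/H] from -2.5 to 0.0
print(fit_feh(obs, err, grid))
```

The narrower sensitivity of red-giant colors to [Fe/H] corresponds, in this picture, to shallower slopes in the loci, which widens the chi-square minimum and degrades the achievable metallicity precision.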
We present a novel weakly-supervised framework for classifying whole slide images (WSIs). WSIs, due to their gigapixel resolution, are commonly processed by patch-wise classification with patch-level labels. However, patch-level labels require precise annotations, which are expensive and usually unavailable for clinical data. With image-level labels only, patch-wise classification is sub-optimal due to inconsistency between patch appearance and the image-level label. To address this issue, we posit that WSI analysis can be effectively conducted by integrating information at both high magnification (local) and low magnification (regional) levels. We auto-encode the visual signals in each patch into a latent embedding vector representing local information, and down-sample the raw WSI to hardware-acceptable thumbnails representing regional information. The WSI label is then predicted with a Dual-Stream Network (DSNet), which takes the transformed local patch embeddings and multi-scale thumbnail images as inputs and can be trained by the image-level label only. Experiments conducted on two large-scale public datasets demonstrate that our method outperforms all recent state-of-the-art weakly-supervised WSI classification methods.
We study the problem of training named entity recognition (NER) models using only distantly-labeled data, which can be obtained automatically by matching entity mentions in the raw text with entity types in a knowledge base. The biggest challenge of distantly-supervised NER is that the distant supervision may induce incomplete and noisy labels, rendering the straightforward application of supervised learning ineffective. In this paper, we propose (1) a noise-robust learning scheme comprising a new loss function and a noisy-label removal step for training NER models on distantly-labeled data, and (2) a self-training method that uses contextualized augmentations created by pre-trained language models to improve the generalization ability of the NER model. On three benchmark datasets, our method achieves superior performance, outperforming existing distantly-supervised NER models by significant margins.
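The abstract does not spell out the noise-robust loss, so as an illustration here is one widely used noise-robust alternative to cross entropy: the generalized cross entropy (GCE) loss, $L_q(p) = (1 - p_y^q)/q$, which interpolates between cross entropy (as $q \to 0$) and the noise-robust mean absolute error (at $q = 1$). This is a generic example, not necessarily the paper's exact loss.

```python
# Generalized cross entropy (GCE): a bounded, noise-robust loss on the
# probability assigned to the (possibly noisy) true label.
import math

def gce_loss(p_true, q=0.7):
    """GCE loss (1 - p^q)/q; bounded above by 1/q even when p -> 0."""
    return (1.0 - p_true ** q) / q

def ce_loss(p_true):
    """Standard cross entropy, unbounded as p -> 0."""
    return -math.log(p_true)

# On a confidently wrong (likely mislabeled) token, cross entropy blows up
# while GCE stays bounded -- which is what limits the impact of label noise.
for p in (0.9, 0.5, 0.1):
    print(round(ce_loss(p), 3), round(gce_loss(p), 3))
```

The bounded gradient of such losses is what keeps a few badly mislabeled distant annotations from dominating training.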
Robot localization remains a challenging task in GPS-denied environments. State estimation approaches based on local sensors, e.g. cameras or IMUs, are prone to drift on long-range missions as error accumulates. In this study, we aim to address this problem by localizing image observations in a 2D multi-modal geospatial map. We introduce a cross-scale dataset and a methodology to produce additional data from cross-modality sources. We propose a framework that learns cross-scale visual representations without supervision. Experiments are conducted on data from two different domains, underwater and aerial. In contrast to existing studies in cross-view image geo-localization, our approach a) performs better on smaller-scale multi-modal maps; b) is more computationally efficient for real-time applications; c) can be used directly in concert with state estimation pipelines.
Victor Lekeu, Yi Zhang (2021)
We perform the quantisation of antisymmetric tensor-spinors (fermionic $p$-forms) $\psi^\alpha_{\mu_1 \dots \mu_p}$ using the Batalin-Vilkovisky field-antifield formalism. Just as for the gravitino ($p=1$), an extra propagating Nielsen-Kallosh ghost appears in quadratic gauges containing a differential operator. The appearance of this `third ghost' is described within the BV formalism for arbitrary reducible gauge theories. We then use the resulting spectrum of ghosts and the Atiyah-Singer index theorem to compute gravitational anomalies.
Traditional event extraction methods require predefined event types and their corresponding annotations to learn event extractors. These prerequisites are often hard to satisfy in real-world applications. This work presents a corpus-based open-domain event type induction method that automatically discovers a set of event types from a given corpus. As events of the same type can be expressed in multiple ways, we propose to represent each event type as a cluster of <predicate sense, object head> pairs. Specifically, our method (1) selects salient predicates and object heads, (2) disambiguates predicate senses using only a verb sense dictionary, and (3) obtains event types by jointly embedding and clustering <predicate sense, object head> pairs in a latent spherical space. Our experiments on three datasets from different domains show that our method can discover salient and high-quality event types, according to both automatic and human evaluations.
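Clustering in a latent spherical space can be sketched with spherical k-means: unit-norm embedding vectors are assigned by cosine similarity and each centroid is re-normalised onto the unit sphere. The 2-D toy embeddings below are invented; the paper's actual embeddings are learned jointly with the clustering.

```python
# Minimal spherical k-means sketch on unit-norm vectors (pure Python).
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def spherical_kmeans(points, centroids, iters=10):
    """Assign points to centroids by cosine similarity; re-normalise centroids."""
    points = [normalize(p) for p in points]
    centroids = [normalize(c) for c in centroids]
    for _ in range(iters):
        labels = [max(range(len(centroids)), key=lambda k: dot(p, centroids[k]))
                  for p in points]
        for k in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                mean = [sum(c) / len(members) for c in zip(*members)]
                centroids[k] = normalize(mean)   # project back onto the sphere
    return labels

# Two well-separated directions on the unit circle (toy data).
pts = [[1.0, 0.1], [0.9, -0.1], [0.1, 1.0], [-0.1, 0.9]]
print(spherical_kmeans(pts, centroids=[[1.0, 0.0], [0.0, 1.0]]))
```

Using cosine similarity rather than Euclidean distance is what makes the clustering depend only on direction, which suits embeddings trained with angular objectives.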
Low-dose computed tomography (LDCT) has drawn major attention in the medical imaging field due to the potential health risks of CT-associated X-ray radiation to patients. Reducing the radiation dose, however, decreases the quality of the reconstructed images, which consequently compromises diagnostic performance. Various deep learning techniques have been introduced to improve the image quality of LDCT images through denoising. GAN-based denoising methods usually leverage an additional classification network, i.e. a discriminator, to learn the most discriminative differences between the denoised and normal-dose images and hence regularize the denoising model accordingly; the discriminator often focuses either on the global structure or on local details. To better regularize the LDCT denoising model, this paper proposes a novel method, termed DU-GAN, which leverages U-Net based discriminators in the GAN framework to learn both global and local differences between the denoised and normal-dose images in both the image and gradient domains. The merit of such a U-Net based discriminator is that it can not only provide per-pixel feedback to the denoising network through the outputs of the U-Net but also focus on the global structure at a semantic level through the middle layer of the U-Net. In addition to the adversarial training in the image domain, we also apply another U-Net based discriminator in the image gradient domain to alleviate the artifacts caused by photon starvation and enhance the edges of the denoised CT images. Furthermore, the CutMix technique enables the per-pixel outputs of the U-Net based discriminator to provide radiologists with a confidence map to visualize the uncertainty of the denoised results, facilitating LDCT-based screening and diagnosis. Extensive experiments on simulated and real-world datasets demonstrate superior performance over recently published methods both qualitatively and quantitatively.
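The CutMix operation referenced above can be sketched generically: paste a rectangular patch from one image into another and mix the labels in proportion to the replaced area. This is the standard CutMix recipe, not DU-GAN's exact training code; images here are plain 2-D lists with toy shapes and values.

```python
# Generic CutMix sketch: replace an (h x w) region of img_a with img_b's
# pixels and area-weight the labels accordingly.

def cutmix(img_a, img_b, label_a, label_b, top, left, h, w):
    """Return the mixed image and the area-weighted mixed label."""
    mixed = [row[:] for row in img_a]            # copy so img_a is untouched
    for i in range(top, top + h):
        for j in range(left, left + w):
            mixed[i][j] = img_b[i][j]
    lam = 1.0 - (h * w) / (len(img_a) * len(img_a[0]))  # fraction kept from img_a
    label = lam * label_a + (1.0 - lam) * label_b
    return mixed, label

a = [[0.0] * 4 for _ in range(4)]   # toy 4x4 image, label 0
b = [[1.0] * 4 for _ in range(4)]   # toy 4x4 image, label 1
mixed, label = cutmix(a, b, 0.0, 1.0, top=1, left=1, h=2, w=2)
print(label)   # 4 of 16 pixels replaced -> mixed label 0.25
```

Training a per-pixel discriminator on such mixed inputs is what lets its output map be read as a spatial confidence map at inference time.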
Ling Chen, Yi Zhang, Sirou Zhu (2021)
Unsupervised user adaptation aligns the feature distributions of the data from training users and the new user, so that a well-trained wearable human activity recognition (WHAR) model can be adapted to the new user. With the development of wearable sensors, WHAR based on multiple wearable sensors is gaining more and more attention. To address the challenge that different sensors have different transferabilities, we propose SALIENCE, an unsupervised user adaptation model for WHAR based on multiple wearable sensors. It aligns the data of each sensor separately to achieve local alignment, while uniformly aligning the data of all sensors to ensure global alignment. In addition, an attention mechanism is proposed to focus the activity classifier of SALIENCE on the sensors with strong feature discrimination and good distribution alignment. Experiments are conducted on two public WHAR datasets, and the experimental results show that our model yields competitive performance.
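The attention idea can be sketched as a softmax weighting over per-sensor feature vectors, so that sensors judged more discriminative and better aligned dominate the classifier's input. The scores and features below are made-up numbers, not the model's learned values.

```python
# Toy attention over per-sensor features: softmax the scores, then take a
# weighted sum of the sensors' feature vectors.
import math

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(sensor_features, scores):
    """Weighted sum of per-sensor feature vectors."""
    weights = softmax(scores)
    dim = len(sensor_features[0])
    return [sum(w * f[d] for w, f in zip(weights, sensor_features))
            for d in range(dim)]

features = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # e.g. wrist, ankle, chest (toy)
scores = [2.0, 0.1, 0.1]                          # wrist judged most useful
fused = attend(features, scores)
print([round(x, 3) for x in fused])
```

In the actual model the scores would themselves be learned from the data, reflecting each sensor's discrimination and alignment quality rather than being fixed by hand.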
We consider a modified gravity framework for inflation by adding to the Einstein-Hilbert action a direct $f(\phi)T$ term, where $\phi$ is identified as the inflaton and $T$ is the trace of the energy-momentum tensor. The framework reduces naturally to Einstein gravity when the inflaton decays. We investigate inflation dynamics in this $f(\phi)T$ gravity (not to be confused with torsion-scalar coupled theories) on a general basis, and then apply it to three well-motivated inflationary models. We find that the predictions for the spectral tilt and the tensor-to-scalar ratio are sensitive to this new $f(\phi)T$ term. This $f(\phi)T$ gravity brings both chaotic and natural inflation into better agreement with data. For Starobinsky inflation, the coupling constant $\alpha$ in $[-0.0026, 0.0031]$ for $N=60$ lies within the Planck-allowed $2\sigma$ region.
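For context, these are the standard Einstein-gravity slow-roll predictions that the $f(\phi)T$ correction shifts: for a monomial potential $V \propto \phi^p$, the spectral tilt and tensor-to-scalar ratio after $N$ e-folds are $n_s = 1 - 2(p+2)/(4N+p)$ and $r = 16p/(4N+p)$. The snippet below just evaluates these textbook formulas; it does not include the $f(\phi)T$ modification itself.

```python
# Slow-roll predictions for monomial inflation V ~ phi^p in plain Einstein
# gravity (the baseline that the f(phi)T term corrects).

def slow_roll(p, N):
    """Return (n_s, r) for V ~ phi^p after N e-folds, standard slow roll."""
    denom = 4 * N + p
    n_s = 1 - 2 * (p + 2) / denom
    r = 16 * p / denom
    return n_s, r

# Quadratic chaotic inflation at N = 60 e-folds: r ~ 0.13, in tension with
# data in pure Einstein gravity -- hence the interest in corrections.
n_s, r = slow_roll(p=2, N=60)
print(round(n_s, 4), round(r, 4))
```

The large baseline $r$ for chaotic inflation is exactly why a correction term that lowers it can bring the model back into agreement with observations.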