
Mass Estimation without using MET in early LHC data

Posted by: Nick Kersting
Publication date: 2010
Language: English





Many techniques exist to reconstruct New Physics masses from LHC data, though these tend either to require a high integrated luminosity, O(100) fb^-1, or an accurate measurement of missing transverse energy (MET), which may not be available in the early running of the LHC. Since in popular models such as SUSY a fairly sharp, triangular dilepton invariant mass spectrum can emerge already at low integrated luminosity, O(1) fb^-1, a Decay Kinematics (DK) technique can be used on events near the dilepton mass endpoint to estimate squark, slepton, and neutralino masses without relying on MET. With the first 2 fb^-1 of 7 TeV LHC data, SPS1a masses can thus be found to 20% or better accuracy, at least several times better than what has been taken to be achievable.
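For orientation, the dilepton endpoint exploited by the DK technique is the standard result for the two-step cascade $\tilde q \to q\tilde\chi_2^0 \to q\ell^\pm\tilde\ell^\mp \to q\ell^+\ell^-\tilde\chi_1^0$ (a textbook kinematic relation, not a formula quoted from this abstract): $(m_{\ell\ell}^{\rm max})^2 = (m_{\tilde\chi_2^0}^2 - m_{\tilde\ell}^2)(m_{\tilde\ell}^2 - m_{\tilde\chi_1^0}^2)/m_{\tilde\ell}^2$. Measuring this edge fixes one combination of the three masses; the DK step then uses the kinematics of events near the edge to disentangle the individual squark, slepton, and neutralino masses.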


Read also

We present a focused study of a predictive unified model whose measurable consequences are immediately relevant to early discovery prospects of supersymmetry at the LHC. ATLAS and CMS have released their analyses with 35~pb$^{-1}$ of data, and the model class we discuss is consistent with these data. It is shown that with an increase in luminosity the LSP dark matter mass and the gluino mass can be inferred from simple observables such as kinematic edges in leptonic channels and peak values in effective mass distributions. Specifically, we consider cases in which the neutralino is of low mass and where the relic density consistent with WMAP observations arises via the exchange of Higgs bosons in unified supergravity models. The magnitudes of the gaugino masses are sharply limited to focused regions of the parameter space, and in particular the dark matter mass lies in the range $\sim(50\text{--}65)~{\rm GeV}$ with an upper bound on the gluino mass of $575~{\rm GeV}$, with a typical mass of $450~{\rm GeV}$. We find that all model points in this paradigm are discoverable at the LHC at $\sqrt{s} = 7~{\rm TeV}$. We determine lower bounds on the entire sparticle spectrum in this model based on existing experimental constraints. In addition, we find the spin-independent cross section for neutralino scattering on nucleons to be generally in the range $\sigma^{\rm SI}_{\chi p} = 10^{-46\pm 1}~{\rm cm}^2$, with much higher cross sections also possible. Thus direct detection experiments such as CDMS and XENON already constrain some of the allowed parameter space of the low mass gaugino models, and further data will provide important cross-checks of the model assumptions in the near future.
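For reference, the effective mass invoked above is conventionally defined (conventions vary; a common choice sums the four hardest jets) as $M_{\rm eff} = E_T^{\rm miss} + \sum_{i=1}^{4} p_T^{{\rm jet},i}$; its peak position scales with the overall sparticle mass scale, which is why it serves as a handle on the gluino mass.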
We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in the 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5L-5R representation, to what we call here the extended models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the extended models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f > 640 GeV at 95% CL, as compared to f > 1 TeV for the minimal 5L-5R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong sector coset SO(6)/SO(5) for fermion embeddings up to representations of dimension 20. While quantifying our observations, we have performed a detailed chi-square fit using the ATLAS and CMS combined Run 1 and available Run 2 data.
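For orientation, in the minimal 5L-5R case referenced above the modifiers take the well-known one-invariant form: with $\xi \equiv v^2/f^2$, one has $\kappa_V = \sqrt{1-\xi}$ and $\kappa_F = (1-2\xi)/\sqrt{1-\xi}$, so the gauge and Yukawa modifiers are rigidly correlated through the single parameter $\xi$. The extended models discussed here break exactly this correlation by adding a second Yukawa invariant, which is what relaxes the bound on $f$.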
Quality estimation aims to measure the quality of translated content without access to a reference translation. This is crucial for machine translation systems in real-world scenarios where high-quality translation is needed. While many approaches exist for quality estimation, they are based on supervised machine learning requiring costly human-labelled data. As an alternative, we propose a technique that does not rely on examples from human annotators and instead uses synthetic training data. We train off-the-shelf architectures for supervised quality estimation on our synthetic data and show that the resulting models achieve comparable performance to models trained on human-annotated data, both for sentence- and word-level prediction.
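A minimal sketch of the idea (the abstract does not spell out the authors' exact recipe, so the MT system and the pseudo-label metric below are assumptions for illustration): machine-translate the source side of a parallel corpus and score each hypothesis against the human reference to obtain synthetic quality labels.

# Sketch: synthetic sentence-level QE training data from a parallel corpus.
# `mt_translate` is a hypothetical placeholder for any MT system; the
# pseudo-label is a crude token-overlap ratio, standing in for whatever
# reference-based metric one prefers.
from difflib import SequenceMatcher

def mt_translate(src: str) -> str:
    raise NotImplementedError  # plug in an off-the-shelf MT model here

def pseudo_quality(hyp: str, ref: str) -> float:
    # Token-level similarity in [0, 1], used as a stand-in quality score.
    return SequenceMatcher(None, hyp.split(), ref.split()).ratio()

def make_synthetic_qe_data(parallel_corpus):
    # parallel_corpus: iterable of (source, reference) sentence pairs.
    for src, ref in parallel_corpus:
        hyp = mt_translate(src)
        yield src, hyp, pseudo_quality(hyp, ref)

A supervised QE model can then be trained on the resulting (source, hypothesis, score) triples in place of human-annotated examples.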
We consider a specific class of SUSY particle production events at the LHC without missing p_T. Namely, we discuss chargino pair production with a subsequent decay into the W boson and the neutralino when the masses of the chargino and neutralino differ by 80-90 GeV. In this case, the final state contains two Ws and missing E_T but no missing p_T: the produced neutralinos are simply boosted along the Ws. For a demonstration we consider the MSSM with non-universal gaugino masses. In this case, such events are quite probable in the region of parameter space where the lightest chargino and neutralino are mostly gauginos. The excess in the W production cross-section reaches about 10% over the Standard Model background. We demonstrate that the LHC experiments, which presently measure the WW production cross section at the 8% level, can probe a chargino mass around 110 GeV within the suggested scenario, which is not accessible via other searches. If the precision of the WW cross-section measurement at the LHC reaches the 3% level, it would probe chargino masses up to about 150 GeV within the no-missing-p_T scenario.
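The kinematic point can be made explicit with the standard two-body decay formula (textbook kinematics, not quoted from the abstract): in the chargino rest frame the decay $\tilde\chi_1^\pm \to W^\pm \tilde\chi_1^0$ releases momentum $p^* = \lambda^{1/2}(m_{\tilde\chi_1^\pm}^2, m_W^2, m_{\tilde\chi_1^0}^2)/(2m_{\tilde\chi_1^\pm})$, where $\lambda(a,b,c) = a^2+b^2+c^2-2ab-2bc-2ca$. When $m_{\tilde\chi_1^\pm} - m_{\tilde\chi_1^0} \approx m_W$ (the 80-90 GeV window above), $p^* \to 0$, so each neutralino is emitted nearly at rest relative to its W and simply inherits the W direction; the invisible momentum is then carried collinearly with the reconstructed Ws rather than appearing as an independent missing-p_T signature.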
This paper focuses on the quantum amplitude estimation algorithm, which is a core subroutine in quantum computation for various applications. The conventional approach for amplitude estimation is to use the phase estimation algorithm, which consists of many controlled amplification operations followed by a quantum Fourier transform. However, the whole procedure is hard to implement with current and near-term quantum computers. In this paper, we propose a quantum amplitude estimation algorithm without the use of expensive controlled operations; the key idea is to utilize the maximum likelihood estimation based on the combined measurement data produced from quantum circuits with different numbers of amplitude amplification operations. Numerical simulations we conducted demonstrate that our algorithm asymptotically achieves nearly the optimal quantum speedup with a reasonable circuit length.
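A minimal sketch of the likelihood construction described above (the quantum-circuit layer is omitted; synthetic counts stand in for hardware data, and a grid search is one simple way to maximize the likelihood): with $h_k$ "good" outcomes out of $N_k$ shots from a circuit applying $m_k$ amplifications, the hit probability is $\sin^2((2m_k+1)\theta)$ and the estimated amplitude is $a = \sin^2\theta$.

# Maximum-likelihood amplitude estimation from combined measurement data.
import numpy as np

def mle_amplitude(ms, hits, shots, grid=100000):
    # Grid search over theta in (0, pi/2); the amplitude is a = sin^2(theta).
    thetas = np.linspace(1e-6, np.pi / 2 - 1e-6, grid)
    loglik = np.zeros_like(thetas)
    for m, h, n in zip(ms, hits, shots):
        # Hit probability after m amplifications; clipped to avoid log(0).
        p = np.clip(np.sin((2 * m + 1) * thetas) ** 2, 1e-12, 1 - 1e-12)
        loglik += h * np.log(p) + (n - h) * np.log(1 - p)
    return np.sin(thetas[np.argmax(loglik)]) ** 2

# Synthetic demonstration for a true amplitude a = 0.3:
rng = np.random.default_rng(0)
ms, n = [0, 1, 2, 4, 8, 16], 100
theta_true = np.arcsin(np.sqrt(0.3))
hits = [rng.binomial(n, np.sin((2 * m + 1) * theta_true) ** 2) for m in ms]
print(mle_amplitude(ms, hits, [n] * len(ms)))  # close to 0.3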