
Weakly interacting massive particles (WIMPs) are amongst the most interesting dark matter (DM) candidates. Many DM candidates naturally arise in theories beyond the standard model (SM) of particle physics, like weak-scale supersymmetry (SUSY). Experiments aim to detect WIMPs by scattering, annihilation or direct production, and thereby determine the underlying theory to which they belong, along with its parameters. Here we examine the prospects for further constraining the Constrained Minimal Supersymmetric Standard Model (CMSSM) with future ton-scale direct detection experiments. We consider ton-scale extrapolations of three current experiments: CDMS, XENON and COUPP, with 1000 kg-years of raw exposure each. We assume energy resolutions, energy ranges and efficiencies similar to the current experiments.
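To give a sense of scale for the quoted "1000 kg-years of raw exposure", the expected number of signal events in such an experiment grows linearly with exposure and detection efficiency. The sketch below uses purely illustrative numbers (the rate and efficiency are hypothetical placeholders, not values from the abstract):

```python
# Back-of-envelope sketch: expected event count from raw exposure.
# N_expected = recoil_rate [events/kg/day] * exposure [kg*days] * efficiency.
# The rate and efficiency below are hypothetical, for illustration only.
KG_YEARS = 1000.0          # raw exposure quoted per experiment
DAYS_PER_YEAR = 365.25
exposure_kg_days = KG_YEARS * DAYS_PER_YEAR

rate_per_kg_day = 1e-4     # hypothetical WIMP recoil rate in the signal window
efficiency = 0.5           # hypothetical overall detection efficiency

n_expected = rate_per_kg_day * exposure_kg_days * efficiency
print(f"expected signal events: {n_expected:.1f}")
```

Even a rate of one event per ten thousand kilogram-days yields tens of events at ton-year exposures, which is why such extrapolations can meaningfully constrain the CMSSM parameter space.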
Models of weak-scale supersymmetry offer viable dark matter (DM) candidates. Their parameter spaces are however rather large and complex, such that pinning down the actual parameter values from experimental data can depend strongly on the employed statistical framework and scanning algorithm. In frequentist parameter estimation, a central requirement for properly constructed confidence intervals is that they cover true parameter values, preferably at exactly the stated confidence level when experiments are repeated infinitely many times. Since most widely-used scanning techniques are optimised for Bayesian statistics, one needs to assess their ability to provide correct confidence intervals in terms of statistical coverage. Here we investigate this for the Constrained Minimal Supersymmetric Standard Model (CMSSM) when only constrained by data from direct searches for dark matter. We construct confidence intervals from one-dimensional profile likelihoods and study the coverage by generating several pseudo-experiments for a few benchmark sets of pseudo-true parameters. We use nested sampling to scan the parameter space and evaluate the coverage for the benchmarks when either flat or logarithmic priors are imposed on gaugino and scalar mass parameters. The sampling algorithm has been used in the configuration usually adopted for exploration of the Bayesian posterior. We observe both under- and over-coverage, which in some cases vary quite dramatically when benchmarks or priors are modified. We show how most of the variation can be explained as the impact of explicit priors as well as sampling effects, where the latter are indirectly imposed by physicality conditions. For comparison, we also evaluate the coverage for Bayesian credible intervals, and observe significant under-coverage in those cases.
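The coverage test described above, stripped of the CMSSM machinery, amounts to repeating a pseudo-experiment many times, constructing a confidence interval from each, and counting how often the interval contains the pseudo-true parameter. A minimal toy version of that procedure for a Gaussian mean (purely illustrative, not the paper's nested-sampling analysis) looks like:

```python
# Toy coverage study: generate pseudo-experiments, build a 68.3% confidence
# interval for a Gaussian mean from each, and record the fraction of intervals
# that contain the pseudo-true value. A well-calibrated construction should
# give empirical coverage near 0.683; under- or over-coverage signals bias
# in the interval construction or the sampling procedure.
import math
import random

random.seed(1)
MU_TRUE, SIGMA = 5.0, 2.0    # pseudo-true mean and known spread
N_OBS, N_PSEUDO = 25, 2000   # events per pseudo-experiment, number of repeats
Z68 = 1.0                    # +/- 1 sigma on the mean gives 68.3% coverage

covered = 0
for _ in range(N_PSEUDO):
    data = [random.gauss(MU_TRUE, SIGMA) for _ in range(N_OBS)]
    mean = sum(data) / N_OBS
    half_width = Z68 * SIGMA / math.sqrt(N_OBS)
    if mean - half_width <= MU_TRUE <= mean + half_width:
        covered += 1

print(f"empirical coverage: {covered / N_PSEUDO:.3f}")
```

In this exactly solvable toy the interval covers at the stated level by construction; the point of the paper's study is that for profile-likelihood intervals in the CMSSM, prior choices and sampling effects can push the empirical fraction well away from the nominal confidence level.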
