Broad disagreement persists between helioseismological observables and predictions of solar models computed with the latest surface abundances. Here we show that most of these problems can be solved by the presence of asymmetric dark matter coupling to nucleons as the square of the momentum $q$ exchanged in the collision. We compute neutrino fluxes, small frequency separations, surface helium abundances, sound speed profiles and convective zone depths for a number of models, showing more than a $6\sigma$ preference for $q^2$ models over others, and over the Standard Solar Model. The preferred mass (3\,GeV) and reference dark matter-nucleon cross-section ($10^{-37}\,$cm$^2$ at $q_0 = 40\,$MeV) are within the region of parameter space allowed by both direct detection and collider searches.
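For context, the $q^2$ coupling referred to above is conventionally normalised at a reference momentum transfer; a plausible reading of the quoted $q_0$ normalisation (standard in the momentum-dependent dark matter literature, not an equation reproduced from the paper) is
$$\sigma(q) = \sigma_0 \left(\frac{q}{q_0}\right)^{2}, \qquad q_0 = 40\,\mathrm{MeV},$$
so the quoted reference cross-section $\sigma_0 \simeq 10^{-37}\,$cm$^2$ applies at $q = q_0$, with scattering suppressed for softer collisions and enhanced for harder ones.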
The composition of the Sun is an essential piece of reference data for astronomy, cosmology, astroparticle, space and geo-physics. This article, dealing with the intermediate-mass elements Na to Ca, is the first in a series describing the comprehensive re-determination of the solar composition. In this series we severely scrutinise all ingredients of the analysis across all elements, to obtain the most accurate, homogeneous and reliable results possible. We employ a highly realistic 3D hydrodynamic solar photospheric model, which has successfully passed an arsenal of observational diagnostics. To quantify systematic errors, we repeat the analysis with three 1D hydrostatic model atmospheres (MARCS, MISS and Holweger & Müller 1974) and a horizontally and temporally-averaged version of the 3D model ($\langle$3D$\rangle$). We account for departures from LTE wherever possible. We have scoured the literature for the best transition probabilities, partition functions, hyperfine and other data, and stringently checked all observed profiles for blends. Our final 3D+NLTE abundances are: $\log\epsilon_{\mathrm{Na}}=6.21\pm0.04$, $\log\epsilon_{\mathrm{Mg}}=7.59\pm0.04$, $\log\epsilon_{\mathrm{Al}}=6.43\pm0.04$, $\log\epsilon_{\mathrm{Si}}=7.51\pm0.03$, $\log\epsilon_{\mathrm{P}}=5.41\pm0.03$, $\log\epsilon_{\mathrm{S}}=7.13\pm0.03$, $\log\epsilon_{\mathrm{K}}=5.04\pm0.05$ and $\log\epsilon_{\mathrm{Ca}}=6.32\pm0.03$. The uncertainties include both statistical and systematic errors. Our results are systematically smaller than most previous ones with the 1D semi-empirical Holweger & Müller model. The $\langle$3D$\rangle$ model returns abundances very similar to the full 3D calculations. This analysis provides a complete description and a slight update of the Na to Ca results presented in Asplund, Grevesse, Sauval & Scott (arXiv:0909.0948), with full details of all lines and input data.
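For readers unfamiliar with the notation, the quoted abundances use the standard astronomical logarithmic scale relative to hydrogen (a general convention rather than anything specific to this paper):
$$\log\epsilon_X \equiv \log_{10}\!\left(\frac{N_X}{N_\mathrm{H}}\right) + 12,$$
so, for example, $\log\epsilon_{\mathrm{Na}}=6.21$ corresponds to about $1.6\times10^{-6}$ sodium atoms per hydrogen atom.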
We analyse the sensitivity of IceCube-DeepCore to annihilation of neutralino dark matter in the solar core, generated within a 25-parameter version of the minimally supersymmetric standard model (MSSM-25). We explore the 25-dimensional parameter space using scanning methods based on importance sampling and using DarkSUSY 5.0.6 to calculate observables. Our scans produced a database of 6.02 million parameter space points with neutralino dark matter consistent with the relic density implied by WMAP 7-year data, as well as with accelerator searches. We performed a model exclusion analysis upon these points using the expected capabilities of the IceCube-DeepCore Neutrino Telescope. We show that IceCube-DeepCore will be sensitive to a number of models that are not accessible to direct detection experiments such as SIMPLE, COUPP and XENON100, to indirect detection using Fermi-LAT observations of dwarf spheroidal galaxies, or to current LHC searches.
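For background, the solar annihilation signal assumed in such analyses follows from the usual capture-annihilation balance in the Sun (a textbook relation, neglecting evaporation; not an equation taken from the paper):
$$\dot N = C_\odot - C_A N^2, \qquad \Gamma_A = \tfrac{1}{2} C_A N^2 \;\longrightarrow\; \tfrac{1}{2} C_\odot \quad (t \gg \tau_{\mathrm{eq}}),$$
so that once equilibrium is reached, the annihilation rate, and hence the neutrino flux seen by IceCube-DeepCore, is set entirely by the capture rate $C_\odot$, which scales with the WIMP-nucleon cross-sections varied in the scan.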
Weakly interacting massive particles (WIMPs) are amongst the most interesting dark matter (DM) candidates. Many DM candidates naturally arise in theories beyond the standard model (SM) of particle physics, like weak-scale supersymmetry (SUSY). Experiments aim to detect WIMPs by scattering, annihilation or direct production, and thereby determine the underlying theory to which they belong, along with its parameters. Here we examine the prospects for further constraining the Constrained Minimal Supersymmetric Standard Model (CMSSM) with future ton-scale direct detection experiments. We consider ton-scale extrapolations of three current experiments: CDMS, XENON and COUPP, with 1000 kg-years of raw exposure each. We assume energy resolutions, energy ranges and efficiencies similar to the current experiments.
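The expected counts in such an extrapolation follow the standard spin-independent direct-detection rate; a generic sketch (the symbols $\rho_0$, $m_\chi$, $\mu_p$, $F$, $f(\vec v)$, $\epsilon$ and $\mathcal{E}$ are our own shorthand, not the paper's notation) is
$$\frac{dR}{dE_R} = \frac{\rho_0\,\sigma_{\mathrm{SI}}\,A^2\,F^2(E_R)}{2\,m_\chi\,\mu_p^2} \int_{v > v_{\min}(E_R)} \frac{f(\vec v)}{v}\,d^3v, \qquad N_{\mathrm{exp}} = \mathcal{E}\int \epsilon(E_R)\,\frac{dR}{dE_R}\,dE_R,$$
with $\mathcal{E}$ the raw exposure (1000 kg-years here) and $\epsilon(E_R)$ the efficiency; finite energy resolution additionally smears the recoil spectrum.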
Models of weak-scale supersymmetry offer viable dark matter (DM) candidates. Their parameter spaces are however rather large and complex, such that pinning down the actual parameter values from experimental data can depend strongly on the employed statistical framework and scanning algorithm. In frequentist parameter estimation, a central requirement for properly constructed confidence intervals is that they cover true parameter values, preferably at exactly the stated confidence level when experiments are repeated infinitely many times. Since most widely-used scanning techniques are optimised for Bayesian statistics, one needs to assess their abilities in providing correct confidence intervals in terms of the statistical coverage. Here we investigate this for the Constrained Minimal Supersymmetric Standard Model (CMSSM) when only constrained by data from direct searches for dark matter. We construct confidence intervals from one-dimensional profile likelihoods and study the coverage by generating several pseudo-experiments for a few benchmark sets of pseudo-true parameters. We use nested sampling to scan the parameter space and evaluate the coverage for the benchmarks when either flat or logarithmic priors are imposed on gaugino and scalar mass parameters. The sampling algorithm has been used in the configuration usually adopted for exploration of the Bayesian posterior. We observe both under- and over-coverage, which in some cases vary quite dramatically when benchmarks or priors are modified. We show how most of the variation can be explained as the impact of explicit priors as well as sampling effects, where the latter are indirectly imposed by physicality conditions. For comparison, we also evaluate the coverage for Bayesian credible intervals, and observe significant under-coverage in those cases.
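For reference, the profile-likelihood intervals examined here come from the usual likelihood-ratio rule, and coverage is the long-run frequency with which such intervals capture the true parameters (standard definitions, not notation specific to the paper):
$$\Delta\chi^2(\theta) \equiv -2\ln\frac{\mathcal{L}_{\mathrm{prof}}(\theta)}{\mathcal{L}_{\max}} \le 1 \;\;(68.3\%\ \mathrm{CL}), \qquad \mathrm{coverage} = P\!\left(\theta_{\mathrm{true}} \in [\,\theta_-, \theta_+\,]\right),$$
which equals the nominal confidence level only when the conditions of Wilks' theorem hold; the under- and over-coverage reported above quantify departures from that ideal in the CMSSM fits.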