
Novel approach to assess the impact of the Fano factor on the sensitivity of low-mass dark matter experiments

Published by: Daniel Durnford
Publication date: 2018
Research field: Physics
Paper language: English





As first suggested by U. Fano in the 1940s, the statistical fluctuation of the number of pairs produced in an ionizing interaction is known to be sub-Poissonian. The dispersion is reduced by the so-called Fano factor, which empirically encapsulates the correlations in the process of ionization. In modelling the energy response of an ionization measurement device, the effect of the Fano factor is commonly folded into the overall energy resolution. While such an approximate treatment is appropriate when a significant number of ionization pairs are expected to be produced, the Fano factor needs to be accounted for directly at the level of pair creation when only a few are expected. To do so, one needs a discrete probability distribution of the number of pairs created $N$ with independent control of both the expectation $\mu$ and Fano factor $F$. Although no distribution $P(N|\mu,F)$ with this convenient form exists, we propose the use of the COM-Poisson distribution together with strategies for utilizing it to effectively fulfill this need. We then use this distribution to assess the impact that the Fano factor may have on the sensitivity of low-mass WIMP search experiments.
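As an illustration of the strategy outlined in the abstract, the two COM-Poisson parameters can be tuned numerically so that the resulting discrete distribution of the number of pairs $N$ has a chosen mean $\mu$ and Fano factor $F$. The sketch below is not the authors' implementation; the truncation length, starting point, and scipy-based root finding are assumptions.

```python
# Minimal sketch: tune COM-Poisson parameters (lam, nu) to hit a target
# mean mu and Fano factor F = var/mean. Not the authors' code.
import numpy as np
from scipy.optimize import fsolve

def com_poisson_pmf(lam, nu, n_max=200):
    """COM-Poisson weights lam^n / (n!)^nu on n = 0..n_max-1, normalised.
    n_max = 200 is ample for the few expected pairs considered here."""
    n = np.arange(n_max)
    log_fact = np.cumsum(np.log(np.maximum(n, 1)))   # log(n!)
    log_w = n * np.log(lam) - nu * log_fact
    w = np.exp(log_w - log_w.max())                   # stabilise before normalising
    return n, w / w.sum()

def moments(lam, nu):
    n, p = com_poisson_pmf(lam, nu)
    mean = np.sum(n * p)
    var = np.sum((n - mean) ** 2 * p)
    return mean, var

def solve_for_mu_F(mu, fano):
    """Numerically find (lam, nu) giving mean mu and Fano factor var/mean = fano."""
    def residual(x):
        lam, nu = np.exp(x)                           # keep both parameters positive
        mean, var = moments(lam, nu)
        return [mean - mu, var / mean - fano]
    # Start from the standard approximation nu ~ 1/F, lam ~ mu**nu
    x0 = np.log([mu ** (1.0 / fano), 1.0 / fano])
    lam, nu = np.exp(fsolve(residual, x0))
    return lam, nu

# Example: mu = 4 expected pairs with a sub-Poissonian Fano factor F = 0.15
lam, nu = solve_for_mu_F(4.0, 0.15)
mean, var = moments(lam, nu)
print(f"lam={lam:.3g}, nu={nu:.3g}, mean={mean:.3f}, Fano={var/mean:.3f}")
```

The probability mass function returned by com_poisson_pmf can then be used directly in a detector response model at the level of pair creation, instead of folding the Fano factor into the overall energy resolution.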




Read also

Two-phase xenon detectors, such as that at the core of the forthcoming LZ dark matter experiment, use photomultiplier tubes to sense the primary (S1) and secondary (S2) scintillation signals resulting from particle interactions in their liquid xenon target. This paper describes a simulation study exploring two techniques to lower the energy threshold of LZ to gain sensitivity to low-mass dark matter and astrophysical neutrinos, which will be applicable to other liquid xenon detectors. The energy threshold is determined by the number of detected S1 photons; typically, these must be recorded in three or more photomultiplier channels to avoid dark count coincidences that mimic real signals. To lower this threshold: a) we take advantage of the double photoelectron emission effect, whereby a single vacuum ultraviolet photon has a $\sim 20\%$ probability of ejecting two photoelectrons from a photomultiplier tube photocathode; and b) we drop the requirement of an S1 signal altogether, and use only the ionization signal, which can be detected more efficiently. For both techniques we develop signal and background models for the nominal exposure, and explore accompanying systematic effects, including the dependence on the free electron lifetime in the liquid xenon. When incorporating double photoelectron signals, we predict a factor of $\sim 4$ sensitivity improvement to the dark matter-nucleon scattering cross-section at $2.5$ GeV/c$^2$, and a factor of $\sim 1.6$ increase in the solar $^8$B neutrino detection rate. Dropping the S1 requirement may allow sensitivity gains of two orders of magnitude in both cases. Finally, we apply these techniques to even lower masses by taking into account the atomic Migdal effect; this could lower the dark matter particle mass threshold to $80$ MeV/c$^2$.
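For context, a minimal Monte Carlo sketch of the double photoelectron mechanism described above; the 20% probability and the 3-photoelectron acceptance threshold are taken as illustrative assumptions, not the LZ analysis itself.

```python
# Sketch: how double photoelectron emission boosts the acceptance of small S1 signals.
import numpy as np

rng = np.random.default_rng(0)
p_dpe = 0.20        # assumed probability a detected VUV photon ejects 2 photoelectrons
threshold = 3       # assumed photoelectron count required to accept the S1

def pass_fraction(n_photons, n_trials=100_000):
    # Each detected photon gives 1 photoelectron, plus 1 extra with probability p_dpe.
    extra = rng.binomial(n_photons, p_dpe, size=n_trials)
    return np.mean(n_photons + extra >= threshold)

for n in (2, 3, 4):
    print(f"{n} detected photons -> pass fraction {pass_fraction(n):.2f}")
```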
The High Altitude Water Cherenkov (HAWC) observatory is a wide field-of-view detector sensitive to gamma rays of 100 GeV to a few hundred TeV. Located in central Mexico at 19 degrees North latitude and 4100 m above sea level, HAWC will observe gamma rays and cosmic rays with an array of water Cherenkov detectors. The full HAWC array is scheduled to be operational in Spring 2015. In this paper, we study the HAWC sensitivity to the gamma-ray signatures of high-mass (multi-TeV) dark matter annihilation. The HAWC observatory will be sensitive to diverse searches for dark matter annihilation, including annihilation from extended dark matter sources, the diffuse gamma-ray emission from dark matter annihilation, and gamma-ray emission from non-luminous dark matter subhalos. Here we consider the HAWC sensitivity to a subset of these sources, including dwarf galaxies, the M31 galaxy, the Virgo cluster, and the Galactic center. We simulate the HAWC response to gamma rays from these sources in several well-motivated dark matter annihilation channels. If no gamma-ray excess is observed, we show the limits HAWC can place on the dark matter cross-section from these sources. In particular, in the case of dark matter annihilation into gauge bosons, HAWC will be able to detect a narrow range of dark matter masses to cross-sections below thermal. HAWC should also be sensitive to non-thermal cross-sections for masses up to nearly 1000 TeV. The constraints placed by HAWC on the dark matter cross-section from known sources should be competitive with current limits in the mass range where HAWC has similar sensitivity. HAWC can additionally explore higher dark matter masses than are currently constrained.
We apply an empirical, data-driven approach for describing crop yield as a function of monthly temperature and precipitation by employing generative probabilistic models with parameters determined through Bayesian inference. Our approach is applied to state-scale maize yield and meteorological data for the US Corn Belt from 1981 to 2014 as an exemplar, but would be readily transferable to other crops, locations and spatial scales. Experimentation with a number of models shows that maize growth rates can be characterised by a two-dimensional Gaussian function of temperature and precipitation with monthly contributions accumulated over the growing period. This approach accounts for non-linear growth responses to the individual meteorological variables, and allows for interactions between them. Our models correctly identify that temperature and precipitation have the largest impact on yield in the six months prior to the harvest, in agreement with the typical growing season for US maize (April to September). Maximal growth rates occur for monthly mean temperature 18-19$^\circ$C, corresponding to a daily maximum temperature of 24-25$^\circ$C (in broad agreement with previous work) and monthly total precipitation 115 mm. Our approach also provides a self-consistent way of investigating climate change impacts on current US maize varieties in the absence of adaptation measures. Keeping precipitation and growing area fixed, a temperature increase of $2^\circ$C, relative to 1981-2014, results in the mean yield decreasing by 8%, while the yield variance increases by a factor of around 3. We thus provide a flexible, data-driven framework for exploring the impacts of natural climate variability and climate change on globally significant crops based on their observed behaviour. In concert with other approaches, this can help inform the development of adaptation strategies that will ensure food security under a changing climate.
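A minimal sketch of the growth-rate model described above: the widths, the overall scale, and the omission of the temperature-precipitation interaction term are illustrative assumptions, while the optima of roughly 18.5$^\circ$C and 115 mm/month are quoted from the abstract.

```python
# Sketch: monthly growth as a 2D Gaussian of temperature and precipitation,
# accumulated over the April-September growing season.
import numpy as np

T_OPT, P_OPT = 18.5, 115.0     # optimal monthly mean temperature (C), total precipitation (mm)
SIGMA_T, SIGMA_P = 5.0, 60.0   # assumed widths of the Gaussian response

def monthly_growth(temp_c, precip_mm):
    """Two-dimensional Gaussian growth response to monthly T and precipitation."""
    return np.exp(-0.5 * ((temp_c - T_OPT) / SIGMA_T) ** 2
                  - 0.5 * ((precip_mm - P_OPT) / SIGMA_P) ** 2)

def seasonal_yield_index(monthly_temps, monthly_precips):
    """Accumulate monthly growth contributions over the growing season."""
    return sum(monthly_growth(t, p) for t, p in zip(monthly_temps, monthly_precips))

# Example: an April-September season, and the same season warmed by 2 C
temps   = [12.0, 18.0, 23.0, 25.0, 24.0, 20.0]
precips = [90.0, 110.0, 120.0, 100.0, 95.0, 85.0]
print(seasonal_yield_index(temps, precips))
print(seasonal_yield_index([t + 2.0 for t in temps], precips))
```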
Ewan Cameron (2010)
I present a critical review of techniques for estimating confidence intervals on binomial population proportions inferred from success counts in small-to-intermediate samples. Population proportions arise frequently as quantities of interest in astronomical research; for instance, in studies aiming to constrain the bar fraction, AGN fraction, SMBH fraction, merger fraction, or red sequence fraction from counts of galaxies exhibiting distinct morphological features or stellar populations. However, two of the most widely-used techniques for estimating binomial confidence intervals--the normal approximation and the Clopper & Pearson approach--are liable to misrepresent the degree of statistical uncertainty present under sampling conditions routinely encountered in astronomical surveys, leading to an ineffective use of the experimental data (and, worse, an inefficient use of the resources expended in obtaining that data). Hence, I provide here an overview of the fundamentals of binomial statistics with two principal aims: (i) to reveal the ease with which (Bayesian) binomial confidence intervals with more satisfactory behaviour may be estimated from the quantiles of the beta distribution using modern mathematical software packages (e.g. R, matlab, mathematica, IDL, python); and (ii) to demonstrate convincingly the major flaws of both the normal approximation and the Clopper & Pearson approach for error estimation.
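A short example of the beta-quantile recipe described above, in Python: an equal-tailed Bayesian credible interval on a binomial proportion. The uniform Beta(1, 1) prior and the 68.3% interval are illustrative assumptions.

```python
# Sketch: Bayesian binomial credible interval from beta-distribution quantiles.
from scipy.stats import beta

def binomial_credible_interval(k, n, conf=0.683, a_prior=1.0, b_prior=1.0):
    """Equal-tailed credible interval for a success fraction from k successes in n trials."""
    post = beta(a_prior + k, b_prior + n - k)   # Beta posterior for the proportion
    return post.ppf(0.5 * (1 - conf)), post.ppf(0.5 * (1 + conf))

# Example: 3 barred galaxies in a sample of 10
print(binomial_credible_interval(3, 10))
```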
This paper presents a novel technique for mitigating electrode backgrounds that limit the sensitivity of searches for low-mass dark matter (DM) using xenon time projection chambers. In the LUX detector, signatures of low-mass DM interactions would be very low energy ($\sim$keV) scatters in the active target that ionize only a few xenon atoms and seldom produce detectable scintillation signals. In this regime, extra precaution is required to reject a complex set of low-energy electron backgrounds that have long been observed in this class of detector. Noticing backgrounds from the wire grid electrodes near the top and bottom of the active target are particularly pernicious, we develop a machine learning technique based on ionization pulse shape to identify and reject these events. We demonstrate the technique can improve Poisson limits on low-mass DM interactions by a factor of $2$-$7$ with improvement depending heavily on the size of ionization signals. We use the technique on events in an effective $5$ tonne$\cdot$day exposure from LUX's 2013 science operation to place strong limits on low-mass DM particles with masses in the range $m_{\chi}\in 0.15$-$10$ GeV. This machine learning technique is expected to be useful for near-future experiments, such as LZ and XENONnT, which hope to perform low-mass DM searches with the stringent background control necessary to make a discovery.
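A minimal, generic sketch of the pulse-shape classification idea described above; the feature names, the random-forest model, and the synthetic training data are illustrative assumptions, not the LUX implementation.

```python
# Sketch: train a classifier on per-event pulse-shape features and keep only
# signal-like events, suppressing grid-electrode backgrounds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pulse-shape features (e.g. rise time, width, light asymmetry);
# label 1 = bulk-like signal candidate, 0 = electrode-like background.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Score events and keep those that look signal-like, trading a small signal
# efficiency loss for a large reduction in electrode backgrounds.
scores = clf.predict_proba(X_test)[:, 1]
keep = scores > 0.8
print(f"accepted fraction: {keep.mean():.2f}")
```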