176 - Matthew Malloy, Adam Lidz (2014)
Observations of the Lyman-alpha (Ly-$\alpha$) forest may allow reionization to complete as late as $z \sim 5.5$, provided the ionization state of the intergalactic medium (IGM) is sufficiently inhomogeneous at these redshifts. In this case, significantly neutral islands may remain amongst highly ionized gas, with the ionized regions allowing some transmission through the Ly-$\alpha$ forest. This possibility has the important virtue that it is eminently testable with existing Ly-$\alpha$ forest data. In particular, we describe three observable signatures of significantly neutral gas in the $z \sim 5.5$ IGM. We use mock quasar spectra produced from numerical simulations of reionization to develop these tests. First, we quantify how the abundance and length of absorbed regions in the forest increase with the volume-averaged neutral fraction in our reionization model. Second, we consider stacking the transmission profile around highly absorbed regions in the forest. If and only if there is significantly neutral gas in the IGM, absorption in the damping wing of the Ly-$\alpha$ line will cause the transmission to recover slowly as one moves from absorbed to transmitted portions of the spectrum. Third, the deuterium Ly-$\beta$ line should imprint a small but distinctive absorption feature slightly blueward of absorbed neutral regions in the Ly-$\beta$ forest. We show that these tests can be carried out with existing Keck HIRES spectra at $z \sim 5.5$, with the damping wing being observable for $\langle x_{\text{HI}} \rangle \gtrsim 0.05$ and the deuterium feature observable with additional high-resolution spectra for $\langle x_{\text{HI}} \rangle \gtrsim 0.2$.
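The stacking test in the second signature can be sketched with a toy routine that aligns the transmission at the red edges of long dark gaps and averages. The function name, threshold, minimum gap length, and window size below are illustrative choices for 1D flux arrays, not the values or pipeline used in the paper:

```python
import numpy as np

def stack_at_gap_edges(flux, threshold=0.05, min_len=20, window=50):
    """Average the transmission profile just redward of long absorbed regions
    ("dark gaps") in a set of 1D flux arrays. In a significantly neutral IGM,
    damping-wing absorption makes this stacked profile recover slowly rather
    than jumping straight back to the mean transmission."""
    stacks = []
    for spec in flux:
        run = 0  # length of the current run of absorbed pixels
        for i, f in enumerate(spec):
            if f < threshold:
                run += 1
            else:
                # a dark gap just ended at pixel i; record the recovery window
                if run >= min_len and i + window <= len(spec):
                    stacks.append(spec[i:i + window])
                run = 0
    return np.mean(stacks, axis=0) if stacks else None

# Demo: two mock sightlines with 0.4 mean transmission and one dark gap each.
spec = np.full(400, 0.4)
spec[100:150] = 0.0
profile = stack_at_gap_edges([spec, spec.copy()])
```

With these fully transmitted mock edges the stacked profile is flat; a damping wing would instead show up as a gradual rise over the first pixels of the window.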
The temperature of the low-density intergalactic medium (IGM) at high redshift is sensitive to the timing and nature of hydrogen and HeII reionization, and can be measured from Lyman-alpha forest absorption spectra. Since intergalactic gas gradually loses its memory of heating during reionization, measurements as close as possible to reionization are desirable. In addition, measuring the IGM temperature at sufficiently high redshifts should help to isolate the effects of hydrogen reionization, since HeII reionization starts later, at lower redshift. Motivated by this, we model the IGM temperature at z>5 using semi-numeric models of patchy reionization. We construct mock Lyman-alpha forest spectra from these models and consider their observable implications. We find that the small-scale structure in the Lyman-alpha forest is sensitive to the temperature of the IGM even at redshifts where the average absorption in the forest is as high as 90%. We forecast the accuracy with which the z~5 IGM temperature can be measured using existing samples of high resolution quasar spectra, and find that interesting constraints are possible. For example, an early reionization model in which reionization ends at z~10 should be distinguishable -- at high statistical significance -- from a lower redshift model where reionization completes at z~6. We discuss improvements to our modeling that may be required to robustly interpret future measurements.
The paper proposes a novel upper confidence bound (UCB) procedure for identifying the arm with the largest mean in a multi-armed bandit game in the fixed confidence setting using a small number of total samples. The procedure cannot be improved in the sense that the number of samples required to identify the best arm is within a constant factor of a lower bound based on the law of the iterated logarithm (LIL). Inspired by the LIL, we construct our confidence bounds to explicitly account for the infinite time horizon of the algorithm. In addition, by using a novel stopping time for the algorithm we avoid a union bound over the arms that has been observed in other UCB-type algorithms. We prove that the algorithm is optimal up to constants and also show through simulations that it provides superior performance with respect to the state-of-the-art.
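The LIL-shaped confidence bound and the sample-count stopping rule can be sketched as follows. The constants, the Gaussian-arm model, and the helper names (`lil_bound`, `lil_ucb`) are illustrative choices, not the paper's exact specification:

```python
import math
import random

def lil_bound(t, delta, sigma=0.5, eps=0.01):
    # LIL-shaped confidence radius ~ sqrt(log log t / t); adding e inside the
    # outer log keeps it positive at small t. Constants are illustrative.
    return (1 + math.sqrt(eps)) * math.sqrt(
        2 * sigma ** 2 * (1 + eps)
        * math.log(math.log((1 + eps) * t + math.e) / delta) / t)

def lil_ucb(arms, delta=0.1, max_pulls=50000, seed=0):
    """Repeatedly pull the arm with the largest upper confidence bound; stop
    when one arm has been sampled more than all others combined (a stopping
    rule in the spirit of the paper's, with simplified constants)."""
    rng = random.Random(seed)
    n = len(arms)
    counts, sums = [0] * n, [0.0] * n
    for i in range(n):                    # initialize: pull every arm once
        sums[i] += arms[i](rng); counts[i] += 1
    for _ in range(max_pulls):
        leader = max(range(n), key=lambda j: counts[j])
        if counts[leader] > 1 + sum(counts) - counts[leader]:
            return leader
        i = max(range(n),
                key=lambda j: sums[j] / counts[j] + lil_bound(counts[j], delta))
        sums[i] += arms[i](rng); counts[i] += 1
    return max(range(n), key=lambda j: counts[j])
```

Because the radius shrinks like sqrt(log log t / t), suboptimal arms stop being pulled once their bound drops below the best arm's empirical mean.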
Sampling from distributions to find the one with the largest mean arises in a broad range of applications, and it can be mathematically modeled as a multi-armed bandit problem in which each distribution is associated with an arm. This paper studies the sample complexity of identifying the best arm (largest mean) in a multi-armed bandit problem. Motivated by large-scale applications, we are especially interested in identifying situations where the total number of samples that are necessary and sufficient to find the best arm scales linearly with the number of arms. We present a single-parameter multi-armed bandit model that spans the range from linear to superlinear sample complexity. We also give a new algorithm for best arm identification, called PRISM, with linear sample complexity for a wide range of mean distributions. The algorithm, like most exploration procedures for multi-armed bandits, is adaptive in the sense that the next arms to sample are selected based on previous samples. We compare the sample complexity of adaptive procedures with simpler non-adaptive procedures using new lower bounds. For many problem instances, the increased sample complexity required by non-adaptive procedures is a polynomial factor of the number of arms.
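PRISM itself is not specified in this summary. As a contrast with non-adaptive uniform sampling, here is a sketch of successive elimination, a standard adaptive best-arm baseline (not the paper's algorithm); the confidence margin is an illustrative anytime-style bound:

```python
import math
import random

def successive_elimination(arms, rounds=200, seed=0):
    """Adaptive baseline (NOT PRISM): each round, sample every surviving arm
    once and eliminate arms whose empirical mean falls a confidence margin
    below the current leader. Adaptivity lets clearly bad arms stop consuming
    samples early, unlike non-adaptive uniform allocation."""
    rng = random.Random(seed)
    n = len(arms)
    alive = list(range(n))
    counts, sums = [0] * n, [0.0] * n
    total = 0                                  # total samples consumed
    for t in range(1, rounds + 1):
        for i in alive:
            sums[i] += arms[i](rng); counts[i] += 1; total += 1
        margin = 2 * math.sqrt(math.log(4 * t * t) / t)  # illustrative bound
        best = max(sums[i] / counts[i] for i in alive)
        alive = [i for i in alive if sums[i] / counts[i] >= best - margin]
        if len(alive) == 1:
            break
    return alive, total
```

A non-adaptive procedure would spend `rounds` samples on every arm; here the total sample count drops as soon as weak arms are eliminated.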
159 - Matthew Malloy, Adam Lidz (2012)
One of the most promising approaches for studying reionization is to use the redshifted 21 cm line. Early generations of redshifted 21 cm surveys will not, however, have the sensitivity to make detailed maps of the reionization process, and will instead focus on statistical measurements. Here we show that it may nonetheless be possible to {\em directly identify ionized regions} in upcoming data sets by applying suitable filters to the noisy data. The locations of prominent minima in the filtered data correspond well with the positions of ionized regions. In particular, we corrupt semi-numeric simulations of the redshifted 21 cm signal during reionization with thermal noise at the level expected for a 500 antenna tile version of the Murchison Widefield Array (MWA), and mimic the degrading effects of foreground cleaning. Using a matched filter technique, we find that the MWA should be able to directly identify ionized regions despite the large thermal noise. In a plausible fiducial model in which ~20% of the volume of the Universe is neutral at z ~ 7, we find that a 500-tile MWA may directly identify as many as ~150 ionized regions in a 6 MHz portion of its survey volume and roughly determine the size of each of these regions. This may, in turn, allow interesting multi-wavelength follow-up observations, comparing galaxy properties inside and outside of ionized regions. We discuss how the optimal configuration of radio antenna tiles for detecting ionized regions with a matched filter technique differs from the optimal design for measuring power spectra. These considerations have potentially important implications for the design of future redshifted 21 cm surveys.
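The matched-filter idea, finding minima of the noisy signal after smoothing with a template of the expected bubble size, can be illustrated with a 1D toy; real surveys work with 3D maps, instrument-specific noise, and foreground treatment, and the function name, units, and thresholds here are assumptions for the sketch:

```python
import numpy as np

def matched_filter_minima(data, width, threshold):
    """Smooth noisy 1D data with a top-hat template of the expected bubble
    width and return interior local minima below `threshold` (candidate
    ionized regions). Edges are skipped to avoid convolution artifacts."""
    template = np.ones(width) / width
    smoothed = np.convolve(data, template, mode="same")
    lo, hi = width // 2, len(data) - width // 2
    minima = [i for i in range(lo, hi)
              if smoothed[i] < threshold
              and smoothed[i] <= smoothed[i - 1]
              and smoothed[i] <= smoothed[i + 1]]
    return smoothed, minima

# Demo: a 21 cm brightness sightline (arbitrary ~mK units) with one ionized
# bubble (zero signal) buried in Gaussian thermal noise.
rng = np.random.default_rng(0)
data = np.full(500, 20.0)
data[200:300] = 0.0                      # ionized region: no 21 cm emission
data += rng.normal(0.0, 5.0, size=500)   # thermal noise
smoothed, minima = matched_filter_minima(data, width=60, threshold=10.0)
```

Averaging over the template suppresses the noise by roughly sqrt(width), so the bubble stands out in the filtered data even though individual pixels are noise-dominated.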
This paper presents results pertaining to sequential methods for support recovery of sparse signals in noise. Specifically, we show that any sequential measurement procedure fails provided the average number of measurements per dimension grows slower than log s / D(f0||f1), where s is the level of sparsity, and D(f0||f1) the Kullback-Leibler divergence between the underlying distributions. For comparison, we show any non-sequential procedure fails provided the number of measurements grows at a rate less than log n / D(f1||f0), where n is the total dimension of the problem. Lastly, we show that a simple procedure termed sequential thresholding guarantees exact support recovery provided the average number of measurements per dimension grows faster than (log s + log log n) / D(f0||f1), a mere additive factor more than the lower bound.
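The sequential thresholding procedure can be sketched in a few lines; the pass count, threshold, and Gaussian measurement model below are illustrative assumptions, not the paper's exact analysis:

```python
import random

def sequential_thresholding(sample, n, passes=12, thresh=0.0, rng=None):
    """Toy sequential thresholding for sparse support recovery. On each pass,
    draw one fresh measurement per surviving index and discard indices whose
    measurement falls at or below `thresh`. A zero-mean null survives a pass
    with probability ~1/2, so nulls die off geometrically (~n / 2**passes
    remain), while indices with a strong positive mean survive nearly every
    pass. Measurements thus concentrate on the (small) surviving set."""
    rng = rng or random.Random(0)
    alive = set(range(n))
    for _ in range(passes):
        alive = {i for i in alive if sample(i, rng) > thresh}
    return alive

# Demo: n = 4096 coordinates, sparse support {0..9} with elevated mean 5.
support = set(range(10))
alive = sequential_thresholding(
    lambda i, r: r.gauss(5.0 if i in support else 0.0, 1.0), n=4096)
```

After the first pass only about half the coordinates remain, so the average number of measurements per dimension stays near 2 even as the number of passes grows, which is how the procedure approaches the sequential lower bound.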
This paper studies the problem of high-dimensional multiple testing and sparse recovery from the perspective of sequential analysis. In this setting, the probability of error is a function of the dimension of the problem. A simple sequential testing procedure is proposed. We derive necessary conditions for reliable recovery in the non-sequential setting and contrast them with sufficient conditions for reliable recovery using the proposed sequential testing procedure. Applications of the main results to several commonly encountered models show that sequential testing can be exponentially more sensitive to the difference between the null and alternative distributions (in terms of the dependence on dimension), implying that subtle cases can be much more reliably determined using sequential methods.