A novel concept is presented in this paper for a human mission to the lunar L2 (Lagrange) point that would be a proving ground for future exploration missions to deep space while also overseeing scientifically important investigations. In an L2 halo orbit above the lunar farside, the astronauts aboard the Orion Crew Vehicle would travel 15% farther from Earth than did the Apollo astronauts and spend almost three times longer in deep space. Such a mission would serve as a first step beyond low Earth orbit and prove out operational spaceflight capabilities such as life support, communication, high-speed re-entry, and radiation protection prior to more difficult human exploration missions. On this proposed mission, the crew would teleoperate landers and rovers on the unexplored lunar farside, obtaining samples from this geologically interesting region and deploying a low radio frequency telescope. Sampling the South Pole-Aitken basin, one of the oldest impact basins in the solar system, is a key science objective of the 2011 Planetary Science Decadal Survey. Observations at low radio frequencies to track the effects of the Universe's first stars/galaxies on the intergalactic medium are a priority of the 2010 Astronomy and Astrophysics Decadal Survey. Such telerobotic oversight would also demonstrate capability for human and robotic cooperation on future, more complex deep space missions such as exploring Mars.
115 - Jack Burns 2011
The Lunar University Network for Astrophysics Research (LUNAR) undertakes investigations across the full spectrum of science within the mission of the NASA Lunar Science Institute (NLSI), namely science of, on, and from the Moon. The LUNAR team's work on science of and on the Moon, the subject of this white paper, is conducted in the broader context of ascertaining the content, origin, and evolution of the solar system.
Efforts are being made to observe the 21-cm signal from the cosmic dawn using sky-averaged observations with individual radio dipoles. In this paper, we develop a model of the observations accounting for the 21-cm signal, foregrounds, and several major instrumental effects. Given this model, we apply Markov Chain Monte Carlo techniques to demonstrate the ability of these instruments to separate the 21-cm signal from foregrounds and quantify their ability to constrain properties of the first galaxies. For concreteness, we investigate observations between 40 and 120 MHz with the proposed DARE mission in lunar orbit, showing its potential for science return.
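The separation described above works because the foreground is spectrally smooth while the 21-cm feature is localized in frequency. A minimal sketch of the idea, using a toy Metropolis-Hastings sampler on a simulated 40-120 MHz spectrum (the power-law foreground, Gaussian absorption trough, and all numbers here are illustrative stand-ins, not the actual DARE signal or foreground models):

```python
import math
import random

random.seed(42)

# 41 channels across the 40-120 MHz band discussed in the text
freqs = [40.0 + 2.0 * i for i in range(41)]

def foreground(nu, amp):
    # Smooth power-law stand-in for Galactic synchrotron emission (kelvin)
    return amp * (nu / 80.0) ** -2.5

def signal(nu, depth):
    # Hypothetical Gaussian absorption trough at 70 MHz (toy 21-cm feature)
    return -depth * math.exp(-0.5 * ((nu - 70.0) / 10.0) ** 2)

AMP_TRUE, DEPTH_TRUE, NOISE = 2000.0, 0.1, 0.01  # kelvin
data = [foreground(nu, AMP_TRUE) + signal(nu, DEPTH_TRUE)
        + random.gauss(0.0, NOISE) for nu in freqs]

def log_like(amp, depth):
    # Gaussian chi-squared likelihood over all channels
    return -0.5 * sum(
        ((d - foreground(nu, amp) - signal(nu, depth)) / NOISE) ** 2
        for nu, d in zip(freqs, data))

# Metropolis-Hastings random walk over (amp, depth); started near the
# maximum for brevity -- a real analysis would disperse many walkers
amp, depth = 2000.0, 0.0
ll = log_like(amp, depth)
samples = []
for step in range(20000):
    amp_p = amp + random.gauss(0.0, 0.002)
    depth_p = depth + random.gauss(0.0, 0.003)
    ll_p = log_like(amp_p, depth_p)
    if math.log(random.random()) < ll_p - ll:   # accept/reject
        amp, depth, ll = amp_p, depth_p, ll_p
    if step >= 5000:                            # discard burn-in
        samples.append((amp, depth))

depth_mean = sum(s[1] for s in samples) / len(samples)
```

Even though the foreground here is roughly 20,000 times brighter than the trough, its smooth spectral shape lets the sampler recover the trough depth; the analyses described in the paper add instrumental terms to the model as further parameters.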
A concept for a new space-based cosmology mission called the Dark Ages Radio Explorer (DARE) is presented in this paper. DARE's science objectives address four key questions: (1) When did the first stars form? (2) When did the first accreting black holes form? (3) When did Reionization begin? (4) What surprises does the end of the Dark Ages hold (e.g., Dark Matter decay)? DARE will use the highly-redshifted hyperfine 21-cm transition from neutral hydrogen to track the formation of the first luminous objects by their impact on the intergalactic medium during the end of the Dark Ages and during Cosmic Dawn (redshifts z=11-35). It will measure the sky-averaged spin temperature of neutral hydrogen at the unexplored epoch 80-420 million years after the Big Bang, providing the first evidence of the earliest stars and galaxies to illuminate the cosmos and testing our models of galaxy formation. DARE's approach is to measure the expected spectral features in the sky-averaged, redshifted 21-cm signal over a radio bandpass of 40-120 MHz. DARE orbits the Moon for a mission lifetime of 3 years and takes data above the lunar farside, the only location in the inner solar system proven to be free of human-generated radio frequency interference and any significant ionosphere. The science instrument is composed of a three-element radiometer, including electrically-short, tapered, bi-conical dipole antennas, a receiver, and a digital spectrometer. The smooth frequency response of the antennas, combined with a differential spectral calibration approach using a Markov Chain Monte Carlo technique, will be used to detect the weak cosmic 21-cm signal in the presence of the intense solar system and Galactic foreground emissions.
By volume, more than 99% of the solar system has not been imaged at radio frequencies. Almost all of this space (the solar wind) can be traversed by fast electrons producing radio emissions at frequencies lower than the terrestrial ionospheric cutoff, which prevents observation from the ground. To date, radio astronomy-capable space missions consist of one or a few satellites, typically far from each other, which measure total power from the radio sources, but cannot produce images with useful angular resolution. To produce such images, we require arrays of antennas distributed over many wavelengths (hundreds of meters to kilometers) to permit aperture synthesis imaging. Such arrays could be free-flying arrays of microsatellites or antennas laid out on the lunar surface. In this white paper, we present the lunar option. If such an array were in place by 2020, it would provide context for observations during Solar Probe Plus perihelion passes. Studies of the lunar ionosphere's density and time variability are also important goals. This white paper applies to the Solar and Heliospheric Physics study panel.
70 - Geraint Harker 2009
An obstacle to the detection of redshifted 21cm emission from the epoch of reionization (EoR) is the presence of foregrounds which exceed the cosmological signal in intensity by orders of magnitude. We argue that in principle it would be better to fit the foregrounds non-parametrically - allowing the data to determine their shape - rather than selecting some functional form in advance and then fitting its parameters. Non-parametric fits often suffer from other problems, however. We discuss these before suggesting a non-parametric method, Wp smoothing, which seems to avoid some of them. After outlining the principles of Wp smoothing we describe an algorithm used to implement it. We then apply Wp smoothing to a synthetic data cube for the LOFAR EoR experiment. The performance of Wp smoothing, measured by the extent to which it is able to recover the variance of the cosmological signal and to which it avoids leakage of power from the foregrounds, is compared to that of a parametric fit, and to another non-parametric method (smoothing splines). We find that Wp smoothing is superior to smoothing splines for our application, and is competitive with parametric methods even though in the latter case we may choose the functional form of the fit with advance knowledge of the simulated foregrounds. Finally, we discuss how the quality of the fit is affected by the frequency resolution and range, by the characteristics of the cosmological signal and by edge effects.
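Both fitting strategies above exploit the same property: the foregrounds are smooth along frequency while the cosmological signal is not. A minimal illustration of the parametric case (not Wp smoothing itself, which additionally constrains the number of inflection points in the fit): a low-order polynomial fitted to the log spectrum of a hypothetical power-law foreground absorbs the foreground almost perfectly, leaving a narrow toy signal behind in the residuals. All frequencies and amplitudes here are illustrative choices, not LOFAR simulation values.

```python
import math

def polyfit(xs, ys, deg):
    # Least-squares polynomial fit via normal equations, solved by
    # Gaussian elimination with partial pivoting (stdlib only)
    n = deg + 1
    M = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(M[r][c] * coef[c]
                              for c in range(r + 1, n))) / M[r][r]
    return coef

def polyval(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

freqs = [115.0 + 0.5 * i for i in range(81)]          # toy 115-155 MHz EoR band
xs = [math.log(nu / 135.0) / 0.15 for nu in freqs]    # scaled log-frequency
fg = [math.log(300.0 * (nu / 135.0) ** -2.5)
      for nu in freqs]                                # power-law foreground (log K)
sig = [0.01 * math.exp(-0.5 * ((nu - 135.0) / 3.0) ** 2)
       for nu in freqs]                               # narrow toy signal
data = [f + s for f, s in zip(fg, sig)]

coef = polyfit(xs, data, 3)                           # smooth cubic fit
resid = [d - polyval(coef, x) for d, x in zip(data, xs)]
```

The residuals peak at the injected signal's frequency, while a fit to the foreground alone leaves only numerical noise; the trade-off the paper studies is how much signal such fits absorb versus how much foreground power they leak.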
Detecting redshifted 21cm emission from neutral hydrogen in the early Universe promises to give direct constraints on the epoch of reionization (EoR). It will, though, be very challenging to extract the cosmological signal (CS) from foregrounds and noise which are orders of magnitude larger. Fortunately, the signal has some characteristics which differentiate it from the foregrounds and noise, and we suggest that using the correct statistics may tease out signatures of reionization. We generate mock datacubes simulating the output of the Low Frequency Array (LOFAR) EoR experiment. These cubes combine realistic models for Galactic and extragalactic foregrounds and the noise with three different simulations of the CS. We fit out the foregrounds, which are smooth in the frequency direction, to produce residual images in each frequency band. We denoise these images and study the skewness of the one-point distribution in the images as a function of frequency. We find that, under sufficiently optimistic assumptions, we can recover the main features of the redshift evolution of the skewness in the 21cm signal. We argue that some of these features - such as a dip at the onset of reionization, followed by a rise towards its later stages - may be generic, and give us a promising route to a statistical detection of reionization.
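The one-point skewness statistic used above is simple to compute. A toy sketch of why it responds to reionization (the numbers are illustrative, not drawn from the LOFAR simulations): a residual image containing only symmetric noise has skewness near zero, whereas an image in which a small fraction of pixels retain positive 21cm brightness temperature is measurably skewed.

```python
import random

def skewness(xs):
    # Sample skewness: third central moment over variance^(3/2)
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

random.seed(1)
n_pix = 100000

# Residual image dominated by symmetric noise: skewness ~ 0
noise_map = [random.gauss(0.0, 1.0) for _ in range(n_pix)]

# Toy late-reionization image: most pixels ionized (no signal), while a
# small neutral fraction contributes positive brightness temperature
mixed_map = [random.gauss(0.0, 1.0) + (5.0 if random.random() < 0.05 else 0.0)
             for _ in range(n_pix)]
```

Tracking this statistic versus frequency, as the paper does, turns the changing neutral fraction into a redshift evolution of the skewness.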
96 - Geraint Harker 2007
We generate mock galaxy catalogues for a grid of different cosmologies, using rescaled N-body simulations in tandem with a semi-analytic model run using consistent parameters. Because we predict the galaxy bias, rather than fitting it as a nuisance parameter, we obtain an almost pure constraint on sigma_8 by comparing the projected two-point correlation function we obtain to that from the SDSS. A systematic error arises because different semi-analytic modelling assumptions allow us to fit the r-band luminosity function equally well. Combining our estimate of the error from this source with the statistical error, we find sigma_8=0.97 +/- 0.06. We obtain consistent results if we use galaxy samples with a different magnitude threshold, or if we select galaxies by b_J-band rather than r-band luminosity and compare to data from the 2dFGRS. Our estimate for sigma_8 is higher than that obtained for other analyses of galaxy data alone, and we attempt to find the source of this difference. We note that in any case, galaxy clustering data provide a very stringent constraint on galaxy formation models.
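The projected correlation function compared above involves an integral over the line of sight, but the pair-counting machinery underneath is compact. A toy 2-D sketch using the natural estimator xi = (DD/RR) - 1 (hypothetical point sets in a unit box, not SDSS data) shows how clustering registers as an excess of close data-data pairs over a random catalogue:

```python
import random

random.seed(0)

def pair_count(points, rmin, rmax):
    # Count pairs with separation in [rmin, rmax) by brute force
    n, cnt = len(points), 0
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            dx, dy = xi - points[j][0], yi - points[j][1]
            r2 = dx * dx + dy * dy
            if rmin * rmin <= r2 < rmax * rmax:
                cnt += 1
    return cnt

# Unclustered random catalogue in a unit box
randoms = [(random.random(), random.random()) for _ in range(500)]

# Clustered catalogue: 100 tight groups of 5 points each
centres = [(random.random(), random.random()) for _ in range(100)]
clustered = [(cx + random.gauss(0.0, 0.005), cy + random.gauss(0.0, 0.005))
             for cx, cy in centres for _ in range(5)]

rmin, rmax = 0.0, 0.02
DD = pair_count(clustered, rmin, rmax)
RR = pair_count(randoms, rmin, rmax)

# Natural estimator: normalized data-data over random-random pair counts
nd, nr = len(clustered), len(randoms)
xi = (DD / (nd * (nd - 1) / 2)) / (RR / (nr * (nr - 1) / 2)) - 1
```

Repeating this in separation bins traces out the correlation function; the analyses in the paper do the 3-D, redshift-space version of the same counting with more robust estimators and edge corrections.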