
Mitigation of LEO Satellite Brightness and Trail Effects on the Rubin Observatory LSST

Added by Meredith Rawls
Publication date: 2020
Field: Physics
Language: English





We report studies on the mitigation of optical effects of bright low-Earth-orbit (LEO) satellites on Vera C. Rubin Observatory and its Legacy Survey of Space and Time (LSST). These include options for pointing the telescope to avoid satellites, laboratory investigations of bright trails on the Rubin Observatory LSST camera sensors, algorithms for correcting image artifacts caused by bright trails, experiments on darkening SpaceX Starlink satellites, and ground-based follow-up observations. The original Starlink v0.9 satellites are g ~ 4.5 mag, and the initial experiment DarkSat is g ~ 6.1 mag. Future Starlink darkening plans may reach g ~ 7 mag, a brightness level that enables nonlinear image artifact correction to well below background noise. However, the satellite trails will still exist at a signal-to-noise ratio ~ 100, generating systematic errors that may impact data analysis and limit some science. For the Rubin Observatory 8.4-m mirror and a satellite at 550 km, the full width at half maximum of the trail is about 3 arcseconds as the result of an out-of-focus effect, which helps avoid saturation by decreasing the peak surface brightness of the trail. For 48,000 LEOsats of apparent magnitude 4.5, about 1% of pixels in LSST nautical twilight images would need to be masked.
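The ~1% masking figure invites a back-of-the-envelope check. The Python sketch below estimates the expected fraction of masked pixels in a single exposure from an assumed count of sunlit satellites and an assumed mask width; every number is an illustrative assumption, not a value from the paper, whose ~1% result comes from detailed twilight-visibility simulations.

```python
"""Toy estimate of the fraction of pixels masked by LEO satellite
trails in a single Rubin/LSST exposure. All inputs are illustrative
assumptions, not the paper's simulation values."""

# --- publicly quoted LSST instrument parameters ---
FOV_DEG2 = 9.6          # field of view, square degrees
EXPOSURE_S = 30.0       # visit exposure time, seconds

# --- assumed satellite parameters (hypothetical) ---
N_SUNLIT_VISIBLE = 1000     # sunlit satellites above the horizon in twilight
SKY_AREA_DEG2 = 20626.5     # visible hemisphere, 2*pi steradians in deg^2
ANGULAR_RATE_DEG_S = 0.79   # ~7.6 km/s orbital speed / 550 km altitude, near zenith
MASK_WIDTH_ARCSEC = 10.0    # assumed mask width around a ~3 arcsec FWHM trail

def masked_pixel_fraction() -> float:
    # mean number of satellites inside the field at any instant
    n_in_fov = N_SUNLIT_VISIBLE * FOV_DEG2 / SKY_AREA_DEG2
    # expected total trail length inside the field during the exposure:
    # (mean number in field) x (angular rate) x (exposure time)
    trail_len_deg = n_in_fov * ANGULAR_RATE_DEG_S * EXPOSURE_S
    # masked area = trail length x mask width, ignoring trail overlaps
    mask_area_deg2 = trail_len_deg * (MASK_WIDTH_ARCSEC / 3600.0)
    return mask_area_deg2 / FOV_DEG2

if __name__ == "__main__":
    print(f"masked pixel fraction: {masked_pixel_fraction():.2%}")
```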

Related research

The past few decades have seen the burgeoning of wide-field, high-cadence surveys, the most formidable of which will be the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory. So new is the field of systematic time-domain survey astronomy, however, that major scientific insights will continue to be obtained using smaller, more flexible systems than the LSST. One such example is the Gravitational-wave Optical Transient Observer (GOTO), whose primary science objective is the optical follow-up of gravitational-wave events. The amount and rate of data production by GOTO and other wide-area, high-cadence surveys present a significant challenge to data-processing pipelines, which need to operate in near real-time to fully exploit the time domain. In this study, we adapt the Rubin Observatory LSST Science Pipelines to process GOTO data, thereby exploring the feasibility of using this off-the-shelf pipeline to process data from other wide-area, high-cadence surveys. We describe how we use the LSST Science Pipelines to process raw GOTO frames and ultimately produce calibrated coadded images and photometric source catalogues. After comparing the measured astrometry and photometry to those of matched sources from PanSTARRS DR1, we find that measured source positions are typically accurate to sub-pixel levels, and that measured L-band photometry is accurate to ~50 mmag at m_L ~ 16 and ~200 mmag at m_L ~ 18. These values compare favourably to those obtained using GOTO's primary in-house pipeline, GOTOPHOTO, in spite of both pipelines having undergone further development and improvement beyond the implementations used in this study. Finally, we release a generic obs package that others can build upon should they wish to use the LSST Science Pipelines to process data from other facilities.
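As a concrete illustration of the validation step described above, the sketch below cross-matches a measured source catalogue against a reference catalogue (such as PanSTARRS DR1) with astropy and summarises the astrometric and photometric offsets. The column names and the 1 arcsec match radius are assumptions for illustration, not the schema actually used by the LSST Science Pipelines or GOTOPHOTO.

```python
"""Cross-match a measured catalogue against a reference catalogue and
summarise offsets. Column names (ra, dec, mag) are assumed."""
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

def match_and_summarise(measured: Table, reference: Table, radius=1.0 * u.arcsec):
    meas = SkyCoord(measured["ra"], measured["dec"], unit="deg")
    ref = SkyCoord(reference["ra"], reference["dec"], unit="deg")
    # nearest-neighbour match of each measured source to the reference
    idx, sep, _ = meas.match_to_catalog_sky(ref)
    good = sep < radius
    # astrometric residuals (arcsec) and photometric offsets (mag)
    dpos = sep[good].to(u.arcsec).value
    dmag = np.asarray(measured["mag"][good]) - np.asarray(reference["mag"][idx[good]])
    return {
        "n_matched": int(good.sum()),
        "median_sep_arcsec": float(np.median(dpos)),
        "median_dmag": float(np.median(dmag)),
        # robust scatter via the median absolute deviation
        "sigma_dmag": float(1.4826 * np.median(np.abs(dmag - np.median(dmag)))),
    }
```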
We have adapted the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) Science Pipelines to process data from the Gravitational-wave Optical Transient Observer (GOTO) prototype. In this paper, we describe how we used the Rubin Observatory LSST Science Pipelines to conduct forced-photometry measurements on nightly GOTO data. By comparing the photometry measurements of sources taken on multiple nights, we find that the precision of our photometry is typically better than 20 mmag for sources brighter than 16 mag. We also compare our photometry measurements against colour-corrected PanSTARRS photometry, and find that the two agree to within 10 mmag (1σ) for bright (~14th mag) sources, degrading to 200 mmag for faint (~18th mag) sources. Additionally, we compare our results to those obtained by GOTO's own in-house pipeline, GOTOPHOTO, and obtain similar results. Based on repeatability measurements, we measure a 5σ L-band survey depth of between 19 and 20 magnitudes, depending on observing conditions. Using repeated observations of non-varying standard SDSS stars, we assess the accuracy of our uncertainties, which we find are typically overestimated by roughly a factor of two for bright sources (<15th mag) but slightly underestimated (by roughly a factor of 1.25) for fainter sources (>17th mag). Finally, we present light curves for a selection of variable sources and compare them to those obtained with the Zwicky Transient Facility and Gaia. Despite the Rubin Observatory LSST Science Pipelines still undergoing active development, our results show that they are already delivering robust forced-photometry measurements from GOTO data.
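The uncertainty-accuracy test described above (night-to-night scatter versus reported errors for non-varying stars) can be expressed compactly. The sketch below assumes the forced photometry has been arranged into an (n_stars, n_nights) magnitude array with matching reported uncertainties; that layout is an illustrative assumption, not the pipelines' actual data model.

```python
"""Repeatability-based check on photometric uncertainties: compare
each star's observed scatter with its mean reported 1-sigma error.
Array layout (n_stars, n_nights) is an assumption for illustration."""
import numpy as np

def uncertainty_accuracy(mags: np.ndarray, errs: np.ndarray) -> np.ndarray:
    # observed night-to-night scatter per star
    scatter = np.std(mags, axis=1, ddof=1)
    # typical reported uncertainty per star
    reported = np.mean(errs, axis=1)
    # ratio > 1: uncertainties underestimated; ratio < 1: overestimated
    return scatter / reported
```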
The 8.4-m Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) will start a ten-year survey of the southern-hemisphere sky in 2023. LSST will revolutionise low surface brightness astronomy: it will transform our understanding of galaxy evolution through the study of low surface brightness features around galaxies (faint shells, tidal tails, halos, and stellar streams), the discovery of low surface brightness galaxies, and the first set of statistical measurements of the intracluster light over a significant range of cluster masses and redshifts.
Perhaps the most exciting promise of the Rubin Observatory Legacy Survey of Space and Time (LSST) is its capability to discover phenomena never before seen or predicted from theory: true astrophysical novelties. The ability of LSST to make these discoveries, however, will depend on the survey strategy. Evaluating candidate strategies for true novelties is a challenge both practically and conceptually: unlike traditional astrophysical tracers such as supernovae or exoplanets, for anomalous objects the template signal is by definition unknown. We present our approach to this problem: we assess survey completeness in a phase space defined by object color and flux (and their evolution), and we quantify the volume explored by integrating metrics within this space with the observation depth, survey footprint, and stellar density. With these metrics, we explore recent simulations of the Rubin LSST observing strategy across the entire observed footprint and in specific regions of the Local Volume: the Galactic Plane and the Magellanic Clouds. Under our metrics, observing strategies with a greater diversity of exposures and time gaps tend to be more sensitive to genuinely new phenomena, particularly over time-gap ranges left relatively unexplored by previous surveys. To assist the community, we have made all the tools developed publicly available. Extension of the scheme to include proper motions and the detection of associations or populations of interest will be communicated in Paper II of this series. This paper was written with the support of the Vera C. Rubin LSST Transients and Variable Stars and Stars, Milky Way, and Local Volume Science Collaborations.
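To make the coverage idea concrete, here is a deliberately simplified toy metric: given a simulated visit sequence, count which (filter-pair, time-gap) cells are sampled at least once. The binning and the notion of coverage are illustrative stand-ins for the paper's richer metrics over color, flux, and their evolution.

```python
"""Toy phase-space coverage metric: fraction of (filter-pair, gap-bin)
cells sampled by a visit sequence. A simplified illustration, not the
paper's actual metric."""
import itertools
import numpy as np

def gap_coverage(visits: dict, gap_bins: np.ndarray) -> float:
    """visits: dict mapping filter name -> sorted array of visit times (days).
    gap_bins: bin edges for time gaps (days)."""
    filters = sorted(visits)
    pairs = list(itertools.combinations_with_replacement(filters, 2))
    sampled = np.zeros((len(pairs), len(gap_bins) - 1), dtype=bool)
    for k, (f1, f2) in enumerate(pairs):
        # all pairwise time gaps between visits in the two filters
        gaps = np.abs(visits[f1][:, None] - visits[f2][None, :]).ravel()
        hist, _ = np.histogram(gaps, bins=gap_bins)
        sampled[k] = hist > 0
    return float(sampled.mean())

# example: two filters, gap bins spanning 0-100 days
print(gap_coverage({"g": np.array([0.0, 1.0, 30.0]), "r": np.array([0.5, 15.0])},
                   np.array([0.0, 1.0, 10.0, 100.0])))
```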
Many scientific investigations of photometric galaxy surveys require redshift estimates, whose uncertainty properties are best encapsulated by photometric redshift (photo-z) posterior probability density functions (PDFs). Photo-z PDF estimation methodologies abound, producing discrepant results with no consensus on a preferred approach. We present the results of a comprehensive experiment comparing twelve photo-z algorithms applied to mock data produced for the Rubin Observatory Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC). By supplying perfect prior information, in the form of the complete template library and a representative training set, as inputs to each code, we demonstrate the impact of the assumptions underlying each technique on the output photo-z PDFs. In the absence of a notion of true, unbiased photo-z PDFs, we evaluate and interpret multiple metrics of the ensemble properties of the derived photo-z PDFs, as well as traditional reductions to photo-z point estimates. We report systematic biases and overall over- or under-breadth in the photo-z PDFs of many popular codes, which may indicate avenues for improvement in the algorithms or implementations. Furthermore, we draw attention to the limitations of established metrics for assessing photo-z PDF accuracy; though we identify the conditional density estimate (CDE) loss as a promising metric of photo-z PDF performance in the case where true redshifts are available but true photo-z PDFs are not, we emphasize the need for science-specific performance metrics.
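For reference, the CDE loss mentioned above has a simple grid-based estimator: up to an additive constant independent of the estimator, L = E[∫ f(z|x)² dz] − 2 E[f(z_true|x)]. The sketch below assumes the photo-z PDFs are evaluated on a common redshift grid; that layout is an assumption for illustration.

```python
"""Grid-based estimate of the CDE loss for an ensemble of photo-z PDFs.
`pdfs` is an (n_galaxies, n_grid) array evaluated on `z_grid`; this
layout is an illustrative assumption."""
import numpy as np

def cde_loss(pdfs: np.ndarray, z_grid: np.ndarray, z_true: np.ndarray) -> float:
    # first term: integral of the squared PDF, averaged over galaxies
    term1 = np.mean(np.trapz(pdfs**2, z_grid, axis=1))
    # second term: each PDF evaluated at its galaxy's true redshift
    at_truth = np.array([np.interp(zt, z_grid, pdf)
                         for zt, pdf in zip(z_true, pdfs)])
    return float(term1 - 2.0 * np.mean(at_truth))
```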