
Exoplanet Imaging Data Challenge: benchmarking the various image processing methods for exoplanet detection

Publication date: 2021
Field: Physics
Language: English





The Exoplanet Imaging Data Challenge is a community-wide effort meant to offer a platform for a fair and common comparison of image processing methods designed for exoplanet direct detection. For this purpose, it gathers on a dedicated repository (Zenodo) data from several high-contrast ground-based instruments worldwide, into which we injected synthetic planetary signals. The data challenge is hosted on the CodaLab competition platform, where participants can upload their results. The specifications of the data challenge are published on our website. The first phase, launched on the 1st of September 2019 and closed on the 1st of October 2020, consisted of detecting point sources in two types of data set common in the field of high-contrast imaging: data taken in pupil-tracking mode at one wavelength (subchallenge 1, also referred to as ADI) and multispectral data taken in pupil-tracking mode (subchallenge 2, also referred to as ADI mSDI). In this paper, we describe the approach, organisational lessons learnt, and current limitations of the data challenge, as well as preliminary results of the participants' submissions for this first phase. In the future, we plan to provide permanent access to the standard library of data sets and metrics, in order to guide the validation and support the publications of innovative image processing algorithms dedicated to high-contrast imaging of planetary systems.
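For reference, the classical median-ADI processing underlying subchallenge 1 can be sketched as a minimal numpy illustration. This is a textbook baseline with a crude nearest-neighbour derotation, not any participant's actual pipeline, and sign conventions for the parallactic angles vary between instruments:

```python
import numpy as np

def derotate(img, deg):
    """Nearest-neighbour rotation of a square frame about its centre."""
    theta = np.deg2rad(deg)
    ny, nx = img.shape
    yc, xc = (ny - 1) / 2, (nx - 1) / 2
    ys, xs = np.mgrid[0:ny, 0:nx]
    # inverse mapping: where did each output pixel come from?
    x0 = np.cos(theta) * (xs - xc) + np.sin(theta) * (ys - yc) + xc
    y0 = -np.sin(theta) * (xs - xc) + np.cos(theta) * (ys - yc) + yc
    xi = np.clip(np.rint(x0).astype(int), 0, nx - 1)
    yi = np.clip(np.rint(y0).astype(int), 0, ny - 1)
    return img[yi, xi]

def classical_adi(cube, angles):
    """Median ADI: subtract the median stellar PSF, derotate, combine.

    cube   : (n_frames, ny, nx) pupil-tracking image cube
    angles : (n_frames,) parallactic angles in degrees
    """
    psf_model = np.median(cube, axis=0)   # quasi-static speckle pattern
    residuals = cube - psf_model          # planet rotates, speckles cancel
    derotated = np.array([derotate(f, -a) for f, a in zip(residuals, angles)])
    return np.median(derotated, axis=0)   # final detection map

# toy demonstration on pure noise
cube = np.random.default_rng(0).normal(size=(10, 32, 32))
detection_map = classical_adi(cube, np.linspace(0.0, 45.0, 10))
print(detection_map.shape)  # (32, 32)
```

In the challenge itself, the detection maps produced by such pipelines are then thresholded and scored against the known injected signals.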



Related research

The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow-up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our data reduction pipelines. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real-time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
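The file-indexing backend described above can be illustrated with a minimal sketch, with SQLite standing in for the MySQL database; the table, columns, and file names here are hypothetical:

```python
import sqlite3

# Minimal stand-in for a backend database that indexes every survey file
# (MySQL in the real system; in-memory SQLite keeps the sketch self-contained).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE files (
        path TEXT PRIMARY KEY,
        obs_date TEXT,
        mode TEXT,          -- 'spec', 'pol', or 'cal'
        reduced INTEGER     -- 0 = raw, 1 = fully reduced
    )
""")
conn.executemany(
    "INSERT INTO files VALUES (?, ?, ?, ?)",
    [("S20190101_spec_001.fits", "2019-01-01", "spec", 1),
     ("S20190101_pol_001.fits", "2019-01-01", "pol", 0)],
)

# The front-end web server would issue queries like this one to browse files.
reduced = conn.execute("SELECT path FROM files WHERE reduced = 1").fetchall()
print(reduced)  # [('S20190101_spec_001.fits',)]
```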
Hannu Parviainen (2017)
Exoplanet research is carried out at the limits of the capabilities of current telescopes and instruments. The studied signals are weak, and often embedded in complex systematics from instrumental, telluric, and astrophysical sources. Combining repeated observations of periodic events, simultaneous observations with multiple telescopes, different observation techniques, and existing information from theory and prior research can help to disentangle the systematics from the planetary signals, and offers synergistic advantages over analysing observations separately. Bayesian inference provides a self-consistent statistical framework that addresses both the necessity for complex systematics models, and the need to combine prior information and heterogeneous observations. This chapter offers a brief introduction to Bayesian inference in the context of exoplanet research, with focus on time series analysis, and finishes with an overview of a set of freely available programming libraries.
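As a toy illustration of the Bayesian framework described above, the following estimates a transit depth from a simulated light curve with a grid-evaluated posterior. All numbers are invented; a real analysis would use the programming libraries the chapter surveys:

```python
import numpy as np

# Toy transit time series: unit out-of-transit flux, a box-shaped dip of
# unknown depth, and Gaussian noise standing in for the weak, noisy signals
# discussed in the text.
rng = np.random.default_rng(1)
true_depth, sigma = 0.01, 0.002
in_transit = np.zeros(200, dtype=bool)
in_transit[90:110] = True
flux = 1.0 - true_depth * in_transit + rng.normal(0.0, sigma, 200)

# Bayesian inference on a grid: uniform prior on the depth,
# Gaussian likelihood for the residuals.
depths = np.linspace(0.0, 0.03, 301)
models = 1.0 - depths[:, None] * in_transit[None, :]
log_like = -0.5 * np.sum((flux - models) ** 2, axis=1) / sigma**2
posterior = np.exp(log_like - log_like.max())   # stabilised before exponentiating
posterior /= posterior.sum()                    # discrete normalisation

map_depth = depths[np.argmax(posterior)]
print(f"MAP depth: {map_depth:.4f}")
```

The same machinery generalises directly to the systematics models mentioned above by extending the parameter grid (or, in practice, replacing it with MCMC sampling).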
R.L. Akeson, X. Chen, D. Ciardi (2013)
We describe the contents and functionality of the NASA Exoplanet Archive, a database and tool set funded by NASA to support astronomers in the exoplanet community. The current content of the database includes interactive tables containing properties of all published exoplanets, Kepler planet candidates, threshold-crossing events, data validation reports and target stellar parameters, light curves from the Kepler and CoRoT missions and from several ground-based surveys, and spectra and radial velocity measurements from the literature. Tools provided to work with these data include a transit ephemeris predictor, both for single planets and for observing locations, light curve viewing and normalization utilities, and a periodogram and phased light curve service. The archive can be accessed at http://exoplanetarchive.ipac.caltech.edu.
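The transit ephemeris predictor mentioned above rests on a linear ephemeris, T_k = T0 + k * P. A minimal sketch, with a purely hypothetical ephemeris:

```python
import math

def next_transits(t0, period, t_start, n=3):
    """Linear ephemeris T_k = T0 + k * P (times in days, e.g. BJD).
    Returns the first n mid-transit times at or after t_start."""
    k = max(0, math.ceil((t_start - t0) / period))
    return [t0 + (k + i) * period for i in range(n)]

# hypothetical hot Jupiter: T0 = 2455000.0 BJD, P = 3.5 d
print(next_transits(2455000.0, 3.5, 2455010.0))  # [2455010.5, 2455014.0, 2455017.5]
```

The archive's predictor additionally accounts for the observing location and transit duration when reporting visible events.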
The direct detection of exoplanets with high-contrast instruments can be boosted with high spectral resolution. For integral field spectrographs yielding hyperspectral data, this means that the field of view consists of diffracted starlight spectra and a spatially localized planet. Analysis usually relies on cross-correlation with theoretical spectra. In a purely blind-search context, this supervised strategy can be biased by model mismatch and/or be computationally inefficient. Using an approach inspired by the remote-sensing community, we propose an alternative to cross-correlation that is fully data-driven: it decomposes the data into a set of individual spectra and their corresponding spatial distributions, a strategy called spectral unmixing. We used an orthogonal subspace projection to identify the most distinct spectra in the field of view, and then recovered their corresponding spatial distribution maps by inverting the data with non-negative least squares. The performance of our method was evaluated and compared with cross-correlation using simulated medium-resolution hyperspectral data from the ELT/HARMONI integral field spectrograph. We show that spectral unmixing effectively leads to a planet detection based solely on spectral dissimilarities, at significantly reduced computational cost. The extracted spectrum holds significant signatures of the planet while not being perfectly separated from residual starlight. The sensitivity of the supervised cross-correlation is three to four times higher than that of unsupervised spectral unmixing, but this gap is biased toward the former because the injected and correlated spectra match perfectly. The algorithm was furthermore vetted on real VLT/SINFONI data of the beta Pictoris system.
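The non-negative least-squares inversion at the heart of spectral unmixing can be sketched in numpy with a projected-gradient solver. The endmember spectra and noise level below are invented for illustration; in practice the endmembers come from the subspace projection, and a library routine such as scipy's `nnls` would typically do the inversion:

```python
import numpy as np

def nnls_pg(A, b, iters=500):
    """Projected-gradient non-negative least squares: min ||A x - b||, x >= 0."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - lr * A.T @ (A @ x - b))
    return x

# Two toy endmember spectra: a smooth stellar continuum and a narrow
# planetary feature (both invented).
rng = np.random.default_rng(2)
bands = np.linspace(0.0, 1.0, 50)
star = 1.0 + 0.5 * bands
planet = np.exp(-((bands - 0.5) / 0.05) ** 2)
S = np.stack([star, planet])                    # (n_endmembers, n_bands)

# One spaxel: a noisy non-negative mixture of the two endmembers.
true_abund = np.array([0.9, 0.3])
y = true_abund @ S + rng.normal(0.0, 0.01, bands.size)

abund = nnls_pg(S.T, y)                         # recovered abundances
print(abund)
```

Solving this per spaxel yields one spatial distribution map per endmember, in which a planet shows up as a localized peak in its abundance map.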
Starshade in formation flight with a space telescope is a rapidly maturing technology that would enable imaging and spectral characterization of small planets orbiting nearby stars in the not-too-distant future. While performance models of starshade-assisted exoplanet imaging have been developed and used to design future missions, their results have not been verified from the analyses of synthetic images. Following a rich history of using community data challenges to evaluate image-processing capabilities in astronomy and exoplanet fields, the Starshade Technology Development to TRL5 (S5), a focused technology development activity managed by the NASA Exoplanet Exploration Program, is organizing and implementing a starshade exoplanet data challenge. The purpose of the data challenge is to validate the flow down of requirements from science to key instrument performance parameters and to quantify the required accuracy of noisy background calibration with synthetic images. This data challenge distinguishes itself from past efforts in the exoplanet field in that (1) it focuses on the detection and spectral characterization of small planets in the habitable zones of nearby stars, and (2) it develops synthetic images that simultaneously include multiple background noise terms -- some specific to starshade observations -- including residual starlight, solar glint, exozodiacal light, and detector noise, as well as variability resulting from the starshade's motion and telescope jitter. In this paper, we provide an overview of the design and rationale of the data challenge. Working with data challenge participants, we expect to achieve improved understanding of the noise budget and background calibration in starshade-assisted exoplanet observations in the context of both Starshade Rendezvous with Roman and HabEx.
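The flow-down from background terms to detectability can be sketched with a standard CCD-style signal-to-noise budget. All parameter names and values below are illustrative placeholders, not the S5 requirement model:

```python
import math

def snr(planet_rate, background_rates, read_noise, dark_rate, t, npix=1):
    """Photon-limited SNR for a planet signal over summed background terms
    (e.g. residual starlight, solar glint, exozodiacal light) plus detector
    noise. Rates in counts/s, t in seconds. A textbook sketch only."""
    signal = planet_rate * t
    variance = (planet_rate + sum(background_rates)) * t \
        + npix * (dark_rate * t + read_noise ** 2)
    return signal / math.sqrt(variance)

# no-background sanity check: SNR = sqrt(N) in the pure photon limit
print(snr(1.0, [0.0], 0.0, 0.0, 100.0))  # 10.0
```

A key point of the data challenge is that the background rates in such a budget are not known a priori and must be calibrated from the noisy synthetic images themselves.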