
The NASA Exoplanet Archive: Data and Tools for Exoplanet Research

Posted by Rachel Akeson
Publication date: 2013
Research field: Physics
Paper language: English





We describe the contents and functionality of the NASA Exoplanet Archive, a database and tool set funded by NASA to support astronomers in the exoplanet community. The current content of the database includes interactive tables containing properties of all published exoplanets, Kepler planet candidates, threshold-crossing events, data validation reports and target stellar parameters, light curves from the Kepler and CoRoT missions and from several ground-based surveys, and spectra and radial velocity measurements from the literature. Tools provided to work with these data include a transit ephemeris predictor, both for single planets and for observing locations, light curve viewing and normalization utilities, and a periodogram and phased light curve service. The archive can be accessed at http://exoplanetarchive.ipac.caltech.edu.
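The archive's tables can also be queried programmatically. As an illustration, the sketch below queries the archive's Table Access Protocol (TAP) endpoint for short-period confirmed planets. The `ps` (Planetary Systems) table and the column names follow the archive's current API documentation, which postdates this 2013 description, so the schema shown is an assumption rather than part of the original paper.

```python
import requests

# TAP synchronous-query endpoint of the NASA Exoplanet Archive.
TAP_URL = "https://exoplanetarchive.ipac.caltech.edu/TAP/sync"

# ADQL query against the Planetary Systems ("ps") table: names, host stars,
# orbital periods, and radii of confirmed planets with periods under 10 days.
query = """
SELECT pl_name, hostname, pl_orbper, pl_rade
FROM ps
WHERE default_flag = 1 AND pl_orbper < 10
"""

resp = requests.get(TAP_URL, params={"query": query, "format": "csv"})
resp.raise_for_status()
print(resp.text[:500])  # first few rows of the CSV result
```

Restricting to `default_flag = 1` keeps only each planet's default parameter set, avoiding duplicate rows where several published solutions exist for the same planet.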




Read also

ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations) is a VLT ultra-stable high-resolution spectrograph that will be installed at Paranal Observatory in Chile at the end of 2017 and offered to the community by 2018. The spectrograph will be located at the Combined-Coude Laboratory of the VLT and will be able to operate with one or (simultaneously) several of the four 8.2 m Unit Telescopes (UT) through four optical Coude trains. Combining efficiency and extreme spectroscopic precision, ESPRESSO is expected to gain about two magnitudes with respect to its predecessor HARPS. We aim to improve the instrumental radial-velocity precision to reach the 10 cm s$^{-1}$ level, thus opening the possibility to explore new frontiers in the search for Earth-mass exoplanets in the habitable zone of quiet, nearby G to M dwarfs. ESPRESSO will certainly be an important development step towards high-precision ultra-stable spectrographs on the next generation of giant telescopes such as the E-ELT.
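To see why 10 cm s$^{-1}$ is the benchmark for Earth analogues, the standard radial-velocity semi-amplitude formula can be evaluated directly. The short sketch below (not from the paper) computes K for an Earth-mass planet on a one-year, circular, edge-on orbit around a solar-mass star:

```python
import math

# Radial-velocity semi-amplitude for a circular, edge-on orbit:
#   K = (2*pi*G / P)**(1/3) * m_p / (M_star + m_p)**(2/3)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
M_earth = 5.972e24     # Earth mass, kg

P = 365.25 * 86400.0   # one-year orbital period, seconds
m_p = M_earth          # planet mass (sin i = 1 assumed)
M_star = M_sun

K = (2 * math.pi * G / P) ** (1 / 3) * m_p / (M_star + m_p) ** (2 / 3)
print(f"K = {100 * K:.1f} cm/s")  # ~9 cm/s, just below ESPRESSO's 10 cm/s goal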
The Exoplanet Imaging Data Challenge is a community-wide effort meant to offer a platform for a fair and common comparison of image processing methods designed for exoplanet direct detection. For this purpose, it gathers on a dedicated repository (Zenodo) data from several high-contrast ground-based instruments worldwide in which we injected synthetic planetary signals. The data challenge is hosted on the CodaLab competition platform, where participants can upload their results. The specifications of the data challenge are published on our website. The first phase, launched on the 1st of September 2019 and closed on the 1st of October 2020, consisted of detecting point sources in two types of data sets common in the field of high-contrast imaging: data taken in pupil-tracking mode at one wavelength (subchallenge 1, also referred to as ADI) and multispectral data taken in pupil-tracking mode (subchallenge 2, also referred to as ADI mSDI). In this paper, we describe the approach, organisational lessons learnt, and current limitations of the data challenge, as well as preliminary results of the participants' submissions for this first phase. In the future, we plan to provide permanent access to the standard library of data sets and metrics in order to guide the validation and support the publication of innovative image processing algorithms dedicated to high-contrast imaging of planetary systems.
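For orientation, subchallenge 1's pupil-tracking data are typically reduced with some flavour of angular differential imaging. The sketch below is a generic classical-ADI reduction (median PSF subtraction plus derotation), not any participant's actual pipeline; the rotation sign convention is instrument-dependent and assumed here.

```python
import numpy as np
from scipy.ndimage import rotate

def classical_adi(cube, parangs):
    """Illustrative classical ADI reduction.

    cube    : (n_frames, ny, nx) image cube taken in pupil-tracking mode
    parangs : parallactic angle of each frame, in degrees
    """
    # The quasi-static starlight pattern is estimated as the temporal median:
    # the star stays fixed while any planet rotates through the frames.
    psf_model = np.median(cube, axis=0)
    residuals = cube - psf_model

    # Derotate each residual frame to a common sky orientation, so the
    # planet signal stacks up while residual speckle noise averages down.
    derotated = np.stack([
        rotate(frame, -ang, reshape=False, order=1)
        for frame, ang in zip(residuals, parangs)
    ])
    return np.median(derotated, axis=0)
```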
exoplanet is a toolkit for probabilistic modeling of astronomical time series data, with a focus on observations of exoplanets, using PyMC3 (Salvatier et al., 2016). PyMC3 is a flexible and high-performance model-building language and inference engine that scales well to problems with a large number of parameters. exoplanet extends PyMC3's modeling language to support many of the custom functions and probability distributions required when fitting exoplanet datasets or other astronomical time series. While it has been used for other applications, such as the study of stellar variability, the primary purpose of exoplanet is the characterization of exoplanets or multiple star systems using time-series photometry, astrometry, and/or radial velocity. In particular, the typical use case would be to use one or more of these datasets to place constraints on the physical and orbital parameters of the system, such as planet mass or orbital period, while simultaneously taking into account the effects of stellar variability.
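A typical use looks like the sketch below, which fits a single transiting planet to a synthetic light curve. The `KeplerianOrbit`, `ImpactParameter`, and `LimbDarkLightCurve` names follow the package's documented examples around v0.4; exact signatures vary between versions.

```python
import numpy as np
import pymc3 as pm
import exoplanet as xo

t = np.linspace(0, 10, 500)                 # observation times, days
yerr = 5e-4
np.random.seed(42)
y = np.random.normal(0.0, yerr, t.size)     # placeholder data: flat curve + noise

with pm.Model() as model:
    # Physical and orbital parameters with simple priors.
    period = pm.Lognormal("period", mu=np.log(3.0), sigma=0.1)
    t0 = pm.Normal("t0", mu=1.5, sigma=0.1)          # time of mid-transit
    r = pm.Uniform("r", lower=0.01, upper=0.1)       # planet/star radius ratio
    b = xo.distributions.ImpactParameter("b", ror=r)

    orbit = xo.orbits.KeplerianOrbit(period=period, t0=t0, b=b)
    u = [0.3, 0.2]                                   # fixed quadratic limb darkening
    lc = xo.LimbDarkLightCurve(u).get_light_curve(orbit=orbit, r=r, t=t)

    # Gaussian likelihood; summing over the planet axis gives the total model.
    pm.Normal("obs", mu=pm.math.sum(lc, axis=-1), sigma=yerr, observed=y)
    trace = pm.sample(tune=1000, draws=1000, cores=2)
```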
The direct detection of exoplanets with high-contrast instruments can be boosted with high spectral resolution. For integral field spectrographs yielding hyperspectral data, this means that the field of view consists of diffracted starlight spectra and a spatially localized planet. Analysis usually relies on cross-correlation with theoretical spectra. In a purely blind-search context, this supervised strategy can be biased by model mismatch and/or be computationally inefficient. Using an approach inspired by the remote-sensing community, we aim to propose an alternative to cross-correlation that is fully data-driven and decomposes the data into a set of individual spectra and their corresponding spatial distributions. This strategy is called spectral unmixing. We used an orthogonal subspace projection to identify the most distinct spectra in the field of view. These spectra were then used to break the original hyperspectral images into their corresponding spatial distribution maps via non-negative least squares. The performance of our method was evaluated and compared with cross-correlation using simulated medium-resolution hyperspectral data from the ELT/HARMONI integral field spectrograph. We show that spectral unmixing effectively leads to a planet detection based solely on spectral dissimilarities, at significantly reduced computational cost. The extracted spectrum holds significant signatures of the planet while not being perfectly separated from residual starlight. The sensitivity of the supervised cross-correlation is three to four times higher than that of unsupervised spectral unmixing, although this gap is biased toward the former because the injected and correlated spectra match perfectly. The algorithm was furthermore vetted on real VLT/SINFONI data of the beta Pictoris system.
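The two main steps, endmember selection by orthogonal projection and per-pixel non-negative least-squares inversion, can be sketched in a few lines of NumPy/SciPy. This is a generic illustration of the technique, not the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import nnls

def select_endmembers(cube, k):
    """Greedy OSP-style selection: repeatedly pick the pixel spectrum with
    the largest component orthogonal to the span of those already chosen."""
    nl = cube.shape[0]
    Y = cube.reshape(nl, -1)                          # (n_lambda, n_pix)
    idx = [int(np.argmax(np.linalg.norm(Y, axis=0)))]
    for _ in range(k - 1):
        E = Y[:, idx]
        P = np.eye(nl) - E @ np.linalg.pinv(E)        # projector onto orthogonal complement
        idx.append(int(np.argmax(np.linalg.norm(P @ Y, axis=0))))
    return Y[:, idx]                                  # (n_lambda, k) endmember spectra

def unmix(cube, endmembers):
    """Per-pixel non-negative least squares: Y ~ E @ A with A >= 0."""
    nl, ny, nx = cube.shape
    Y = cube.reshape(nl, -1)
    A = np.stack([nnls(endmembers, y)[0] for y in Y.T])  # (n_pix, k) abundances
    return A.T.reshape(-1, ny, nx)                       # k spatial distribution maps
```

For a cube of N_pix spaxels and k endmembers, the inversion is N_pix small independent NNLS problems, which is what makes the approach cheap compared with cross-correlating against a full template grid.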
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken, expediting follow-up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our data reduction pipelines. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
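The file-indexing layer can be pictured with a minimal schema like the one below. This is a hypothetical sqlite3 stand-in for the MySQL backend described above, with invented table and column names, purely to illustrate the indexing role the database plays:

```python
import sqlite3

# Hypothetical, minimal version of a survey file index; sqlite3 is used
# here only so the sketch is self-contained and runnable.
conn = sqlite3.connect("survey_index.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS files (
        path        TEXT PRIMARY KEY,   -- location of the product (synced to cloud)
        target      TEXT,               -- observed star
        obs_mode    TEXT,               -- spectroscopic / polarimetric / calibration
        obs_date    TEXT,               -- UT date of the raw frame
        reduced_by  TEXT,               -- pipeline and version that produced it
        created_at  TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def index_product(path, target, obs_mode, obs_date, reduced_by):
    """Register a newly reduced product so a web front end can find it."""
    conn.execute(
        "INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?, ?, CURRENT_TIMESTAMP)",
        (path, target, obs_mode, obs_date, reduced_by),
    )
    conn.commit()

index_product("reduced/HR8799_20160921_spec.fits", "HR 8799",
              "spectroscopic", "2016-09-21", "GPI DRP v1.4")
```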