Planetary spatial data returned by spacecraft, including images and higher-order products such as mosaics, controlled basemaps, and digital elevation models (DEMs), are of critical importance to NASA, its commercial partners, and other space agencies. Planetary spatial data are an essential component of basic scientific research and sustained planetary exploration and operations. The Planetary Data System (PDS) performs the essential job of archiving and serving these data, mostly in raw or calibrated form, with less support for higher-order, more ready-to-use products. However, many planetary spatial data remain neither readily accessible to nor usable by the general science user, because specialized skills and tools are needed to process and interpret them from their raw initial state. There is a critical need for planetary spatial data to be more accessible and usable to researchers and stakeholders. A Planetary Spatial Data Infrastructure (PSDI) is a collection of data, tools, standards, policies, and the people that use and engage with them; together, these comprise an overarching support system for planetary spatial data. PSDIs (1) establish effective plans for data acquisition; (2) create and make available higher-order products; and (3) consider long-term planning for correct data acquisition, processing, and serving (including funding). We recommend that Planetary Spatial Data Infrastructures be created for all bodies and key regions in the Solar System. NASA, with guidance from the planetary science community, should follow established data format standards to build foundational and framework products and use those to build and apply PSDIs to all bodies. Establishment of PSDIs in the coming decade is critical for several locations under active or imminent exploration, and for all others it supports future planning and current scientific analysis.
Most planetary radar applications require recording of complex voltages at sampling rates of up to 20 MHz. I describe the design and implementation of a sampling system that has been installed at the Arecibo Observatory, Goldstone Solar System Radar, and Green Bank Telescope. After many years of operation, these data-taking systems have enabled the acquisition of hundreds of data sets, many of which still await publication.
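As a rough, back-of-the-envelope illustration (the sample width and rates below are assumptions for the sketch, not the deployed systems' actual parameters), the sustained data rate implied by complex baseband sampling can be estimated directly from the sample rate:

```python
# Back-of-the-envelope data rate for complex (I/Q) baseband sampling.
# Assumed parameters for illustration only; the systems described above
# may use different sample widths and rates.

sample_rate_hz = 20e6       # 20 MHz complex sampling (upper end quoted above)
bytes_per_component = 2     # assume 16-bit I and 16-bit Q samples

bytes_per_second = sample_rate_hz * 2 * bytes_per_component  # I + Q
print(f"{bytes_per_second / 1e6:.0f} MB/s sustained")        # -> 80 MB/s
print(f"{bytes_per_second * 3600 / 1e9:.0f} GB per hour")    # -> 288 GB/hour
```

At these assumed rates, a single hour-long radar track approaches 300 GB, which is why dedicated recording hardware is needed at each site.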
Planetary radars have obtained unique science measurements of solar system bodies, and they have provided orbit determinations allowing spacecraft to be navigated throughout the solar system. Notable results have been obtained for Venus, Earth's twin, and for small bodies, the constituents of the Sun's debris disk. Together, these results have served as ground truth from the solar system for studies of extrasolar planets. The Nation's planetary radar infrastructure, indeed the world's planetary radar infrastructure, is based on astronomical and deep space telecommunications infrastructure, namely the radar transmitters at the Arecibo Observatory and the Goldstone Solar System Radar, part of NASA's Deep Space Network, along with the Green Bank Telescope as a receiving element. This white paper summarizes the state of this infrastructure and the technical developments that should be sustained to enable continued studies of solar system bodies for comparison and contrast with extrasolar planetary systems. Because planetary radar observations leverage existing infrastructure largely developed for other purposes, only operations and maintenance funding is required, though modest investments could yield more reliable systems; in the case of the Green Bank Telescope, additional funding for operations is required.
The Transiting Exoplanet Survey Satellite (TESS), successfully launched on 18 April 2018, will observe nearly the full sky and will provide time-series imaging data in ~27-day-long campaigns. TESS is equipped with four cameras, each with a 24x24 degree field of view. During the first two years of the primary mission, one of these cameras, Camera #1, will observe fields centered at an ecliptic latitude of 18 degrees. While the ecliptic plane itself is not covered during the primary mission, the characteristic scale heights of the main asteroid belt and the Kuiper belt imply that a significant number of small solar system bodies will cross this camera's field of view. To compare the expected information content of TESS and Kepler/K2, we compute the cumulative etendues of the two optical setups. The optical etendues turn out to be roughly comparable; however, the net etendue is significantly larger for TESS, since all of the imaging data in the 30-minute cadence frames are downlinked, rather than only the pre-selected stamps of Kepler/K2. In addition, many principles of the data acquisition and optical setup clearly differ, including the level of confusing background sources, the full-frame integration and cadence, the field-of-view centroid with respect to the apparent position of the Sun, and the durations of the campaigns. As one would expect, TESS will yield time-series photometry, and hence rotational properties, only for brighter objects, but in terms of spatial and phase space coverage this sample will be more homogeneous and more complete. Here we review the main analogies and differences between the Kepler/K2 and TESS missions, focusing on scientific implications and possible yields related to our Solar System.
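The quantity compared above is the optical etendue, the product of collecting area and solid-angle coverage. The sketch below evaluates the instantaneous A * Omega for both instruments using nominal published numbers as assumptions (10.5 cm TESS pupils, a 0.95 m Kepler aperture, ~105 square degrees for Kepler); the cumulative comparison in the text additionally folds in campaign durations, duty cycle, vignetting, and detector fill factor, which this sketch ignores:

```python
import math

# Instantaneous etendue A * Omega in m^2 sr, from nominal (assumed) optics.

DEG2_TO_SR = (math.pi / 180.0) ** 2   # square degrees -> steradians

def etendue(aperture_d_m, fov_deg2, n_cameras=1):
    """Collecting area times solid angle, summed over identical cameras."""
    area = math.pi * (aperture_d_m / 2.0) ** 2
    return n_cameras * area * fov_deg2 * DEG2_TO_SR

tess   = etendue(0.105, 24 * 24, n_cameras=4)  # 10.5 cm pupils, 24x24 deg each
kepler = etendue(0.95, 105)                    # 0.95 m aperture, ~105 deg^2

print(f"TESS   ~ {tess:.2e} m^2 sr")
print(f"Kepler ~ {kepler:.2e} m^2 sr")
```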
The Europlanet-2020 programme, which ended on 31 August 2019, included an activity called VESPA (Virtual European Solar and Planetary Access), which focused on adapting Virtual Observatory (VO) techniques to handle Planetary Science data. This paper describes some aspects of VESPA at the end of this four-year development phase and at the onset of the newly selected Europlanet-2024 programme starting in 2020. The main objectives of VESPA are to facilitate searches both in big archives and in small databases, to enable data analysis by providing simple data access and online visualization functions, and to allow research teams to publish derived data in an interoperable environment as easily as possible. VESPA encompasses a wide scope, including surfaces, atmospheres, magnetospheres and planetary plasmas, small bodies, heliophysics, exoplanets, and spectroscopy in the solid phase. This system relies in particular on standards and tools developed for the Astronomy VO (IVOA) and extends them where required to handle the specificities of Solar System studies. It also aims to make the VO compatible with tools and protocols developed in different contexts, for instance GIS for planetary surfaces, or time-series tools for plasma-related measurements. An essential part of the activity is to publish a significant amount of high-quality data in this system, with a focus on derived products resulting from data analysis or simulations.
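VESPA data services are queried through the EPN-TAP protocol, which standard VO tooling already understands. The sketch below uses pyvo against a hypothetical service endpoint (the URL and schema name are placeholders), relying on the EPN-TAP convention that each service exposes an epn_core view with standardized columns such as target_name, dataproduct_type, and access_url:

```python
# Minimal EPN-TAP query sketch using pyvo (pip install pyvo).
# SERVICE_URL and the schema name are placeholders; substitute the TAP
# endpoint and epn_core table of the VESPA service of interest.

import pyvo

SERVICE_URL = "http://example-vespa-service.org/tap"  # hypothetical endpoint

service = pyvo.dal.TAPService(SERVICE_URL)
results = service.search("""
    SELECT target_name, dataproduct_type, access_url
    FROM some_schema.epn_core
    WHERE target_name = 'Mars'
""")

for row in results:
    print(row["target_name"], row["dataproduct_type"], row["access_url"])
```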
The deluge of data from time-domain surveys is rendering traditional human-guided data collection and inference techniques impractical. We propose a novel approach to data collection for science inference in the era of massive surveys, one that uses value-based metrics to autonomously strategize and coordinate follow-up in real time. We demonstrate the underlying principles in the Recommender Engine For Intelligent Transient Tracking (REFITT), which ingests live alerts from surveys and value-added inputs from data brokers to predict the future behavior of transients and design optimal data augmentation strategies given a set of scientific objectives. The prototype presented in this paper is tested on simulated Rubin Observatory Legacy Survey of Space and Time (LSST) core-collapse supernova (CC SN) light curves from the PLAsTiCC dataset. CC SNe were selected for the initial development phase because they are known to be difficult to classify, with the expectation that any learning techniques developed for them should be at least as effective for other transients. We demonstrate the behavior of REFITT on a random LSST night given ~32000 live CC SNe of interest. The system makes good predictions for the photometric behavior of the events and uses them to plan follow-up using a simple data-driven metric. We argue that machine-directed follow-up maximizes the scientific potential of surveys and follow-up resources by reducing downtime and bias in data collection.
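The abstract does not spell out REFITT's metric; purely as an illustration of what a simple data-driven follow-up value metric can look like, the toy scorer below ranks candidate targets by the model's predicted photometric uncertainty at the epoch of observation, zeroing out targets predicted to be fainter than the telescope's limiting magnitude (all names and numbers are hypothetical):

```python
import numpy as np

# Toy value-based follow-up metric, NOT REFITT's actual algorithm:
# prefer targets where a new photometric point is most informative,
# i.e. where the model's predicted uncertainty is largest, subject to
# the target being bright enough to detect.

def follow_up_value(pred_mag, pred_sigma, limiting_mag):
    """Score = predicted uncertainty, zeroed where the target is too faint."""
    observable = pred_mag < limiting_mag
    return np.where(observable, pred_sigma, 0.0)

# Hypothetical model predictions for three transients at tonight's epoch:
pred_mag   = np.array([21.8, 19.4, 23.5])   # predicted magnitudes
pred_sigma = np.array([0.45, 0.10, 0.90])   # model uncertainty (mag)

scores = follow_up_value(pred_mag, pred_sigma, limiting_mag=22.5)
priority = np.argsort(scores)[::-1]
print("follow-up priority (best first):", priority)   # -> [0 1 2]
```

A production scheduler would also fold in slew costs, classification ambiguity, and the stated scientific objectives, but the ranking step has the same shape.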