Both NASA's Solar Dynamics Observatory (SDO) and the JAXA/NASA Hinode mission include spectropolarimetric instruments designed to measure the photospheric magnetic field. SDO's Helioseismic and Magnetic Imager (HMI) emphasizes full-disk, high-cadence data acquisition at good spatial resolution, while Hinode's Solar Optical Telescope Spectro-Polarimeter (SOT-SP) focuses on high spatial resolution and spectral sampling at the cost of a limited field of view and a slower temporal cadence. This work introduces a deep-learning system named SynthIA (Synthetic Inversion Approximation) that can enhance both missions by capturing the best of each instrument's characteristics. We use SynthIA to produce a new magnetogram data product, SynodeP (Synthetic Hinode Pipeline), that mimics magnetograms from the higher spectral-resolution Hinode/SOT-SP pipeline but is derived from the full-disk, high-cadence, lower spectral-resolution SDO/HMI Stokes observations. Results on held-out data show that SynodeP is in good agreement with the Hinode/SOT-SP pipeline.
The Helioseismic and Magnetic Imager (HMI) onboard NASA's Solar Dynamics Observatory (SDO) produces estimates of the photospheric magnetic field that are a critical input to many space weather modelling and forecasting systems. The magnetogram products produced by HMI and its analysis pipeline are the result of a per-pixel optimization that estimates solar atmospheric parameters and minimizes the disagreement between a synthesized and an observed Stokes vector. In this paper, we introduce a deep learning-based approach that can emulate the existing HMI pipeline results two orders of magnitude faster than the current pipeline algorithms. Our system is a U-Net trained on input Stokes vectors and their accompanying optimization-based VFISV inversions.
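The emulation idea in this abstract (replace a slow per-pixel optimization with a fast learned map trained on the optimizer's own outputs) can be sketched with a toy problem. Everything below is illustrative: the four-component "Stokes" forward model, the grid-search "pipeline", and the ridge-regression stand-in for the U-Net are hypothetical, not the actual VFISV physics or the paper's network.

```python
import numpy as np

# Toy sketch of pipeline emulation (all names and models are hypothetical).
rng = np.random.default_rng(0)

# "Ground truth" atmospheric parameter per pixel (e.g. a field-like scalar).
n_pix = 5000
truth = rng.uniform(0.0, 2.0, n_pix)

# Hypothetical forward model: synthesize a 4-component "Stokes vector".
def synthesize(p):
    return np.stack([np.cos(p), np.sin(p), p ** 2, np.ones_like(p)], axis=-1)

stokes = synthesize(truth) + 0.01 * rng.standard_normal((n_pix, 4))

# Pipeline analogue: a slow per-pixel fit of the forward model
# (brute-force grid search standing in for an iterative optimizer).
grid = np.linspace(0.0, 2.0, 401)
grid_stokes = synthesize(grid)                        # (401, 4)

def invert_pixel(s):
    return grid[np.argmin(((grid_stokes - s) ** 2).sum(axis=1))]

pipeline_out = np.array([invert_pixel(s) for s in stokes])

# Emulator analogue: one vectorized map learned from the pipeline's own
# outputs (ridge regression here, standing in for the U-Net).
X = np.hstack([stokes, np.ones((n_pix, 1))])          # add a bias column
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(5), X.T @ pipeline_out)
emulated = X @ w

# The emulator reproduces the pipeline's answers in a single matrix
# product instead of one optimization per pixel.
print(np.abs(emulated - pipeline_out).mean())
```

The key point the sketch preserves is that the emulator is trained against the pipeline's outputs, not the unknown truth, so it inherits the pipeline's behaviour at a fraction of the per-pixel cost.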
This paper describes the main characteristics of the Virtual Observatory as a research infrastructure in Astronomy, and identifies those fields in which it can be of help to the community of stellar spectral libraries.
We develop and apply an enhanced regularization algorithm, used in RHESSI X-ray spectral analysis, to constrain the ill-posed inverse problem of determining the DEM from solar observations. We demonstrate this computationally fast technique applied to a range of DEM models simulating broadband imaging data from SDO/AIA and high-resolution line spectra from Hinode/EIS, as well as to actual active region observations with Hinode/EIS and XRT. As this regularization method naturally provides both vertical and horizontal (temperature-resolution) error bars, we are able to test the role of uncertainties in the data and response functions. The regularization method successfully recovers the DEM from simulated data for a variety of model DEMs (a single Gaussian, multiple Gaussians, and CHIANTI DEM models). It is able to do this, at best, over four orders of magnitude in DEM space, but typically over two orders of magnitude from the peak emission. The combination of horizontal and vertical error bars and the regularized solution matrix allows us to easily determine the accuracy and robustness of the regularized DEM. We find that the typical range of the horizontal errors is $\Delta\log T \approx 0.1$-$0.5$, and that this depends on the observed signal-to-noise ratio, the uncertainty in the response functions, and the source model and temperature. With Hinode/EIS, an uncertainty of 20% greatly broadens the regularized DEMs for both the Gaussian and CHIANTI models, although information about the underlying DEMs is still recoverable. When applied to real active region observations with Hinode/EIS and XRT, the regularization method recovers a DEM similar to that found via an MCMC method, but in considerably less computational time.
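The core of a regularized DEM inversion of the kind described above can be sketched in a few lines: minimize $\|K x - g\|^2 + \lambda \|x\|^2$ for the binned DEM $x$ given channel data $g$ and a response matrix $K$. The response kernels, temperature grid, single-Gaussian model DEM, and $\lambda$ below are all invented for illustration (a real application would use instrument response functions, e.g. SDO/AIA or Hinode/EIS contribution functions, and a principled choice of regularization parameter and order).

```python
import numpy as np

# Sketch of zeroth-order Tikhonov-regularized DEM recovery.
# All responses and the model DEM are hypothetical, for illustration only.
rng = np.random.default_rng(1)

logT = np.linspace(5.5, 7.5, 40)                  # temperature bins
dem_true = np.exp(-((logT - 6.4) / 0.15) ** 2)    # single-Gaussian model DEM

# Hypothetical broadband responses: smooth Gaussian kernels over logT,
# one row per imaging channel (6 channels x 40 temperature bins).
centers = np.array([5.8, 6.1, 6.4, 6.7, 7.0, 7.3])
K = np.exp(-((logT[None, :] - centers[:, None]) / 0.3) ** 2)

# Simulated channel intensities with ~1% Gaussian noise.
g = K @ dem_true
g_noisy = g + 0.01 * np.linalg.norm(g) * rng.standard_normal(g.size)

# Regularized solution of the ill-posed problem (6 data points,
# 40 unknowns): minimize ||K x - g||^2 + lam ||x||^2, solved via the
# normal equations x = (K^T K + lam I)^(-1) K^T g.
lam = 1e-2
dem_reg = np.linalg.solve(K.T @ K + lam * np.eye(logT.size), K.T @ g_noisy)

# The recovered DEM should peak near the true peak temperature.
print(logT[np.argmax(dem_reg)], logT[np.argmax(dem_true)])
```

The regularization term is what makes the 6-equations-for-40-unknowns system solvable: it selects the smooth, small-norm solution consistent with the data, which is why the recovered peak temperature is stable against the added noise.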
The Virtual Observatory (VO) is a new technology in astronomical research that allows the seamless processing and analysis of heterogeneous data obtained from a number of distributed data archives. It may also provide the astronomical community with powerful computational and data-processing on-line services, replacing the custom scientific code run on users' computers. Despite its benefits, VO technology has so far been little exploited in stellar spectroscopy. As an example of possible evolution in this field, we present an experimental web-based service for the disentangling of spectra based on the code KOREL. This code, developed by P. Hadrava, enables Fourier disentangling and line-strength photometry, i.e. the simultaneous decomposition of spectra of multiple stars and solving for orbital parameters, line-profile variability, or other physical parameters of the observed objects. We discuss the benefits of the service-oriented approach from the point of view of both developers and users, and give examples of a possible user-friendly implementation of spectra-disentangling methods as standard tools of the Virtual Observatory.
The Europlanet-2020 programme, which ended on Aug 31st, 2019, included an activity called VESPA (Virtual European Solar and Planetary Access), which focused on adapting Virtual Observatory (VO) techniques to handle Planetary Science data. This paper describes some aspects of VESPA at the end of this 4-year development phase and at the onset of the newly selected Europlanet-2024 programme starting in 2020. The main objectives of VESPA are to facilitate searches in both big archives and small databases, to enable data analysis by providing simple data access and online visualization functions, and to allow research teams to publish derived data in an interoperable environment as easily as possible. VESPA encompasses a wide scope, including surfaces, atmospheres, magnetospheres and planetary plasmas, small bodies, heliophysics, exoplanets, and spectroscopy in the solid phase. The system relies in particular on standards and tools developed for the Astronomy VO (IVOA) and extends them where required to handle the specificities of Solar System studies. It also aims at making the VO compatible with tools and protocols developed in other contexts, for instance GIS for planetary surfaces, or time-series tools for plasma-related measurements. An essential part of the activity is to publish a significant amount of high-quality data in this system, with a focus on derived products resulting from data analysis or simulations.