By linking widely separated radio dishes, the technique of very long baseline interferometry (VLBI) can greatly enhance angular resolution in radio astronomy. However, at any given moment, a VLBI array only sparsely samples the information necessary to form an image. Conventional imaging techniques partially overcome this limitation by making the assumption that the observed cosmic source structure does not evolve over the duration of an observation, which enables VLBI networks to accumulate information as the Earth rotates and changes the projected array geometry. Although this assumption is appropriate for nearly all VLBI, it is almost certainly violated for submillimeter observations of the Galactic Center supermassive black hole, Sagittarius A* (Sgr A*), which has a gravitational timescale of only ~20 seconds and exhibits intra-hour variability. To address this challenge, we develop several techniques to reconstruct dynamical images (movies) from interferometric data. Our techniques are applicable to both single-epoch and multi-epoch variability studies, and they are suitable for exploring many different physical processes including flaring regions, stable images with small time-dependent perturbations, steady accretion dynamics, or kinematics of relativistic jets. Moreover, dynamical imaging can be used to estimate time-averaged images from time-variable data, eliminating many spurious image artifacts that arise when using standard imaging methods. We demonstrate the effectiveness of our techniques using synthetic observations of simulated black hole systems and 7mm Very Long Baseline Array observations of M87, and we show that dynamical imaging is feasible for Event Horizon Telescope observations of Sgr A*.
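The abstract above does not spell out the dynamical-imaging regularizers, but the core idea can be illustrated compactly. The sketch below is a minimal toy, not the authors' code: array sizes, the coupling weight lam, and the gradient-descent step are invented. Each frame of a 1-D "movie" is fit to its own sparse Fourier samples, while a quadratic penalty on frame-to-frame differences lets neighboring frames share information, one simple way to encode a stable image with small time-dependent perturbations.

```python
# Minimal toy sketch of dynamical imaging (not the authors' code): each frame
# is fit to its own sparse Fourier samples, while a quadratic penalty on
# frame-to-frame differences couples the frames. npix, nvis, lam and the
# step size are invented toy parameters.
import numpy as np

rng = np.random.default_rng(0)
npix, nframes, nvis = 32, 8, 10

# Ground-truth "movie": a Gaussian blob drifting across a 1-D field of view.
x0 = np.arange(npix)
truth = np.array([np.exp(-0.5 * ((x0 - 10 - t) / 2.0) ** 2)
                  for t in range(nframes)])

# Each frame is observed on its own sparse set of spatial frequencies,
# mimicking the snapshot coverage of a rotating array.
F = np.fft.fft(np.eye(npix)) / np.sqrt(npix)            # unitary DFT matrix
masks = [rng.choice(npix, nvis, replace=False) for _ in range(nframes)]
data = [F[m] @ truth[t] for t, m in enumerate(masks)]   # noiseless visibilities

def grad(frames, lam=1.0):
    """Gradient of sum_t chi2_t + lam * sum_t ||frame_{t+1} - frame_t||^2."""
    g = np.zeros_like(frames)
    for t, m in enumerate(masks):
        resid = F[m] @ frames[t] - data[t]
        g[t] += 2.0 * np.real(F[m].conj().T @ resid)    # data term
    diff = frames[1:] - frames[:-1]                     # temporal coupling
    g[:-1] -= 2.0 * lam * diff
    g[1:] += 2.0 * lam * diff
    return g

frames = np.zeros((nframes, npix))
for _ in range(5000):                                   # plain gradient descent
    frames -= 0.05 * grad(frames)

print("per-frame errors:", np.round(np.linalg.norm(frames - truth, axis=1), 3))
```

In this toy, making lam very large forces all frames to coincide, recovering a single static image fit to every epoch at once, which illustrates the sense in which dynamical imaging can also estimate time-averaged images from time-variable data.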
We propose a new approach, based on Hanbury Brown and Twiss intensity interferometry, to transform a Cherenkov telescope into the equivalent of a diffraction-limited optical telescope. We show that, using photonics components borrowed from quantum-optical applications, we can recover spatial details of the observed source down to the diffraction limit of the Cherenkov telescope, set by its diameter at the mean wavelength of observation. To do so, we propose to apply aperture-synthesis techniques to pairwise and triple correlations of sub-pupil intensities, in order to reconstruct the image of a celestial source from its Fourier moduli and phase information despite atmospheric turbulence. We examine the sensitivity of the method, i.e. its limiting magnitude, and its implementation on existing or future arrays of high-energy Cherenkov telescopes. We show that, despite its poor optical quality compared with the extremely large optical telescopes under construction, a Cherenkov telescope can provide diffraction-limited imaging of celestial sources, in particular at visible wavelengths, down to the violet.
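The pairwise correlation step rests on the Hanbury Brown-Twiss relation for thermal light, g2 = <I1 I2>/(<I1><I2>) = 1 + |V|^2, which ties the intensity correlation between two sub-pupils to the squared modulus of the source visibility V. The following toy simulation (the value of V and the sample count are assumptions for illustration, not from the paper) checks this relation numerically:

```python
# Toy numerical check (not from the paper) of the Hanbury Brown-Twiss relation
# g2 = <I1 I2> / (<I1><I2>) = 1 + |V|^2 underlying intensity interferometry:
# correlating intensities at two sub-pupils recovers the squared modulus of
# the source's complex visibility V. V and nsamp are invented toy values.
import numpy as np

rng = np.random.default_rng(1)
V = 0.6          # assumed visibility between the two sub-pupils
nsamp = 200_000  # independent coherence-time samples of the thermal field

# Thermal (chaotic) light: the fields at the two apertures are circular
# complex Gaussians with correlation coefficient V.
e1 = (rng.normal(size=nsamp) + 1j * rng.normal(size=nsamp)) / np.sqrt(2)
eu = (rng.normal(size=nsamp) + 1j * rng.normal(size=nsamp)) / np.sqrt(2)
e2 = V * e1 + np.sqrt(1 - V**2) * eu   # field at the second aperture

i1, i2 = np.abs(e1) ** 2, np.abs(e2) ** 2
g2 = np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))
print(f"measured g2 = {g2:.4f}, expected 1 + |V|^2 = {1 + V**2:.4f}")
```

The triple correlation plays the complementary role in the scheme above, supplying Fourier phase information that survives atmospheric turbulence.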
We present a flexible code for imaging from bispectrum and squared-visibility data. By using a simulated annealing method, we limit the probability of converging to local chi-squared minima, as can occur when traditional imaging methods are applied to data sets with limited phase information. We present results from our code on a simulated data set under a number of regularization schemes, including maximum entropy. Using the statistical properties of the Markov chain Monte Carlo ensemble of images, we show how this code can place statistical limits on image features such as unseen binary companions.
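As a hedged illustration of the annealing strategy described above, the toy sketch below (a re-implementation of the general idea, not the published code; pixel counts, element counts, and the cooling schedule are invented) represents the image as discrete flux elements and accepts or rejects random element moves with the Metropolis criterion, using only squared-visibility data with no phase information:

```python
# Toy re-implementation of the annealing idea (not the published code):
# the image is nelem discrete flux elements; each step proposes moving one
# element and applies the Metropolis acceptance rule on the misfit against
# squared-visibility data. All parameter values are invented.
import numpy as np

rng = np.random.default_rng(2)
npix, nelem = 24, 50

# Ground truth: an unequal binary, observed only through |V|^2 (no phases).
truth = np.zeros(npix)
truth[8], truth[15] = 0.7, 0.3
F = np.fft.fft(np.eye(npix))                 # rows = sampled Fourier modes
uu = np.arange(1, 12)                        # sampled (integer) baselines
v2_data = np.abs(F[uu] @ truth) ** 2         # normalized squared visibilities

pos = rng.integers(npix, size=nelem)         # flux elements of 1/nelem each

def chi2(positions):
    img = np.bincount(positions, minlength=npix) / nelem
    return np.sum((np.abs(F[uu] @ img) ** 2 - v2_data) ** 2)

c = chi2(pos)
for step in range(40_000):
    T = 0.1 * 0.9999 ** step                 # geometric cooling schedule
    k = rng.integers(nelem)
    trial = pos.copy()
    trial[k] = (trial[k] + rng.integers(-2, 3)) % npix   # local move
    c_new = chi2(trial)
    if c_new < c or rng.random() < np.exp((c - c_new) / T):
        pos, c = trial, c_new

print("final squared-visibility misfit:", c)
```

Because the accepted images form a Markov chain, statistics accumulated over many such chains are what allow limits to be placed on features like faint companions.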
The number of publications presenting aperture-synthesis images based on optical long-baseline interferometry has recently increased thanks to easier access to visible and infrared interferometers. The technique has now reached a level of technical maturity that opens new avenues for numerous astrophysical topics requiring milli-arcsecond, model-independent imaging. Our motivation in writing this paper was twofold: 1) to review and publicize emblematic excerpts of the impressive corpus accumulated in the field of optical interferometric image reconstruction; 2) to discuss future prospects for the technique through four representative astrophysical science cases, chosen to review the potential benefits of optical long-baseline interferometers. For this second goal we simulated interferometric data for the selected astrophysical environments and used state-of-the-art codes to produce the reconstructed images reachable with current or soon-to-be-available facilities. The image reconstruction process was blind, in the sense that the reconstructors had no knowledge of the input brightness distributions. We discuss the impact of optical interferometry in these four astrophysical fields. We show that image reconstruction software successfully provides accurate morphological information on a variety of astrophysical topics, and we review the current strengths and weaknesses of such reconstructions. We investigate how image reconstruction and image quality could be improved, possibly by upgrading current facilities. We finally argue that optical interferometers with 6 to 10 telescopes and their corresponding instrumentation, existing or to come, should be well suited to provide images of complex scenes.
We consider the probing of astrophysical signals with radio interferometers that have a small field of view and baselines with a non-negligible, constant component in the pointing direction. In this context, the measured visibilities essentially correspond to noisy and incomplete Fourier coverage of the product of the planar signal with a linear chirp modulation. In light of the recent theory of compressed sensing, and with the aim of defining the best possible imaging techniques for sparse signals, we analyze the resulting spread-spectrum phenomenon and suggest its universality with respect to the sparsity dictionary. Our results rely on theoretical considerations of the mutual coherence between the sparsity and sensing dictionaries, as well as on numerical simulations.
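The mutual-coherence argument can be made concrete with a small numerical experiment. In the sketch below (the dimension N and chirp rate are toy choices for illustration), the sensing vectors are Fourier modes and the sparsity dictionary is also the Fourier basis, the worst case without modulation; a unit-modulus linear chirp spreads every atom across all frequencies and drives the coherence down toward its optimal value 1/sqrt(N):

```python
# Toy illustration (assumed parameters, not the paper's simulations) of the
# spread-spectrum phenomenon: mutual coherence mu = max_ij |<phi_i, psi_j>|
# between Fourier sensing vectors phi_i and a sparsity dictionary psi_j,
# with and without a linear chirp modulation. For a signal sparse in the
# Fourier basis, mu = 1 without the chirp (fully coherent, worst case); the
# chirp spreads each atom over all frequencies, driving mu toward 1/sqrt(N).
import numpy as np

N = 256
F = np.fft.fft(np.eye(N), norm="ortho")    # orthonormal sensing basis (rows)
Psi = F.conj().T                           # sparsity dictionary: Fourier atoms
x = np.arange(N)
chirp = np.exp(1j * np.pi * x**2 / N)      # unit-modulus linear chirp c(x)
C = np.diag(chirp)

mu_plain = np.max(np.abs(F @ Psi))         # no modulation
mu_chirp = np.max(np.abs(F @ C @ Psi))     # with chirp modulation
print(f"mu without chirp: {mu_plain:.3f}")
print(f"mu with chirp:    {mu_chirp:.3f}  (1/sqrt(N) = {1/np.sqrt(N):.3f})")
```

Because the chirp has unit modulus, the same spectral spreading acts on the atoms of any sparsity dictionary, which is the intuition behind the universality suggested in the abstract.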
In radio-interferometric imaging, the gridding procedure of convolving visibilities with a chosen gridding function is needed to resample visibility values onto uniformly spaced grid points. We propose a parameterised family of least-misfit gridding functions which minimise an upper bound on the difference between the DFT and FFT dirty images for a given gridding support width and image cropping ratio. Compared with the widely used spheroidal function at similar parameters, these functions provide more than 100 times better alias suppression and RMS misfit reduction over the usable dirty map. We discuss how appropriate parameter selection and tabulation of these functions allow a balance between accuracy, computational cost, and storage size. Although it is possible to reduce the errors introduced in the gridding or degridding process to the level of machine precision, accuracy comparable to that achieved by CASA requires only a lookup table with 300 entries and a support width of 3, allowing greatly reduced computational cost for a given level of performance.
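To make the gridding pipeline concrete, here is a minimal 1-D sketch. The parameters echo the abstract's lookup table of 300 entries and support width of 3, but a Gaussian kernel with an ad hoc width stands in for the least-misfit functions, whose coefficients the paper derives. Visibilities at non-integer baselines are convolved onto a regular grid with the tabulated kernel, FFT'd to a dirty image, grid-corrected, and compared with the exact DFT dirty image over a cropped map:

```python
# Minimal 1-D gridding sketch: visibilities at non-integer baselines are
# convolved onto a regular grid with a tabulated kernel (support width W = 3,
# lookup table of 300 entries, echoing the abstract), FFT'd to a dirty image,
# grid-corrected, and compared with the exact DFT dirty image. A Gaussian
# kernel with an ad hoc width stands in for the least-misfit functions.
import numpy as np

rng = np.random.default_rng(3)
ngrid, nvis, W, ntab = 256, 400, 3, 300

# Random baselines (in grid cells) and unit-amplitude visibilities.
u = rng.uniform(W, ngrid - W, size=nvis)
vis = np.exp(2j * np.pi * rng.uniform(size=nvis))

# Tabulated kernel: ntab samples of the kernel over its support [-W/2, W/2].
tab_x = np.linspace(-W / 2, W / 2, ntab)
tab = np.exp(-0.5 * (tab_x / 0.55) ** 2)         # ad hoc Gaussian width

grid = np.zeros(ngrid, dtype=complex)
for uk, vk in zip(u, vis):
    k0 = int(np.floor(uk - W / 2)) + 1           # grid cells under the support
    for k in range(k0, k0 + W):
        idx = int(round((k - uk + W / 2) / W * (ntab - 1)))  # nearest entry
        grid[k] += vk * tab[idx]

# FFT dirty image on coordinates x_j = j/ngrid - 1/2 (the (-1)^m factor
# re-centers the image), versus the exact DFT dirty image from the raw data.
x = np.arange(ngrid) / ngrid - 0.5
sign = (-1.0) ** np.arange(ngrid)
dirty_fft = np.real(np.fft.ifft(grid * sign))
dirty_dft = np.real(np.exp(2j * np.pi * np.outer(x, u)) @ vis) / ngrid

# Grid correction: divide out the kernel's Fourier transform across the map.
taper = np.real(np.exp(2j * np.pi * np.outer(x, tab_x)) @ tab) * (W / ntab)
dirty_corr = dirty_fft / taper

crop = slice(ngrid // 4, 3 * ngrid // 4)         # keep the central half only
err = np.sqrt(np.mean((dirty_corr[crop] - dirty_dft[crop]) ** 2))
print(f"RMS misfit over the cropped dirty map: {err:.2e}")
```

Swapping the ad hoc Gaussian for the paper's least-misfit coefficients, at the same table size and support width, is what yields the alias suppression and accuracy quoted in the abstract.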