
Real-time multiframe blind deconvolution of solar images

Posted by Andres Asensio Ramos
Publication date: 2018
Research field: Physics
Paper language: English
Author: A. Asensio Ramos





The quality of images of the Sun obtained from the ground is severely limited by the perturbing effect of the turbulent Earth's atmosphere. The post-facto correction of the images to compensate for the presence of the atmosphere requires the combination of high-order adaptive optics techniques, fast measurements to freeze the turbulent atmosphere, and very time-consuming blind deconvolution algorithms. Under mild seeing conditions, blind deconvolution algorithms can produce images of astonishing quality. They can be very competitive with those obtained from space, with the huge advantage of the flexibility of the instrumentation thanks to the direct access to the telescope. In this contribution we leverage deep learning techniques to significantly accelerate the blind deconvolution process and produce corrected images at a peak rate of ~100 images per second. We present two different architectures that produce excellent image corrections with noise suppression while maintaining the photometric properties of the images. As a consequence, polarimetric signals can be obtained with standard polarimetric modulation without any significant artifact. With the expected improvements in computer hardware and algorithms, we anticipate that on-site real-time correction of solar images will be possible in the near future.
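The abstract does not specify the network layout, so the following is only a minimal sketch of the general idea: a small convolutional encoder in PyTorch that maps a burst of short-exposure frames of the same field to a single corrected frame. The burst length, layer widths, and residual connection are illustrative assumptions, not the paper's architectures.

```python
# Illustrative sketch (not the paper's architecture): map a burst of
# atmospherically degraded frames to one corrected frame with a small CNN.
import torch
import torch.nn as nn

N_FRAMES = 7  # assumed burst length

class BurstDeconvNet(nn.Module):
    def __init__(self, n_frames=N_FRAMES, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, burst):
        # burst: (batch, n_frames, H, W); predict a correction to the mean frame
        mean_frame = burst.mean(dim=1, keepdim=True)
        return mean_frame + self.net(burst)

model = BurstDeconvNet()
burst = torch.rand(1, N_FRAMES, 128, 128)   # synthetic degraded frames
with torch.no_grad():
    corrected = model(burst)                # (1, 1, 128, 128)
print(corrected.shape)
```

A single forward pass of such a compact network is what makes rates of order ~100 corrected images per second plausible on a GPU, in contrast to iterative blind deconvolution.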




Read also

In order to utilize solar imagery for real-time feature identification and large-scale data science investigations of solar structures, we need maps of the Sun where phenomena, or themes, are labeled. Since solar imagers produce observations every few minutes, it is not feasible to label all images by hand. Here, we compare three machine learning algorithms performing solar image classification using extreme ultraviolet and Hydrogen-alpha images: a maximum likelihood model assuming a single normal probability distribution for each theme from Rigler et al. (2012), a maximum-likelihood model with an underlying Gaussian mixtures distribution, and a random forest model. We create a small database of expert-labeled maps to train and test these algorithms. Due to the ambiguity between the labels created by different experts, a collaborative labeling is used to include all inputs. We find the random forest algorithm performs the best amongst the three algorithms. The advantages of this algorithm are best highlighted in: comparison of outputs to hand-drawn maps; response to short-term variability; and tracking long-term changes on the Sun. Our work indicates that the next generation of solar image classification algorithms would benefit significantly from using spatial structure recognition, compared to only using spectral, pixel-by-pixel brightness distributions.
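A minimal sketch of the pixel-wise classification setup described above, using scikit-learn's RandomForestClassifier. The feature layout (one brightness value per channel for each pixel), the number of channels, and the synthetic data are assumptions for illustration; the paper trains on expert-labeled maps.

```python
# Pixel-by-pixel theme classification from multi-channel brightness vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_channels = 10_000, 7          # e.g. several EUV bands plus H-alpha (assumed)
X = rng.random((n_pixels, n_channels))    # per-pixel brightness vectors
y = rng.integers(0, 4, size=n_pixels)     # themes 0..3 (e.g. coronal hole, filament, ...)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:8000], y[:8000])
print("held-out accuracy:", clf.score(X[8000:], y[8000:]))
```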
Multi-wavelength solar images in the EUV are routinely used for analysing solar features such as coronal holes, filaments, and flares. However, images taken in different bands often look remarkably similar as each band receives contributions coming from regions with a range of different temperatures. This has motivated the search for empirical techniques that may unmix these contributions and concentrate salient morphological features of the corona in a smaller set of less redundant source images. Blind Source Separation (BSS) precisely does this. Here we show how this novel concept also provides new insight into the physics of the solar corona, using observations made by SDO/AIA. The source images are extracted using a Bayesian positive source separation technique. We show how observations made in six spectral bands, corresponding to optically thin emissions, can be reconstructed by linear combination of three sources. These sources have a narrower temperature response and allow for considerable data reduction since the pertinent information from all six bands can be condensed in only one single composite picture. In addition, they give access to empirical temperature maps of the corona. The limitations of the BSS technique and some applications are briefly discussed.
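A minimal sketch of the unmixing idea: approximate the six optically thin bands as non-negative linear combinations of three source images. Non-negative matrix factorization is used here as a stand-in for the paper's Bayesian positive source separation; the array shapes and synthetic data are assumptions.

```python
# Unmix six bands into three non-negative sources (NMF stand-in for BSS).
import numpy as np
from sklearn.decomposition import NMF

n_bands, height, width = 6, 64, 64
cube = np.random.rand(n_bands, height, width)       # synthetic multi-band data

V = cube.reshape(n_bands, -1)                        # one band per row, one pixel per column
model = NMF(n_components=3, init="nndsvda", max_iter=500)
A = model.fit_transform(V)                           # (6, 3) mixing coefficients per band
S = model.components_.reshape(3, height, width)      # three source images

reconstruction = (A @ model.components_).reshape(cube.shape)
print("relative error:", np.linalg.norm(reconstruction - cube) / np.linalg.norm(cube))
```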
Qingyun Sun, David Donoho (2021)
In the blind deconvolution problem, we observe the convolution of an unknown filter and unknown signal and attempt to reconstruct the filter and signal. The problem seems impossible in general, since there are seemingly many more unknowns than knowns. Nevertheless, this problem arises in many application fields; and empirically, some of these fields have had success using heuristic methods -- even economically very important ones, in wireless communications and oil exploration. Today's fashionable heuristic formulations pose non-convex optimization problems which are then attacked heuristically as well. The fact that blind deconvolution can be solved under some repeatable and naturally-occurring circumstances poses a theoretical puzzle. To bridge the gulf between reported successes and theory's limited understanding, we exhibit a convex optimization problem that -- assuming signal sparsity -- can convert a crude approximation to the true filter into a high-accuracy recovery of the true filter. Our proposed formulation is based on L1 minimization of inverse filter outputs. We give sharp guarantees on performance of the minimizer assuming sparsity of signal, showing that our proposal precisely recovers the true inverse filter, up to shift and rescaling. There is a sparsity/initial accuracy tradeoff: the less accurate the initial approximation, the greater we rely on sparsity to enable exact recovery. To our knowledge this is the first reported tradeoff of this kind. We consider it surprising that this tradeoff is independent of dimension. We also develop finite-$N$ guarantees, for highly accurate reconstruction under $N \geq O(k \log k)$ with high probability. We further show stable approximation when the true inverse filter is infinitely long and extend our guarantees to the case where the observations are contaminated by stochastic or adversarial noise.
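A minimal sketch of the L1-based idea described above: observe y = a * x with x sparse, and search for a short inverse filter w that makes w * y sparse, i.e. minimize the L1 norm of the inverse filter output. The normalization constraint (pinning the leading tap of w) and the filter length are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Convex recovery of an approximate inverse filter by L1 minimization.
import numpy as np
import cvxpy as cp
from scipy.linalg import convolution_matrix

rng = np.random.default_rng(1)
N, k = 200, 10
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)   # sparse signal
a_true = np.array([1.0, -0.7])                                 # "unknown" filter (ground truth here)
y = np.convolve(a_true, x)                                     # observation

L = 8                                                          # assumed inverse-filter length
C = convolution_matrix(y, L)                                   # C @ w == np.convolve(y, w)
w = cp.Variable(L)
problem = cp.Problem(cp.Minimize(cp.norm1(C @ w)), [w[0] == 1.0])
problem.solve()

print("recovered inverse filter:", np.round(w.value, 3))
# If recovery succeeded, w * a_true is roughly a (shifted, scaled) delta.
print("w * a_true:", np.round(np.convolve(w.value, a_true), 3)[:4])
```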
Deep imaging of the diffuse light emitted by the stellar fine structures and outer halos around galaxies is now often used to probe their past mass assembly. Because the extended halos survive longer than the relatively fragile tidal features, they trace more ancient mergers. We use images reaching surface brightness limits as low as 28.5-29 mag arcsec^-2 (g-band) to obtain light and color profiles up to 5-10 effective radii of a sample of nearby early-type galaxies. They were acquired with MegaCam as part of the CFHT MATLAS large programme. These profiles may be compared to those produced by simulations of galaxy formation and evolution, once corrected for instrumental effects. Indeed they can be heavily contaminated by the scattered light caused by internal reflections within the instrument. In particular, the nucleus of galaxies generates artificial flux in the outer halo, which has to be precisely subtracted. We present a deconvolution technique to remove the artificial halos that makes use of very large kernels. The technique, based on PyOperators, is more time efficient than the model-convolution methods also used for that purpose. This is especially the case for galaxies with complex structures that are hard to model. Having a good knowledge of the Point Spread Function (PSF), including its outer wings, is critical for the method. A database of MegaCam PSF models corresponding to different seeing conditions and bands was generated directly from the deep images. It is shown that the difference in the PSFs in different bands causes artificial changes in the color profiles, in particular a reddening of the outskirts of galaxies having a bright nucleus. The method is validated with a set of simulated images and applied to three representative test cases: NGC 3599, NGC 3489, and NGC 4274, two of which exhibit a prominent ghost halo that the method successfully removes.
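A minimal sketch of PSF deconvolution in the Fourier domain (a simple regularized inverse filter), illustrating how a broad-winged PSF spreads a bright nucleus into an artificial halo and how knowledge of the PSF lets one undo it. The paper's method uses PyOperators with very large kernels and differs in detail; the Moffat-like PSF and point-source scene below are assumptions.

```python
# Wiener-like deconvolution: damped division by the optical transfer function.
import numpy as np

def fourier_convolve(image, psf):
    """Convolve an image with a centered PSF via the FFT."""
    otf = np.fft.rfft2(np.fft.ifftshift(psf), s=image.shape)
    return np.fft.irfft2(np.fft.rfft2(image) * otf, s=image.shape)

def fourier_deconvolve(image, psf, eps=1e-3):
    """Regularized inverse filter: divide by the OTF, damped by eps."""
    otf = np.fft.rfft2(np.fft.ifftshift(psf), s=image.shape)
    img_ft = np.fft.rfft2(image)
    return np.fft.irfft2(img_ft * np.conj(otf) / (np.abs(otf) ** 2 + eps), s=image.shape)

n = 256
yy, xx = np.mgrid[:n, :n] - n // 2
psf = 1.0 / (1.0 + (xx**2 + yy**2) / 4.0)      # broad-winged, Moffat-like PSF
psf /= psf.sum()

truth = np.zeros((n, n))
truth[n // 2, n // 2] = 100.0                  # a bright "nucleus"
blurred = fourier_convolve(truth, psf)         # nucleus spread into a halo
restored = fourier_deconvolve(blurred, psf)

print("peak before/after deconvolution:", blurred.max(), restored.max())
```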
Long Xu, Wenqing Sun, Yihua Yan (2020)
With the aperture synthesis (AS) technique, a number of small antennas can be assembled to form a large telescope whose spatial resolution is determined by the distance between the two farthest antennas rather than by the diameter of a single-dish antenna. Unlike a direct imaging system, an AS telescope captures the Fourier coefficients of a spatial object and then applies an inverse Fourier transform to reconstruct the spatial image. Due to the limited number of antennas, the Fourier coefficients are extremely sparse in practice, resulting in a very blurry image. To remove or reduce this blur, CLEAN deconvolution has been widely used in the literature. However, it was initially designed for point sources; for an extended source such as the Sun, its efficiency is unsatisfactory. In this study, a deep neural network, namely a Generative Adversarial Network (GAN), is proposed for solar image deconvolution. The experimental results demonstrate that the proposed model is markedly better than traditional CLEAN on solar images.
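A minimal sketch of why aperture-synthesis images are blurry: the array samples only a sparse set of Fourier (u,v) coefficients, and the inverse FFT of that masked spectrum is the "dirty" image that CLEAN or the GAN above must deconvolve. The random ~5% coverage and the extended synthetic source are assumptions for illustration.

```python
# Dirty-image formation from sparse Fourier sampling.
import numpy as np

rng = np.random.default_rng(2)
n = 128
truth = np.zeros((n, n))
truth[40:90, 50:80] = 1.0                     # an extended (Sun-like) source

spectrum = np.fft.fft2(truth)                 # full set of visibilities
mask = rng.random((n, n)) < 0.05              # ~5% sparse u,v coverage (assumed)
dirty = np.fft.ifft2(spectrum * mask).real    # blurry "dirty" image

print("truth range:", truth.min(), truth.max())
print("dirty range:", round(dirty.min(), 3), round(dirty.max(), 3))
```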