
Regularization Techniques for PSF-Matching Kernels. I. Choice of Kernel Basis

Published by Andrew Becker
Publication date: 2012
Research field: Physics
Paper language: English





We review current methods for building PSF-matching kernels for the purposes of image subtraction or coaddition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching: the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a free-form basis composed of delta functions. Kernels derived from delta functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size), this basis tends to overfit the problem and yields noisy kernels with large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom: the strength of the regularization, lambda. The role of lambda is effectively to exchange variance in the resulting difference image for variance in the kernel itself. We examine considerations in choosing the value of lambda, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of lambda between 0.1 and 1.0, although this optimization is likely dataset-dependent.
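The regularized delta-function kernel solution described in the abstract amounts to a penalized linear least-squares fit: each basis element shifts the template image by one pixel, and lambda penalizes the coefficient vector. A minimal 1-D sketch, assuming a simple Tikhonov (identity-matrix) penalty and circular shifts; the function name and penalty operator are illustrative choices, not the paper's exact regularization scheme:

```python
import numpy as np

def fit_kernel(template, science, ksize, lam):
    """Fit a delta-function-basis convolution kernel k such that
    template convolved with k approximates science, with a Tikhonov
    penalty of strength lam damping the kernel coefficients."""
    half = ksize // 2
    # Design matrix: each column is the template shifted by one kernel offset,
    # so A @ k is the (circular) convolution of the template with kernel k.
    A = np.column_stack([np.roll(template, s) for s in range(-half, half + 1)])
    # Penalized normal equations: (A^T A + lam * I) k = A^T science.
    # Larger lam trades variance in the difference image for a smoother kernel.
    return np.linalg.solve(A.T @ A + lam * np.eye(ksize), A.T @ science)
```

With lam near zero this reduces to the unregularized delta-function solution, which is expressive but noisy; increasing lam suppresses that noise at the cost of some fidelity, mirroring the variance exchange the abstract describes.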




Read also

We present the implementation and use of algorithms for matching point-spread functions (PSFs) within the Pan-STARRS Image Processing Pipeline (IPP). PSF-matching is an essential part of the IPP for the detection of supernovae and asteroids, but it is also used to homogenize the PSF of inputs to stacks, resulting in improved photometric precision compared to regular coaddition, especially in data with a high masked fraction. We report our experience in constructing and operating the image subtraction pipeline, and make recommendations about particular basis functions for constructing the PSF-matching convolution kernel, determining a suitable kernel, parallelisation, and quality metrics. We introduce a method for reliably tracking the noise in an image throughout the pipeline, using the combination of a variance map and a 'covariance pseudo-matrix'. We demonstrate these algorithms with examples from both simulations and actual data from the Pan-STARRS1 telescope.
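Variance-map tracking of the kind described above rests on a standard fact: convolving an image whose pixels have independent noise with a kernel k scales each pixel's variance by the squared kernel weights. A minimal 1-D sketch of that propagation step; it deliberately ignores the pixel-to-pixel covariance the convolution introduces, which is what the paper's 'covariance pseudo-matrix' is designed to capture:

```python
import numpy as np

def propagate_variance(var, kernel):
    """Propagate a per-pixel variance map through a convolution.

    For independent pixel noise, Var(sum_i k_i * x_i) = sum_i k_i^2 * Var(x_i),
    so the output variance map is the input map convolved with kernel**2.
    Off-diagonal covariance terms are deliberately omitted here.
    """
    return np.convolve(var, kernel**2, mode="same")
```

Because the squared kernel weights sum to less than one for any normalized smoothing kernel, the propagated variance map is lower than the input map, reflecting the noise suppression of the convolution.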
Precise photometric and astrometric measurements on astronomical images require an accurate knowledge of the Point Spread Function (PSF). When the PSF cannot be modelled directly from the image, PSF-reconstruction techniques become the only viable solution. So far, however, their performance on real observations has rarely been quantified. Aims. In this Letter, we test the performance of a novel hybrid technique, called PRIME, on Adaptive Optics-assisted SPHERE/ZIMPOL observations of the Galactic globular cluster NGC6121. Methods. PRIME couples PSF-reconstruction techniques, based on control-loop data, with direct image fitting performed on the only bright point-like source available in the field of view of the ZIMPOL exposures, with the aim of building the PSF model. Results. By exploiting this model, the magnitudes and positions of the stars in the field can be measured with an unprecedented precision, which surpasses that obtained by more standard methods by at least a factor of four for on-axis stars and by up to a factor of two on fainter, off-axis stars. Conclusions. Our results demonstrate the power of PRIME in recovering precise magnitudes and positions when the information directly coming from astronomical images is limited to only a few point-like sources, thus paving the way for a proper analysis of future Extremely Large Telescope observations of sparse stellar fields or individual extragalactic objects.
We present an analysis of instrument performance using new observations taken with the Coronagraphic High Angular Resolution Imaging Spectrograph (CHARIS) instrument and the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system. In a correlation analysis of our datasets (which use the broadband mode covering J through K band in a single spectrum), we find that chromaticity in the SCExAO/CHARIS system is generally worse than temporal stability. We also develop a point spread function (PSF) subtraction pipeline optimized for the CHARIS broadband mode, including a forward modelling-based exoplanet algorithmic throughput correction scheme. We then present contrast curves using this newly developed pipeline. An analogous subtraction of the same datasets using only the H band slices yields the same final contrasts as the full JHK sequences; this result is consistent with our chromaticity analysis, illustrating that PSF subtraction using spectral differential imaging (SDI) in this broadband mode is generally not more effective than SDI in the individual J, H, or K bands. In the future, the data processing framework and analysis developed in this paper will be important to consider for additional SCExAO/CHARIS broadband observations and other ExAO instruments which plan to implement a similar integral field spectrograph broadband mode.
Nils Kriege (2012)
We propose graph kernels based on subgraph matchings, i.e. structure-preserving bijections between subgraphs. While recently proposed kernels based on common subgraphs (Wale et al., 2008; Shervashidze et al., 2009) cannot in general be applied to attributed graphs, our approach rates mappings of subgraphs by a flexible scoring scheme that compares vertex and edge attributes by kernels. We show that subgraph matching kernels generalize several known kernels. To compute the kernel we propose a graph-theoretical algorithm inspired by a classical relation between common subgraphs of two graphs and cliques in their product graph observed by Levi (1973). Encouraging experimental results on a classification task of real-world graphs are presented.
I. Loris, H. Douma, G. Nolet (2010)
The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear, ℓ2 penalties are compared to so-called sparsity-promoting ℓ1 and ℓ0 penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an ℓ2 norm works almost as well as minimizing the total variation but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer ℓ1 damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple ℓ2 minimization ('Tikhonov' regularization), which should be avoided. In some of our examples, the ℓ0 method produced notable artifacts. In addition we show how nonlinear ℓ1 methods for finding sparse models can be competitive in speed with the widely used ℓ2 methods, certainly under noisy conditions, so that there is no need to shun ℓ1 penalties.
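The contrast between ℓ2 and sparsity-promoting ℓ1 penalties can be seen in their closed-form solutions for the simplest possible problem, denoising a single observation. A toy sketch, assuming the standard proximal-operator forms (uniform shrinkage for ℓ2, soft thresholding for ℓ1); function names are illustrative:

```python
import numpy as np

def ridge_denoise(y, lam):
    # l2 penalty: argmin_x ||x - y||^2 + lam * ||x||^2
    # has the closed form x = y / (1 + lam): every entry shrinks
    # uniformly, so small noise values are damped but never zeroed.
    return y / (1.0 + lam)

def soft_threshold(y, lam):
    # l1 penalty: argmin_x 0.5 * ||x - y||^2 + lam * ||x||_1
    # has the closed form of soft thresholding: entries smaller than
    # lam in magnitude are set exactly to zero, promoting sparsity.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

This is why the ℓ1 option yields a "noiseless" reconstruction in the abstract's sense: sub-threshold wavelet coefficients, which are mostly noise, are removed exactly rather than merely attenuated as under Tikhonov (ℓ2) damping.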