
Image Coaddition with Temporally Varying Kernels

Posted by Darren Homrighausen
Publication date: 2010
Research field: Physics
Paper language: English





Large, multi-frequency imaging surveys, such as the Large Synoptic Survey Telescope (LSST), must perform near-real-time analysis of very large datasets. This raises a host of statistical and computational problems for which standard methods do not work. In this paper, we study a proposed method for combining stacks of images into a single summary image, sometimes referred to as a template. This task is commonly referred to as image coaddition. In part, we focus on a method proposed in previous work, which outlines a procedure for combining stacks of images in an online fashion in the Fourier domain. We evaluate this method by comparing it to two straightforward methods using various criteria and simulations. Note that the goal is not to propose these comparison methods for use in their own right, but to ensure that the additional complexity also provides substantially improved performance.
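The "straightforward methods" the abstract compares against are of the direct-stacking kind. A minimal sketch of such a baseline, in Python/NumPy, is below; this is not the paper's online Fourier-domain procedure, and the function name and weighting scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def coadd(stack, weights=None):
    """Combine a stack of registered images into a single template.

    stack   : array of shape (n_images, H, W)
    weights : optional per-image weights (e.g. inverse noise variance);
              None gives a plain pixelwise mean.
    """
    stack = np.asarray(stack, dtype=float)
    if weights is None:
        return stack.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    # weighted sum over the image axis, normalized by total weight
    return np.tensordot(w, stack, axes=1) / w.sum()

# toy example: three noisy observations of the same scene
rng = np.random.default_rng(0)
truth = np.zeros((8, 8))
truth[3:5, 3:5] = 1.0
frames = truth + 0.1 * rng.standard_normal((3, 8, 8))
template = coadd(frames, weights=[1.0, 2.0, 1.0])
```

Inverse-variance weighting is a common choice because it maximizes the signal-to-noise ratio of the coadd when the frames have independent Gaussian noise.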



Read also

In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection and classification, and moving-object tracking. Since such studies benefit from the highest quality data, methods such as image coaddition (stacking) will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources or transient objects, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, e.g., Amazon's EC2. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
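Coaddition maps naturally onto MapReduce because a pixelwise mean decomposes into per-frame partial sums. The single-machine sketch below simulates the map, shuffle, and reduce stages; the key and function names are illustrative assumptions, not the paper's actual Hadoop pipeline.

```python
from collections import defaultdict
import numpy as np

# Map: each worker emits (sky_patch_id, (pixel_sum, frame_count))
# for the frames stored on its local disk.
def map_frame(patch_id, frame):
    return patch_id, (np.asarray(frame, dtype=float), 1)

# Reduce: all partial sums for one sky patch are combined and
# normalized into the final coadded template for that patch.
def reduce_patch(partials):
    total = sum(p for p, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

# Driver simulating the shuffle (group-by-key) stage on one machine.
def coadd_mapreduce(records):
    groups = defaultdict(list)
    for patch_id, frame in records:
        key, partial = map_frame(patch_id, frame)
        groups[key].append(partial)
    return {key: reduce_patch(parts) for key, parts in groups.items()}
```

Because the reduce step only needs running sums and counts, it is associative and commutative, which is what lets Hadoop apply it incrementally as partial results arrive from worker nodes.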
In this paper, the regularity results of Caffarelli and Silvestre \cite{CS1} for integro-differential operators of fractional-Laplacian type are extended to integro-differential operators associated with symmetric kernels that are regularly varying at zero. In particular, we obtain the uniform Harnack inequality and Hölder estimate for viscosity solutions of the nonlinear integro-differential equations associated with the kernels $K_{\sigma,\beta}$ satisfying $$ K_{\sigma,\beta}(y) \asymp \frac{2-\sigma}{|y|^{n+\sigma}} \left( \log \frac{2}{|y|^2} \right)^{\beta(2-\sigma)} \quad \text{near zero} $$ with respect to $\sigma \in (0,2)$ close to $2$ (for a given $\beta \in \mathbb{R}$), where the regularity estimates do not blow up as the order $\sigma \in (0,2)$ tends to $2$.
Yi Zhou, Chenglei Wu, Zimo Li (2020)
Learning latent representations of registered meshes is useful for many 3D tasks. Techniques have recently shifted to neural mesh autoencoders. Although they demonstrate higher precision than traditional methods, they remain unable to capture fine-grained deformations. Furthermore, these methods can only be applied to a template-specific surface mesh, and are not applicable to more general meshes, like tetrahedrons and non-manifold meshes. While more general graph convolution methods can be employed, they lack performance in reconstruction precision and require higher memory usage. In this paper, we propose a non-template-specific fully convolutional mesh autoencoder for arbitrary registered mesh data. It is enabled by our novel convolution and (un)pooling operators learned with globally shared weights and locally varying coefficients, which can efficiently capture the spatially varying contents presented by irregular mesh connections. Our model outperforms state-of-the-art methods on reconstruction accuracy. In addition, the latent codes of our network are fully localized thanks to the fully convolutional structure, and thus have much higher interpolation capability than many traditional 3D mesh generation models.
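The idea of "globally shared weights with locally varying coefficients" can be illustrated with a small NumPy sketch: each edge mixes a shared weight basis with its own learned coefficients. This is only a schematic interpretation of the operator described in the abstract; the function signature, basis parameterization, and normalization are assumptions, not the paper's implementation.

```python
import numpy as np

def vc_conv(x, neighbors, W, coeffs):
    """Mesh convolution with a globally shared weight basis and
    locally varying per-edge mixing coefficients.

    x         : (n_verts, c_in) vertex features
    neighbors : list of neighbor-index lists, one per vertex
    W         : (n_basis, c_in, c_out) weight basis shared by all vertices
    coeffs    : coeffs[i][k] -> (n_basis,) learned mixing weights for the
                k-th neighbor of vertex i (these vary locally)
    """
    n, c_out = len(neighbors), W.shape[2]
    out = np.zeros((n, c_out))
    for i, nbrs in enumerate(neighbors):
        for k, j in enumerate(nbrs):
            # per-edge weight matrix: a coefficient mix of the shared basis
            Wij = np.tensordot(coeffs[i][k], W, axes=1)  # (c_in, c_out)
            out[i] += x[j] @ Wij
        out[i] /= max(len(nbrs), 1)  # average over the irregular neighborhood
    return out
```

Because only the small coefficient vectors vary per edge while the basis `W` is shared globally, the parameter count stays modest even on meshes with irregular connectivity.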
We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that $\sim 60$ nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few $\lambda/D$ separations ($\sim 1$-$5\,\lambda/D$) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.
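The basic bookkeeping behind such an error budget is that the residual phase variance is the integral of the residual PSD over spatial frequency. A toy sketch, assuming a Kolmogorov-like power law and an idealized AO system that fully corrects frequencies below a cutoff; the cutoff, sampling, and wavelength here are illustrative, not values from the paper.

```python
import numpy as np

def residual_variance(psd, df):
    """Integrate a residual phase PSD over a 2-D spatial-frequency grid.

    psd : 2-D array, residual phase PSD sampled on an (fx, fy) grid
    df  : grid spacing in spatial frequency [1/m]
    Returns the phase variance (same units as psd * df^2).
    """
    return psd.sum() * df * df

# toy residual PSD: Kolmogorov-like f^(-11/3) power law, with the
# low frequencies assumed perfectly corrected by the AO system
f = np.fft.fftfreq(64, d=0.1)          # spatial-frequency axis [1/m]
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)
psd = np.zeros_like(fr)
corrected = fr <= 1.0                   # illustrative AO cutoff at 1 m^-1
psd[~corrected] = fr[~corrected] ** (-11 / 3)

var = residual_variance(psd, df=f[1] - f[0])          # [rad^2]
rms_nm = np.sqrt(var) * 500 / (2 * np.pi)             # rad -> nm at 500 nm
```

The spatial-frequency cutoff of a real AO system scales with actuator pitch, which is why higher-order systems suppress a larger region of the PSD and leave a smaller residual variance.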
A simple model of an atomic Bose-Einstein condensate in a box whose size varies with time is studied to determine the nature of adiabaticity in the nonlinear dynamics obtained within the Gross-Pitaevskii equation (the nonlinear Schrödinger equation). Analytical and numerical methods are used to determine the nature of adiabaticity in this nonlinear quantum system. Criteria for validity of an adiabatic approximation are formulated.
