
Progressive Refinement Imaging

Published by Markus Kluge
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





This paper presents a novel technique for the progressive online integration of uncalibrated image sequences with substantial geometric and/or photometric discrepancies into a single, geometrically and photometrically consistent image. Our approach can handle large sets of images acquired from a nearly planar or infinitely distant scene at different resolutions in the object domain and under variable local or global illumination conditions. It allows for efficient user guidance, as its progressive nature provides a valid and consistent reconstruction at any moment during the online refinement process. Our approach avoids the global optimization techniques commonly used in the field of image refinement and progressively incorporates new imagery into a dynamically extendable and memory-efficient Laplacian pyramid. Our image registration process consists of a coarse homography stage and a local refinement stage using optical flow. Photometric consistency is achieved by retaining the photometric intensities of a reference image while it is being refined. Globally blurred imagery and local geometric inconsistencies due to, e.g., motion are detected and removed prior to image fusion. We demonstrate the quality and robustness of our approach on several image and video sequences, including handheld acquisition with mobile phones and zooming sequences with consumer cameras.
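
As a rough illustration of the pipeline described above, the Python sketch below registers a new frame against the reference with a coarse homography and blends its detail coefficients into a Laplacian pyramid. It relies on OpenCV and NumPy; the function names, the ORB-based matching, and the fixed blending weight are illustrative stand-ins rather than the authors' implementation, and the optical-flow refinement and blur/inconsistency rejection stages are omitted.

    import cv2
    import numpy as np

    def build_laplacian_pyramid(img, levels):
        """Decompose an image into `levels` detail bands plus a coarse residual."""
        gp = [img.astype(np.float32)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = [gp[i] - cv2.pyrUp(gp[i + 1],
                                dstsize=(gp[i].shape[1], gp[i].shape[0]))
              for i in range(levels)]
        lp.append(gp[-1])  # coarsest level
        return lp

    def register_to_reference(new, ref):
        """Coarse homography alignment via ORB features (8-bit grayscale input).
        The paper additionally refines this with optical flow, omitted here."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(new, None)
        k2, d2 = orb.detectAndCompute(ref, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return cv2.warpPerspective(new, H, (ref.shape[1], ref.shape[0]))

    def fuse(pyr_ref, pyr_new, alpha=0.5):
        """Blend the new frame's coefficients into the running pyramid. A real
        system would weight per coefficient by sharpness and consistency."""
        return [(1 - alpha) * a + alpha * b for a, b in zip(pyr_ref, pyr_new)]

The constant alpha stands in for the paper's selective, consistency-tested incorporation of new detail; in the actual method, coefficients from blurred or geometrically inconsistent regions would be rejected before fusion.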


Read also

Jules Vidal, Joseph Budin, 2019
This paper presents an efficient algorithm for the progressive approximation of Wasserstein barycenters of persistence diagrams, with applications to the visual analysis of ensemble data. Given a set of scalar fields, our approach enables the computation of a persistence diagram which is representative of the set, and which visually conveys the number, data ranges and saliences of the main features of interest found in the set. Such representative diagrams are obtained by explicitly computing the discrete Wasserstein barycenter of the set of persistence diagrams, a notoriously computationally intensive task. In particular, we revisit efficient algorithms for Wasserstein distance approximation [12,51] to extend previous work on barycenter estimation [94]. We present a new fast algorithm which progressively approximates the barycenter by iteratively increasing the computation accuracy as well as the number of persistent features in the output diagram. This progressivity drastically improves convergence in practice and makes it possible to design an interruptible algorithm capable of respecting computation time constraints. This enables the approximation of Wasserstein barycenters within interactive times. We present an application to ensemble clustering, where we revisit the k-means algorithm to exploit our barycenters and compute, within execution time constraints, meaningful clusters of ensemble data along with their barycenter diagram. Extensive experiments on synthetic and real-life data sets report that our algorithm converges to barycenters that are qualitatively meaningful with regard to the applications, and quantitatively comparable to previous techniques, while offering an order-of-magnitude speedup when run until convergence (without time constraint). Our algorithm can be trivially parallelized to provide additional speedups in practice on standard workstations. [...]
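
As a rough sketch of the alternating matching/update loop at the core of such a barycenter computation (in Python with NumPy and SciPy): diagrams are stored as (birth, death) arrays of equal size, and an exact Hungarian matching stands in for the paper's approximate Wasserstein matchings; the diagonal handling, the accuracy schedule, and the progressive insertion of features are all simplified away.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def barycenter(diagrams, n_iter=10):
        """Naive barycenter of equally sized persistence diagrams, each an
        (n, 2) array of (birth, death) pairs."""
        bary = diagrams[0].copy()  # initialize from an arbitrary member
        for _ in range(n_iter):
            matched = []
            for d in diagrams:
                # Squared Euclidean cost between barycenter points and d's points.
                cost = np.linalg.norm(bary[:, None, :] - d[None, :, :], axis=2) ** 2
                _, cols = linear_sum_assignment(cost)  # optimal matching
                matched.append(d[cols])
            bary = np.mean(matched, axis=0)  # update: mean of matched partners
        return bary

The paper's progressivity replaces the fixed iteration count and exact matchings with increasingly accurate approximations, which is what makes the computation interruptible.
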
We propose Mask Guided (MG) Matting, a robust matting framework that takes a general coarse mask as guidance. MG Matting leverages a Progressive Refinement Network (PRN) design which encourages the matting model to provide self-guidance to progressively refine the uncertain regions through the decoding process. A series of guidance mask perturbation operations are also introduced in the training to further enhance its robustness to external guidance. We show that PRN can generalize to unseen types of guidance masks such as trimaps and low-quality alpha mattes, making it suitable for various application pipelines. In addition, we revisit the foreground color prediction problem for matting and propose a surprisingly simple improvement to address the dataset issue. Evaluation on real and synthetic benchmarks shows that MG Matting achieves state-of-the-art performance using various types of guidance inputs. Code and models are available at https://github.com/yucornetto/MGMatting.
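
The self-guidance step can be summarized in a few lines: alpha values the coarser decoder stage already considers confident (near 0 or 1) are kept, and only the uncertain band in between is overwritten by the finer stage. The NumPy sketch below is a simplified reading of that idea; the thresholds are illustrative, the coarse prediction is assumed to be already upsampled to the fine resolution, and the actual module in the linked repository operates on feature maps inside the network.

    import numpy as np

    def progressive_refine(alpha_coarse, alpha_fine, lo=1e-5, hi=1 - 1e-5):
        """Overwrite only the uncertain region of the coarse alpha matte.
        Both inputs are float arrays in [0, 1] at the same resolution."""
        uncertain = (alpha_coarse > lo) & (alpha_coarse < hi)
        return np.where(uncertain, alpha_fine, alpha_coarse)

Applying this at each decoder scale lets confident regions settle early while the detail work concentrates on the unknown band around object boundaries.
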
This paper introduces progressive algorithms for the topological analysis of scalar data. Our approach is based on a hierarchical representation of the input data and the fast identification of topologically invariant vertices, which are vertices that have no impact on the topological description of the data and for which we show that no computation is required as they are introduced in the hierarchy. This enables the definition of efficient coarse-to-fine topological algorithms, which leverage fast update mechanisms for ordinary vertices and avoid computation for the topologically invariant ones. We demonstrate our approach with two examples of topological algorithms (critical point extraction and persistence diagram computation), which generate interpretable outputs upon interruption requests and progressively refine them otherwise. Experiments on real-life datasets illustrate that our progressive strategy, in addition to the continuous visual feedback it provides, even improves run-time performance relative to non-progressive algorithms, and we describe further accelerations with shared-memory parallelism. We illustrate the utility of our approach in batch-mode and interactive setups, where it respectively enables the control of the execution time of complete topological pipelines and previews of the topological features found in a dataset, with progressive updates delivered within interactive times.
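
For intuition, the classical test that such algorithms accelerate classifies a vertex by counting connected runs of lower and upper neighbors in its link. The NumPy sketch below performs this test for an interior vertex of a 2D grid with an 8-neighborhood; flat regions and grid boundaries are ignored, and the paper's actual contribution, skipping vertices whose classification is invariant across hierarchy levels, is not shown.

    import numpy as np

    # Cyclic 8-neighborhood around a grid vertex.
    OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]

    def link_runs(f, i, j, lower):
        """Count connected runs of lower (or upper) neighbors on the link."""
        vals = [(f[i + di, j + dj] < f[i, j]) if lower
                else (f[i + di, j + dj] > f[i, j])
                for di, dj in OFFSETS]
        runs = sum(1 for k in range(8) if vals[k] and not vals[k - 1])
        return runs if runs else (1 if any(vals) else 0)

    def classify(f, i, j):
        """Classify an interior vertex of the scalar field f."""
        lo, hi = link_runs(f, i, j, True), link_runs(f, i, j, False)
        if lo == 0:
            return "minimum"
        if hi == 0:
            return "maximum"
        return "regular" if lo == 1 and hi == 1 else "saddle"

A "regular" outcome is exactly the topologically uninteresting case: such a vertex contributes nothing to the critical points or the persistence diagram.
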
We present a technique for rendering highly complex 3D scenes in real time by generating uniformly distributed points on the scene's visible surfaces. The technique is applicable to a wide range of scene types, such as scenes directly based on complex and detailed CAD data consisting of billions of polygons (in contrast to scenes handcrafted solely for visualization). This allows such scenes to be visualized smoothly, even in VR on an HMD, with good image quality while maintaining the necessary frame rates. In contrast to other point-based rendering methods, we place points in an approximated blue-noise distribution only on visible surfaces and store them in a highly GPU-efficient data structure, allowing the number of rendered points to be progressively refined to maximize image quality for a given target frame rate. Our evaluation shows that scenes consisting of a large number of polygons can be rendered at interactive frame rates with good visual quality on standard hardware.
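
The progressive-refinement aspect rests on a prefix property: if the stored points are ordered such that every prefix is approximately uniformly distributed over the visible surfaces, the renderer can trade quality for speed by drawing only the first budget points each frame. The small Python controller below illustrates steering that budget toward a frame-time target; it is a generic feedback rule rather than the paper's scheme, and the 11.1 ms target (90 fps, typical for VR) is an assumption.

    def adjust_budget(budget, frame_ms, target_ms=11.1, total_points=10**8):
        """Scale the rendered point-prefix length toward the frame-time target."""
        budget = int(budget * target_ms / max(frame_ms, 1e-3))
        return max(1, min(budget, total_points))

    # Per frame: draw points[:budget], measure frame_ms, then update with
    # budget = adjust_budget(budget, frame_ms)
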
Modern coronagraph design relies on advanced, large-scale optimization processes that require an ever-increasing amount of computational resources. In this paper, we restrict ourselves to the design of Apodized Pupil Lyot Coronagraphs (APLCs). To produce APLC designs for future giant space telescopes, we require a fine sampling for the apodizer to resolve all small features, such as segment gaps, in the telescope pupil. Additionally, we require the coronagraph to operate in broadband light and be insensitive to small misalignments of the Lyot stop. For future designs we want to include passive suppression of low-order aberrations and finite stellar diameters. The memory requirements for such an optimization would exceed multiple terabytes for the problem matrix alone. We therefore want to reduce the number of variables and constraints to minimize the size of the problem matrix. We show how symmetries in the pupil and Lyot stop are expressed in the complete optimization problem and allow the removal of both variables and constraints; each mirror symmetry reduces the problem size by a factor of four. Second, we introduce progressive refinement, which uses low-resolution optimizations as a prior for higher resolutions. This lets us remove the majority of variables from the high-resolution optimization. Together these two improvements require up to 256x less memory, with a corresponding speed increase. This allows for greater exploration of the phase space of the focal-plane mask and Lyot-stop geometry, and easier simulation of sensitivity to Lyot-stop misalignments. Moreover, apodizers can now be optimized at their native manufactured resolution.
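
The progressive-refinement idea can be pictured in a few lines of NumPy: a converged low-resolution apodizer is upsampled, pixels already saturated at 0 or 1 are frozen, and only the undecided pixels along feature boundaries remain as free variables in the high-resolution solve. The function name, the nearest-neighbor upsampling, and the tolerance below are illustrative assumptions.

    import numpy as np

    def refine_variables(apod_lowres, factor=2, tol=1e-3):
        """Upsample a low-res apodizer and mark which pixels still need solving."""
        hires = np.kron(apod_lowres, np.ones((factor, factor)))  # nearest-neighbor
        undecided = (hires > tol) & (hires < 1 - tol)
        return hires, undecided  # freeze saturated pixels; optimize the rest

Because the undecided set typically concentrates near apodizer edges, the high-resolution problem retains only a small fraction of the original variables, contributing, together with the symmetry reductions, to the reported memory savings.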