
Discrete Gyrator Transforms: Computational Algorithms and Applications

Posted by Shih-Gu Huang
Published 2017
Paper language: English





As an extension of the 2D fractional Fourier transform (FRFT) and a special case of the 2D linear canonical transform (LCT), the gyrator transform was introduced to produce rotations in twisted space/spatial-frequency planes. It is a useful tool in optics, signal processing and image processing. In this paper, we develop discrete gyrator transforms (DGTs) based on the 2D LCT. Taking advantage of the additivity property of the 2D LCT, we propose three kinds of DGTs, each of which is a cascade of low-complexity operators. These DGTs have different constraints, characteristics, and properties, and are realized by different computational algorithms. In addition, we propose a DGT based on the eigenfunctions of the gyrator transform. This DGT is an orthonormal transform, and its comprehensive properties, especially the additivity property, make it more useful in many applications. We also develop an efficient computational algorithm that significantly reduces the complexity of this DGT. Finally, a brief review of some important applications of the DGTs is presented, including mode conversion, sampling and reconstruction, watermarking, and image encryption.
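For a concrete reference point, the continuous gyrator transform that these DGTs discretize can be sampled by brute force. The sketch below is not one of the paper's three LCT-based DGTs nor the eigenfunction-based DGT; it simply evaluates the standard gyrator kernel, exp(i2*pi*[(uv + xy)cos(alpha) - (xv + yu)]/sin(alpha)) / |sin(alpha)|, by direct summation on an N x N grid, which costs O(N^4) and is useful only as a correctness check. The function name and sampling convention are illustrative assumptions.

```python
import numpy as np

def gyrator_direct(f, alpha, dx=1.0):
    """Brute-force sampled gyrator transform (O(N^4)); a reference sketch only.

    f     : N x N complex array, samples of the input on an x-y grid
    alpha : rotation angle of the gyrator transform
    dx    : sampling interval (assumed equal in both directions)
    """
    N = f.shape[0]
    n = (np.arange(N) - N // 2) * dx          # sample coordinates
    x, y = np.meshgrid(n, n, indexing="ij")   # input-plane grid
    s, c = np.sin(alpha), np.cos(alpha)
    if np.isclose(s, 0.0):                    # alpha a multiple of pi: identity/reflection limit
        raise ValueError("degenerate angle: gyrator reduces to identity or reflection")
    out = np.zeros((N, N), dtype=complex)
    for i in range(N):                        # output coordinate u
        for j in range(N):                    # output coordinate v
            u, v = n[i], n[j]
            phase = 2j * np.pi * ((u * v + x * y) * c - (x * v + y * u)) / s
            out[i, j] = (f * np.exp(phase)).sum() * dx * dx / abs(s)
    return out
```

The DGTs proposed in the paper replace this direct O(N^4) sum with cascades of low-complexity operators, so a brute-force version like this serves only as a reference against which faster implementations can be checked.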




Read also

In the framework of the Hough transform technique for detecting curves in images, we provide a bound on the number of Hough transforms that must be considered for a successful optimization of the accumulator function in the recognition algorithm. The bound follows from geometrical arguments. We also show the robustness of the results when applied to synthetic datasets strongly perturbed by noise. An algebraic approach, discussed in the appendix, leads to a better bound of theoretical interest in the exact case.
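To make the setting concrete, the snippet below builds the standard straight-line Hough accumulator (the simplest instance of curve detection) in NumPy. The parameter names and binning are illustrative assumptions; the paper's bound concerns how many transforms of this kind must be evaluated when optimizing such an accumulator for more general curve families.

```python
import numpy as np

def hough_lines(edge_points, n_theta=180, n_rho=200):
    """Minimal Hough accumulator for lines: rho = x*cos(theta) + y*sin(theta).

    edge_points : (M, 2) array of (x, y) edge coordinates
    Returns the accumulator and the theta / rho bin centres.
    """
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(edge_points[:, 0], edge_points[:, 1]).max()
    rho_bins = np.linspace(-max_rho, max_rho, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in edge_points:
        rho = x * np.cos(theta) + y * np.sin(theta)      # one sinusoid per edge point
        idx = np.digitize(rho, rho_bins) - 1
        acc[np.clip(idx, 0, n_rho - 1), np.arange(n_theta)] += 1
    return acc, theta, rho_bins
```

Peaks of the accumulator correspond to detected lines; the recognition algorithm discussed above maximizes such an accumulator over a family of candidate curves.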
Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
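The correlation at the heart of CGI is simple to write down. The toy sketch below assumes uniform random illumination patterns and a noiseless bucket detector (both assumptions, not details from the paper): it simulates the bucket signal and reconstructs the object by correlating the mean-subtracted signal with the known patterns, producing exactly the kind of noisy estimate that the deep network described above is trained to clean up.

```python
import numpy as np

def cgi_reconstruct(obj, n_patterns=5000, seed=0):
    """Toy computational ghost imaging: correlate known random patterns with
    simulated single-pixel (bucket) measurements of the object."""
    rng = np.random.default_rng(seed)
    patterns = rng.random((n_patterns,) + obj.shape)     # known illumination patterns
    buckets = (patterns * obj).sum(axis=(1, 2))          # simulated bucket-detector signals
    # correlation of the mean-subtracted bucket signal with the patterns
    return ((buckets - buckets.mean())[:, None, None] * patterns).mean(axis=0)

# toy object: a bright square on a dark background
obj = np.zeros((32, 32))
obj[10:22, 10:22] = 1.0
recon = cgi_reconstruct(obj)                             # noisy estimate of obj
```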
A classical computer cannot calculate a discrete cosine transform on N points in less than linear time. This trivial lower bound is no longer valid for a computer that takes advantage of quantum mechanical superposition, entanglement, and interference principles. In fact, we show that it is possible to realize the discrete cosine transforms and the discrete sine transforms of size NxN and types I, II, III, and IV with as few as O(log^2 N) operations on a quantum computer, whereas the known fast algorithms on a classical computer need O(N log N) operations.
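Only the classical side of this comparison can be reproduced in a few lines; the snippet below is a reference check, not the quantum algorithm. It computes the unnormalized DCT-II of a length-N signal both from its O(N^2) definition and with SciPy's FFT-based O(N log N) routine, and verifies that the two agree.

```python
import numpy as np
from scipy.fft import dct

# DCT-II written out from its definition (O(N^2) operations)
x = np.random.default_rng(1).standard_normal(64)
N = len(x)
k = np.arange(N)[:, None]                     # frequency index
n = np.arange(N)[None, :]                     # sample index
naive = 2.0 * (x * np.cos(np.pi * k * (2 * n + 1) / (2 * N))).sum(axis=1)

fast = dct(x, type=2)                         # FFT-based, O(N log N)
assert np.allclose(naive, fast)
```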
Photonic topology optimization is a technique used to find the electric permittivity distribution of a device that optimizes an electromagnetic figure-of-merit. Two common techniques are used: continuous density-based optimizations that optimize a grey-scale permittivity defined over a grid, and discrete level-set optimizations that optimize the shape of the material boundary of a device. More recently, continuous optimizations have been used to find an initial seed for a concluding level-set optimization since level-set techniques tend to benefit from a well-performing initial structure. However, continuous optimizations are not guaranteed to yield sufficient initial seeds for subsequent level-set optimizations, particularly for high-contrast structures, since they are not guaranteed to converge to solutions that resemble only two discrete materials. In this work, we present a method for constraining a continuous optimization such that it converges to a discrete solution. This is done by inserting a constrained sub-optimization at each iteration of an overall gradient-based optimization. This technique can be used purely on its own to optimize a device, or it can be used to provide a nearly discrete starting point for a level-set optimization.
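A minimal illustration of constraining a continuous density optimization so that it ends at a discrete design is sketched below. It swaps the paper's electromagnetic figure-of-merit for a toy quadratic one and uses the common tanh-projection schedule rather than the paper's per-iteration constrained sub-optimization, so it should be read only as a sketch of the general idea: a grey-scale density is updated by gradient descent while being pushed progressively toward a two-material {0, 1} design.

```python
import numpy as np

def optimize_density(n=64, iters=200, lr=0.1):
    """Toy density-based optimization with a growing binarization push.

    The figure-of-merit (match a random binary target) is a stand-in for an
    electromagnetic objective. Each step takes a gradient step on the
    objective and then applies a tanh projection whose sharpness grows,
    driving the density rho toward the discrete values 0 and 1.
    """
    rng = np.random.default_rng(2)
    target = (rng.random(n) > 0.5).astype(float)   # hypothetical binary target design
    rho = np.full(n, 0.5)                          # grey-scale starting density
    for it in range(iters):
        grad = 2.0 * (rho - target)                # gradient of ||rho - target||^2
        rho = np.clip(rho - lr * grad, 0.0, 1.0)
        beta = 1.0 + 20.0 * it / iters             # projection sharpness schedule
        rho = 0.5 * (1.0 + np.tanh(beta * (rho - 0.5)) / np.tanh(0.5 * beta))
    return rho
```

In a real photonic problem the gradient would come from adjoint electromagnetic simulations rather than from this toy objective, but the role of the discreteness constraint in the loop is the same.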
The generative adversarial network (GAN) framework has emerged as a powerful tool for various image and video synthesis tasks, allowing the synthesis of visual content in an unconditional or input-conditional manner. It has enabled the generation of high-resolution photorealistic images and videos, a task that was challenging or impossible with prior methods. It has also led to the creation of many new applications in content creation. In this paper, we provide an overview of GANs with a special focus on algorithms and applications for visual synthesis. We cover several important techniques to stabilize GAN training, which has a reputation for being notoriously difficult. We also discuss its applications to image translation, image processing, video synthesis, and neural rendering.