Our proposed random phase-free holography method, which uses virtual convergence light, can obtain large reconstructed images exceeding the size of the hologram without the aid of a random phase. The reconstructed images have low speckle noise in both amplitude and phase-only holograms (kinoforms); however, for low-resolution holograms the image quality is degraded compared with the original image. We propose an iterative random phase-free method with virtual convergence light to address this problem.
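As a rough illustration of the iterative scheme, the sketch below replaces the random phase with a quadratic convergence-light phase and then alternates Gerchberg-Saxton-style constraints between the object and hologram planes. It assumes a square sampling grid and a simple Fresnel transfer-function propagator; the function names, constraints, and parameters are illustrative, not the paper's exact procedure.

```python
import numpy as np

def convergence_phase(n, pitch, wavelength, f):
    """Quadratic phase of the virtual convergence light (replaces the random phase)."""
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    return np.exp(-1j * np.pi * (X**2 + Y**2) / (wavelength * f))

def fresnel_prop(u, pitch, wavelength, z):
    """Transfer-function propagation (Fresnel approximation) between equally sampled planes."""
    fx = np.fft.fftfreq(u.shape[0], d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def iterative_rpf_kinoform(target, pitch, wavelength, z, f, iters=20):
    """GS-style loop: phase-only constraint at the hologram plane,
    amplitude constraint at the object plane."""
    u = target * convergence_phase(target.shape[0], pitch, wavelength, f)
    for _ in range(iters):
        h = fresnel_prop(u, pitch, wavelength, z)
        kinoform = np.exp(1j * np.angle(h))               # keep phase only
        back = fresnel_prop(kinoform, pitch, wavelength, -z)
        u = target * np.exp(1j * np.angle(back))          # restore target amplitude
    return kinoform
```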
We propose an optical encryption framework that can encrypt and decrypt images larger than the encrypted image itself by combining two of our methods: the random phase-free method and scaled diffraction. To record the entire image information in the encrypted image, large images conventionally require a random phase to diffuse the object light widely over the encrypted image; however, the random phase gives rise to speckle noise in the decrypted images, which can make them difficult to recognize. To reduce this speckle noise, we apply our random phase-free method to the framework. In addition, we employ scaled diffraction, which calculates light propagation between planes of different sizes by changing the sampling rates.
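The idea of propagating between planes with different sampling rates can be illustrated with the classical single-FFT Fresnel transform, whose output plane is automatically resampled at a pitch set by the propagation distance. The scaled diffraction used in the paper generalizes this to an arbitrary scale ratio, which this minimal sketch does not implement; all names and parameters are illustrative.

```python
import numpy as np

def fresnel_single_fft(u_src, pitch_src, wavelength, z):
    """Single-FFT Fresnel diffraction: the destination plane is sampled at
    pitch_dst = wavelength * z / (N * pitch_src), so source and destination
    planes naturally have different physical sizes."""
    n = u_src.shape[0]
    k = 2.0 * np.pi / wavelength
    x = (np.arange(n) - n / 2) * pitch_src
    X, Y = np.meshgrid(x, x)
    chirp_src = np.exp(1j * k / (2 * z) * (X**2 + Y**2))
    spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u_src * chirp_src)))
    pitch_dst = wavelength * z / (n * pitch_src)
    xd = (np.arange(n) - n / 2) * pitch_dst
    XD, YD = np.meshgrid(xd, xd)
    chirp_dst = (np.exp(1j * k * z) / (1j * wavelength * z)
                 * np.exp(1j * k / (2 * z) * (XD**2 + YD**2)))
    return chirp_dst * spec * pitch_src**2, pitch_dst
```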
In both light optics and electron optics, the amplitude of a wave scattered by an object is an observable that is usually recorded as an intensity distribution in a real-space image or a diffraction image. In contrast, retrieving the phase of a scattered wave is a well-known challenge, usually approached by interferometric or numerical methods. In electron microscopy, owing to constraints in the lens setup, it is particularly difficult to retrieve the phase of a diffraction image. Here, we use a defocused beam generated by a nanofabricated hologram to form a reference wave that can interfere with a diffracted beam. This setup provides an extended interference region with the sample wavefunction in the Fraunhofer plane. As a case study, we retrieve the phase of an electron vortex beam. Beyond this specific example, the approach can be used to retrieve the wavefronts of diffracted beams from a wide range of samples.
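Once such an interference pattern is recorded, the phase can be extracted numerically. The sketch below shows the generic Fourier-transform fringe-analysis step for an off-axis interferogram, not the paper's specific defocused-reference geometry; the carrier offset and window size are hypothetical parameters.

```python
import numpy as np

def retrieve_phase(interferogram, carrier, win):
    """Fourier-transform fringe analysis: isolate the +1-order sideband
    created by the tilted reference, shift its carrier frequency to the
    origin, and take the argument as the wrapped object phase."""
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    c = interferogram.shape[0] // 2
    mask = np.zeros_like(F)
    mask[c - win:c + win, c + carrier - win:c + carrier + win] = 1.0
    sideband = np.roll(F * mask, -carrier, axis=1)   # carrier -> origin
    obj = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.angle(obj)  # wrapped phase; unwrap in a separate step
```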
Deep learning methods for image quality assessment (IQA) are limited by the small size of existing datasets. Extensive datasets require substantial resources both for generating publishable content and for annotating it accurately. We present a systematic and scalable approach to creating KonIQ-10k, the largest IQA dataset to date, consisting of 10,073 quality-scored images. It is the first in-the-wild database aiming for ecological validity with regard to the authenticity of distortions, the diversity of content, and quality-related indicators. Through crowdsourcing, we obtained 1.2 million reliable quality ratings from 1,459 crowd workers, paving the way for more general IQA models. We propose a novel deep learning model (KonCept512), which shows excellent generalization beyond the test set (0.921 SROCC) to the current state-of-the-art database LIVE-in-the-Wild (0.825 SROCC). The model derives its core performance from the InceptionResNet architecture, trained at a higher resolution (512x384) than previous models. Correlation analysis shows that KonCept512 performs similarly to having 9 subjective scores for each test image.
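The backbone and input resolution follow the abstract; the head layer sizes, dropout rates, and loss below are assumptions added for illustration, so this is a KonCept512-style sketch rather than the authors' exact model.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionResNetV2

def build_koncept512_like(input_shape=(384, 512, 3)):
    """Sketch of a KonCept512-style IQA regressor: an InceptionResNet-V2
    backbone at 512x384 input with a small fully connected head that
    predicts a mean opinion score (MOS)."""
    base = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=input_shape, pooling="avg")
    x = layers.Dense(2048, activation="relu")(base.output)   # head sizes are assumptions
    x = layers.Dropout(0.25)(x)
    x = layers.Dense(1024, activation="relu")(x)
    x = layers.Dropout(0.25)(x)
    x = layers.Dense(256, activation="relu")(x)
    mos = layers.Dense(1, activation="linear", name="mos")(x)
    model = Model(base.input, mos)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="huber")
    return model
```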
We propose an iterative improvement method for the Harrow-Hassidim-Lloyd (HHL) algorithm for solving linear systems of equations; it is a quantum-classical hybrid algorithm. Accuracy is essential when solving a linear system, but the accuracy of the HHL algorithm is limited by the number of qubits used to express the eigenvalues of the matrix. Our iterative method improves the accuracy of the HHL solution beyond the limit imposed by the number of qubits. In a practical HHL algorithm, a huge number of measurements is required to obtain good accuracy even with a sufficient number of qubits for the eigenvalue representation, because the solution is statistically estimated from the measurements; our iterative method can also reduce the number of measurements. Moreover, the sign information of each eigenstate of the solution, although significant, is lost once a measurement is made, so a naive iterative HHL method may slow down, especially when the estimated solution contains wrong signs. In this paper, we propose and evaluate an improved iterative method for the HHL algorithm that is robust against this sign-information loss, in terms of both the number of iterations and the computational accuracy.
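The refinement idea can be sketched classically: treat each HHL call as a solver whose accuracy is limited by eigenvalue quantization and shot noise, and wrap it in a residual-correction loop. The stand-in solver below emulates those two error sources; it does not model the sign-loss problem that the paper's robust method addresses, and all names and parameters are illustrative.

```python
import numpy as np

def quantized_solve(A, b, phase_bits=4, shot_noise=0.02):
    """Classical stand-in for one HHL solve: eigenvalues are rounded to a
    `phase_bits`-bit register (the QPE precision limit) and measurement
    shot noise is emulated with Gaussian noise. Assumes A is symmetric and
    well conditioned, so no eigenvalue rounds to zero."""
    rng = np.random.default_rng()
    w, V = np.linalg.eigh(A)
    scale = np.abs(w).max()
    w_q = np.round(w / scale * 2**phase_bits) * scale / 2**phase_bits
    x = V @ ((V.T @ b) / w_q)
    return x + shot_noise * np.linalg.norm(x) * rng.standard_normal(x.shape)

def iterative_refinement(A, b, iters=10):
    """Outer classical loop: solve the residual equation r = b - A x and
    accumulate corrections, driving the error below the single-solve limit."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        r = b - A @ x            # residual of the current estimate
        x = x + quantized_solve(A, r)
    return x
```

Because each correction is computed from the current residual, the per-solve error shrinks together with the residual, which is why the loop can surpass the accuracy of a single quantized solve.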
Customization of processor architectures through Instruction Set Extensions (ISEs) is an effective way to meet the growing performance demands of embedded applications. A high-quality ISE generation approach needs to obtain results close to those achieved by experienced designers, particularly for complex applications that exhibit regularity: expert designers are able to manually exploit such regularity in the data-flow graphs to generate high-quality ISEs. In this paper, we present ISEGEN, an approach that identifies high-quality ISEs by iterative improvement, following the basic principles of the well-known Kernighan-Lin (K-L) min-cut heuristic. Experimental results on a number of MediaBench, EEMBC, and cryptographic applications show that our approach matches the quality of the optimal solution obtained by exhaustive search. We also show that our ISEGEN technique is on average 20x faster than a genetic formulation that generates equivalent solutions. Furthermore, the ISEs identified by our technique achieve 35% more speedup than the genetic solution on a large cryptographic application (AES) by effectively exploiting its regular structure.
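A K-L-flavored pass for this problem can be sketched as follows: toggle data-flow-graph nodes in or out of the candidate instruction, always taking the best move (even downhill), lock each moved node, and keep the best selection seen during the pass. The `merit` function (e.g., estimated software latency minus hardware latency) and the `legal` predicate (e.g., convexity and I/O-port constraints) are user-supplied placeholders here; this is a generic sketch, not ISEGEN's exact gain function. Nodes are assumed to be hashable, mutually comparable IDs.

```python
def kl_ise_pass(nodes, selection, merit, legal):
    """One Kernighan-Lin-style pass: tentatively toggle each unlocked node
    in or out of the candidate instruction, apply the move with the best
    merit (even if it is downhill), lock the moved node, and finally keep
    the best selection seen anywhere in the pass."""
    current, locked = set(selection), set()
    best_merit, best_sel = merit(current), set(current)
    while len(locked) < len(nodes):
        moves = [(merit(current ^ {v}), v) for v in nodes
                 if v not in locked and legal(current ^ {v})]
        if not moves:
            break
        m, v = max(moves)        # best move, possibly downhill
        current ^= {v}           # toggle membership of v
        locked.add(v)
        if m > best_merit:
            best_merit, best_sel = m, set(current)
    return best_sel, best_merit

def isegen_like(nodes, merit, legal, passes=10):
    """Repeat passes from the improved selection until no pass helps."""
    sel, score = set(), merit(set())
    for _ in range(passes):
        new_sel, new_score = kl_ise_pass(nodes, sel, merit, legal)
        if new_score <= score:
            break
        sel, score = new_sel, new_score
    return sel, score
```

Allowing downhill moves within a pass is what lets K-L-style search escape the local optima that purely greedy subgraph growing gets stuck in.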