In this work, we propose using camera arrays coupled with coherent illumination as an effective method of improving the spatial resolution of long-distance images by a factor of ten and beyond. Recent advances in ptychography have demonstrated that one can image beyond the diffraction limit of the objective lens in a microscope. We apply a similar imaging scheme to image beyond the diffraction limit in long-range imaging, emulating a camera array with a single camera attached to an X-Y translation stage. We show that an appropriate phase-retrieval-based reconstruction algorithm can effectively recover the lost high-resolution details from the multiple low-resolution acquired images. We analyze the effects of noise, the required degree of image overlap, and the effect of increasing synthetic aperture size on reconstructed image quality. We show that coherent camera arrays have the potential to greatly improve imaging performance: our simulations show that resolution gains of 10x and more are achievable, and experimental results from our proof-of-concept systems show resolution gains of 4x-7x for real scenes. Finally, we introduce and analyze in simulation a new strategy to capture macroscopic Fourier ptychography images in a single snapshot, albeit using a camera array.
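The reconstruction described above follows the standard Fourier ptychography recipe: each (real or emulated) camera position samples a shifted pupil in the Fourier plane and records an intensity-only image, and an alternating-projection loop stitches the pupils into one synthetic aperture. A minimal, self-contained sketch of that recipe follows; it is not the authors' actual pipeline, and the grid size, pupil radius, and pupil spacing are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Hypothetical high-resolution target (random amplitude object for illustration)
obj = rng.random((N, N))
F_obj = np.fft.fftshift(np.fft.fft2(obj))

def pupil(cy, cx, r=12):
    """Circular pupil (one camera's aperture) centered at (cy, cx) in Fourier space."""
    y, x = np.ogrid[:N, :N]
    return (y - cy) ** 2 + (x - cx) ** 2 <= r ** 2

# Emulated camera array: a 3x3 grid of heavily overlapping aperture positions
centers = [(N // 2 + dy, N // 2 + dx) for dy in (-8, 0, 8) for dx in (-8, 0, 8)]

# Each camera records only a low-resolution *intensity* image (phase is lost)
meas = [np.abs(np.fft.ifft2(np.fft.ifftshift(F_obj * pupil(cy, cx))))
        for cy, cx in centers]

def residual(F):
    """Total mismatch between predicted and measured low-res magnitudes."""
    return sum(np.linalg.norm(
        np.abs(np.fft.ifft2(np.fft.ifftshift(F * pupil(cy, cx)))) - m)
        for (cy, cx), m in zip(centers, meas))

# Initialize the synthetic-aperture spectrum from the central measurement,
# restricted to the union of all pupils
union = np.zeros((N, N), dtype=bool)
for cy, cx in centers:
    union |= pupil(cy, cx)
F_est = np.fft.fftshift(np.fft.fft2(meas[4])) * union

res_init = residual(F_est)
# Alternating projections: enforce each measured magnitude, keep the running phase
for _ in range(50):
    for (cy, cx), m in zip(centers, meas):
        P = pupil(cy, cx)
        lo = np.fft.ifft2(np.fft.ifftshift(F_est * P))
        lo = m * np.exp(1j * np.angle(lo))              # magnitude constraint
        F_est[P] = np.fft.fftshift(np.fft.fft2(lo))[P]  # spectrum update
res_final = residual(F_est)

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(F_est)))   # synthetic-aperture image
```

The overlap between adjacent pupils in `centers` mirrors the image-overlap requirement the abstract analyzes; with too little overlap the phase retrieval fails to converge.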
Recently it has been proposed that a planar slab of material whose permittivity and permeability both equal -1 could bring not only the propagating fields associated with a source to a focus, but could also refocus the nonpropagating near fields, thereby achieving a subdiffraction image. In this work we discuss the sensitivity of the subwavelength focus to various slab parameters, pointing out the connection to slab plasmon modes. We also note and resolve a paradox associated with the perfect imaging of a point source. We conclude that subwavelength resolution is achievable with available technology, but only by implementation of a critical set of design parameters.
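For context, the ideal-lens behavior that the slab parameters perturb can be stated compactly. The following is the standard Pendry-limit result, not this work's parameter analysis:

```latex
% An evanescent component with transverse wavenumber k_x > \omega/c decays in
% free space as e^{-\kappa z}, with
\kappa = \sqrt{k_x^{2} - \omega^{2}/c^{2}} .
% In the ideal limit \varepsilon = \mu = -1, a slab of thickness d transmits
% such components with
T(k_x) = e^{+\kappa d} ,
% exactly undoing the free-space decay e^{-\kappa d}, so all spatial
% frequencies refocus. Any deviation |\delta| of \varepsilon from -1 excites
% the slab plasmon modes and cuts the restored band off at a maximum k_x that
% grows only logarithmically in 1/|\delta|, which is why the achievable focus
% depends so critically on the slab parameters.
```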
We propose a novel quantum diffraction imaging technique whereby one photon of an entangled pair is diffracted off a sample and detected in coincidence with its twin. The image is obtained by scanning the photon that did not interact with matter. We show that when a dynamical quantum system interacts with an external field, the phase information is imprinted in the state of the field in a detectable way. The contribution to the signal from photons that interact with the sample scales as $\propto I_{p}^{1/2}$, where $I_{p}$ is the source intensity, compared to $\propto I_{p}$ for classical diffraction. This makes weak-field imaging possible, avoiding damage to delicate samples. A Schmidt decomposition of the state of the field can be used for image enhancement by reweighting the contributions of the Schmidt modes.
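For a discretized two-photon amplitude, the Schmidt decomposition is simply the singular value decomposition of the coefficient matrix. A toy sketch follows; the random state and the flat reweighting scheme are illustrative, not the authors' enhancement procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-photon amplitude C[i, j] on a d x d discretized mode basis
d = 8
C = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
C /= np.linalg.norm(C)  # normalize the joint state

# Schmidt decomposition = SVD of the coefficient matrix:
#   C[i, j] = sum_k s_k * U[i, k] * Vh[k, j],
# with Schmidt coefficients s_k >= 0 and sum_k s_k^2 = 1
U, s, Vh = np.linalg.svd(C)

# Illustrative reweighting: flatten the Schmidt spectrum so weak modes,
# which carry fine image detail, contribute equally
w = np.ones_like(s)
C_rw = (U * w) @ Vh
C_rw /= np.linalg.norm(C_rw)
```

The orthonormal columns of `U` and rows of `Vh` are the Schmidt modes of the two photons; reweighting amounts to replacing `s` before resynthesizing the state.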
Direct imaging and spectral characterization of exoplanets using extreme adaptive optics (ExAO) is a key science goal of future extremely large telescopes and space observatories. However, quasi-static wavefront errors will limit the sensitivity of this endeavor. Additional limitations for ground-based telescopes arise from residual AO-corrected atmospheric wavefront errors, generating millisecond-lifetime speckles that average into a halo over a long exposure. A solution to both of these problems is to use the science camera of an ExAO system as a wavefront sensor to perform a fast measurement and correction method to minimize these aberrations as soon as they are detected. We develop the framework for one such method based on the self-coherent camera (SCC) to be applied to ground-based telescopes, called the Fast Atmospheric SCC Technique (FAST). We show that with the use of a specially designed coronagraph and a coherent differential imaging algorithm, recording images every few milliseconds allows for a subtraction of atmospheric and static speckles while maintaining close-to-unity algorithmic exoplanet throughput. Detailed simulations reach a contrast close to the photon noise limit after 30 seconds for a 1% bandpass in H band on both 0$^\text{th}$ and 5$^\text{th}$ magnitude stars. For the 5$^\text{th}$ magnitude case, this is about 110 times better in raw contrast than what is currently achieved by ExAO instruments if we extrapolate to an hour of observing time, illustrating that the sensitivity improvement from this method could play an essential role in the future detection and characterization of lower-mass exoplanets.
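The self-coherent camera underlying FAST encodes the speckle field in fringes: an off-axis hole in the Lyot stop makes the stellar speckles interfere with a tilted reference beam, so a single Fourier sidelobe of one science image isolates the complex speckle field. A simplified monochromatic sketch follows; the grid size, speckle bandwidth, reference amplitude, and fringe frequency are assumed values, and atmospheric evolution between frames is ignored:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128

# Band-limited complex speckle field A_S (stellar residuals), built in Fourier space
fy, fx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
S_hat = ((rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
         * (fy ** 2 + fx ** 2 <= 10 ** 2))
A_S = np.fft.ifft2(np.fft.ifftshift(S_hat))

# Reference beam from the off-axis Lyot-stop hole: a tilted plane wave
a, k0 = 0.5, 40                                # amplitude and fringe frequency (assumed)
x = np.arange(N)
A_R = a * np.exp(2j * np.pi * k0 * x[None, :] / N)

I = np.abs(A_S + A_R) ** 2                     # one fringed science exposure

# Coherent differential imaging step: isolate one fringe sidelobe in Fourier space
I_hat = np.fft.fftshift(np.fft.fft2(I))
mask = fy ** 2 + (fx + k0) ** 2 <= 12 ** 2     # sidelobe centered at fx = -k0
side = np.roll(I_hat * mask, k0, axis=1)       # recenter the sidelobe
E_est = np.fft.ifft2(np.fft.ifftshift(side))   # proportional to the speckle field
```

Because the fringe frequency `k0` exceeds the speckle bandwidth, the sidelobe separates cleanly and `E_est` recovers `a * A_S`; knowing the complex field, rather than just its intensity, is what allows the speckle subtraction described above.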
A structural understanding of whole cells in three dimensions at high spatial resolution remains a significant challenge and, in the case of X-rays, has been limited by radiation damage. By alleviating this limitation, cryogenic coherent diffraction imaging (cryo-CDI) could bridge the important resolution gap between optical and electron microscopy in bio-imaging. Here, we report for the first time 3D cryo-CDI of a whole, frozen-hydrated cell - in this case a Neospora caninum tachyzoite - using 8 keV X-rays. Our 3D reconstruction reveals the surface and internal morphology of the cell, including its complex, polarized sub-cellular architecture, with a 3D resolution of ~75-100 nm, which is presently limited by the coherent X-ray flux and detector size. Given the imminent improvement in coherent X-ray flux at facilities worldwide, our work forecasts the possibility of routine 3D imaging of frozen-hydrated cells with spatial resolutions in the tens of nanometres.
mmWave radars offer excellent depth resolution owing to their high bandwidth at mmWave radio frequencies. Yet they suffer intrinsically from poor angular resolution, an order of magnitude worse than that of camera systems, and are therefore not a capable 3-D imaging solution in isolation. We propose Metamoran, a system that combines the complementary strengths of radar and camera systems to obtain depth images at high azimuthal resolution at distances of several tens of meters with high accuracy, all from a single fixed vantage point. Metamoran enables rich long-range depth imaging outdoors, with applications to roadside safety infrastructure, surveillance, and wide-area mapping. Our key insight is to use the high azimuth resolution of cameras, via computer vision techniques including image segmentation and monocular depth estimation, to obtain object shapes, and to use these as priors for our novel specular beamforming algorithm. We also design this algorithm to work in cluttered environments with weak reflections and in partially occluded scenarios. We perform a detailed evaluation of Metamoran's depth imaging and sensing capabilities in 200 diverse scenes in a major U.S. city. Our evaluation shows that Metamoran estimates the depth of an object up to 60~m away with a median error of 28~cm, an improvement of 13$\times$ over a naive radar+camera baseline and 23$\times$ over monocular depth estimation.
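Metamoran's specular beamforming algorithm is not spelled out here, but the kind of azimuth estimate it improves on, steering a uniform linear mmWave array with delay-and-sum beamforming, can be sketched as follows. The array size, carrier frequency, and target angle are illustrative, and the snapshot is noise-free:

```python
import numpy as np

c = 3e8
f = 77e9                       # typical automotive mmWave carrier (assumed)
lam = c / f
K = 16                         # array elements at half-wavelength spacing
spacing = lam / 2
pos = np.arange(K) * spacing

theta_true = np.deg2rad(12.0)  # azimuth of a single specular reflector (assumed)
# Narrowband array snapshot from that reflector
snap = np.exp(2j * np.pi * pos * np.sin(theta_true) / lam)

# Delay-and-sum: scan steering angles and pick the power peak
thetas = np.deg2rad(np.linspace(-60, 60, 2401))               # 0.05 deg grid
A = np.exp(2j * np.pi * np.outer(pos, np.sin(thetas)) / lam)  # steering matrix
power = np.abs(A.conj().T @ snap) ** 2
theta_hat = thetas[np.argmax(power)]
```

With only 16 elements, the beam's main lobe is several degrees wide, which is the order-of-magnitude angular-resolution gap (relative to cameras) that the shape priors are meant to close.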