
Sparsity-based single-shot sub-wavelength coherent diffractive imaging

Added by Alexander Szameit
Publication date: 2011
Field: Physics
Language: English





We present the experimental reconstruction of sub-wavelength features from the far-field intensity of sparse optical objects: sparsity-based sub-wavelength imaging combined with phase retrieval. As examples, we demonstrate the recovery of random and ordered arrangements of 100 nm features with a resolution of 30 nm, using an illuminating wavelength of 532 nm. Our algorithmic technique relies on minimizing the number of degrees of freedom; it works in real time, requires no scanning, and can be implemented in all existing microscopes, optical and non-optical.
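To make the sparsity-based phase-retrieval idea concrete, the following NumPy sketch alternates between enforcing the measured far-field magnitude and projecting the object estimate onto a fixed number of nonzero pixels. It is a minimal illustration under assumed parameters (grid size, sparsity level, iteration count), not the authors' actual reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated ground truth: a sparse object (a few bright sub-wavelength features) ---
N, K = 64, 6                      # grid size and number of nonzero features (assumed values)
obj = np.zeros((N, N))
obj.flat[rng.choice(N * N, K, replace=False)] = rng.uniform(0.5, 1.0, K)

# Far-field measurement: only the Fourier magnitude is available, the phase is lost
magnitude = np.abs(np.fft.fft2(obj))

def keep_k_largest(x, k):
    """Hard sparsity projection: keep the k largest-magnitude pixels, zero the rest."""
    thresh = np.sort(np.abs(x).ravel())[-k]
    return np.where(np.abs(x) >= thresh, x.real, 0.0)

# --- Sparsity-constrained alternating projections (Fienup-style sketch) ---
estimate = rng.uniform(size=(N, N))               # random initial guess
for _ in range(500):
    F = np.fft.fft2(estimate)
    F = magnitude * np.exp(1j * np.angle(F))      # enforce the measured far-field magnitude
    estimate = np.fft.ifft2(F)
    estimate = keep_k_largest(estimate, K)        # enforce sparsity (few degrees of freedom)
    estimate = np.clip(estimate, 0, None)         # enforce non-negativity

print("far-field magnitude mismatch:",
      np.linalg.norm(np.abs(np.fft.fft2(estimate)) - magnitude) / np.linalg.norm(magnitude))
```

A hard projection onto the K largest pixels is the simplest possible sparsity prior; in practice, smoother sparsity-promoting penalties tend to be more robust to noise.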



Related research

Wenlin Gong, Shensheng Han (2013)
For conventional imaging, the resolution limit is given by the Rayleigh criterion. Exploiting prior knowledge of the imaging object's sparsity and of the fixed optical system, imaging beyond the conventional Rayleigh limit, backed up by numerical simulation and experiments, is achieved by illuminating the object with single-shot thermal light and detecting the object's information at the imaging plane with a sparse array of single-pixel detectors. The quality of sub-Rayleigh imaging with sparse detection is also shown to depend on the effective number of single-pixel detectors and on the detection signal-to-noise ratio at the imaging plane.
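As a rough illustration of this sparse-detection scheme (not the authors' actual setup), the sketch below models the Rayleigh-limited system as a Gaussian blur, samples the blurred image at a small number of randomly placed single-pixel detectors, and recovers the sparse object with iterative soft thresholding (ISTA). All sizes and noise levels are assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- 1-D toy model (assumed parameters, for illustration only) ---
n, k, m = 128, 5, 40                 # object length, sparsity, number of single-pixel detectors
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.0, k)

# Rayleigh-limited optical system: modeled here as a Gaussian blur matrix
grid = np.arange(n)
sigma = 4.0
H = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / sigma) ** 2)

# Sparse-array single-pixel detection: keep only m randomly chosen rows (detector positions)
rows = rng.choice(n, m, replace=False)
A = H[rows]
y = A @ x_true + 0.01 * rng.standard_normal(m)   # detection with a little noise

# --- ISTA: minimize 0.5*||A x - y||^2 + lam*||x||_1 ---
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print("support recovered:", set(np.flatnonzero(x > 0.1)) == set(np.flatnonzero(x_true)))
```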
Attosecond science has been transforming our understanding of electron dynamics in atoms, molecules and solids. To date, however, almost all attoscience experiments have been based on spectroscopic measurements, because attosecond pulses have intrinsically very broad spectra due to the uncertainty principle and are incompatible with conventional imaging systems. Here we report an important advance towards achieving attosecond coherent diffractive imaging. Using simulated attosecond pulses, we simultaneously reconstruct the spectrum, 17 probes and 17 spectral images of extended objects from a set of ptychographic diffraction patterns. We further confirm the principle and feasibility of this method by successfully performing a ptychographic coherent diffractive imaging experiment using a light-emitting diode with a broad spectrum. We believe this work clears the way to an unexplored domain of attosecond imaging science, which could have a far-reaching impact across different disciplines.
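The core ptychographic update that such reconstructions build on can be sketched in a few lines; the function below performs a single-wavelength, ePIE-style update at one scan position. It is only a simplified sketch: the multi-wavelength reconstruction described above additionally recovers the spectrum and one probe and image per spectral channel, which is not shown here.

```python
import numpy as np

def epie_update(obj, probe, diff_amp, pos, alpha=1.0, beta=1.0):
    """One ePIE-style update at a single scan position (single-wavelength sketch).

    obj      : complex 2-D object estimate (modified in place)
    probe    : complex 2-D probe estimate, smaller than obj (modified in place)
    diff_amp : measured diffraction amplitude (sqrt of intensity) for this position
    pos      : (row, col) of the probe's top-left corner within the object
    """
    r, c = pos
    h, w = probe.shape
    patch = obj[r:r + h, c:c + w].copy()          # object patch seen by the probe

    # Forward model: exit wave -> far field
    exit_wave = patch * probe
    F = np.fft.fft2(exit_wave)

    # Replace the modulus with the measurement, keep the phase
    F_new = diff_amp * np.exp(1j * np.angle(F))
    exit_new = np.fft.ifft2(F_new)
    diff = exit_new - exit_wave

    # ePIE update rules for the object patch and the probe
    obj[r:r + h, c:c + w] = patch + alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * diff
    probe += beta * np.conj(patch) / (np.abs(patch).max() ** 2) * diff
    return obj, probe
```

Looping this update over all scan positions for many sweeps, each time replacing the modulus with the diffraction amplitude measured at that position, is what gradually recovers both the object and the probe.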
The inversion of a diffraction pattern offers aberration-free diffraction-limited 3D images without the resolution and depth-of-field limitations of lens-based tomographic systems, the only limitation being radiation damage. We review our experimental results, discuss the fundamental limits of this technique and future plans.
Imaging depth and spectrum have been extensively studied in isolation from each other for decades. Recently, hyperspectral-depth (HS-D) imaging has emerged to capture both simultaneously by combining two different imaging systems: one for depth, the other for spectrum. While accurate, this combinational approach leads to increased form factor, cost and capture time, as well as alignment/registration problems. In this work, departing from the combinational principle, we propose a compact single-shot monocular HS-D imaging method. Our method uses a diffractive optical element (DOE) whose point spread function changes with respect to both depth and spectrum. This enables us to reconstruct spectrum and depth from a single captured image. To this end, we develop a differentiable simulator and a neural-network-based reconstruction that are jointly optimized via automatic differentiation. To facilitate learning the DOE, we present the first HS-D dataset, acquired with a benchtop HS-D imager that provides high-quality ground truth. We evaluate our method in synthetic and real experiments with an experimental prototype and achieve state-of-the-art HS-D imaging results.
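The key physical ingredient is that a single DOE produces a point spread function that depends on both wavelength and depth. The toy Fourier-optics sketch below (an assumed scalar model with illustrative parameters and a random height map, not the paper's optimized DOE) shows this dependence: the DOE phase scales with 1/λ, and the object depth enters as a quadratic defocus phase.

```python
import numpy as np

def doe_psf(doe_height, wavelength, defocus, pupil_radius=1.0, n_refr=1.5):
    """Point spread function of a DOE-coded aperture (scalar Fourier-optics sketch).

    doe_height : 2-D array of DOE surface heights (meters)
    wavelength : illumination wavelength (meters)
    defocus    : depth-dependent defocus coefficient (radians at the pupil edge)
    """
    n = doe_height.shape[0]
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    r2 = x ** 2 + y ** 2
    aperture = r2 <= pupil_radius ** 2

    # DOE phase scales with 1/wavelength; defocus adds a depth-dependent quadratic phase
    phase = 2 * np.pi / wavelength * (n_refr - 1.0) * doe_height + defocus * r2
    pupil = aperture * np.exp(1j * phase)

    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()

# The same DOE yields different PSFs for different wavelengths and depths,
# which is what makes single-shot HS-D reconstruction possible.
rng = np.random.default_rng(2)
doe = rng.uniform(0, 1e-6, (64, 64))                     # random 0-1 um height map (toy DOE)
psf_a = doe_psf(doe, wavelength=450e-9, defocus=0.0)     # blue, in focus
psf_b = doe_psf(doe, wavelength=650e-9, defocus=8.0)     # red, defocused
```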
The applications of present nanoscopy techniques to live-cell imaging are limited by long sampling times and low emitter density. Here we develop a new single-frame wide-field nanoscopy technique based on ghost imaging via sparsity constraints (GISC nanoscopy), in which a spatial random phase modulator is inserted into a wide-field microscope to perform random measurement of the fluorescence signal. The method effectively exploits the sparsity of the fluorescence emitters to enhance the imaging resolution to 80 nm by compressive sensing (CS) reconstruction from a single raw image. An ultra-high emitter density of 143 µm⁻² is achieved while a single-molecule localization precision below 25 nm is maintained. Working with a high density of photo-switchable fluorophores, GISC nanoscopy can therefore reduce the number of sampling frames by orders of magnitude compared with previous single-molecule-localization-based super-resolution imaging methods.
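Conceptually, the random phase modulator turns each emitter into a distinct speckle pattern on the camera, so one raw frame is a compressive measurement of the sparse emitter map. The sketch below replaces the real speckle responses with a random Gaussian matrix and recovers the emitters with orthogonal matching pursuit, one standard CS solver; it does not reproduce the actual GISC calibration or reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# --- Speckle-based measurement model (illustrative stand-in, not the GISC optics) ---
n_pix, n_meas, k = 256, 96, 4                   # emitter grid size, camera pixels used, active emitters
A = rng.standard_normal((n_meas, n_pix))        # each column stands in for one emitter's speckle response
A /= np.linalg.norm(A, axis=0)                  # normalize columns

x_true = np.zeros(n_pix)
x_true[rng.choice(n_pix, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true                                  # one raw speckle frame (noise-free here)

# --- Orthogonal matching pursuit: greedy CS recovery of the sparse emitter map ---
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated column
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(n_pix)
x_hat[support] = coeffs
print("recovered emitter positions:",
      sorted(support) == sorted(np.flatnonzero(x_true).tolist()))
```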