Super-resolution fluorescence microscopy is an important tool in biomedical research for its ability to discern features smaller than the diffraction limit. However, due to its difficult implementation and high cost, universal adoption of super-resolution microscopy has not been feasible. In this paper, we propose and demonstrate a new kind of super-resolution fluorescence microscopy that can be easily implemented and requires neither additional hardware nor complex post-processing. The method is based on the principle of stepwise optical saturation (SOS), where $M$ raw fluorescence images acquired at different excitation powers are linearly combined to generate an image with a $\sqrt{M}$-fold increase in resolution compared with conventional diffraction-limited images. For example, linearly combining (scaling and subtracting) two images obtained at regular powers extends the resolution by a factor of $1.4$ beyond the diffraction limit. The resolution improvement in SOS microscopy is theoretically unbounded but is limited in practice by the signal-to-noise ratio. We perform simulations and experimentally demonstrate super-resolution microscopy with both one-photon (confocal) and multiphoton excitation fluorescence. We show that with the multiphoton modality, SOS microscopy can provide super-resolution imaging deep in scattering samples.
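To make the two-step combination concrete, here is a minimal numerical sketch, assuming the simple saturation model $F(P) \approx a_1 P + a_2 P^2$ implied by the description above; the function name, normalization, and sign handling are illustrative rather than the authors' actual processing code.

```python
import numpy as np

def sos_two_step(img_p1, img_p2, p1, p2):
    """Two-step SOS combination (sketch).

    Assumes per-pixel fluorescence F(P) ~ a1*P + a2*P**2 with a2 < 0
    (weak saturation). Dividing each image by its excitation power and
    subtracting cancels the linear, diffraction-limited term and keeps
    the quadratic term, whose effective PSF is ~1.4x narrower.
    """
    combined = img_p2 / p2 - img_p1 / p1   # leaves a2*(p2 - p1)*(PSF^2 convolved with object)
    return np.clip(-combined, 0.0, None)   # a2 < 0, so negate; clip noise-induced negatives
```

Because the subtraction amplifies noise, the practical resolution gain of such a combination is set by the signal-to-noise ratio of the raw images, consistent with the limitation stated above.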
Fluorescence microscopy is widely used in biological imaging; however, scattering from tissues strongly limits its applicability to shallow depths. In this work we adapt a methodology inspired by stellar speckle interferometry and exploit the optical memory effect to enable fluorescence microscopy through a turbid layer. We demonstrate efficient reconstruction of micrometer-sized fluorescent objects behind a scattering medium in epi-microscopy, and study the specificities of this imaging modality (magnification, field of view, resolution) compared to traditional microscopy. Using a modified phase-retrieval algorithm to reconstruct fluorescent objects from speckle images, we demonstrate robust reconstructions even under relatively low signal-to-noise conditions. This modality is particularly appropriate for imaging in biological media, which are known to exhibit relatively large optical memory ranges, compatible with fields of view of tens of micrometers, and large spectral bandwidths, compatible with fluorescence emission spectra tens of nanometers wide.
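The reconstruction pipeline hinted at above (speckle autocorrelation plus phase retrieval) can be sketched as follows; the hybrid input-output loop, the plain non-negativity constraint, and the parameters n_iter and beta are illustrative assumptions, not the paper's exact modified algorithm.

```python
import numpy as np

def reconstruct_from_speckle(speckle, n_iter=500, beta=0.9, seed=0):
    """Sketch of fluorescent-object recovery from a single speckle image via
    the memory effect. Within the memory-effect range the speckle is the
    object convolved with a random speckle PSF, so its autocorrelation
    (equivalently, its Fourier power spectrum) approximates that of the
    object; the object is then recovered with a Fienup-type hybrid
    input-output (HIO) phase-retrieval loop.
    """
    s = speckle - speckle.mean()
    mag = np.abs(np.fft.fft2(s))                 # ~ object's Fourier magnitude
    rng = np.random.default_rng(seed)
    g = rng.random(speckle.shape)                # random initial object guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        g_new = np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(G))))  # enforce measured magnitude
        violates = g_new < 0                     # object-domain constraint: non-negativity
        g = np.where(violates, g - beta * g_new, g_new)                # HIO update
    return g
```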
In chiral sum frequency generation (C-SFG), the chiral nature of $\chi^{(2)}$ requires the three involved electric fields to be pairwise non-parallel, leading to the traditional non-collinear configuration, which hinders achieving diffraction-limited resolution while utilizing it as a label-free imaging contrast mechanism. Here we propose a collinear C-SFG (CC-SFG) microscopy modality that uses a longitudinally (z-)polarized vectorial field. Label-free chiral imaging with enhanced spatial resolution (~1.4-fold improvement in one lateral direction and in the longitudinal direction over the traditional non-collinear scheme) is demonstrated, providing a new path for SFG microscopy with diffraction-limited resolution for mapping chirality.
Diffraction-unlimited super-resolution imaging critically depends on the switching of fluorophores between at least two states, often induced using intense laser light and special buffers. The high illumination power or UV light required for appropriate blinking kinetics currently hinders live-cell experiments. Recently, so-called self-blinking dyes that switch spontaneously between an open, fluorescent on-state and a closed, colorless off-state were introduced. Here we exploit the synergy between super-resolution optical fluctuation imaging (SOFI) and spontaneously switching fluorophores for 2D functional and volumetric imaging. SOFI tolerates high labeling densities, high on-time ratios, and low signal-to-noise ratios by analyzing higher-order statistics of a few hundred to a thousand frames of stochastically blinking fluorophores. We demonstrate 2D imaging of fixed cells with a uniform resolution of up to 50-60 nm in 6th-order SOFI and characterize the effect of changing experimental conditions. We extend multiplane cross-correlation analysis to 4th order using biplane and 8-plane volumetric imaging, achieving up to 29 (virtual) planes. The low laser excitation intensities needed for self-blinking SOFI are ideal for live-cell imaging. We show proof-of-principle time-resolved imaging by observing slow membrane movements in cells. Self-blinking SOFI provides a route to easy-to-use 2D and 3D high-resolution functional imaging that is robust against artefacts and suitable for live-cell imaging.
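As a rough illustration of the higher-order statistics SOFI relies on, the sketch below computes per-pixel temporal auto-cumulant images from a frame stack; real SOFI pipelines additionally use cross-cumulants between neighboring pixels, deconvolution, and dynamic-range linearization, which are omitted here.

```python
import numpy as np

def sofi_autocumulant(stack, order=2):
    """n-th order SOFI auto-cumulant image from a frame stack (time, y, x).

    Only orders 2-4 are shown as a sketch; the n-th order cumulant of the
    blinking fluctuations narrows the effective PSF, which is the basis of
    the resolution gain described above.
    """
    d = stack - stack.mean(axis=0)          # zero-mean fluctuations per pixel
    if order == 2:
        return (d ** 2).mean(axis=0)        # kappa_2 = temporal variance
    if order == 3:
        return (d ** 3).mean(axis=0)        # kappa_3 = third central moment
    if order == 4:
        m2 = (d ** 2).mean(axis=0)
        m4 = (d ** 4).mean(axis=0)
        return m4 - 3 * m2 ** 2             # kappa_4
    raise ValueError("only orders 2-4 are implemented in this sketch")
```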
Fluorescence microscopy has enabled dramatic developments in modern biology by visualizing biological organisms with micrometer-scale resolution. However, due to the diffraction limit, sub-micron/nanometer features are difficult to resolve. While various super-resolution techniques have been developed to achieve nanometer-scale resolution, they often require either expensive optical setups or specialized fluorophores. In recent years, deep learning has shown the potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images. For accurate results, conventional deep learning techniques require thousands of images as a training dataset. Obtaining large datasets from biological samples is often not feasible due to photobleaching of fluorophores, phototoxicity, and dynamic processes occurring within the organism. Therefore, achieving deep learning-based super-resolution using small datasets is challenging. We address this limitation with a new convolutional neural network-based approach that is successfully trained with small datasets and achieves super-resolution images. We captured 750 images in total from 15 different fields of view (FOVs) as the training dataset to demonstrate the technique. In each FOV, a single target image is generated using the super-resolution radial fluctuations (SRRF) method. As expected, this small dataset failed to produce a usable model with a traditional super-resolution architecture. However, using the new approach, a network can be trained to produce super-resolution images from this small dataset. This deep learning model can be applied to other biomedical imaging modalities, such as MRI and X-ray imaging, where obtaining large training datasets is challenging.
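For orientation, a minimal PyTorch-style sketch of such an image-to-image network and its training loop is shown below; the layer counts, residual design, loss, and hyper-parameters are illustrative assumptions and not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class SmallSRNet(nn.Module):
    """Illustrative compact residual CNN mapping a diffraction-limited image
    to a super-resolved estimate; not the paper's actual architecture."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)           # learn only the missing high-frequency detail

def train(model, loader, epochs=100, lr=1e-4):
    """loader yields (diffraction-limited, SRRF-target) pairs as (N, 1, H, W)
    tensors; optimizer and L1 loss are illustrative choices."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for lowres, target in loader:
            opt.zero_grad()
            loss = loss_fn(model(lowres), target)
            loss.backward()
            opt.step()
    return model
```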
One of the main characteristics of optical imaging systems is the spatial resolution, which is restricted by the diffraction limit to approximately half the wavelength of the incident light. Along with the recently developed classical super-resolution techniques, which aim at breaking the diffraction limit in classical systems, there is a class of quantum super-resolution techniques that leverage the non-classical nature of the optical signals radiated by quantum emitters: so-called antibunching super-resolution microscopy. This approach can ensure a factor of $\sqrt{n}$ improvement in the spatial resolution by measuring the $n$-th order autocorrelation function. The main bottleneck of antibunching super-resolution microscopy is the time-consuming acquisition of multi-photon event histograms. We present a machine learning-assisted approach for the realization of rapid antibunching super-resolution imaging and demonstrate a 12-fold speed-up compared to conventional, fitting-based autocorrelation measurements. The developed framework paves the way to the practical realization of scalable quantum super-resolution imaging devices that can be compatible with various types of quantum emitters.
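As background for the antibunching measurement, the sketch below estimates the zero-delay second-order autocorrelation $g^{(2)}(0)$ per pixel from pulsed two-detector (Hanbury Brown-Twiss) click records; the estimator and variable names are illustrative, and the machine-learning acceleration itself is not shown.

```python
import numpy as np

def g2_zero(clicks_a, clicks_b, n_pulses):
    """Estimate g^(2)(0) from per-pulse click records (0/1 arrays) on the two
    arms of a Hanbury Brown-Twiss setup under pulsed excitation.

    Values below 1 indicate photon antibunching, the non-classical signature
    that antibunching super-resolution microscopy exploits.
    """
    p_a = clicks_a.sum() / n_pulses                           # singles probability, arm A
    p_b = clicks_b.sum() / n_pulses                           # singles probability, arm B
    p_ab = np.logical_and(clicks_a, clicks_b).sum() / n_pulses  # coincidence probability
    return p_ab / (p_a * p_b)
```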