When light travels through scattering media, speckles (spatially random distributions of fluctuating intensity) are formed by the interference of light travelling along different optical paths, preventing perception of the structure, absolute location, and dimensions of a target within or on the other side of the medium. Prevailing techniques such as wavefront shaping, optical phase conjugation, scattering-matrix measurement, and speckle autocorrelation imaging can, in the absence of prior information, recover only the target structure. Here we show that a scattering medium can be conceptualized as an assembly of randomly packed pinhole cameras, and the corresponding speckle pattern as a superposition of randomly shifted pinhole images. This provides a new perspective that bridges target, scattering medium, and speckle pattern, allowing one to localize and profile a target quantitatively from speckle patterns perceived on the other side of the scattering medium, which is impossible with existing methods. The method also allows us to interpret phenomena of diffusive light that are otherwise challenging to understand: for example, why the morphological appearance of speckle patterns changes with the target, why information is difficult to extract from thick scattering media, and what determines the capability of seeing through scattering media. The concept, whilst in its infancy, opens a new door to characterizing scattering media and extracting information through them in real time.
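A minimal numerical sketch of the "randomly packed pinhole cameras" picture described above (illustrative only; the grid size, binary target, and number of pinholes are assumptions, not the authors' implementation): each pinhole projects a shifted copy of the target, and the camera-side pattern is the shift-and-sum of those copies, equivalent to convolving the target with a random point-spread function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary target (a small cross) on a 64x64 grid.
target = np.zeros((64, 64))
target[28:36, 31:33] = 1.0
target[31:33, 28:36] = 1.0

# Model the scattering medium as N randomly placed "pinholes":
# each contributes a randomly shifted copy of the target.
n_pinholes = 200
pattern = np.zeros_like(target)
for _ in range(n_pinholes):
    dy, dx = rng.integers(-20, 21, size=2)
    pattern += np.roll(np.roll(target, dy, axis=0), dx, axis=1)

# Within the memory-effect range, the autocorrelation of this pattern
# approximates the autocorrelation of the target (the basis of speckle
# correlation imaging).
F = np.fft.fft2(pattern - pattern.mean())
autocorr = np.fft.fftshift(np.fft.ifft2(np.abs(F) ** 2).real)
print(pattern.shape, autocorr.max())
```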
Cassegrain designs can be used to build thin lenses. We analyze the relationships between system thickness, the aperture sizes of the two mirrors, and the field-of-view (FoV) size. Our analysis shows that decreasing the lens thickness imposes tight constraints on the aperture and FoV sizes. To mitigate this limitation, we propose filling the gap between the primary and secondary mirrors with a high-index material. The Cassegrain optics cuts the track length in half, and the high-index material reduces ray angles and heights; consequently, the incident ray angle can be increased, i.e., the FoV is extended. Defining the telephoto ratio as the ratio of lens thickness to focal length, we achieve telephoto ratios as small as 0.43 for a visible Cassegrain thin lens and 1.20 for an infrared Cassegrain thin lens. To achieve arbitrary FoV coverage, we present a strategy of integrating multiple thin lenses on one plane, with each unit covering a different FoV region. To avoid physically tilting each unit, we propose beam steering with a metasurface. By image stitching, we obtain wide-FoV images.
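A back-of-the-envelope check of the telephoto ratio defined in this abstract (lens thickness divided by focal length). The dimensions below are assumptions chosen only to illustrate the arithmetic and the effect of folding the optical path; only the reported ratios of 0.43 and 1.20 come from the abstract.

```python
def telephoto_ratio(thickness_mm: float, focal_length_mm: float) -> float:
    """Telephoto ratio = lens thickness / focal length (smaller is thinner)."""
    return thickness_mm / focal_length_mm

# Hypothetical dimensions: folding the path with a Cassegrain roughly halves
# the track length relative to an unfolded lens of the same focal length.
focal_length = 50.0          # mm, assumed
unfolded_thickness = 50.0    # mm, assumed (track ~ focal length)
folded_thickness = unfolded_thickness / 2  # Cassegrain folding

print(telephoto_ratio(unfolded_thickness, focal_length))  # 1.0
print(telephoto_ratio(folded_thickness, focal_length))    # 0.5, in the regime of the reported 0.43
```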
Line segment detection is essential for high-level tasks in computer vision and robotics. Currently, most state-of-the-art (SOTA) methods are dedicated to detecting straight line segments in undistorted pinhole images, so distortions in fisheye or spherical images can severely degrade their performance. Targeting unified line segment detection (ULSD) for both distorted and undistorted images, we propose representing line segments with the Bezier curve model. Line segment detection is then tackled by Bezier curve regression with an end-to-end network, which is model-free and requires no undistortion preprocessing. Experimental results on pinhole, fisheye, and spherical image datasets validate the superiority of the proposed ULSD over SOTA methods in both accuracy and efficiency (40.6 fps on pinhole images). The source code is available at https://github.com/lh9171338/Unified-LineSegment-Detection.
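A sketch of the Bezier-curve line representation this abstract refers to (the curve order, sample count, and example control points are assumptions, not taken from the paper): a segment is parameterized by a few control points, so a network can regress those control points directly instead of the endpoints of a straight line, which lets the same representation cover both undistorted and distorted segments.

```python
import numpy as np

def bezier_points(control_pts: np.ndarray, n_samples: int = 32) -> np.ndarray:
    """Evaluate a Bezier curve from (k+1, 2) control points via de Casteljau."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]            # (n, 1)
    pts = np.repeat(control_pts[None, :, :], n_samples, 0)   # (n, k+1, 2)
    while pts.shape[1] > 1:
        pts = (1 - t)[:, :, None] * pts[:, :-1] + t[:, :, None] * pts[:, 1:]
    return pts[:, 0, :]                                       # (n, 2)

# Cubic example: a straight pinhole-image segment uses collinear control
# points; a fisheye-distorted segment bends them off the chord.
straight = np.array([[0, 0], [10, 10], [20, 20], [30, 30]], dtype=float)
curved = np.array([[0, 0], [10, 14], [20, 26], [30, 30]], dtype=float)
print(bezier_points(straight)[::8])
print(bezier_points(curved)[::8])
```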
This paper presents a generic 6-DOF camera pose estimation method that can be used for both pinhole and fish-eye cameras. Unlike existing methods, our method employs the relative positions of 3D points rather than their absolute coordinates in the world coordinate system, and it yields a unique solution. The application scope of the POSIT (Pose from Orthography and Scaling with Iteration) algorithm is generalized to fish-eye cameras by combining it with the radially symmetric projection model. The image-point relationship between the pinhole camera and the fish-eye camera is derived from their projection models. A general pose expression that fits different cameras can be obtained from four non-coplanar object points and their corresponding image points. Accurate estimation results are then calculated iteratively. Experimental results on synthetic and real data show that the pose estimates of our method are more stable and accurate than those of state-of-the-art methods. The source code is available at https://github.com/k032131/EPOSIT.
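A hedged sketch of the kind of image-point conversion described above, assuming the common equidistant radially symmetric fisheye model r = f·theta; the focal length and sample point are made up, and this is not the authors' full iterative pose solver, only the fisheye-to-pinhole point mapping that makes a POSIT-style method applicable.

```python
import numpy as np

def fisheye_to_pinhole(u: float, v: float, f: float) -> tuple[float, float]:
    """Map an equidistant-fisheye image point (r = f*theta) to the point a
    pinhole camera with the same focal length would see (r = f*tan(theta))."""
    r = np.hypot(u, v)
    if r == 0.0:
        return 0.0, 0.0
    theta = r / f                  # incidence angle from the optical axis
    r_pinhole = f * np.tan(theta)  # valid for theta < 90 degrees
    scale = r_pinhole / r
    return u * scale, v * scale

# Assumed focal length (pixels) and an example fisheye image point.
print(fisheye_to_pinhole(300.0, 100.0, 600.0))
```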
Super-resolution imaging with advanced optical systems has been revolutionizing technical analysis in various fields, from the biological to the physical sciences. However, many objects are hidden by strongly scattering media, such as rough wall corners or biological tissues, that scramble light paths, create speckle patterns, and hinder object visualization, let alone super-resolution imaging. Here, we realize a method for non-invasive super-resolution imaging through scattering media based on the stochastic optical scattering localization imaging (SOSLI) technique. Simply by capturing multiple speckle patterns of photo-switchable emitters in our demonstration, the stochastic approach utilizes the speckle correlation properties of the scattering medium to retrieve an image with more than five-fold resolution enhancement over the diffraction limit, while posing no fundamental limit on achieving even higher spatial resolution. More importantly, we demonstrate SOSLI for non-invasive super-resolution imaging not only through optical diffusers, i.e., static scattering media, but also through biological tissues, i.e., dynamic scattering media with decorrelation of up to 80%. Our approach paves the way to non-invasively visualizing various samples behind scattering media at unprecedented levels of detail.
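An illustrative sketch of the stochastic-localization principle this abstract builds on, not the published SOSLI pipeline: it is a stripped-down, STORM-style localization loop in which only a random subset of emitters is "on" per frame, each frame's diffraction-limited spot is localized with sub-pixel precision, and the accumulated localizations separate emitters closer than the spot width. The emitter positions, spot width, noise level, and centroid localizer are all assumptions, and the speckle-correlation step through the scattering medium is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical emitters closer together than the assumed spot width.
emitters = np.array([[32.0, 30.0], [32.0, 34.0]])
sigma = 4.0                      # diffraction-limited spot width (pixels)
yy, xx = np.mgrid[0:64, 0:64]

localizations = []
for _ in range(500):             # many frames, each with one random emitter "on"
    ey, ex = emitters[rng.integers(len(emitters))]
    frame = np.exp(-((yy - ey) ** 2 + (xx - ex) ** 2) / (2 * sigma ** 2))
    frame += 0.02 * rng.standard_normal(frame.shape)   # sensor noise
    w = np.where(frame > 0.2, frame, 0.0)              # simple threshold
    cy = (yy * w).sum() / w.sum()                       # centroid localization
    cx = (xx * w).sum() / w.sum()
    localizations.append((cy, cx))

coords = np.round(np.array(localizations)).astype(int)
vals, counts = np.unique(coords, axis=0, return_counts=True)
print(vals, counts)   # two clusters, one per emitter, despite overlapping spots
```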
The lensless pinhole camera is perhaps the earliest and simplest form of imaging system, using only a pinhole-sized aperture in place of a lens. It captures an effectively infinite depth of field and offers greater freedom from optical distortion than its lens-based counterparts. However, the inherent limitations of a pinhole system result in lower sharpness, from blur caused by optical diffraction, and higher noise levels, due to the low light throughput of the small aperture, requiring very long exposure times to capture well-exposed images. In this paper, we explore an image restoration pipeline that uses deep learning and domain knowledge of the pinhole system to enhance pinhole image quality through a joint denoising and deblurring approach. Our approach allows more practical exposure times for hand-held photography and provides higher image quality, making it more suitable for daily photography than other lensless cameras while keeping size and cost low. This opens up the potential for pinhole cameras to be used in smaller devices, such as smartphones.
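For context, a classical single-image baseline for the deblurring half of the problem: a Wiener filter applied with an assumed Gaussian approximation of the pinhole diffraction blur. This is not the learned joint denoise-and-deblur network described in the abstract, and the kernel width, noise level, and test scene are assumptions.

```python
import numpy as np

def wiener_deblur(image: np.ndarray, psf: np.ndarray, snr: float = 100.0) -> np.ndarray:
    """Wiener deconvolution: conj(H) / (|H|^2 + 1/SNR) applied in the Fourier domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

# Assumed Gaussian approximation of the pinhole diffraction blur.
size, sigma = 64, 2.0
yy, xx = np.mgrid[0:size, 0:size] - size // 2
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
psf /= psf.sum()

# Hypothetical blurred, noisy capture of a bright square.
scene = np.zeros((size, size))
scene[24:40, 24:40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
noisy = blurred + 0.01 * np.random.default_rng(2).standard_normal(scene.shape)
restored = wiener_deblur(noisy, psf)
print(float(np.abs(restored - scene).mean()))  # mean absolute reconstruction error
```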