
Compressive Sampling for Array Cameras

Published by David Brady
Publication date: 2019
Research field: Electronic engineering
Paper language: English

While the design of high-performance lenses and image sensors has long been the focus of camera development, the size, weight, and power of image data processing components is currently the primary barrier to radical improvements in camera resolution. Here we show that Deep-Learning-Aided Compressive Sampling (DLACS) can reduce operating power on camera-head electronics by 20x. Traditional compressive sampling has to date been applied primarily in the physical sensor layer; here we show that, with the aid of deep-learning algorithms, compressive sampling offers unique power-management advantages in digital-layer compression.
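To make the digital-layer idea concrete, the sketch below simulates a generic compressive measurement of a sparse block followed by a simple iterative (ISTA) decoder. It is a minimal illustration under assumed sizes (256 pixels, 64 measurements, sparsity 8), not the DLACS pipeline itself; in DLACS a deep network would take the place of the iterative decoder.

```python
# Minimal sketch of compressive sampling of a sensor block, assuming a
# generic random-projection encoder; this is NOT the paper's DLACS pipeline,
# only an illustration of digital-layer compression by linear measurement.
import numpy as np

rng = np.random.default_rng(0)

n = 256          # pixels in a (flattened) sensor block
m = 64           # number of compressive measurements (4x compression)
k = 8            # assumed sparsity of the block in its native basis

# Sparse test signal standing in for a transform-domain image block.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix: the "digital layer" compression step.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x      # compressed data sent off the camera head

# Simple ISTA recovery (a learned decoder would replace this step in DLACS).
lam, step = 0.05, 1.0 / np.linalg.norm(Phi, 2) ** 2
x_hat = np.zeros(n)
for _ in range(500):
    r = x_hat - step * Phi.T @ (Phi @ x_hat - y)
    x_hat = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```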




Read also

Xiao-Peng Jin (2021)
Mid-wave infrared (MWIR) cameras with large pixel counts are extremely expensive compared with their visible-light counterparts; thus, super-resolution imaging (SRI) for MWIR by increasing the effective pixel count has been a research hotspot in recent years. Over the last decade, with the extensive investigation of the compressed sensing (CS) method, focal plane array (FPA) based compressive imaging in MWIR has developed rapidly for SRI. This paper presents long-distance super-resolution FPA compressive imaging in MWIR with an improved calibration method and improved imaging performance. Using CS, we measure and calculate the calibration matrix of the optical system efficiently and precisely, which improves the imaging contrast and signal-to-noise ratio (SNR) compared with previous work. We also achieve 4x4 super-resolution reconstruction of long-distance objects, which reaches the limit of the system design in our experiment.
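As a rough illustration of the forward model such an FPA compressive system assumes, the sketch below block-sums a coded high-resolution scene onto a detector with 4x fewer pixels per axis. The mask patterns and dimensions are invented for the example and do not reproduce the authors' calibration procedure.

```python
# Hedged sketch of a 4x4 super-resolution measurement model: each FPA pixel
# integrates a coded 4x4 patch of the high-resolution scene. Sizes and masks
# are illustrative assumptions, not the authors' system parameters.
import numpy as np

rng = np.random.default_rng(1)
H = 32                     # high-resolution scene is H x H
L = H // 4                 # FPA is L x L (4x fewer pixels per axis)

scene = rng.random((H, H))              # stand-in MWIR scene
masks = rng.integers(0, 2, (16, H, H))  # 16 binary coded exposures

def fpa_frame(mask, scene):
    """One coded exposure: block-sum of the masked scene onto the FPA."""
    coded = mask * scene
    return coded.reshape(L, 4, L, 4).sum(axis=(1, 3))

frames = np.stack([fpa_frame(m, scene) for m in masks])  # shape (16, L, L)
# Stacking the 16 coded frames gives 16*L*L = H*H linear equations, enough in
# principle to invert for the H x H scene once the calibration matrix is known.
print(frames.shape)
```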
We review camera architecture in the age of artificial intelligence. Modern cameras use physical components and software to capture, compress, and display image data. Over the past 5 years, deep learning solutions have become superior to traditional algorithms for each of these functions. Deep learning enables a 10-100x reduction in electrical sensor power per pixel, a 10x improvement in depth of field and dynamic range, and a 10-100x improvement in image pixel count. Deep learning enables multiframe and multiaperture solutions that fundamentally shift the goals of physical camera design. Here we review the state of the art of deep learning in camera operations and consider the impact of AI on the physical design of cameras.
Compressive lensless imagers enable novel applications in an extremely compact device, requiring only a phase or amplitude mask placed close to the sensor. They have been demonstrated for 2D and 3D microscopy, single-shot video, and single-shot hyperspectral imaging; in each of these cases, a compressive-sensing-based inverse problem is solved in order to recover a 3D data-cube from a 2D measurement. Typically, this is accomplished using convex optimization and hand-picked priors. Alternatively, deep learning-based reconstruction methods offer the promise of better priors, but require many thousands of ground truth training pairs, which can be difficult or impossible to acquire. In this work, we propose the use of untrained networks for compressive image recovery. Our approach does not require any labeled training data, but instead uses the measurement itself to update the network weights. We demonstrate our untrained approach on lensless compressive 2D imaging as well as single-shot high-speed video recovery using the camera's rolling shutter, and single-shot hyperspectral imaging. We provide simulation and experimental verification, showing that our method results in improved image quality over existing methods.
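The sketch below gives a minimal untrained-network recovery loop in the spirit described above: a small, randomly initialized network is fitted to a single simulated measurement through a known linear forward model, with no labeled data. The architecture, sizes, and forward model are illustrative assumptions, not the authors' implementation.

```python
# Untrained-network (deep-image-prior style) recovery sketch: the network
# weights are updated using only the measurement itself. All sizes and the
# linear forward model A are assumptions made for this toy example.
import torch

torch.manual_seed(0)
n, m = 32 * 32, 256                        # image pixels, measurements
A = torch.randn(m, n) / m ** 0.5           # assumed known lensless forward model
x_true = torch.zeros(n); x_true[::37] = 1.0
y = A @ x_true                             # the single measurement

# Small untrained decoder mapping a fixed random code z to an image estimate.
z = torch.randn(1, 64)
net = torch.nn.Sequential(
    torch.nn.Linear(64, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, n),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x_hat = net(z).squeeze(0)
    loss = torch.sum((A @ x_hat - y) ** 2)  # data fidelity only, no labels
    opt.zero_grad(); loss.backward(); opt.step()

rel_err = torch.norm(net(z).squeeze(0) - x_true) / torch.norm(x_true)
print("relative error:", float(rel_err))
```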
Conventional approaches to sampling signals follow the celebrated theorem of Nyquist and Shannon. Compressive sampling, introduced by Donoho, Candes, Romberg, and Tao, is a new paradigm that goes against the conventional methods of data acquisition and provides a way of recovering signals using fewer samples than the traditional methods require. Here we suggest an alternative way of reconstructing the original signals in compressive sampling using the EM algorithm. We first propose a naive approach which has certain computational difficulties and subsequently modify it to a new approach which performs better than the conventional methods of compressive sampling. The comparison of the different approaches and the performance of the new approach have been studied using simulated data.
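The EM reconstruction itself is not spelled out in the abstract, so the sketch below only shows one conventional recovery baseline it would be compared against, a greedy orthogonal matching pursuit on simulated sparse data; it is not the authors' EM method.

```python
# Conventional CS recovery baseline (orthogonal matching pursuit, OMP) on
# simulated sparse data of the kind such a comparison would use. This is a
# stand-in baseline, NOT the EM reconstruction proposed in the paper.
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 128, 48, 5
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))   # pick best-matching atom
    cols = A[:, support]
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)    # re-fit on the support
    r = y - cols @ coef                                # update the residual

x_hat = np.zeros(n); x_hat[support] = coef
print("support recovered:", sorted(support) == np.flatnonzero(x).tolist())
```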
Some pioneering works have investigated embedding cryptographic properties in compressive sampling (CS) in a way similar to the one-time-pad symmetric cipher. This paper tackles the problem of constructing a CS-based symmetric cipher under key reuse, i.e., a cipher that is resistant to common attacks even when a fixed measurement matrix is used multiple times. To this end, we suggest a bi-level protected CS (BLP-CS) model which makes use of the advantages of non-RIP measurement matrix construction. Specifically, two kinds of artificial basis-mismatch techniques are investigated to construct key-related sparsifying bases. It is demonstrated that the encoding process of BLP-CS is simply a random linear projection, the same as in the basic CS model. However, decoding the linear measurements requires knowledge of both the key-dependent sensing matrix and its sparsifying basis. The proposed model is exemplified by sampling images as a joint data acquisition and protection layer for resource-limited wireless sensors. Simulation results and numerical analyses confirm that the new model can be applied in circumstances where the measurement matrix is reused.
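As a toy illustration of the encoding step described above, the sketch below derives a measurement matrix from a shared key and applies it as a random linear projection. Key management, the non-RIP construction, and the key-related sparsifying bases of BLP-CS are not modeled; the function names and sizes are assumptions for the example.

```python
# Toy illustration: CS-based "encryption" as a key-seeded linear projection.
# The receiver with the same key rebuilds Phi and runs a CS decoder; without
# the key, the m < n measurements underdetermine the signal.
import numpy as np

def keyed_matrix(key: int, m: int, n: int) -> np.ndarray:
    """Derive a measurement matrix deterministically from a shared key."""
    return np.random.default_rng(key).standard_normal((m, n)) / np.sqrt(m)

def encode(signal: np.ndarray, key: int, m: int) -> np.ndarray:
    """Ciphertext = compressive measurements of the (sparse) signal."""
    Phi = keyed_matrix(key, m, signal.size)
    return Phi @ signal

key, n, m = 0xC0FFEE, 256, 96
plain = np.random.default_rng(7).random(n)      # stand-in image block
cipher = encode(plain, key, m)
print(cipher.shape)                              # (96,) measurements sent out
```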