
Smart Cameras

Published by David Brady
Publication date: 2020
Research field: Electronic engineering
Paper language: English





We review camera architecture in the age of artificial intelligence. Modern cameras use physical components and software to capture, compress, and display image data. Over the past 5 years, deep learning solutions have become superior to traditional algorithms for each of these functions. Deep learning enables a 10-100x reduction in electrical sensor power per pixel, a 10x improvement in depth of field and dynamic range, and a 10-100x improvement in image pixel count. Deep learning also enables multiframe and multiaperture solutions that fundamentally shift the goals of physical camera design. Here we review the state of the art of deep learning in camera operations and consider the impact of AI on the physical design of cameras.
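
As a rough illustration of the multiframe processing the review discusses, the sketch below stacks a burst of noisy exposures and merges them with a small convolutional network; the architecture, layer sizes, and PyTorch framing are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: a tiny learned multiframe fusion model of the kind
# the review refers to, replacing a fixed ISP merge step with a small CNN.
# All sizes and layer choices are invented for demonstration.
import torch
import torch.nn as nn

class MultiframeFusion(nn.Module):
    def __init__(self, n_frames: int = 4):
        super().__init__()
        # Frames are stacked along the channel axis; the network learns how
        # to weight and merge them into a single clean image.
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, H, W) burst of grayscale exposures
        return self.net(frames)

if __name__ == "__main__":
    burst = torch.randn(1, 4, 128, 128)      # synthetic 4-frame burst
    fused = MultiframeFusion()(burst)
    print(fused.shape)                        # torch.Size([1, 1, 128, 128])
```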




Read also

While design of high-performance lenses and image sensors has long been the focus of camera development, the size, weight, and power of image data processing components is currently the primary barrier to radical improvements in camera resolution. Here we show that Deep-Learning-Aided Compressive Sampling (DLACS) can reduce operating power on camera-head electronics by 20x. Traditional compressive sampling has to date been applied primarily in the physical sensor layer; we show here that, with aid from deep learning algorithms, compressive sampling offers unique power management advantages in digital-layer compression.
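
A minimal sketch of the digital-layer compressive sampling idea, assuming a fixed random measurement matrix applied to image blocks on the camera head; the matrix, block size, and compression ratio are illustrative choices, and the learned decoder DLACS would use off-camera is only indicated in a comment.

```python
# Digital-layer compressive sampling sketch: each image block is projected to
# a short measurement vector before leaving the camera head. Dimensions and
# the random measurement matrix are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def compress_block(block: np.ndarray, phi: np.ndarray) -> np.ndarray:
    """Project a flattened image block to a shorter measurement vector."""
    return phi @ block.ravel()

block_size = 16                 # 16x16 pixel block -> 256 values
m = 64                          # keep only 64 measurements (4x reduction)
phi = rng.standard_normal((m, block_size ** 2)) / np.sqrt(m)

block = rng.random((block_size, block_size))    # stand-in for sensor data
y = compress_block(block, phi)
print(block.size, "->", y.size, "measurements")
# A deep network trained offline would approximately invert phi on the
# receiving end, recovering the block from y.
```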
Mask-based lensless cameras replace the lens of a conventional camera with a custom mask. These cameras can potentially be very thin and even flexible. Recently, it has been demonstrated that such mask-based cameras can recover light intensity and depth information of a scene. Existing depth recovery algorithms either assume that the scene consists of a small number of depth planes or solve a sparse recovery problem over a large 3D volume. Both these approaches fail to recover scenes with large depth variations. In this paper, we propose a new approach for depth estimation based on an alternating gradient descent algorithm that jointly estimates a continuous depth map and the light distribution of the unknown scene from its lensless measurements. We present simulation results on image and depth reconstruction for a variety of 3D test scenes. A comparison between the proposed algorithm and other methods shows that our algorithm is more robust for natural scenes with a large range of depths. We built a prototype lensless camera and present experimental results for reconstruction of intensity and depth maps of different real objects.
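
The sketch below shows the structure of an alternating gradient descent loop of the kind described above, with intensity and depth updated in turn against a shared measurement loss; the toy forward model (two depth-blended random operators) is a stand-in, not the paper's lensless imaging model.

```python
# Structural sketch of alternating gradient descent over intensity x and a
# continuous depth map d. The forward model is a toy linear blend of two
# depth-dependent operators, standing in for the lensless mask response.
import torch

torch.manual_seed(0)
n = 64
A_near = torch.randn(n, n) / n ** 0.5     # toy response at the near depth
A_far = torch.randn(n, n) / n ** 0.5      # toy response at the far depth

def forward(x, d):
    # Blend the two operators with a per-pixel depth value in [0, 1].
    return A_near @ (x * (1 - d)) + A_far @ (x * d)

x_true, d_true = torch.rand(n), torch.rand(n)
y = forward(x_true, d_true)               # simulated lensless measurement

x = torch.zeros(n, requires_grad=True)
d = torch.full((n,), 0.5, requires_grad=True)
opt_x = torch.optim.Adam([x], lr=0.05)
opt_d = torch.optim.Adam([d], lr=0.05)

for _ in range(300):
    # Intensity update with the depth estimate held fixed.
    opt_x.zero_grad()
    loss = torch.mean((forward(x, d) - y) ** 2)
    loss.backward()
    opt_x.step()
    # Depth update with the intensity estimate held fixed.
    opt_d.zero_grad()
    loss = torch.mean((forward(x, d) - y) ** 2)
    loss.backward()
    opt_d.step()
    with torch.no_grad():
        d.clamp_(0.0, 1.0)                # keep depth in its valid range

print(f"final measurement loss: {loss.item():.6f}")
```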
Health records data security is one of the main challenges in e-health systems. Authentication is one of the essential security services needed to support the confidentiality, integrity, and availability of stored data. This research focuses on the secure storage of patient and medical records in the healthcare sector, where data security and unauthorized access are ongoing issues. A potential solution comes from biometrics, although their use may be time-consuming and can slow down data retrieval. This research aims to overcome these challenges and enhance data access control in the healthcare sector through the addition of biometrics in the form of fingerprints. The proposed model for application in the healthcare sector consists of Collection, Network communication, and Authentication (CNA) using biometrics, which replaces an existing password-based access control method. A sensor collects the data and a connection is established over a network (wireless or ZigBee); connectivity analytics and data management then process and aggregate the data. Subsequently, access is granted to authenticated users of the application. This IoT-based biometric authentication system facilitates effective recognition and ensures the confidentiality, integrity, and reliability of patient records and other sensitive data. The proposed solution provides reliable access to healthcare data and enables secure access through the process of user and device authentication. The proposed model has been developed for access control to data through the authentication of users in healthcare to reduce data manipulation or theft.
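
A minimal sketch of the collection-to-authentication flow described above: a captured fingerprint feature vector is matched against an enrolled template before a record is released. The cosine-similarity matcher, the threshold, and all identifiers are placeholders, not the paper's CNA implementation.

```python
# Placeholder access-control flow: collect a biometric sample, authenticate
# it against an enrolled template, and only then release the health record.
# The matcher below is a toy similarity test, not a fingerprint algorithm.
from dataclasses import dataclass

@dataclass
class EnrolledUser:
    user_id: str
    template: list          # enrolled fingerprint feature vector

def matches(sample, template, threshold=0.95):
    """Toy matcher: cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(sample, template))
    norm = (sum(a * a for a in sample) * sum(b * b for b in template)) ** 0.5
    return norm > 0 and dot / norm >= threshold

def request_record(sample, user, records):
    # Access is granted only after the biometric sample is authenticated.
    if matches(sample, user.template):
        return records.get(user.user_id, "no record on file")
    return "access denied"

alice = EnrolledUser("alice", [0.2, 0.9, 0.4])
records = {"alice": "blood type O+, allergy: penicillin"}
print(request_record([0.21, 0.88, 0.41], alice, records))   # granted
print(request_record([0.9, 0.1, 0.0], alice, records))      # denied
```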
The record-breaking achievements of deep neural networks (DNNs) in image classification and detection tasks have resulted in a surge of new computer vision applications in recent years. However, their computational complexity restricts their deployment to powerful stationary or complex dedicated processing hardware, limiting their use in smart edge processing applications. We propose an automated deployment framework for DNN acceleration at the edge on field-programmable gate array (FPGA)-based cameras. The framework automatically converts an arbitrary-sized and quantized trained network into an efficient streaming-processing IP block that is instantiated within a generic adapter block in the FPGA. In contrast to prior work, the accelerator is purely logic and thus supports end-to-end processing on FPGAs without on-chip microprocessors. Our mapping tool features automatic translation from a trained Caffe network, arbitrary layer-wise fixed-point precision for both weights and activations, an efficient XNOR implementation for fully binary layers, as well as a balancing mechanism for effective allocation of computational resources in the streaming dataflow. To present the performance of the system, we employ this tool to implement two CNN edge processing networks on an FPGA-based high-speed camera with various precision settings, showing computational throughputs of up to 337 GOPS in low-latency streaming mode (no batching), running entirely on the camera.
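
To make the layer-wise fixed-point idea concrete, the sketch below rounds a weight matrix to several signed fixed-point formats and reports the quantization error; the bit widths and rounding scheme are generic examples rather than the tool's exact behavior.

```python
# Generic fixed-point weight quantization: round to a signed grid with a
# chosen number of fractional bits, as used for layer-wise precision tuning.
# Formats and the test data are illustrative only.
import numpy as np

def quantize_fixed_point(x, total_bits, frac_bits):
    """Round to signed fixed-point with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.5, size=(8, 8))

for total_bits, frac_bits in [(8, 6), (4, 3), (2, 1)]:
    wq = quantize_fixed_point(weights, total_bits, frac_bits)
    err = np.max(np.abs(weights - wq))
    print(f"{total_bits}-bit weights ({frac_bits} fractional): "
          f"max abs error {err:.4f}")
```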
Cassegrain designs can be used to build thin lenses. We analyze the relationships between system thickness, the aperture sizes of the two mirrors, and FoV size. Our analysis shows that decreasing lens thickness imposes tight constraints on the aperture and FoV size. To mitigate this limitation, we propose to fill the gap between the primary and the secondary mirror with a high-index material. The Cassegrain optics cuts the track length in half, and the high-index material reduces ray angle and height; consequently, the incident ray angle can be increased, i.e., the FoV angle is extended. Defining the telephoto ratio as the ratio of lens thickness to focal length, we achieve telephoto ratios as small as 0.43 for a visible Cassegrain thin lens and 1.20 for an infrared Cassegrain thin lens. To achieve an arbitrary FoV coverage, we present a strategy of integrating multiple thin lenses on one plane, with each unit covering a different FoV region. To avoid physically tilting each unit, we propose beam steering with a metasurface. By image stitching, we obtain wide-FoV images.
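
A short worked example of the telephoto ratio defined above (lens thickness divided by focal length); only the ratios 0.43 and 1.20 come from the text, and the focal length is assumed for illustration.

```python
# Telephoto ratio = lens thickness / focal length, so for a given focal
# length the achievable thickness scales directly with the ratio.
# The 50 mm focal length below is an assumed example value.
def lens_thickness(focal_length_mm, telephoto_ratio):
    return telephoto_ratio * focal_length_mm

for label, ratio in [("visible Cassegrain thin lens", 0.43),
                     ("infrared Cassegrain thin lens", 1.20)]:
    f_mm = 50.0
    print(f"{label}: f = {f_mm} mm, ratio {ratio} "
          f"-> thickness {lens_thickness(f_mm, ratio):.1f} mm")
```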