
Design of low-cost, compact and weather-proof whole sky imagers for high-dynamic-range captures

Added by Soumyabrata Dev
Publication date: 2017
Language: English

Ground-based whole sky imagers are popular for monitoring cloud formations, which is necessary for various applications. We present two new Wide Angle High-Resolution Sky Imaging System (WAHRSIS) models, which were designed especially to withstand the hot and humid climate of Singapore. The first uses a fully sealed casing, whose interior temperature is regulated using a Peltier cooler. The second features a double roof design with ventilation grids on the sides, allowing the outside air to flow through the device. Measurements of temperature inside these two devices show their ability to operate in Singapore weather conditions. Unlike our original WAHRSIS model, neither uses a mechanical sun blocker to prevent the direct sunlight from reaching the camera; instead they rely on high-dynamic-range imaging (HDRI) techniques to reduce the glare from the sun.
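
As a rough illustration of how multi-exposure capture can stand in for a mechanical sun blocker, the sketch below fuses a bracketed exposure stack with OpenCV's Mertens exposure fusion so that both the circumsolar region and the darker horizon retain detail. The file names and the choice of fusion method are assumptions for the sketch, not the exact WAHRSIS pipeline.

```python
import cv2
import numpy as np

# Hypothetical bracketed captures of the same sky scene (short to long exposure).
paths = ["sky_short.jpg", "sky_mid.jpg", "sky_long.jpg"]
images = [cv2.imread(p) for p in paths]

# Mertens exposure fusion blends the stack directly, without needing the
# camera response curve or the exposure metadata.
merge = cv2.createMergeMertens()
fused = merge.process(images)          # float image, roughly in [0, 1]

# Clip and save an 8-bit composite in which the sun glare is attenuated.
cv2.imwrite("sky_fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```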



Related research

Ground-based Whole Sky Imagers (WSIs) are increasingly being used for various remote sensing applications. While the fundamental requirements of a WSI are that it be climate-proof and able to capture high-resolution images, cost also plays a significant role in wider-scale adoption. This paper proposes an extremely low-cost alternative to existing WSIs. In the designed model, high-resolution images are captured with auto-adjusting shutter speeds based on the surrounding light intensity. Furthermore, a manual data backup option using a portable memory drive is implemented for remote locations with no internet access.
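
The auto-adjusting shutter speed can be pictured as a simple brightness feedback loop: exposure scales roughly linearly with shutter time, so the next shutter value is the current one scaled by the ratio of a target brightness to the measured brightness. The function name, target value and limits below are hypothetical, not taken from the paper.

```python
import numpy as np

def next_shutter_us(frame, current_us, target_mean=118.0,
                    min_us=100, max_us=6_000_000):
    """Scale the shutter time so the next frame's mean brightness (8-bit
    scale) approaches target_mean. All constants are illustrative."""
    mean = float(frame.mean())
    if mean <= 0:                      # completely dark frame: open up fully
        return max_us
    proposed = current_us * target_mean / mean   # exposure ~ linear in time
    return int(np.clip(proposed, min_us, max_us))

# Hypothetical usage with a synthetic over-exposed frame (mean ~200):
frame = np.full((480, 640), 200, dtype=np.uint8)
print(next_shutter_us(frame, current_us=20_000))   # ~11800 us, i.e. shorter
```
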
Cloud imaging using ground-based whole sky imagers is essential for a fine-grained understanding of the effects of cloud formations, which can be useful in many applications. Some such imagers are available commercially, but their cost is relatively high, and their flexibility is limited. Therefore, we built a new daytime Whole Sky Imager (WSI) called Wide Angle High-Resolution Sky Imaging System. The strengths of our new design are its simplicity, low manufacturing cost and high resolution. Our imager captures the entire hemisphere in a single high-resolution picture via a digital camera using a fish-eye lens. The camera was modified to capture light across the visible as well as the near-infrared spectral ranges. This paper describes the design of the device as well as the geometric and radiometric calibration of the imaging system.
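
For the geometric side of such a calibration, a fish-eye lens is often approximated by an equidistant projection in which the radial distance from the optical centre grows linearly with the zenith angle. The sketch below shows that mapping; the optical centre and horizon radius are placeholder values, whereas the paper estimates the actual projection and misalignment parameters through calibration.

```python
import numpy as np

def pixel_to_sky(u, v, cx, cy, r_horizon, max_zenith_deg=90.0):
    """Map an image pixel (u, v) to (zenith, azimuth) in degrees under an
    ideal equidistant fish-eye model. (cx, cy) is the optical centre and
    r_horizon the radius in pixels at which zenith = 90 deg is imaged."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    zenith = max_zenith_deg * r / r_horizon
    azimuth = (np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0
    return zenith, azimuth

# Example: a pixel halfway between the centre and the horizon circle.
print(pixel_to_sky(u=1500, v=1000, cx=1000, cy=1000, r_horizon=1000))
# -> (45.0, 0.0)
```
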
Ground-based whole sky imagers (WSIs) can provide localized images of the sky of high temporal and spatial resolution, which permits fine-grained cloud observation. In this paper, we show how images taken by WSIs can be used to estimate solar radiation. Sky cameras are useful here because they provide additional information about cloud movement and coverage, which are otherwise not available from weather station data. Our setup includes ground-based weather stations at the same location as the imagers. We use their measurements to validate our methods.
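
As a toy illustration of how cloud coverage derived from a sky image could modulate a radiation estimate, the sketch below attenuates a clear-sky irradiance value linearly with the cloud cover fraction. The linear model and the attenuation factor are assumptions for the sketch only, not the method validated against the weather stations in the paper.

```python
import numpy as np

def estimated_ghi(cloud_mask, clear_sky_ghi, attenuation=0.75):
    """Attenuate a clear-sky global horizontal irradiance (W/m^2) by the
    cloud cover fraction obtained from a binary sky/cloud mask. Both the
    linear form and the attenuation factor are illustrative assumptions."""
    coverage = float(np.count_nonzero(cloud_mask)) / cloud_mask.size
    return clear_sky_ghi * (1.0 - attenuation * coverage)

# Example: 40% cloud cover on a day with 900 W/m^2 clear-sky irradiance.
mask = np.zeros((100, 100), dtype=bool)
mask[:40, :] = True
print(estimated_ghi(mask, clear_sky_ghi=900.0))   # 630.0 W/m^2
```
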
Sky/cloud images obtained from ground-based sky-cameras are usually captured using a fish-eye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is over-exposed, and the regions near the horizon are under-exposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg -- an effective method for cloud segmentation using High-Dynamic-Range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
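
The sketch below illustrates the two stages in miniature: building an HDR radiance map from a bracketed stack with OpenCV's Debevec calibration and merge, then applying a simple red/blue-ratio threshold as a stand-in for segmentation. The file names, exposure times and threshold are assumptions; HDRCloudSeg itself applies a more elaborate segmentation to the radiance maps.

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures and their shutter times.
paths = ["sky_1-250s.jpg", "sky_1-60s.jpg", "sky_1-15s.jpg"]
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)
images = [cv2.imread(p) for p in paths]

# Recover the camera response curve, then merge into an HDR radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
radiance = merge.process(images, times, response)   # float32 HDR map (BGR)

# Baseline feature: clouds scatter red and blue nearly equally, while the
# clear sky is strongly blue, so a high red/blue ratio marks cloud pixels.
ratio = radiance[:, :, 2] / (radiance[:, :, 0] + 1e-6)
cloud_mask = ratio > 0.8           # threshold is an assumption for the sketch
cv2.imwrite("cloud_mask.png", cloud_mask.astype(np.uint8) * 255)
```
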
Qian Ye, Jun Xiao, Kin-man Lam (2021)
This paper considers the problem of generating an HDR image of a scene from its LDR images. Recent studies employ deep learning and solve the problem in an end-to-end fashion, leading to significant performance improvements. However, it is still hard to generate a good quality image from LDR images of a dynamic scene captured by a hand-held camera, e.g., occlusion due to the large motion of foreground objects, causing ghosting artifacts. The key to success relies on how well we can fuse the input images in their feature space, where we wish to remove the factors leading to low-quality image generation while performing the fundamental computations for HDR image generation, e.g., selecting the best-exposed image/region. We propose a novel method that can better fuse the features based on two ideas. One is multi-step feature fusion; our network gradually fuses the features in a stack of blocks having the same structure. The other is the design of the component block that effectively performs two operations essential to the problem, i.e., comparing and selecting appropriate images/regions. Experimental results show that the proposed method outperforms the previous state-of-the-art methods on the standard benchmark tests.
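
The compare-and-select idea can be pictured as a gated blend of the reference and non-reference features, applied repeatedly in a short stack of identical blocks. The PyTorch sketch below only illustrates that idea; it is not the authors' architecture, and the layer sizes and sigmoid gating are assumptions.

```python
import torch
import torch.nn as nn

class CompareSelectBlock(nn.Module):
    """Toy compare-and-select block: compute per-pixel soft weights from the
    concatenated reference and non-reference features, then blend them."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),              # selection weight per pixel and channel
        )

    def forward(self, ref_feat, other_feat):
        w = self.gate(torch.cat([ref_feat, other_feat], dim=1))
        return w * ref_feat + (1.0 - w) * other_feat   # soft selection

# Gradual, multi-step fusion: the same block structure applied in a stack.
blocks = nn.ModuleList(CompareSelectBlock(64) for _ in range(3))
ref, other = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
fused = ref
for blk in blocks:
    fused = blk(fused, other)
print(fused.shape)   # torch.Size([1, 64, 32, 32])
```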