
Sky pixel detection in outdoor imagery using an adaptive algorithm and machine learning

Posted by Kerry A Nice
Publication date: 2019
Research language: English





Computer vision techniques enable automated detection of sky pixels in outdoor imagery. In urban climate, sky detection is an important first step in gathering information about urban morphology and sky view factors. However, obtaining accurate results remains challenging and becomes even more complex using imagery captured under a variety of lighting and weather conditions. To address this problem, we present a new sky pixel detection system demonstrated to produce accurate results using a wide range of outdoor imagery types. Images are processed using a selection of mean-shift segmentation, K-means clustering, and Sobel filters to mark sky pixels in the scene. The algorithm for a specific image is chosen by a convolutional neural network, trained with 25,000 images from the Skyfinder data set, reaching 82% accuracy for the top three classes. This selection step allows the sky marking to follow an adaptive process and to use different techniques and parameters to best suit a particular image. An evaluation of fourteen different techniques and parameter sets shows that no single technique can perform with high accuracy across varied Skyfinder and Google Street View data sets. However, by using our adaptive process, large increases in accuracy are observed. The resulting system is shown to perform better than other published techniques.
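The adaptive step described above, in which a classifier picks a marking technique per image, can be illustrated with a minimal sketch. This is not the authors' released code: the library choices (OpenCV, scikit-learn) and the helper names `kmeans_sky_mask`, `sobel_sky_mask`, and `detect_sky` are illustrative assumptions.

```python
# Minimal sketch of adaptive sky-pixel marking; an illustration, not the paper's implementation.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def kmeans_sky_mask(img_bgr, k=3):
    """Cluster pixel colours and treat the cluster dominating the top rows as sky."""
    smoothed = cv2.pyrMeanShiftFiltering(img_bgr, 15, 30)      # mean-shift pre-segmentation
    pixels = smoothed.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(pixels)
    labels = labels.reshape(img_bgr.shape[:2])
    top_band = labels[: img_bgr.shape[0] // 10]                # assume sky touches the image top
    sky_label = np.bincount(top_band.ravel()).argmax()
    return (labels == sky_label).astype(np.uint8)

def sobel_sky_mask(img_bgr, grad_thresh=40):
    """Per column, mark everything above the first strong vertical gradient as sky."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    grad = np.abs(cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3))
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for col in range(gray.shape[1]):
        edges = np.where(grad[:, col] > grad_thresh)[0]
        horizon = edges[0] if edges.size else gray.shape[0]
        mask[:horizon, col] = 1
    return mask

def detect_sky(img_bgr, technique_id):
    """Dispatch to the technique chosen (in the paper, by a CNN classifier)."""
    if technique_id == 0:
        return kmeans_sky_mask(img_bgr)
    return sobel_sky_mask(img_bgr)

# Example usage (hypothetical file name):
# mask = detect_sky(cv2.imread("street_view.jpg"), technique_id=0)
```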




Read also

The increasing level of marine plastic pollution poses severe threats to the marine ecosystem and biodiversity. The present study attempted to explore the full functionality of open Sentinel satellite data and ML models for detecting and classifying floating plastic debris in Mytilene (Greece), Limassol (Cyprus), Calabria (Italy), and Beirut (Lebanon). Two ML models, i.e. Support Vector Machine (SVM) and Random Forest (RF), were utilized to carry out the classification analysis. In-situ plastic location data was collected from the control experiment conducted in Mytilene, Greece and Limassol, Cyprus, and was used for training the models. Both remote sensing bands and spectral indices were used for developing the ML models. A spectral signature profile for plastic was created for discriminating the floating plastic from other marine debris. A newly developed index, the kernel Normalized Difference Vegetation Index (kNDVI), was incorporated into the modelling to examine its contribution to model performance. Both SVM and RF performed well across the five model and test case combinations, with the highest performance measured for RF. The inclusion of kNDVI was found effective and increased model performance, reflected by the high balanced accuracy measured for model 2 (from ~80% to ~98% for SVM and from ~87% to ~97% for RF). Using the best-performing model, an automated floating plastic detection system was developed and tested in Calabria and Beirut. For both sites, the trained model detected the floating plastic with ~99% accuracy. Among the six predictors, the FDI was found to be the most important variable for detecting marine floating plastic. These findings collectively suggest that high-resolution remote sensing imagery and automated ML models can be an effective alternative for the cost-effective detection of marine floating plastic.
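A minimal sketch of the band-plus-index pipeline described in this abstract is given below; it is not the authors' implementation. The kNDVI form tanh(NDVI²) relies on the common heuristic sigma = (NIR + Red)/2, and the feature layout, array shapes, and placeholder data are assumptions.

```python
# Illustrative pixel-level classification with Sentinel-2-style bands plus kNDVI.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def kndvi(nir, red):
    """Kernel NDVI: tanh(NDVI^2) under the sigma = (NIR + Red)/2 heuristic."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    return np.tanh(ndvi ** 2)

def build_features(bands):
    """bands: (n_pixels, 4) array of blue, green, red, NIR reflectances (assumed layout)."""
    blue, green, red, nir = (bands[:, i] for i in range(4))
    return np.column_stack([blue, green, red, nir, kndvi(nir, red)])

rng = np.random.default_rng(0)
X_bands = rng.random((200, 4))           # placeholder reflectances
y = rng.integers(0, 2, 200)              # placeholder labels: 1 = floating plastic, 0 = other
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(build_features(X_bands), y)
print(model.feature_importances_)        # inspect which predictor dominates
```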
Ke Xu, Kaiyu Guan, Jian Peng (2019)
Detecting and masking cloud and cloud shadow from satellite remote sensing images is a pervasive problem in the remote sensing community. Accurate and efficient detection of cloud and cloud shadow is an essential step to harness the value of remotely sensed data for almost all downstream analysis. DeepMask, a new algorithm for cloud and cloud shadow detection in optical satellite remote sensing imagery, is proposed in this study. DeepMask utilizes ResNet, a deep convolutional neural network, for pixel-level cloud mask generation. The algorithm is trained and evaluated on the Landsat 8 Cloud Cover Assessment Validation Dataset distributed across 8 different land types. Compared with CFMask, the most widely used cloud detection algorithm, land-type-specific DeepMask models achieve higher accuracy across all land types. The average accuracy is 93.56%, compared with 85.36% from CFMask. DeepMask also achieves 91.02% accuracy on the all-land-type dataset. Compared with other CNN-based cloud mask algorithms, DeepMask benefits from the parsimonious architecture and the residual connections of ResNet. It is compatible with input of any size and shape. DeepMask still maintains high performance when using only red, green, blue, and NIR bands, indicating its potential to be applied to other satellite platforms that only have limited optical bands.
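A toy illustration of the core idea, a fully convolutional network with residual connections that maps a few optical bands to a per-pixel cloud/shadow mask, is sketched below. It is not DeepMask itself; the layer widths, class set, and four-band RGB+NIR input are assumptions.

```python
# Toy fully convolutional residual network for per-pixel cloud masking (illustrative only).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)           # residual (skip) connection

class TinyCloudMasker(nn.Module):
    """Maps a (B, 4, H, W) red/green/blue/NIR stack to per-pixel logits
    for 3 assumed classes: clear, cloud, cloud shadow."""
    def __init__(self, in_bands=4, n_classes=3):
        super().__init__()
        self.stem = nn.Conv2d(in_bands, 32, 3, padding=1)
        self.body = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        return self.head(self.body(torch.relu(self.stem(x))))

x = torch.randn(1, 4, 128, 128)             # any input size works: the net is fully convolutional
logits = TinyCloudMasker()(x)               # (1, 3, 128, 128) per-pixel class scores
mask = logits.argmax(dim=1)                 # pixel-level cloud / shadow / clear mask
```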
While annotated images for change detection using satellite imagery are scarce and costly to obtain, there is a wealth of unlabeled images being generated every day. In order to leverage these data to learn an image representation better suited for change detection, we explore methods that exploit the temporal consistency of Sentinel-2 time series to obtain a usable self-supervised learning signal. For this, we build and make publicly available (https://zenodo.org/record/4280482) the Sentinel-2 Multitemporal Cities Pairs (S2MTCP) dataset, containing multitemporal image pairs from 1520 urban areas worldwide. We test multiple self-supervised learning methods for pre-training change detection models and apply them to a public change detection dataset made of Sentinel-2 image pairs (OSCD).
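One hedged way to turn temporal consistency into a self-supervised signal is a triplet objective over co-registered patches from the same city at two dates; the sketch below is illustrative, and the pretext tasks actually evaluated in the paper may differ.

```python
# Illustrative temporal-consistency pretext task: same location at two dates is a positive pair.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                     # toy patch encoder standing in for a real backbone
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)

def temporal_triplet_loss(patch_t0, patch_t1, patch_other, margin=1.0):
    """Pull together the same location at two dates, push away a different location."""
    a, p, n = encoder(patch_t0), encoder(patch_t1), encoder(patch_other)
    return F.triplet_margin_loss(a, p, n, margin=margin)

loss = temporal_triplet_loss(torch.randn(8, 3, 64, 64),
                             torch.randn(8, 3, 64, 64),
                             torch.randn(8, 3, 64, 64))
loss.backward()
```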
To accelerate deep CNN models, this paper proposes a novel spatially adaptive framework that can dynamically generate pixel-wise sparsity according to the input image. The sparsity scheme is pixel-wise refined and regionally adaptive under a unified importance map, which makes it friendly to hardware implementation. A sparsity controlling method is further presented to enable online adjustment for applications with different precision/latency requirements. The sparse model is applicable to a wide range of vision tasks. Experimental results show that this method efficiently improves computing efficiency for both image classification using ResNet-18 and super resolution using SRResNet. On the image classification task, the method saves 30%-70% of MACs with a slight drop in top-1 and top-5 accuracy. On the super resolution task, it reduces more than 90% of MACs while causing only around 0.1 dB and 0.01 decreases in PSNR and SSIM, respectively. Hardware validation is also included.
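The importance-map gating idea can be sketched as follows; this is an illustrative toy, not the proposed method's mask generator, training scheme, or hardware-friendly regional grouping.

```python
# Illustrative spatially gated convolution: a cheap importance map decides which pixels get compute.
import torch
import torch.nn as nn

class SpatiallyGatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, keep_ratio=0.5):
        super().__init__()
        self.importance = nn.Conv2d(in_ch, 1, 1)       # cheap per-pixel importance predictor
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.keep_ratio = keep_ratio                    # tunable precision/latency knob

    def forward(self, x):
        scores = self.importance(x)                     # (B, 1, H, W) importance map
        n = scores[0].numel()
        k = int(self.keep_ratio * n)
        thresh = scores.flatten(1).kthvalue(n - k + 1, dim=1).values
        mask = (scores >= thresh.view(-1, 1, 1, 1)).float()
        # In software we just zero the skipped pixels; dedicated hardware would
        # skip their multiply-accumulates entirely.
        return self.conv(x) * mask

y = SpatiallyGatedConv(16, 32)(torch.randn(2, 16, 32, 32))   # -> (2, 32, 32, 32)
```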
The world is facing a huge health crisis due to the rapid transmission of coronavirus (COVID-19). Several guidelines were issued by the World Health Organization (WHO) for protection against the spread of coronavirus. According to the WHO, the most effective preventive measure against COVID-19 is wearing a mask in public places and crowded areas. It is very difficult to monitor people manually in these areas. In this paper, a transfer learning model is proposed to automate the process of identifying people who are not wearing a mask. The proposed model is built by fine-tuning the pre-trained state-of-the-art deep learning model InceptionV3. The proposed model is trained and tested on the Simulated Masked Face Dataset (SMFD). An image augmentation technique is adopted to address the limited availability of data for better training and testing of the model. The model outperformed other recently proposed approaches by achieving an accuracy of 99.9% during training and 100% during testing.
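A minimal transfer-learning sketch along these lines is shown below; the classification head, input size, augmentation choices, and optimizer settings are assumptions rather than the paper's exact configuration.

```python
# Illustrative fine-tuning of a pre-trained InceptionV3 for a mask / no-mask binary output.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                                 # keep the pre-trained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # 1 = not wearing a mask
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Image augmentation to stretch the limited SMFD training data.
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=15, zoom_range=0.1, horizontal_flip=True)
# train_gen = augment.flow_from_directory("SMFD/train", target_size=(299, 299),
#                                         class_mode="binary")  # hypothetical directory layout
# model.fit(train_gen, epochs=10)
```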