
OpenStreetMap: Challenges and Opportunities in Machine Learning and Remote Sensing

Posted by Devis Tuia
Publication date: 2020
Research language: English





OpenStreetMap (OSM) is a community-based, freely available, editable map service that was created as an alternative to authoritative ones. Given that it is edited mainly by volunteers with different mapping skills, the completeness and quality of its annotations are heterogeneous across different geographical locations. Despite that, OSM has been widely used in several applications in Geosciences, Earth Observation and environmental sciences. In this work, we present a review of recent methods based on machine learning to improve and use OSM data. Such methods aim either 1) at improving the coverage and quality of OSM layers, typically using GIS and remote sensing technologies, or 2) at using the existing OSM layers to train models based on image data to serve applications like navigation or land use classification. We believe that OSM (as well as other sources of open land maps) can change the way we interpret remote sensing data and that the synergy with machine learning can scale participatory map making and its quality to the level needed to serve global and up-to-date land mapping.
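As a minimal sketch of the second direction (using an existing OSM layer as training labels), the snippet below rasterizes OSM building footprints into a pixel mask that could supervise a segmentation model. It assumes a pre-exported GeoJSON file ("buildings.geojson", a hypothetical name) and uses geopandas and rasterio; the raster size is an illustrative choice, not something prescribed by the paper.

```python
# Hedged sketch: turn an exported OSM building layer into a binary label mask.
import geopandas as gpd
import numpy as np
from rasterio import features
from rasterio.transform import from_bounds

# Hypothetical file: an OSM building layer exported beforehand (e.g. via Overpass).
gdf = gpd.read_file("buildings.geojson")
west, south, east, north = gdf.total_bounds   # bounding box of the layer
height, width = 512, 512                      # illustrative raster size

# Align the polygons with a regular image grid and burn them into a mask.
transform = from_bounds(west, south, east, north, width, height)
mask = features.rasterize(
    ((geom, 1) for geom in gdf.geometry),
    out_shape=(height, width),
    transform=transform,
    fill=0,
    dtype=np.uint8,
)
# `mask` can now serve as (possibly noisy) ground truth for a segmentation
# network, keeping in mind that OSM completeness varies by location.
```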




Read also

Over the last few decades, deforestation and climate change have caused an increasing number of forest fires. In Southeast Asia, Indonesia has been the country most affected by tropical peatland forest fires. These fires have a significant impact on the climate, resulting in extensive health, social and economic issues. Existing forest fire prediction systems, such as the Canadian Forest Fire Danger Rating System, are based on handcrafted features and require installation and maintenance of expensive instruments on the ground, which can be a challenge for developing countries such as Indonesia. We propose a novel, cost-effective, machine-learning based approach that uses remote sensing data to predict forest fires in Indonesia. Our prediction model achieves more than 0.81 area under the receiver operating characteristic (ROC) curve, performing significantly better than the baseline approach, which never exceeds 0.70 area under the ROC curve on the same tasks. Our model's performance remained above 0.81 area under the ROC curve even when evaluated with reduced data. The results support our claim that machine-learning based approaches can lead to reliable and cost-effective forest fire prediction systems.
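The abstract does not include code; as a hedged illustration of the general recipe it describes (a classifier over per-location remote sensing features, scored by area under the ROC curve), the sketch below uses scikit-learn on synthetic placeholder features. The feature set and data are stand-ins, not the authors' dataset.

```python
# Hedged sketch: fit a classifier on tabular remote-sensing-style features
# and evaluate with ROC AUC, as the abstract's evaluation protocol suggests.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))   # placeholders for e.g. NDVI, rainfall, temperature
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1).astype(int)  # synthetic fire label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```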
Dane Morgan, Ryan Jacobs (2020)
Advances in machine learning have impacted myriad areas of materials science, ranging from the discovery of novel materials to the improvement of molecular simulations, with likely many more important developments to come. Given the rapid changes in this field, it is challenging to understand both the breadth of opportunities and the best practices for their use. In this review, we address aspects of both problems by providing an overview of the areas where machine learning has recently had significant impact in materials science, and then provide a more detailed discussion on determining the accuracy and domain of applicability of some common types of machine learning models. Finally, we discuss some opportunities and challenges for the materials community to fully utilize the capabilities of machine learning.
Classification and identification of the materials lying over or beneath the Earth's surface have long been a fundamental but challenging research topic in geoscience and remote sensing (RS) and have attracted growing attention owing to the recent advancements of deep learning techniques. Although deep networks have been successfully applied in single-modality-dominated classification tasks, their performance inevitably hits a bottleneck in complex scenes that need to be finely classified, due to the limitation of information diversity. In this work, we provide a baseline solution to the aforementioned difficulty by developing a general multimodal deep learning (MDL) framework. In particular, we also investigate a special case of multi-modality learning (MML) -- cross-modality learning (CML) -- that exists widely in RS image classification applications. By focusing on what, where, and how to fuse, we show different fusion strategies as well as how to train deep networks and build the network architecture. Specifically, five fusion architectures are introduced and developed, and further unified in our MDL framework. More significantly, our framework is not only limited to pixel-wise classification tasks but also applicable to spatial information modeling with convolutional neural networks (CNNs). To validate the effectiveness and superiority of the MDL framework, extensive experiments related to the settings of MML and CML are conducted on two different multimodal RS datasets. Furthermore, the codes and datasets will be available at https://github.com/danfenghong/IEEE_TGRS_MDL-RS, contributing to the RS community.
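As a hedged sketch of one fusion strategy in the spirit of such a framework (late feature-level fusion of two modalities, e.g. hyperspectral bands plus a LiDAR-derived channel), the PyTorch snippet below encodes each modality with its own small CNN branch and concatenates the features before classification. Channel counts, layer sizes, and class count are illustrative, not the paper's architecture.

```python
# Hedged sketch of late feature-level fusion of two remote sensing modalities.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small per-modality CNN encoder producing a 64-d feature vector."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.net(x).flatten(1)

class LateFusion(nn.Module):
    def __init__(self, ch_a, ch_b, n_classes):
        super().__init__()
        self.a, self.b = Branch(ch_a), Branch(ch_b)
        self.head = nn.Linear(128, n_classes)   # 64 + 64 fused features
    def forward(self, xa, xb):
        return self.head(torch.cat([self.a(xa), self.b(xb)], dim=1))

model = LateFusion(ch_a=144, ch_b=1, n_classes=15)   # e.g. HSI bands + LiDAR DSM
logits = model(torch.randn(2, 144, 7, 7), torch.randn(2, 1, 7, 7))
print(logits.shape)   # torch.Size([2, 15])
```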
In this work we consider the application of convolutional neural networks (CNNs) for pixel-wise labeling (a.k.a. semantic segmentation) of remote sensing imagery (e.g., aerial color or hyperspectral imagery). Remote sensing imagery is usually stored in the form of very large images, referred to as tiles, which are too large to be segmented directly using most CNNs and their associated hardware. As a result, during label inference, smaller sub-images, called patches, are processed individually and then stitched (concatenated) back together to create a tile-sized label map. This approach suffers from computational inefficiency and can result in discontinuities at output boundaries. We propose a simple alternative approach in which the input size of the CNN is dramatically increased only during label inference. This does not avoid stitching altogether, but substantially mitigates its limitations. We evaluate the performance of the proposed approach against a conventional stitching approach using two popular segmentation CNN models and two large-scale remote sensing imagery datasets. The results suggest that the proposed approach substantially reduces label inference time, while also yielding modest overall label accuracy increases. This approach contributed to our winning entry (overall performance) in the INRIA building labeling competition.
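To illustrate why the input can be enlarged only at inference time, the sketch below applies the same weights of a fully convolutional stand-in model (a hypothetical two-layer network, not the paper's segmentation CNN) to a much larger crop than the training patch size, so far fewer seams need to be stitched. All sizes are illustrative.

```python
# Hedged sketch: a fully convolutional model accepts larger inputs at inference.
import torch
import torch.nn as nn

fcn = nn.Sequential(                       # stand-in for a segmentation CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                   # 2-class logits at full resolution
)

train_patch = torch.randn(8, 3, 256, 256)  # training-time patch size
big_patch = torch.randn(1, 3, 2048, 2048)  # much larger inference-time window

with torch.no_grad():
    logits = fcn(big_patch)                # same weights, larger input
print(logits.shape)                        # torch.Size([1, 2, 2048, 2048])
```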
Existing remote sensing change detection methods are heavily affected by seasonal variation. Since vegetation colors differ between winter and summer, such variations tend to be falsely detected as changes. In this letter, we propose an image translation method to solve the problem. A style-based recalibration module is introduced to capture seasonal features effectively. Then, a new style discriminator is designed to improve the translation performance. The discriminator not only produces a real/fake decision for each sample, but also returns a style vector based on channel-wise correlations. Extensive experiments are conducted on a season-varying dataset. The experimental results show that the proposed method can effectively perform image translation, thereby consistently improving season-varying image change detection performance. Our codes and data are available at https://github.com/summitgao/RSIT_SRM_ISD.
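As a simplified, hedged sketch of a style-based recalibration block in the spirit the abstract describes (not the authors' exact module), the snippet below summarizes each channel's "style" with its spatial mean and standard deviation and uses them to gate the channels.

```python
# Hedged sketch: channel recalibration driven by per-channel style statistics.
import torch
import torch.nn as nn

class StyleRecalibration(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(2, 1)           # combine per-channel mean and std
        self.bn = nn.BatchNorm1d(channels)
    def forward(self, x):                   # x: (B, C, H, W)
        flat = x.flatten(2)                 # (B, C, H*W)
        style = torch.stack([flat.mean(dim=2), flat.std(dim=2)], dim=-1)  # (B, C, 2)
        gate = torch.sigmoid(self.bn(self.fc(style).squeeze(-1)))         # (B, C)
        return x * gate.unsqueeze(-1).unsqueeze(-1)

x = torch.randn(4, 64, 32, 32)
print(StyleRecalibration(64)(x).shape)      # torch.Size([4, 64, 32, 32])
```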