
Deep-learning coupled with novel classification method to classify the urban environment of the developing world

Added by: Anis Sarker
Publication date: 2020
Language: English





Rapid globalization and the growing interdependence of humanity engender a tremendous inflow of human migration towards urban spaces. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural networks, and capable hardware, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. In this paper we propose a novel classification method that is readily usable for machine analysis, and we show the applicability of the methodology in a developing-world setting. The state of the art is mostly dominated by the classification of building structures, building types, etc., and largely represents the developed world; such schemes are insufficient for developing countries such as Bangladesh, where the surroundings are crucial to the classification. Moreover, traditional methods propose small-scale classifications, which give limited information, scale poorly, and are slow to compute. We categorize the urban area in terms of informal and formal spaces, taking the surroundings into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert. The classification is based broadly on two dimensions: urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four classes: 1) highly informal; 2) moderately informal; 3) moderately formal; and 4) highly formal areas. In total, 16 sub-classes were identified. For semantic segmentation, Google's DeepLabV3+ model was used, which enlarges the field of view of the filters to incorporate larger context. Imagery encompassing 70% of the urban space was used for training, and the remaining 30% was used for testing and validation. The model segments with 75% accuracy and 60% mean IoU.
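As an illustration of the pipeline described above, the following is a minimal sketch (not the authors' code) of fine-tuning a DeepLab-family model and computing the reported metric. It assumes PyTorch/torchvision, whose built-in model is DeepLabV3 rather than the DeepLabV3+ variant used in the paper; the four-class head mirrors the formal/informal taxonomy.

```python
# A minimal sketch, assuming torchvision. torchvision ships DeepLabV3
# (without the "+" decoder used in the paper), so this only approximates the setup.
import torch
import torchvision

NUM_CLASSES = 4  # highly informal, moderately informal, moderately formal, highly formal

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
# Swap the final 1x1 convolution so the head predicts the four urban classes.
model.classifier[4] = torch.nn.Conv2d(256, NUM_CLASSES, kernel_size=1)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, masks, optimizer):
    """One optimisation step on a batch of satellite tiles and class-id masks."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]    # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)  # masks: (B, H, W) integer class ids
    loss.backward()
    optimizer.step()
    return loss.item()

def mean_iou(pred, target, num_classes=NUM_CLASSES):
    """Mean intersection-over-union, the segmentation metric the paper reports."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```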



Related research


There is a prevailing trend to study urban morphology quantitatively, thanks to the growing accessibility of various forms of spatial big data, increasing computing power, and use cases benefiting from such information. The methods developed up to now measure urban morphology with numerical indices describing density, proportion, and mixture, but they do not directly represent morphological features from the human's visual and intuitive perspective. We take the first step to bridge the gap by proposing a deep learning-based technique to automatically classify road networks into four classes on a visual basis. The method is implemented by generating an image of the street network (the Colored Road Hierarchy Diagram, which we introduce in this paper) and classifying it using a deep convolutional neural network (ResNet-34). The model achieves an overall classification accuracy of 0.875. Nine cities around the world are selected as the study areas, with their road networks acquired from OpenStreetMap. Latent subgroups among the cities are uncovered through clustering on the percentage of each road-network category. In the subsequent part of the paper, we focus on the usability of such a classification: we apply our method in a case study of urban vitality prediction. An advanced tree-based regression model (LightGBM) is, for the first time, employed to establish the relationship between morphological indices and vitality indicators. The effect of the road-network classification is found to be small but positively associated with urban vitality. This work expands the toolkit of quantitative urban morphology studies with new techniques, supporting further studies in the future.
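A hedged sketch of the classification step: fine-tuning an ImageNet-pretrained ResNet-34 to sort the rendered road-network diagrams into the four visual classes. The class count comes from the abstract; the weights, optimizer, and function names below are illustrative assumptions, not the authors' code.

```python
# Sketch only: ResNet-34 fine-tuned on Colored Road Hierarchy Diagram images.
import torch
import torchvision

model = torchvision.models.resnet34(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 4)  # four road-network classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def fine_tune_step(diagrams, labels):
    """diagrams: (B, 3, H, W) rendered street-network images; labels: (B,)."""
    optimizer.zero_grad()
    loss = criterion(model(diagrams), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```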
Object proposals have become an integral preprocessing step of many vision pipelines, including object detection, weakly supervised detection, object discovery, tracking, etc. Compared to learning-free methods, learning-based proposals have become popular recently due to the growing interest in object detection. The common paradigm is to learn object proposals from data labeled with a set of object regions and their corresponding categories. However, this approach often struggles with novel objects in the open world that are absent from the training set. In this paper, we identify the problem: the binary classifiers in existing proposal methods tend to overfit to the training categories. Therefore, we propose a classification-free Object Localization Network (OLN) which estimates the objectness of each region purely by how well the location and shape of a region overlap with any ground-truth object (e.g., centerness and IoU). This simple strategy learns generalizable objectness and outperforms existing proposals on cross-category generalization on COCO, as well as in cross-dataset evaluation on RoboNet, Object365, and EpicKitchens. Finally, we demonstrate the merit of OLN for long-tail object detection on the large-vocabulary dataset LVIS, where we notice clear improvements in rare and common categories.
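The core idea, scoring localization quality instead of training a binary classifier, can be sketched with FCOS-style centerness and box IoU targets. These helper functions are illustrative, not taken from the OLN codebase.

```python
import math

def centerness(x, y, box):
    """FCOS-style centerness of point (x, y) w.r.t. box (x1, y1, x2, y2):
    1.0 at the box centre, falling towards 0 at the edges."""
    x1, y1, x2, y2 = box
    l, t, r, b = x - x1, y - y1, x2 - x, y2 - y
    if min(l, t, r, b) <= 0:
        return 0.0  # the point lies outside the box
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

def box_iou(a, b):
    """Intersection-over-union of two boxes; an objectness target in OLN."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```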
One issue with computer-based histopathology image analysis is that the size of the raw image is usually very large. Taking the raw image as input to a deep learning model would be computationally expensive, while resizing the raw image to low resolution would incur information loss. In this paper, we present a novel deep hybrid attention approach to breast cancer classification. It first adaptively selects a sequence of coarse regions from the raw image via a hard visual attention algorithm, and then for each such region it investigates the abnormal parts via a soft attention mechanism. A recurrent network is then built to make decisions on classifying the image region and to predict the location of the image region to be investigated at the next time step. As the region selection process is non-differentiable, we optimize the whole network through a reinforcement learning approach to learn an optimal policy for classifying the regions. With this novel Look, Investigate and Classify approach, we only need to process a fraction of the pixels in the raw image, resulting in significant savings in computational resources without sacrificing performance. Our approach is evaluated on a public breast cancer histopathology database, where it demonstrates performance superior to state-of-the-art deep learning approaches, achieving around 96% classification accuracy while using only 15% of the raw pixels.
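Because hard region selection is non-differentiable, the policy can be trained with a REINFORCE-style gradient; a minimal sketch under that assumption follows (the function names and the simple mean baseline are ours, not from the paper).

```python
import torch

def select_regions(region_logits):
    """Sample one candidate region per image from the attention policy.
    region_logits: (B, num_candidate_regions)."""
    dist = torch.distributions.Categorical(logits=region_logits)
    actions = dist.sample()
    return actions, dist.log_prob(actions)

def reinforce_loss(log_probs, rewards):
    """rewards: (B,), e.g. 1.0 when the sampled region led to a correct
    classification, else 0.0; a mean baseline reduces gradient variance."""
    baseline = rewards.mean()
    return -(log_probs * (rewards - baseline)).mean()
```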
Arctic environments are rapidly changing under the warming climate. Of particular interest are wetlands, a type of ecosystem that constitutes the most effective terrestrial long-term carbon store. As permafrost thaws, the carbon that was locked in these wetland soils for millennia becomes available for aerobic and anaerobic decomposition, which releases CO2 and CH4, respectively, back to the atmosphere. As CO2 and CH4 are potent greenhouse gases, this transfer of carbon from the land to the atmosphere further contributes to global warming, thereby increasing the rate of permafrost degradation in a positive feedback loop. Therefore, monitoring Arctic wetland health and dynamics is a key scientific task that is also of importance for policy. However, the identification and delineation of these important wetland ecosystems remain incomplete and often inaccurate. Mapping the extent of Arctic wetlands remains a challenge for the scientific community. Conventional, coarser remote sensing methods are inadequate at distinguishing the diverse and micro-topographically complex non-vascular vegetation that characterizes Arctic wetlands, presenting the need for better identification methods. To tackle this challenging problem, we constructed and annotated the first-of-its-kind Arctic Wetland Dataset (AWD). Based on it, we present ArcticNet, a deep neural network that exploits multi-spectral, high-resolution imagery captured from nanosatellites (Planet Dove CubeSats), with an additional DEM from the ArcticDEM project, to semantically label an Arctic study area into six types, three of which are Arctic wetland functional types. We present multi-fold efforts to handle the arising challenges, including class imbalance and the choice of fusion strategies. Preliminary results endorse the high promise of ArcticNet, which achieves 93.12% accuracy in labelling a hold-out set of regions in our Arctic study area.
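One fusion strategy the abstract alludes to can be sketched as early fusion: stacking the ArcticDEM elevation raster as an extra input channel alongside the multi-spectral bands. The band counts, layer sizes, and class weights below are placeholder assumptions, not values from the paper.

```python
import torch

class EarlyFusionStem(torch.nn.Module):
    """Sketch: fuse Planet Dove spectral bands with a DEM channel at the input."""
    def __init__(self, spectral_bands=4, dem_channels=1, out_channels=64):
        super().__init__()
        self.conv = torch.nn.Conv2d(spectral_bands + dem_channels,
                                    out_channels, kernel_size=3, padding=1)

    def forward(self, spectral, dem):
        # spectral: (B, bands, H, W); dem: (B, 1, H, W), resampled to match.
        x = torch.cat([spectral, dem], dim=1)
        return torch.relu(self.conv(x))

# Class imbalance can be countered with per-class weights in the loss,
# e.g. inverse-frequency weights over the six labels (values are placeholders):
class_weights = torch.tensor([1.0, 2.5, 4.0, 1.2, 3.3, 2.0])
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)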
Plankton are effective indicators of environmental change and ecosystem health in freshwater habitats, but collecting plankton data using manual microscopic methods is extremely labor-intensive and expensive. Automated plankton imaging offers a promising way forward to monitor plankton communities with high frequency and accuracy in real time. Yet, manual annotation of millions of images poses a serious challenge to taxonomists. Deep learning classifiers have been successfully applied in various fields and have provided encouraging results when used to categorize marine plankton images. Here, we present a set of deep learning models developed for the identification of lake plankton, and we study several strategies to obtain optimal performance, which lead to operational prescriptions for users. To this aim, we annotated over 17,900 images of zooplankton and large phytoplankton colonies, detected in Lake Greifensee (Switzerland) with the Dual Scripps Plankton Camera, into 35 classes. Our best models were based on transfer learning and ensembling; they classified plankton images with 98% accuracy and a 93% F1 score. When tested on freely available plankton datasets produced by other automated imaging tools (ZooScan, FlowCytobot and ISIIS), our models performed better than previously used models. Our annotated data, code and classification models are freely available online.
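The transfer-learning-plus-ensembling recipe can be sketched as averaging the softmax outputs of several independently fine-tuned backbones; the choice of ResNet-50 here is an illustrative assumption, not necessarily what the authors used.

```python
import torch
import torchvision

def build_member(num_classes=35):
    """One ensemble member: an ImageNet-pretrained backbone with a new head."""
    m = torchvision.models.resnet50(weights="IMAGENET1K_V2")
    m.fc = torch.nn.Linear(m.fc.in_features, num_classes)
    return m

@torch.no_grad()
def ensemble_predict(models, images):
    """Average class probabilities across members, then take the argmax."""
    probs = torch.stack([m(images).softmax(dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)
```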
