Vector-based cellular automata (CA) built on real land parcels have become an important trend in current urban development simulation studies. Compared with raster-based CA models, vector CA models have been difficult to apply widely because of their complex data structures and technical barriers. This study proposes UrbanVCA, a new vector CA-based urban development simulation framework that supports multiple machine-learning models. To measure simulation accuracy better, this study is also the first to propose a vector-based landscape index (VecLI) model based on real land parcels. Using Shunde, Guangdong as the study area, UrbanVCA simulates multiple types of urban land-use change at the land-parcel level with high accuracy (FoM = 0.243), and the landscape-index similarity reaches 87.3%. The simulation results for 2030 show that the eco-protection scenario can promote urban agglomeration and reduce encroachment on ecological land and the loss of arable land by at least 60%. In addition, we have developed and released UrbanVCA software for urban planners and researchers.
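As a generic illustration of the kind of per-parcel metric a vector landscape-index model can aggregate (the actual VecLI indices are defined in the paper; the function names below are hypothetical), a square-normalised shape index over (area, perimeter) parcel tuples might look like:

```python
import math

def shape_index(area, perimeter):
    """Square-normalised shape index: 1.0 for a square parcel,
    larger for more irregular parcel shapes."""
    return perimeter / (4.0 * math.sqrt(area))

def mean_shape_index(parcels):
    """Area-weighted mean shape index over (area, perimeter) tuples."""
    total_area = sum(a for a, _ in parcels)
    return sum(a * shape_index(a, p) for a, p in parcels) / total_area
```

Unlike raster landscape metrics, such a metric uses the true polygon geometry of each land parcel, which is the motivation for a vector-based index model.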
Cellular Automata (CA) are widely used to model the dynamics within complex land use and land cover (LULC) systems. Past CA model research has focused on improving the technical modeling procedures, and only a few studies have sought to improve our understanding of the nonlinear relationships that underlie LULC change. Many CA models lack the ability to simulate the detailed patch evolution of multiple land use types. This study introduces a patch-generating land use simulation (PLUS) model that integrates a land expansion analysis strategy and a CA model based on multi-type random patch seeds. These were used to understand the drivers of land expansion and to investigate the landscape dynamics in Wuhan, China. The proposed model achieved a higher simulation accuracy and landscape pattern metrics more similar to the true landscape than the other CA models tested. The land expansion analysis strategy also uncovered some underlying transition rules, such as that grassland is most likely to be found where it is not strongly impacted by human activities, and that deciduous forest areas tend to grow adjacent to arterial roads. We also projected the structure of land use under different optimizing scenarios for 2035 by combining the proposed model with multi-objective programming. The results indicate that the proposed model can help policymakers to manage future land use dynamics and thus realize more sustainable land use patterns for future development. Software for PLUS has been made available at https://github.com/HPSCIL/Patch-generating_Land_Use_Simulation_Model
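The patch-growing idea can be sketched generically (a greedy stand-in for the probabilistic multi-type random patch seeding in PLUS; `grow_patch` and the grid representation are assumptions, not the paper's code): starting from a seed cell, a patch repeatedly annexes the most suitable neighbouring cell until it reaches a target size.

```python
def grow_patch(suitability, seed, size):
    """Grow one land-use patch from a seed cell on a 2D suitability grid,
    greedily annexing the highest-suitability frontier cell each step."""
    rows, cols = len(suitability), len(suitability[0])

    def neighbours(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield (nr, nc)

    patch = {seed}
    while len(patch) < size:
        frontier = {n for cell in patch for n in neighbours(cell)} - patch
        if not frontier:                       # nowhere left to grow
            break
        patch.add(max(frontier, key=lambda c: suitability[c[0]][c[1]]))
    return patch
```

A patch-based CA would draw many such seeds at random per land-use type and grow them stochastically; the greedy rule above is the simplest deterministic variant of that mechanism.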
Desktop GIS applications, such as ArcGIS and QGIS, provide tools essential for conducting suitability analysis, an activity that is central in formulating a land-use plan. But when it comes to building complicated land-use suitability models, these applications have several limitations, including operating-system dependence, lack of dedicated modules, insufficient reproducibility, and difficult, if not impossible, deployment on a computing cluster. To address these challenges, this paper introduces PyLUSAT: Python for Land Use Suitability Analysis Tools. PyLUSAT is an open-source software package that provides a series of tools (functions) to conduct various tasks in a suitability modeling workflow. These tools were evaluated against comparable tools in ArcMap 10.4 with respect to both accuracy and computational efficiency. Results showed that PyLUSAT functions were two to ten times more efficient depending on the job's complexity, while generating outputs with accuracy similar to the ArcMap tools. PyLUSAT also features extensibility and cross-platform compatibility. It has been used to develop fourteen QGIS Processing Algorithms and implemented on a high-performance computing cluster (HiPerGator at the University of Florida) to expedite the process of suitability analysis. All these properties make PyLUSAT a competitive alternative for urban planners/researchers to customize and automate suitability analysis as well as integrate the technique into a larger analytical framework.
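The core of a suitability-modeling workflow, rescaling criterion rasters to a common scale and combining them by weighted overlay, can be sketched in plain NumPy (a generic illustration, not PyLUSAT's actual API):

```python
import numpy as np

def rescale(arr, invert=False):
    """Min-max rescale a criterion raster to [0, 1];
    invert=True for 'less is better' criteria (e.g. distance to roads)."""
    lo, hi = arr.min(), arr.max()
    scaled = (arr - lo) / (hi - lo) if hi > lo else np.zeros_like(arr, dtype=float)
    return 1.0 - scaled if invert else scaled

def weighted_overlay(criteria, weights):
    """Weighted linear combination of rescaled criteria rasters;
    weights are normalised to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * c for wi, c in zip(w, criteria))
```

Each PyLUSAT tool covers one step of such a workflow (distance, interpolation, reclassification, overlay); expressing the steps as plain functions is what makes the workflow scriptable and deployable on a cluster.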
This paper studies three classes of cellular automata from a computational point of view: freezing cellular automata, where the state of a cell can only decrease according to some order on states; cellular automata where each cell only makes a bounded number of state changes in any orbit; and finally cellular automata where each orbit converges to some fixed point. Many examples studied in the literature fit into these definitions, in particular the works on crystal growth started by S. Ulam in the 60s. The central question addressed here is how the computational power and computational hardness of basic properties are affected by the constraints of convergence, bounded number of changes, or local decrease of states in each cell. By studying various benchmark problems (short-term prediction, long-term reachability, limits) and considering various complexity measures and scales (LOGSPACE vs. PTIME, communication complexity, Turing computability and the arithmetical hierarchy) we give a rich and nuanced answer: the overall computational complexity of such cellular automata depends on the class considered (among the three above), the dimension, and the precise problem studied. In particular, we show that all settings can achieve universality in the sense of Blondel-Delvenne-Kůrka, although short-term predictability varies from NLOGSPACE to P-complete. Besides, the computability of limit configurations starting from computable initial configurations separates bounded-change from convergent cellular automata in dimension 1, but also dimension 1 versus higher dimensions for freezing cellular automata. Another surprising dimension-sensitive result obtained is that nilpotency becomes decidable in dimension 1 for all three classes, while it stays undecidable even for freezing cellular automata in higher dimension.
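A minimal example of the first class (illustrative, not a construction from the paper): a 1D two-state CA whose rule can only move a cell's state one way along the order 0 < 1. Each cell therefore changes at most once, and every orbit on a finite ring reaches a fixed point.

```python
def step(config):
    """One synchronous update on a ring: a cell becomes 1 if it or a
    neighbour is 1.  The rule is monotone, so states never go back to 0."""
    n = len(config)
    return [1 if (config[i] or config[(i - 1) % n] or config[(i + 1) % n]) else 0
            for i in range(n)]

def run_to_fixed_point(config, max_steps=1000):
    """Iterate until the configuration stops changing (guaranteed here,
    since each cell can flip 0 -> 1 at most once)."""
    for t in range(max_steps):
        nxt = step(config)
        if nxt == config:          # fixed point reached
            return config, t
        config = nxt
    return config, max_steps
```

From a single active cell on a ring of 9 cells, the active region grows by one cell on each side per step and freezes at the all-ones fixed point.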
Nitrogen dioxide (NO$_2$) is a primary constituent of traffic-related air pollution and has well-established harmful environmental and human-health impacts. Knowledge of the spatiotemporal distribution of NO$_2$ is critical for exposure and risk assessment. A common approach for assessing air pollution exposure is linear regression involving spatially referenced covariates, known as land-use regression (LUR). We develop a scalable approach for simultaneous variable selection and estimation of LUR models with spatiotemporally correlated errors, by combining a general-Vecchia Gaussian process approximation with a penalty on the LUR coefficients. In comparisons to existing methods using simulated data, our approach resulted in higher model-selection specificity and sensitivity and in better prediction in terms of calibration and sharpness, for a wide range of relevant settings. In our spatiotemporal analysis of daily, US-wide, ground-level NO$_2$ data, our approach was more accurate, and produced a sparser and more interpretable model. Our daily predictions elucidate spatiotemporal patterns of NO$_2$ concentrations across the United States, including significant variations between cities and intra-urban variation. Thus, our predictions will be useful for epidemiological and risk-assessment studies seeking daily, national-scale predictions, and they can be used in acute-outcome health-risk assessments.
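The penalised-coefficient idea can be sketched on its own, setting aside the Vecchia-approximated correlated errors that the paper combines it with. The sketch below, on synthetic covariates, is lasso-penalised linear regression fitted by iterative soft-thresholding (ISTA), which zeroes out irrelevant LUR coefficients:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise 0.5*||y - X b||^2 + lam*||b||_1 by iterative
    soft-thresholding (ISTA)."""
    L = np.linalg.eigvalsh(X.T @ X).max()      # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L          # gradient step on the squared loss
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))              # hypothetical LUR covariates
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
beta_hat = lasso_ista(X, y, lam=5.0)           # recovers the two active covariates
```

The L1 penalty drives the coefficients of uninformative covariates exactly to zero, which is what yields the "sparser and more interpretable model" the abstract refers to.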
In this paper we propose the use of multiple local binary patterns (LBPs) to effectively classify land use images. We use the UC Merced 21-class land use image dataset. The classification task is challenging because the dataset contains intra-class variability and inter-class similarities. Our proposed method of using multi-neighborhood LBPs combined with a nearest neighbor classifier achieves an accuracy of 77.76%. A further class-wise analysis is conducted, and suitable suggestions are made for further improvements to classification accuracy.
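The multi-neighborhood idea can be sketched as follows (a simplified axis-aligned LBP without circular interpolation; not the authors' exact implementation): compute an 8-neighbour LBP code per pixel at several radii, histogram the codes, and concatenate the histograms into one feature vector for a nearest-neighbour classifier.

```python
import numpy as np

def lbp_codes(img, radius=1):
    """8-neighbour LBP codes at the given radius (axis-aligned sampling,
    no interpolation -- a simplification of standard circular LBP)."""
    h, w = img.shape
    r = radius
    center = img[r:h - r, r:w - r]
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r),
               (r, r), (r, 0), (r, -r), (0, -r)]
    codes = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[r + dy:h - r + dy, r + dx:w - r + dx]
        codes |= (neigh >= center).astype(np.int32) << bit   # set bit if neighbour >= centre
    return codes

def multi_lbp_feature(img, radii=(1, 2, 3)):
    """Concatenate normalised 256-bin LBP histograms over several radii."""
    hists = []
    for r in radii:
        h, _ = np.histogram(lbp_codes(img, r), bins=256, range=(0, 256))
        hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)
```

Using several radii captures texture at multiple scales, which helps separate land-use classes that look alike at any single scale.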