
From Static to Dynamic Prediction: Wildfire Risk Assessment Based on Multiple Environmental Factors

Added by Hanjia Lyu
Publication date: 2021
Language: English





Wildfires are among the most frequent and destructive disasters on the west coast of the United States. Many efforts have been made to understand the causes of the increases in wildfire intensity and frequency in recent years. In this work, we propose static and dynamic prediction models to analyze and assess the areas with high wildfire risk in California by utilizing a multitude of environmental data, including population density, Normalized Difference Vegetation Index (NDVI), Palmer Drought Severity Index (PDSI), tree mortality area, tree mortality number, and altitude. Moreover, we focus on a better understanding of the impacts of different factors so as to inform preventive actions. To validate our models and findings, we divide the land of California into 4,242 grid cells of $0.1^\circ \times 0.1^\circ$ in latitude and longitude, and compute the risk of each cell based on spatial and temporal conditions. To verify the generalizability of our models, we further expand the scope of wildfire risk assessment from California to Washington without any fine-tuning. By performing counterfactual analysis, we uncover the effects of several possible interventions on reducing the number of high-risk wildfires. Taken together, our study has the potential to estimate, monitor, and reduce the risks of wildfires across diverse areas, provided that such environmental data are available.
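A minimal sketch of the gridding and per-cell scoring pipeline described in the abstract is given below. The California bounding box, the feature column names, and the logistic-regression stand-in for the risk model are assumptions for illustration, not the paper's actual method.

```python
# Minimal sketch of 0.1-degree gridding and per-cell wildfire risk scoring.
# Bounding box, column names, and the logistic-regression model are
# illustrative assumptions, not the paper's actual implementation.
import pandas as pd
from sklearn.linear_model import LogisticRegression

CELL = 0.1                       # grid resolution in degrees
LAT_MIN, LON_MIN = 32.5, -124.4  # approximate southwest corner of California

FEATURES = ["pop_density", "ndvi", "pdsi",
            "tree_mortality_area", "tree_mortality_count", "altitude"]

def grid_cell(lat: float, lon: float) -> tuple[int, int]:
    """Index of the 0.1-degree x 0.1-degree cell containing a coordinate."""
    return int((lat - LAT_MIN) // CELL), int((lon - LON_MIN) // CELL)

def fit_static_model(cells: pd.DataFrame) -> LogisticRegression:
    """Fit a static risk model; `cells` is assumed to hold one row per grid
    cell with the FEATURES columns and a binary `fire_occurred` label."""
    return LogisticRegression(max_iter=1000).fit(cells[FEATURES],
                                                 cells["fire_occurred"])

def risk_scores(model: LogisticRegression, cells: pd.DataFrame):
    """Per-cell wildfire risk as the predicted probability of fire."""
    return model.predict_proba(cells[FEATURES])[:, 1]
```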




Related research

321 - Jinpeng Li, Yaling Tao, Ting Cai 2021
We develop a liver cancer prediction model using machine learning algorithms, based on epidemiological data from over 55 thousand people collected from 2014 to the present. The best performance is an AUC of 0.71. We analyze the model parameters to investigate the critical risk factors that contribute most to the prediction.
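For reference, the AUC metric reported above can be computed as follows. The synthetic data, the gradient-boosting classifier, and the train/test split are placeholders, not the authors' pipeline.

```python
# Hedged sketch of evaluating a binary risk classifier by AUC. The model
# and the data are placeholders, not the authors' pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")
```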
98 - Jacob Abernethy 2016
Recovery from the Flint Water Crisis has been hindered by uncertainty in both the water testing process and the causes of contamination. In this work, we develop an ensemble of predictive models to assess the risk of lead contamination in individual homes and neighborhoods. To train these models, we utilize a wide range of data sources, including voluntary residential water tests, historical records, and city infrastructure data. Additionally, we use our models to identify the most prominent factors that contribute to a high risk of lead contamination. In this analysis, we find that lead service lines are not the only factor predictive of the risk of lead contamination of water. These results could be used to guide long-term recovery efforts in Flint, minimize the immediate damage, and improve resource-allocation decisions for similar water infrastructure crises.
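As a rough illustration of the ensemble approach described above, the following sketch trains several models on bootstrap resamples and averages their risk scores. The decision-tree base learner and the function signature are illustrative assumptions, not the authors' pipeline.

```python
# Rough sketch of an ensemble risk model: several classifiers trained on
# bootstrap resamples, with their averaged probability used as the per-home
# risk score. Base learner and interface are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ensemble_risk(X_train, y_train, X_new, n_models=25, seed=0):
    """Average predicted probability of contamination over bootstrap models."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X_new))
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap
        model = DecisionTreeClassifier(max_depth=6, random_state=0)
        model.fit(X_train[idx], y_train[idx])
        scores += model.predict_proba(X_new)[:, 1]
    return scores / n_models
```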
Nanotechnology is a so-called key-emerging technology that opens a new world of technological innovation. The novelty of engineered nanomaterials (ENMs) raises concern over their possible adverse effects on humans and the environment. At the same time, risk assessors are challenged with ever-decreasing times-to-market of nano-enabled products. Combined with the perception that it is impossible to extensively test all new nanoforms, there is growing awareness that alternative assessment approaches need to be developed and validated to enable efficient and transparent risk assessment of ENMs. Associated with this awareness is the need to use existing data on similar ENMs as efficiently as possible, which highlights the need to develop alternative approaches to fate and hazard assessment, such as predictive modelling, grouping of ENMs, and read-across of data to similar ENMs. In this contribution, an overview is given of the current state of the art with regard to the categorization of ENMs and the perspectives for implementation in future risk assessment. It is concluded that the qualitative approaches to grouping and categorization that have already been developed need to be substantiated, and additional quantification of the current sets of rule-of-thumb-based approaches is a key priority for the near future. Most of all, the key question of what actually drives the fate and effects of (complex) particles is yet to be answered in enough detail, with a key role foreseen for the surface reactivity of particles as modulated by the chemical composition of their inner and outer cores. When it comes to the environmental categorization of ENMs, we are currently in a descriptive rather than a predictive mode.
We study the problem of maximizing a non-monotone submodular function under multiple knapsack constraints. We propose a simple discrete greedy algorithm for this problem and prove that it yields strong approximation guarantees for functions with bounded curvature. In contrast to other heuristics, it requires no relaxation to continuous domains, and it maintains a constant-factor approximation guarantee independent of the problem size. In the case of a single knapsack, our analysis suggests that the standard greedy can be used in non-monotone settings. Additionally, we study this problem in a dynamic setting, in which the knapsacks change during the optimization process. We modify our greedy algorithm to avoid a complete restart at each constraint update; this modification retains the approximation guarantees of the static case. We evaluate our results experimentally on video summarization and sensor placement tasks, showing that our proposed algorithm competes with the state of the art in static settings. Furthermore, we show that in dynamic settings with a tight computational time budget, our modified greedy yields significant improvements over restarting the greedy from scratch in terms of the solution quality achieved.
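The sketch below shows the textbook cost-benefit greedy for a single knapsack, which conveys the core idea of picking the element with the best marginal-gain-to-cost ratio; it is a simplification, not the exact algorithm or analysis from the paper.

```python
# Textbook density greedy for submodular maximization under one knapsack:
# repeatedly add the feasible element with the largest marginal gain per
# unit cost. A simplification of the paper's algorithm, for illustration.
def greedy_knapsack(elements, cost, f, budget):
    """elements: candidate set; cost: dict element -> cost;
    f: set function (assumed submodular); budget: knapsack capacity."""
    selected, spent = set(), 0.0
    remaining = set(elements)
    while remaining:
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] > budget:
                continue  # element does not fit in the remaining budget
            gain = f(selected | {e}) - f(selected)
            if gain / cost[e] > best_ratio:
                best, best_ratio = e, gain / cost[e]
        if best is None:
            break  # no feasible element with positive marginal gain
        selected.add(best)
        spent += cost[best]
        remaining.remove(best)
    return selected

# Example: a coverage function (monotone submodular) over small sets.
universe_sets = {1: {0, 1}, 2: {1, 2, 3}, 3: {3, 4}, 4: {0, 4}}
f = lambda S: len(set().union(*(universe_sets[e] for e in S))) if S else 0
print(greedy_knapsack(universe_sets, {1: 1, 2: 2, 3: 1, 4: 1}, f, budget=3))
```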
Successful health risk prediction demands both accuracy and reliability from the model. Existing predictive models mainly depend on mining electronic health records (EHR) with advanced deep learning techniques to improve accuracy. However, they ignore publicly available online health data, especially socioeconomic status, environmental factors, and detailed demographic information for each location, which are all strong predictive signals that can augment precision medicine. To be reliable, a model needs to provide both an accurate prediction and an uncertainty score for that prediction. However, existing uncertainty estimation approaches often fail to handle the high-dimensional inputs that arise in multi-sourced data. To fill this gap, we propose the UNcertaInTy-based hEalth risk prediction (UNITE) model. Building upon an adaptive multimodal deep kernel and a stochastic variational inference module, UNITE provides accurate disease risk prediction and uncertainty estimation by leveraging multi-sourced health data, including EHR data, patient demographics, and public health data collected from the web. We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic fatty liver disease (NASH) and Alzheimer's disease (AD). UNITE achieves up to 0.841 in F1 score for AD detection and up to 0.609 in PR-AUC for NASH detection, outperforming the best state-of-the-art baseline by up to 19%. We also show that UNITE can model meaningful uncertainties and provide evidence-based clinical support by clustering similar patients.
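As a generic illustration of pairing a risk prediction with an uncertainty score, the sketch below uses Monte Carlo dropout over a small PyTorch network. This is a stand-in technique only; UNITE itself relies on an adaptive multimodal deep kernel with stochastic variational inference, which is not reproduced here.

```python
# Generic sketch: attach an uncertainty score to a risk prediction via
# Monte Carlo dropout. A stand-in for illustration; not UNITE's actual
# deep-kernel / variational-inference architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Dropout(p=0.3),          # kept active at inference for MC sampling
    nn.Linear(64, 1), nn.Sigmoid(),
)

def predict_with_uncertainty(x: torch.Tensor, n_samples: int = 50):
    """Mean risk and standard deviation over stochastic forward passes."""
    model.train()  # keep dropout on so each pass is a different subnetwork
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

# Hypothetical input: 4 patients with 32 concatenated multi-source features
# (e.g., EHR + demographics + public health data).
x = torch.randn(4, 32)
risk, uncertainty = predict_with_uncertainty(x)
```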
