
Recurrent U-net: Deep learning to predict daily summertime ozone in the United States

Published by Tai-Long He
Publication date: 2019
Paper language: English





We use a hybrid deep learning model to predict June-July-August (JJA) daily maximum 8-h average (MDA8) surface ozone concentrations in the US. Meteorological fields from the ERA-Interim reanalysis and monthly mean NO$_x$ emissions from the Community Emissions Data System (CEDS) inventory are selected as predictors. Ozone measurements from the US Environmental Protection Agency (EPA) Air Quality System (AQS) from 1980 to 2009 are used to train the model, whereas data from 2010 to 2014 are used to evaluate its performance. The model captures daily, seasonal, and interannual variability in MDA8 ozone across the US well. Feature maps show that the model captures teleconnections between MDA8 ozone and the meteorological fields that drive the ozone dynamics. We used the model to evaluate recent trends in NO$_x$ emissions in the US and found that the trend in the EPA emission inventory produced the largest negative bias in MDA8 ozone between 2010 and 2016. The top-down emission trends from the Tropospheric Chemistry Reanalysis (TCR-2), which is based on satellite observations, produced predictions in the best agreement with observations. In urban regions, the trend in AQS NO$_2$ observations provided ozone predictions in agreement with observations, whereas in rural regions the satellite-derived trends produced the best agreement. In both rural and urban regions, the EPA trend resulted in the largest negative bias in predicted ozone. Our results suggest that the EPA inventory overestimates the reductions in NO$_x$ emissions, and that the satellite-derived trend reflects the influence of reductions in NO$_x$ emissions as well as changes in background NO$_x$. Our results demonstrate the significantly greater predictive capability that the deep learning model provides over conventional atmospheric chemical transport models for air quality analyses.
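The abstract does not spell out the architecture, but a "recurrent U-net" of the kind the title suggests can be sketched as a U-Net encoder/decoder applied to gridded predictor fields, with a convolutional-recurrent bottleneck that carries information across days. The sketch below is a minimal illustration under that assumption; the grid size, channel counts, sequence length, and the choice of Keras ConvLSTM2D are our own illustrative choices, not the authors' exact design.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def recurrent_unet(time_steps=8, height=64, width=128, channels=12):
    # Each sample: a short sequence of daily gridded predictor fields
    # (meteorology plus NOx emissions); all shapes here are illustrative.
    inp = layers.Input((time_steps, height, width, channels))

    # Encoder applied to every day in the sequence.
    e1 = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(inp)
    p1 = layers.TimeDistributed(layers.MaxPooling2D(2))(e1)
    e2 = layers.TimeDistributed(layers.Conv2D(64, 3, padding="same", activation="relu"))(p1)
    p2 = layers.TimeDistributed(layers.MaxPooling2D(2))(e2)

    # Recurrent bottleneck: a ConvLSTM integrates information across days.
    b = layers.ConvLSTM2D(64, 3, padding="same", return_sequences=False)(p2)

    # Decoder with skip connections from the last day of the encoder features.
    last = lambda t: t[:, -1]
    u2 = layers.UpSampling2D(2)(b)
    u2 = layers.Concatenate()([u2, layers.Lambda(last)(e2)])
    u2 = conv_block(u2, 64)
    u1 = layers.UpSampling2D(2)(u2)
    u1 = layers.Concatenate()([u1, layers.Lambda(last)(e1)])
    u1 = conv_block(u1, 32)

    # One output channel: the predicted MDA8 ozone field for the final day.
    out = layers.Conv2D(1, 1)(u1)
    return Model(inp, out)

model = recurrent_unet()
model.compile(optimizer="adam", loss="mse")
```

The recurrent bottleneck is what would let such a network exploit day-to-day persistence in the meteorology instead of treating each day as an independent sample.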




Read also

The Medico: Multimedia Task 2020 focuses on developing an efficient and accurate computer-aided diagnosis system for automatic segmentation [3]. We participate in Task 1, the polyp segmentation task, which is to develop algorithms for segmenting polyps on a comprehensive dataset. For this task, we propose methods that combine Residual modules, Inception modules, and an adaptive convolutional neural network with the U-Net model and PraNet for semantic segmentation of various types of polyps in endoscopic images; a sketch of one such building block follows. We select 5 runs with different architectures and parameters. Across multiple experiments, our methods show promising accuracy and efficiency, and our team places in the top 3 with a Jaccard index of 0.765.
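As an illustration of how a residual module can be dropped into a U-Net-style encoder, here is a minimal Keras sketch; the block design, channel counts, and placement are our assumptions, not the team's exact configuration.

```python
from tensorflow.keras import layers

def residual_block(x, filters):
    # 1x1 projection so the shortcut matches the channel count of the main path.
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    # Skip connection, then the nonlinearity.
    y = layers.Add()([y, shortcut])
    return layers.Activation("relu")(y)

# Hypothetical usage in the first U-Net encoder stage:
inp = layers.Input((256, 256, 3))
e1 = residual_block(inp, 32)
p1 = layers.MaxPooling2D(2)(e1)
```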
One of the most pressing questions in climate science is that of the effect of anthropogenic aerosol on the Earth's energy balance. Aerosols provide the 'seeds' on which cloud droplets form, and changes in the amount of aerosol available to a cloud can change its brightness and other physical properties such as optical thickness and spatial extent. Clouds play a critical role in moderating global temperatures, and small perturbations can lead to significant amounts of cooling or warming. Uncertainty in this effect is so large that it is not currently known whether it is negligible or provides a cooling large enough to largely negate present-day warming by CO$_2$. This work uses deep convolutional neural networks to look for two particular perturbations in clouds due to anthropogenic aerosol and to assess their properties and prevalence, providing valuable insights into their climatic effects.
Humans excel at detecting interesting patterns in images, for example those taken from satellites. This kind of anecdotal evidence can lead to the discovery of new phenomena. However, it is often difficult to gather enough data on subjective features for significant analysis. This paper presents an example of how two tools that have recently become accessible to a wide range of researchers, crowd-sourcing and deep learning, can be combined to explore satellite imagery at scale. In particular, the focus is on the organization of shallow cumulus convection in the trade wind regions. Shallow clouds play a large role in the Earth's radiation balance yet are poorly represented in climate models. For this project, four subjective patterns of organization were defined: Sugar, Flower, Fish and Gravel. On cloud-labeling days at two institutes, 67 scientists screened 10,000 satellite images on a crowd-sourcing platform and classified almost 50,000 mesoscale cloud clusters. This dataset is then used to train deep learning algorithms that make it possible to automate the pattern detection and create global climatologies of the four patterns (see the sketch below). Analysis of the geographical distribution and large-scale environmental conditions indicates that the four patterns have some overlap with established modes of organization, such as open and closed cellular convection, but also differ in important ways. The results and dataset from this project suggest promising research questions. Further, this study illustrates that crowd-sourcing and deep learning complement each other well for the exploration of image datasets.
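The supervised-learning step described here amounts to training a network on the crowd-labeled clusters. A minimal four-class classifier sketch follows; the authors' actual networks may well be object-detection or segmentation models, and the input size and architecture below are purely illustrative.

```python
from tensorflow.keras import layers, models

# The four crowd-sourced pattern classes named in the abstract.
CLASSES = ["Sugar", "Flower", "Fish", "Gravel"]

model = models.Sequential([
    layers.Input((128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.GlobalAveragePooling2D(),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(images, labels, ...) would then run on the ~50,000 labeled clusters.
```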
Gridded data products, for example interpolated daily measurements of precipitation from weather stations, are commonly used as a convenient substitute for direct observations because these products provide a spatially and temporally continuous and complete source of data. However, when the goal is to characterize climatological features of extreme precipitation over a spatial domain (e.g., a map of return values) at the native spatial scales of these phenomena, then gridded products may lead to incorrect conclusions because daily precipitation is a fractal field and hence any smoothing technique will dampen local extremes. To address this issue, we create a new probabilistic gridded product specifically designed to characterize the climatological properties of extreme precipitation by applying spatial statistical analyses to daily measurements of precipitation from the GHCN over CONUS. The essence of our method is to first estimate the climatology of extreme precipitation based on station data and then use a data-driven statistical approach to interpolate these estimates to a fine grid. We argue that our method yields an improved characterization of the climatology within a grid cell because the probabilistic behavior of extreme precipitation is much better behaved (i.e., smoother) than daily weather. Furthermore, the spatial smoothing innate to our approach significantly increases the signal-to-noise ratio in the estimated extreme statistics relative to an analysis without smoothing. Finally, by deriving a data-driven approach for translating extreme statistics to a spatially complete grid, the methodology outlined in this paper resolves the issue of how to properly compare station data with output from Earth system models. We conclude the paper by comparing our probabilistic gridded product with a standard extreme value analysis of the Livneh gridded daily precipitation product.
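The per-station step of estimating the climatology of extremes typically amounts to fitting a generalized extreme value (GEV) distribution to annual maxima and reading off return values. A minimal sketch with synthetic stand-in data and an assumed 20-year return period follows; the paper's actual fitting procedure and the spatial interpolation to a fine grid are not shown.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Stand-in for one GHCN station: 60 years of annual maximum daily precipitation (mm).
annual_maxima = rng.gumbel(loc=40.0, scale=10.0, size=60)

# Fit a GEV distribution to the annual maxima (block-maxima approach).
shape, loc, scale = genextreme.fit(annual_maxima)

# The T-year return value is the level exceeded with probability 1/T in any year.
T = 20
return_value = genextreme.isf(1.0 / T, shape, loc, scale)
print(f"{T}-year return value: {return_value:.1f} mm")
```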
Sturge-Weber syndrome (SWS) is a vascular malformation disease, and it may cause blindness if the patient's condition is severe. Clinical results show that SWS can be divided into two types based on the characteristics of scleral blood vessels. Therefore, how to accurately segment scleral blood vessels has become a significant problem in computer-aided diagnosis. In this research, we propose to continuously upsample the bottom layer's feature maps to preserve image details, and we design a novel Claw UNet, based on UNet, for scleral blood vessel segmentation. Specifically, the residual structure is used to increase the number of network layers in the feature extraction stage to learn deeper features. In the decoding stage, by fusing the features of the encoding, upsampling, and decoding parts, Claw UNet can achieve effective segmentation in the fine-grained regions of scleral blood vessels. To effectively extract small blood vessels, we use the attention mechanism to calculate the attention coefficient of each position in images (see the sketch below). Claw UNet outperforms other UNet-based networks on the scleral blood vessel image dataset.
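The per-position attention coefficient described here can be sketched as a standard additive attention gate of the kind used in Attention U-Net; the authors' exact formulation in Claw UNet may differ, and the shapes below are hypothetical.

```python
from tensorflow.keras import layers

def attention_gate(x, g, inter_channels):
    # x: encoder (skip) features; g: decoder gating signal, same spatial size as x.
    theta_x = layers.Conv2D(inter_channels, 1)(x)
    phi_g = layers.Conv2D(inter_channels, 1)(g)
    f = layers.Activation("relu")(layers.Add()([theta_x, phi_g]))
    # One attention coefficient in [0, 1] per spatial position.
    alpha = layers.Conv2D(1, 1, activation="sigmoid")(f)
    # Broadcast the (H, W, 1) attention map over the channels of x.
    return x * alpha

# Hypothetical usage: gate 64-channel skip features with a decoder signal.
x = layers.Input((64, 64, 64))
g = layers.Input((64, 64, 128))
gated = attention_gate(x, g, inter_channels=32)
```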
