The COVID-19 pandemic represents the most significant public health disaster since the 1918 influenza pandemic. During pandemics such as COVID-19, timely and reliable spatio-temporal forecasting of epidemic dynamics is crucial. Deep learning-based time series models have recently gained popularity and have been used successfully for epidemic forecasting. Here we focus on the design and analysis of deep learning-based models for COVID-19 forecasting. We implement multiple recurrent neural network-based deep learning models and combine them using the stacking ensemble technique. To incorporate the effects of multiple factors on COVID-19 spread, we draw on multiple data sources, such as confirmed and death case counts and testing data, for better predictions. To overcome the sparsity of training data and to address the dynamic correlation of the disease, we propose clustering-based training for high-resolution forecasting; clustering helps identify groups of regions that share similar trends due to various spatio-temporal effects. We evaluate the proposed method for forecasting weekly new confirmed COVID-19 cases at the county, state, and country levels. A comprehensive comparison of different time series models in the COVID-19 context is conducted and analyzed. The results show that simple deep learning models can achieve comparable or better performance than more complicated models. We are currently integrating our methods into the weekly forecasts that we provide to state and federal authorities.
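A minimal sketch of the stacking idea is shown below, assuming Keras-style recurrent base learners and a linear meta-learner; the window length, layer sizes, and synthetic series are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of a stacking ensemble for one-week-ahead case forecasting.
# The LSTM/GRU base learners and the linear meta-learner are illustrative
# choices, not the paper's exact architecture.
import numpy as np
from sklearn.linear_model import LinearRegression
from tensorflow.keras import Sequential, layers

def make_windows(series, lookback=8):
    """Convert a weekly series into (lookback weeks -> next week) samples."""
    X = np.array([series[t:t + lookback] for t in range(len(series) - lookback)])
    y = np.array([series[t + lookback] for t in range(len(series) - lookback)])
    return X[..., None], y  # shapes: (n, lookback, 1) and (n,)

def rnn_model(cell, lookback=8):
    model = Sequential([layers.Input(shape=(lookback, 1)), cell(32), layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    return model

series = np.random.rand(150)              # stand-in for weekly new case counts
X, y = make_windows(series)
split = int(0.8 * len(X))                 # hold out data to train the meta-learner
X_tr, y_tr, X_val, y_val = X[:split], y[:split], X[split:], y[split:]

base_models = [rnn_model(layers.LSTM), rnn_model(layers.GRU)]
for m in base_models:
    m.fit(X_tr, y_tr, epochs=20, verbose=0)

# Stacking: the meta-learner is fit on the base models' held-out forecasts.
P_val = np.column_stack([m.predict(X_val, verbose=0).ravel() for m in base_models])
meta = LinearRegression().fit(P_val, y_val)
stacked_forecast = meta.predict(P_val)    # combined one-week-ahead forecast
```

Training the meta-learner on held-out predictions, rather than on the base models' training-set fits, is what keeps the combination step from simply memorizing the base models' in-sample errors.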
Some of the key questions of interest during the COVID-19 pandemic (and all outbreaks) include: where did the disease start, how is it spreading, who is at risk, and how can the spread be controlled? A large number of complex factors drive the spread of pandemics, and, as a result, multiple modeling techniques play an increasingly important role in shaping public policy and decision making. As different countries and regions go through phases of the pandemic, the questions and data availability also change. Of particular interest is aligning model development and data collection to support response efforts at each stage of the pandemic. The COVID-19 pandemic has been unprecedented in terms of real-time collection and dissemination of a number of diverse datasets, ranging from disease outcomes to mobility, behaviors, and socio-economic factors. These datasets have been critical for disease modeling and analytics that support policymakers in real time. In this overview article, we survey the data landscape around COVID-19, with a focus on how such datasets have aided modeling and response through the different stages of the pandemic so far. We also discuss some of the current challenges and the needs that will arise as we plan our way out of the pandemic.
The COVID-19 pandemic represents an unprecedented global health crisis in the last 100 years. Its economic, social, and health impact continues to grow, and it is likely to end up as one of the worst global disasters since the 1918 pandemic and the World Wars. Mathematical models have played an important role in the ongoing crisis; they have been used to inform public policies and have been instrumental in many of the social distancing measures instituted worldwide. In this article we review some of the important mathematical models used to support the ongoing planning and response efforts. These models differ in their use, their mathematical form, and their scope.
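As a concrete illustration of the kind of model such a review covers, the sketch below implements a textbook SIR compartmental model; the equations are generic, and the rates and population are made-up values, not parameters taken from the article.

```python
# Generic SIR compartmental model, the simplest member of the family of
# mechanistic epidemic models; all parameters here are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, state, beta, gamma):
    S, I, R = state
    N = S + I + R
    new_inf = beta * S * I / N   # mass-action transmission
    return [-new_inf, new_inf - gamma * I, gamma * I]

beta, gamma = 0.3, 0.1           # transmission / recovery rates, R0 = beta/gamma = 3
sol = solve_ivp(sir, (0, 180), [9999.0, 1.0, 0.0], args=(beta, gamma),
                t_eval=np.linspace(0, 180, 181))
peak_day = sol.t[np.argmax(sol.y[1])]  # day when infections peak
print(f"Infections peak around day {peak_day:.0f}")
```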
Sparse blind deconvolution is the problem of estimating the blur kernel and sparse excitation, both of which are unknown. Considering a linear convolution model, as opposed to the standard circular convolution model, we derive a sufficient condition for stable deconvolution. The columns of the linear convolution matrix form a Riesz basis with the tightness of the Riesz bounds determined by the autocorrelation of the blur kernel. Employing a Bayesian framework results in a non-convex, non-smooth cost function consisting of an $\ell_2$ data-fidelity term and a sparsity-promoting $\ell_p$-norm ($0 \le p \le 1$) regularizer. Since the $\ell_p$-norm is not differentiable at the origin, we employ an $\epsilon$-regularized $\ell_p$-norm as a surrogate. The data term is also non-convex in both the blur kernel and the excitation. An iterative scheme termed the alternating minimization (Alt. Min.) $\ell_p$-$\ell_2$ projections algorithm (ALPA) is developed to optimize the $\epsilon$-regularized cost function. Further, we demonstrate that, in every iteration, the $\epsilon$-regularized cost function is non-increasing and, more importantly, bounds the original $\ell_p$-norm-based cost. Due to the non-convexity of the cost, the accuracy of estimation is largely influenced by the initialization. Considering the regularized least-squares estimate as the initialization, we analyze how the initialization errors are concentrated, first in Gaussian noise, and then in bounded noise, the latter case resulting in tighter bounds. Comparisons with state-of-the-art blind deconvolution algorithms show that the deconvolution accuracy is higher in the case of ALPA. In the context of natural speech signals, ALPA results in accurate deconvolution of a voiced speech segment into a sparse excitation and a smooth vocal tract response.
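The following is a compact sketch of the alternating-minimization idea, assuming a full linear convolution model, an IRLS-style reweighted update for the $\epsilon$-regularized $\ell_p$ term, and a plain least-squares kernel update; it simplifies ALPA's projection steps and stopping criteria, so treat it as a schematic rather than the authors' algorithm.

```python
# Schematic alternating minimization for sparse blind deconvolution:
#   x-step: reweighted l2 (IRLS) on ||y - Hx||^2 + lam * sum((x_i^2 + eps)^(p/2))
#   h-step: least squares, then normalization to resolve the scale ambiguity.
# A simplified stand-in for ALPA, not the published algorithm.
import numpy as np

def conv_matrix(v, n):
    """Linear (full) convolution matrix of v, shape (len(v) + n - 1, n)."""
    M = np.zeros((len(v) + n - 1, n))
    for j in range(n):
        M[j:j + len(v), j] = v
    return M

def alt_min_lp_l2(y, n_x, n_h, p=0.7, lam=0.1, eps=1e-6, iters=50, seed=0):
    # y should have length n_x + n_h - 1 (full convolution of x and h).
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(n_h)
    h /= np.linalg.norm(h)
    H = conv_matrix(h, n_x)
    # Regularized least-squares initialization for x, as the abstract suggests.
    x = np.linalg.solve(H.T @ H + lam * np.eye(n_x), H.T @ y)
    for _ in range(iters):
        # x-step: quadratic surrogate of the eps-regularized lp penalty.
        H = conv_matrix(h, n_x)
        w = p * (x**2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(H.T @ H + 0.5 * lam * np.diag(w), H.T @ y)
        # h-step: least-squares kernel estimate, normalized to fix the scale.
        X = conv_matrix(x, n_h)
        h = np.linalg.lstsq(X, y, rcond=None)[0]
        h /= np.linalg.norm(h) + 1e-12
    return h, x
```

With exact solves, the reweighted x-step majorizes the $\ell_p$ penalty by a quadratic at the current iterate, so it does not increase the surrogate cost, loosely echoing the monotonicity property the abstract claims for ALPA; the unit-norm constraint on the kernel is only one common way to remove the inherent scale ambiguity between $h$ and $x$.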