
Latent Estimation of GDP, GDP per capita, and Population from Historic and Contemporary Sources

Published by Christopher Fariss
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English





The concepts of Gross Domestic Product (GDP), GDP per capita, and population are central to the study of political science and economics. However, a growing literature suggests that existing measures of these concepts contain considerable error or are based on overly simplistic modeling choices. We address these problems by creating a dynamic, three-dimensional latent trait model, which uses observed information about GDP, GDP per capita, and population to estimate posterior prediction intervals for each of these important concepts. By combining historical and contemporary sources of information, we extend the temporal and spatial coverage of existing country-year datasets from 1500 A.D. through 2015 A.D., and, because the model makes use of multiple indicators of the underlying concepts, we can estimate the relative precision of the different country-year estimates. Overall, our latent variable model offers a principled method for incorporating information from different historic and contemporary data sources. It can be expanded or refined as researchers discover new or alternative sources of information about these concepts.
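The model described above fuses several noisy indicators of each latent concept with a dynamic prior. As a hedged, purely illustrative analogue (not the authors' actual three-dimensional Bayesian model), the sketch below combines three noisy indicators of a single latent series, say log GDP, with a random-walk prior via a local-level Kalman filter/smoother; the function name, the variances `q` and `r`, and the simulated data are all assumptions made for the example.

```python
import numpy as np

def kalman_smooth(y, q=0.01, r=0.09):
    """Local-level model: one latent series observed through the columns
    of y (noisy indicators); NaN marks a missing observation."""
    T = len(y)
    xf, pf = np.zeros(T), np.zeros(T)   # filtered means / variances
    x, p = 0.0, 10.0                    # vague prior on the latent level
    for t in range(T):
        p += q                          # predict: latent level is a random walk
        for k in np.where(~np.isnan(y[t]))[0]:
            gain = p / (p + r)          # sequential scalar updates are exact
            x += gain * (y[t, k] - x)   # here because indicator noise is diagonal
            p *= 1.0 - gain
        xf[t], pf[t] = x, p
    xs, ps = xf.copy(), pf.copy()       # Rauch-Tung-Striebel backward pass
    for t in range(T - 2, -1, -1):
        c = pf[t] / (pf[t] + q)
        xs[t] = xf[t] + c * (xs[t + 1] - xf[t])
        ps[t] = pf[t] + c ** 2 * (ps[t + 1] - pf[t] - q)
    return xs, ps

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.1, 200))          # latent log-GDP path
y = truth[:, None] + rng.normal(0, 0.3, (200, 3))   # three noisy indicators
y[:120, 2] = np.nan                                 # one source starts late
mean, var = kalman_smooth(y)
low, high = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)
```

The interval (low, high) widens in years covered by fewer sources, mirroring the paper's point that using multiple indicators lets one report the relative precision of each country-year estimate.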




Read also

Urban scaling and Zipf's law are two fundamental paradigms for the science of cities. These laws have mostly been investigated independently and are often perceived as disassociated matters. Here we present a large-scale investigation of the connection between these two laws using population and GDP data from almost five thousand consistently-defined cities in 96 countries. We empirically demonstrate that both laws are tied to each other and derive an expression relating the urban scaling and Zipf exponents. This expression captures the average tendency of the empirical relation between both exponents, and simulations yield results very similar to the real data after accounting for random variations. We find that while the vast majority of countries exhibit increasing returns to scale of urban GDP, this effect is less pronounced in countries with fewer small cities and more metropolises (small Zipf exponent) than in countries with a more uneven number of small and large cities (large Zipf exponent). Our research puts forward the idea that urban scaling does not solely emerge from intra-city processes, as the population distribution and the scaling of urban GDP are correlated with each other.
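To make the two quantities concrete: the urban scaling exponent is the slope of a log-log regression of GDP on population, and the Zipf exponent is (minus) the slope of a log rank-size regression. The sketch below, run on synthetic Pareto-distributed city sizes, is a hedged illustration of how each exponent is typically measured; it does not reproduce the paper's derived expression relating them, and all parameter values are invented for the example.

```python
import numpy as np

def scaling_exponent(pop, gdp):
    """Urban scaling exponent beta from the log-log fit GDP ~ pop**beta."""
    beta, _ = np.polyfit(np.log(pop), np.log(gdp), 1)
    return beta

def zipf_exponent(pop):
    """Zipf exponent alpha from the rank-size rule rank ~ size**(-alpha)."""
    sizes = np.sort(pop)[::-1]
    ranks = np.arange(1, sizes.size + 1)
    slope, _ = np.polyfit(np.log(sizes), np.log(ranks), 1)
    return -slope

# Synthetic illustration: Pareto city sizes plus mildly super-linear GDP,
# i.e. increasing returns to scale (beta > 1), as most countries exhibit.
rng = np.random.default_rng(1)
pop = (rng.pareto(1.1, 500) + 1.0) * 1e4
gdp = 30.0 * pop ** 1.15 * rng.lognormal(0.0, 0.2, 500)
print(scaling_exponent(pop, gdp), zipf_exponent(pop))   # ~1.15, ~1.1
```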
Runtime and scalability of large neural networks can be significantly affected by the placement of operations in their dataflow graphs on suitable devices. With increasingly complex neural network architectures and heterogeneous device characteristics, finding a reasonable placement is extremely challenging even for domain experts. Most existing automated device placement approaches are impractical due to the significant amount of compute required and their inability to generalize to new, previously held-out graphs. To address both limitations, we propose an efficient end-to-end method based on a scalable sequential attention mechanism over a graph neural network that is transferable to new graphs. On a diverse set of representative deep learning models, including Inception-v3, AmoebaNet, Transformer-XL, and WaveNet, our method on average achieves 16% improvement over human experts and 9.2% improvement over the prior art with 15 times faster convergence. To further reduce the computation cost, we pre-train the policy network on a set of dataflow graphs and use a superposition network to fine-tune it on each individual graph, achieving state-of-the-art performance on large hold-out graphs with over 50k nodes, such as an 8-layer GNMT.
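As a hedged, heavily simplified skeleton of such a placement policy (not the paper's trained architecture), the sketch below embeds each operation with one round of mean-neighbour message passing over the dataflow graph and scores each device per operation with a linear head; in the actual method the weights are trained with reinforcement learning against measured runtimes and the head is a sequential attention mechanism. All names, shapes, and random weights here are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def place_ops(adj, feats, w_msg, w_dev):
    """One round of mean-neighbour message passing embeds each op of the
    dataflow graph; a linear head then scores every device for every op."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    h = np.tanh(((adj @ feats) / deg + feats) @ w_msg)   # node embeddings
    probs = softmax(h @ w_dev)        # per-op distribution over devices
    return probs.argmax(axis=1)       # greedy placement

rng = np.random.default_rng(2)
n_ops, n_feat, n_dev = 6, 4, 2
adj = (rng.random((n_ops, n_ops)) < 0.3).astype(float)  # toy dataflow graph
feats = rng.normal(size=(n_ops, n_feat))                # per-op features
placement = place_ops(adj, feats,
                      rng.normal(size=(n_feat, n_feat)),
                      rng.normal(size=(n_feat, n_dev)))
```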
In this paper we describe an algorithm for predicting the websites at risk in a long-range hacking activity, while jointly inferring the provenance and evolution of vulnerabilities on websites over continuous time. Specifically, we use hazard regression with a time-varying additive hazard function parameterized in a generalized linear form. The activation coefficients on each feature are continuous-time functions constrained with a total variation penalty inspired by hacking campaigns. We show that the optimal solution is a 0th-order spline with a finite number of adaptively chosen knots, and can be solved efficiently. Experiments on real data show that our method significantly outperforms classic methods while providing meaningful interpretability.
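The key ingredient is the total variation penalty on the time-varying coefficients, which is what forces the optimal coefficient path to be piecewise constant (a 0th-order spline) with few adaptively chosen knots. The sketch below is a hedged, discretised toy version: it fits a single coefficient path under a smoothed TV penalty with an off-the-shelf optimiser rather than the paper's efficient exact solver, and uses a Gaussian loss in place of the hazard likelihood; `lam` and `eps` are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def tv_fit(y, lam=2.0, eps=1e-4):
    """Fit a coefficient path theta_t to noisy observations y_t under a
    smoothed total-variation penalty, which pulls the solution toward a
    piecewise-constant (0th-order spline) shape with few knots."""
    def objective(theta):
        fit = 0.5 * np.sum((y - theta) ** 2)              # Gaussian stand-in
        tv = np.sum(np.sqrt(np.diff(theta) ** 2 + eps))   # smoothed |d theta|
        return fit + lam * tv
    res = minimize(objective, np.full(y.size, y.mean()), method="L-BFGS-B")
    return res.x

rng = np.random.default_rng(3)
truth = np.repeat([0.2, 1.0, 0.5], 40)     # hazard jumps at campaign times
theta_hat = tv_fit(truth + rng.normal(0, 0.25, truth.size))
```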
Yi Guo, Huan Yuan, Jianchao Tan (2021)
Model compression techniques are recently gaining explosive attention for obtaining efficient AI models for various real-time applications. Channel pruning is one important compression strategy and is widely used in slimming various DNNs. Previous gate-based or importance-based pruning methods aim to remove channels whose importance is smallest. However, it remains unclear what criteria the channel importance should be measured on, leading to various channel selection heuristics. Some other sampling-based pruning methods deploy sampling strategies to train sub-nets, which often causes training instability and degraded performance of the compressed models. In view of these research gaps, we present a new module named Gates with Differentiable Polarization (GDP), inspired by principled optimization ideas. GDP can be plugged before convolutional layers without bells and whistles, to control the on-and-off of each channel or whole layer block. During the training process, the polarization effect will drive a subset of gates to smoothly decrease to exactly zero, while other gates gradually stay away from zero by a large margin. When training terminates, the zero-gated channels can be painlessly removed, while other non-zero gates can be absorbed into the succeeding convolution kernel, causing no interruption to training and no damage to the trained model. Experiments conducted over the CIFAR-10 and ImageNet datasets show that the proposed GDP algorithm achieves state-of-the-art performance on various benchmark DNNs at a broad range of pruning ratios. We also apply GDP to DeepLabV3Plus-ResNet50 on the challenging Pascal VOC segmentation task, whose test performance sees no drop (it even improves slightly) with over 60% FLOPs saving.
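A minimal sketch of the gating idea, assuming a smoothed-step gate of the form w²/(w² + ε), which is differentiable everywhere yet saturates toward exact 0 or 1 as ε is annealed; the class name, the sparsity weight, and the exact gate/regulariser form are assumptions for illustration and may differ from the paper's formulation.

```python
import torch
import torch.nn as nn

class DiffPolarGate(nn.Module):
    """Channel gate with a smoothed step w**2 / (w**2 + eps): differentiable
    everywhere, yet gates saturate toward exact 0 or 1 as eps is annealed,
    so zero-gated channels can be removed after training."""
    def __init__(self, channels, eps=1.0):
        super().__init__()
        self.w = nn.Parameter(torch.ones(channels))
        self.eps = eps                    # anneal toward 0 during training

    def gate(self):
        return self.w ** 2 / (self.w ** 2 + self.eps)

    def forward(self, x):                 # x: (N, C, H, W)
        return x * self.gate().view(1, -1, 1, 1)

# Usage: plug the gate before a convolution and add a sparsity term so a
# subset of gates is driven to zero while the rest stay open.
gate, conv = DiffPolarGate(64), nn.Conv2d(64, 128, 3, padding=1)
out = conv(gate(torch.randn(2, 64, 32, 32)))
sparsity_loss = 1e-3 * gate.gate().sum()
```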
We solve an infinite time-horizon bounded-variation stochastic control problem with regime switching between $N$ states. This is motivated by the problem of a government that wants to control the country's debt-to-GDP (gross domestic product) ratio. In our formulation, the debt-to-GDP ratio evolves stochastically in continuous time, and its drift -- given by the interest rate on government debt, net of the growth rate of GDP -- is affected by an exogenous macroeconomic risk process modelled by a continuous-time Markov chain with $N$ states. The government can act on the public debt by increasing or decreasing its level, and it aims at minimising a net expected regime-dependent cost functional. Without relying on a guess-and-verify approach, but performing a direct probabilistic study, we show that it is optimal to keep the debt-to-GDP ratio in an interval whose boundaries depend on the states of the risk process. These boundaries are given through a zero-sum optimal stopping game with regime switching with $N$ states and are characterised through a system of nonlinear algebraic equations with constraints. To the best of our knowledge, such a result appears here for the first time. Finally, we put our methodology into practice in a case study of a Markov chain with $N=2$ states; we provide a thorough analysis and complement our theoretical results with a detailed numerical study of the sensitivity of the optimal debt-ratio management policy with respect to the problem's parameters.
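A hedged Monte Carlo illustration of the shape of such a policy for $N=2$: the ratio drifts at a regime-dependent rate, the regime follows a two-state Markov chain, and the controller reflects the ratio at state-dependent boundaries. The bands, drift rates, and switch rates below are arbitrary placeholders; in the paper the boundaries come out of the zero-sum optimal stopping game, not from assumed values.

```python
import numpy as np

def simulate_band_policy(T=10.0, dt=1e-3, bands=((0.6, 0.9), (0.5, 0.8)),
                         mu=(0.03, -0.01), sigma=0.05, switch=(0.5, 0.5)):
    """Debt-to-GDP ratio with regime-dependent drift mu[i], a two-state
    Markov chain (switch rates), and a controller that reflects the ratio
    at the state-dependent band [a_i, b_i]; returns path and total effort."""
    rng = np.random.default_rng(4)
    n = int(T / dt)
    x, state, effort = 0.7, 0, 0.0
    path = np.empty(n)
    for t in range(n):
        if rng.random() < switch[state] * dt:        # exogenous regime switch
            state = 1 - state
        x += mu[state] * dt + sigma * np.sqrt(dt) * rng.normal()
        a, b = bands[state]
        effort += max(a - x, 0.0) + max(x - b, 0.0)  # minimal push applied
        x = min(max(x, a), b)                        # reflect into the band
        path[t] = x
    return path, effort

path, effort = simulate_band_policy()
```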