The choice of appropriate measures of deprivation, and of methods for the identification and aggregation of poverty, has been a challenge for many years. The works of Sen, Atkinson and others have been the cornerstone of most of the literature on poverty measurement. Recent contributions have focused on what we now know as multidimensional poverty measurement. Current aggregation and identification measures for multidimensional poverty make the implicit assumption that dimensions are independent of each other, thus ignoring the natural dependence between them. In this article a variant of the usual method of deprivation measurement is presented. It allows for the aforementioned connections by drawing on geometric and network notions. This new methodology relies on previous identification and aggregation methods, but with small modifications to prevent arbitrary manipulations. It is also proved that this measure still complies with the axiomatic framework of its predecessor. Moreover, the general form of the latter can be considered a particular case of this new measure, although this identification is not unique.
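The abstract does not name the predecessor measure, but a common baseline for counting-based identification and aggregation of multidimensional poverty is the Alkire-Foster adjusted headcount ratio M0. The sketch below illustrates that baseline under that assumption; all data, cutoffs, and weights are illustrative.

```python
import numpy as np

def alkire_foster_m0(X, cutoffs, weights, k):
    """Adjusted headcount ratio M0 (Alkire-Foster counting method).

    X       : (n, d) achievement matrix (rows = people, cols = dimensions)
    cutoffs : length-d deprivation cutoffs z_j
    weights : length-d dimension weights summing to 1
    k       : poverty cutoff on the weighted deprivation score
    """
    X = np.asarray(X, dtype=float)
    g0 = (X < np.asarray(cutoffs)).astype(float)  # deprivation matrix
    scores = g0 @ np.asarray(weights)             # weighted deprivation scores
    poor = scores >= k                            # identification step
    censored = scores * poor                      # censor deprivations of the non-poor
    return censored.mean()                        # aggregation: M0 = H * A

# Toy data: 4 people, 3 equally weighted dimensions, all cutoffs at 1.
X = [[0.5, 1.0, 0.2],
     [2.0, 3.0, 1.5],
     [0.1, 0.2, 0.1],
     [1.2, 0.4, 2.0]]
m0 = alkire_foster_m0(X, cutoffs=[1, 1, 1],
                      weights=[1/3, 1/3, 1/3], k=1/3)  # -> 0.5
```

Because dimensions enter only through the weighted sum of deprivation indicators, the baseline treats them as separable; the variant described in the abstract would modify exactly this step to encode dependence between dimensions.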
The long-lasting socio-economic impact of the global financial crisis has called into question the adequacy of traditional tools for explaining periods of financial distress, as well as the adequacy of the existing policy response. In particular, the effect of complex interconnections among financial institutions on financial stability has been widely recognized. A recent debate has focused on the effects of unconventional policies aimed at achieving both price and financial stability. In particular, Quantitative Easing (QE, i.e., the large-scale asset purchase programme conducted by a central bank upon the creation of new money) has recently been implemented by the European Central Bank (ECB). In this context, two questions deserve more attention in the literature. First, to what extent may the QE, by injecting liquidity, alter the bank-firm lending level and stimulate the real economy? Second, to what extent may the QE also alter the pattern of intra-financial exposures among financial actors (including banks, investment funds, insurance corporations, and pension funds), and what are the implications in terms of financial stability? Here, we address these two questions by developing a methodology to map the macro-network of financial exposures among institutional sectors across financial instruments (e.g., equity, bonds, and loans), and we illustrate our approach on recently available data (i.e., data on loans and on private and public securities purchased within the QE). We then test the effect of the implementation of the ECB's QE on the time evolution of the financial linkages in the macro-network of the euro area, as well as the effect on macroeconomic variables, such as output and prices.
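The paper's actual network construction is not detailed in the abstract, but the idea of a macro-network — sectors as nodes, with one exposure layer per financial instrument — can be sketched minimally as follows. Sector labels and amounts are hypothetical, not actual euro-area data.

```python
from collections import defaultdict

# Illustrative exposures: (creditor sector, debtor sector, instrument) -> amount.
# All figures are made up for the sketch.
exposures = {
    ("banks", "firms", "loans"): 120.0,
    ("banks", "government", "bonds"): 80.0,
    ("investment_funds", "banks", "equity"): 35.0,
    ("insurance", "government", "bonds"): 60.0,
    ("pension_funds", "investment_funds", "equity"): 25.0,
}

def aggregate_layers(exposures):
    """Collapse the per-instrument layers into one weighted macro-network."""
    net = defaultdict(float)
    for (creditor, debtor, _instrument), amount in exposures.items():
        net[(creditor, debtor)] += amount
    return dict(net)

def total_claims(net, sector):
    """Total asset-side exposure of one sector across all counterparties."""
    return sum(a for (creditor, _debtor), a in net.items() if creditor == sector)

net = aggregate_layers(exposures)
banks_claims = total_claims(net, "banks")  # 120 + 80 = 200.0
```

Tracking the same structure at successive dates would give the time evolution of linkages that the paper tests against the QE implementation.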
Productivity levels and growth are extremely heterogeneous among firms. A vast literature has developed to explain the origins of productivity shocks, their dispersion, their evolution and their relationship to the business cycle. We examine in detail the distributions of labor productivity levels and growth, and observe that they exhibit heavy tails. We propose to model these distributions using the four-parameter Lévy stable distribution, a natural candidate deriving from the generalised Central Limit Theorem. We show that it is a better fit than several standard alternatives, and is remarkably consistent over time, countries and sectors. In all samples considered, the tail parameter is such that the theoretical variance of the distribution is infinite, so that the sample standard deviation increases with sample size. We find a consistent positive skewness, a markedly different behaviour between the left and right tails, and a positive relationship between productivity and size. The distributional approach allows us to test different measures of dispersion and find that productivity dispersion has slightly decreased over the past decade.
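The infinite-variance property mentioned above can be illustrated directly with SciPy's Lévy stable distribution: for a tail index alpha < 2 the theoretical variance is infinite, so the sample standard deviation keeps drifting upward as the sample grows instead of converging. The parameter values below are illustrative, not the fitted values from the paper.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(42)
alpha, beta = 1.5, 0.2  # heavy tails (alpha < 2), mild positive skew

# Sample standard deviation at increasing sample sizes; with alpha < 2 it
# does not settle down, because the population variance does not exist.
stds = []
for n in (10**2, 10**3, 10**4, 10**5):
    x = levy_stable.rvs(alpha, beta, size=n, random_state=rng)
    stds.append(float(x.std()))
```

This is why the paper's distributional approach compares dispersion measures other than the standard deviation, which is not a stable statistic under these tails.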
We develop an agent-based simulation of the catastrophe insurance and reinsurance industry and use it to study the problem of risk model homogeneity. The model simulates the balance sheets of insurance firms, which collect premiums from clients in return for insuring them against intermittent, heavy-tailed risks. Firms manage their capital, pay dividends to their investors, and use either reinsurance contracts or cat bonds to hedge their tail risk. The model generates plausible time series of profits and losses and recovers stylized facts, such as the insurance cycle and the emergence of asymmetric, long-tailed firm size distributions. We then use the model to investigate the consequences of risk model homogeneity. Under Solvency II, insurance companies are required to use only certified risk models. This has led to a situation in which only a few firms provide risk models, creating a systemic fragility to the errors in these models. We demonstrate that using too few models increases the risk of nonpayment and default while lowering profits for the industry as a whole. The presence of the reinsurance industry ameliorates the problem but does not remove it. Our results suggest that it would be valuable for regulators to incentivize model diversity. The framework we develop here provides a first step toward a simulation model of the insurance industry for testing policies and strategies for better capital management.
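A drastically reduced toy version of the balance-sheet dynamics described above can convey the core mechanism: each insurer collects a fixed premium per period, faces intermittent Pareto (heavy-tailed) losses, and defaults when its capital turns negative. All parameter values are illustrative and uncalibrated; the paper's model additionally includes dividends, reinsurance, cat bonds, and risk models, none of which appear here.

```python
import random

def simulate(n_firms=20, periods=200, premium=1.0, capital0=10.0,
             loss_prob=0.1, pareto_shape=1.5, seed=7):
    """Toy insurance ABM: premiums in, heavy-tailed claims out."""
    rng = random.Random(seed)
    capital = [capital0] * n_firms
    alive = [True] * n_firms
    for _ in range(periods):
        for i in range(n_firms):
            if not alive[i]:
                continue
            capital[i] += premium                       # premium income
            if rng.random() < loss_prob:                # a catastrophe hits
                capital[i] -= rng.paretovariate(pareto_shape)
            if capital[i] < 0:                          # insolvency -> exit
                alive[i] = False
    return capital, alive

capital, alive = simulate()
```

Even this stripped-down version produces occasional defaults driven by single extreme claims, which is the tail-risk channel that shared, erroneous risk models would correlate across firms.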
One important dimension of Conditional Cash Transfer programs, apart from conditionality, is the provision of frequent, continuous payouts. In contrast, the Apni Beti Apna Dhan program, implemented in the state of Haryana in India from 1994 to 1998, offered female beneficiaries a promised amount redeemable only after they turned 18, and only if they remained unmarried. This paper assesses the impact of this long-term financial incentive on outcomes not directly associated with the conditionality. Using multiple datasets in a triple-difference framework, the findings reveal a significant positive impact on years of education, though it does not translate into gains in labor-force participation. In gauging the potential channels, we do not observe educational effects beyond secondary education. Additionally, impacts on time allocated to leisure, socialization, or self-care, on age of marriage beyond 18 years, on age at first birth, and on post-marital empowerment indicators are found to be limited. This evidence indicates that the program failed to alter prevailing gender norms despite improvements in educational outcomes. The paper recommends a set of complementary policy instruments, including behavioral interventions to alter gender norms, skill development, and incentives to encourage female labor-force participation.
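The triple-difference logic can be made concrete with cell means: differencing exposed versus unexposed cohorts, within treated versus comparison states, for the eligible group versus an ineligible group. The cell means below are hypothetical numbers chosen for the sketch, not estimates from the paper.

```python
def triple_difference(m):
    """DDD estimate from eight cell means.

    m maps (treated_state, exposed_cohort, eligible_group) -> mean outcome,
    with boolean keys.
    """
    def dd(state):
        # difference-in-differences within one state
        return ((m[(state, True, True)] - m[(state, False, True)])
                - (m[(state, True, False)] - m[(state, False, False)]))
    return dd(True) - dd(False)

# Hypothetical mean years of schooling per cell.
means = {
    (True,  True,  True):  8.0, (True,  False, True):  6.5,
    (True,  True,  False): 8.5, (True,  False, False): 7.8,
    (False, True,  True):  7.0, (False, False, True):  6.2,
    (False, True,  False): 8.1, (False, False, False): 7.5,
}
ddd = triple_difference(means)  # (1.5 - 0.7) - (0.8 - 0.6) = 0.6
```

The third difference nets out both state-specific cohort trends and group-specific state differences, which is what lets the design attribute the remaining gap to the program.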
In this paper, we estimate the causal effect of political power on the provision of public education. We use data from a historical nondemocratic society with a weighted voting system, in which eligible voters received votes in proportion to their taxable income, with no upper limit on the number of votes: the political system used in Swedish local governments during the period 1862-1909. We use a novel identification strategy that combines a threshold regression analysis and a generalized event-study design, both of which exploit nonlinearities or discontinuities in the effect of political power between two opposing local elites: agricultural landowners and emerging industrialists. The results suggest that school spending is approximately 90-120% higher when nonagrarian interests control all of the votes than when landowners hold more than a majority of the votes. Moreover, we find no evidence that the concentration of landownership affected this relationship.
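The simplest form of the threshold comparison described above is a sample split on the vote share: mean school spending where the nonagrarian side holds the votes versus where landowners hold a majority. The sketch below uses hypothetical vote shares and spending figures, not the paper's data, and omits the regression controls and event-study component.

```python
def threshold_gap(data, threshold=0.5):
    """Difference in mean spending above vs. at-or-below a vote-share threshold.

    data: iterable of (nonagrarian_vote_share, school_spending) pairs.
    """
    above = [y for share, y in data if share > threshold]
    below = [y for share, y in data if share <= threshold]
    return sum(above) / len(above) - sum(below) / len(below)

# Hypothetical municipalities: (nonagrarian vote share, spending per capita)
sample = [(0.9, 12.0), (1.0, 13.0), (0.2, 6.0), (0.4, 7.0)]
gap = threshold_gap(sample)  # 12.5 - 6.5 = 6.0
```

A threshold regression generalizes this by estimating where the break in the vote-share effect lies rather than imposing it at a majority.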