We study the consequences of job markets' heavy reliance on referrals. Referrals screen candidates and lead to better matches and increased productivity, but they disadvantage job-seekers who have few or no connections to employed workers, leading to increased inequality. Coupled with homophily, referrals also lead to immobility: a demographic group's low current employment rate leads that group to have relatively low future employment as well. We identify conditions under which distributing referrals more evenly across a population not only reduces inequality, but also improves future productivity and economic mobility. We use the model to examine optimal policies, showing that one-time affirmative action policies involve short-run production losses, but lead to long-term improvements in equality, mobility, and productivity due to induced changes in future referrals. We also examine how the possibility of firing workers changes the effects of referrals.
Skill shortages are a drain on society. They hamper economic opportunities for individuals, slow growth for firms, and impede labor productivity in aggregate. The ability to understand and predict skill shortages in advance is therefore critical for policy-makers and educators seeking to alleviate their adverse effects. This research implements a high-performing Machine Learning approach to predict occupational skill shortages. In addition, we demonstrate methods to analyze the underlying skill demands of occupations in shortage and the most important features for predicting skill shortages. For this work, we compile a unique dataset of both Labor Demand and Labor Supply occupational data in Australia from 2012 to 2018. This includes data from 7.7 million job advertisements (ads) and 20 official labor force measures. We use these data as explanatory variables and leverage the XGBoost classifier to predict yearly skill shortage classifications for 132 standardized occupations. The models we construct achieve macro-F1 average performance scores of up to 83 per cent. Our results show that job ads data and employment statistics are the highest-performing feature sets for predicting year-to-year skill shortage changes for occupations. We also find that features such as Hours Worked, years of Education, years of Experience, and median Salary are among the most important for predicting occupational skill shortages. This research provides a robust data-driven approach for predicting and analyzing skill shortages, which can assist policy-makers, educators, and businesses in preparing for the future of work.
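A minimal sketch of how such a prediction pipeline could look, assuming a tabular occupation-year dataset with hypothetical file and column names; the paper's actual features, preprocessing, and hyperparameters are not reproduced here.

```python
# Sketch, not the paper's pipeline: hypothetical CSV with one row per
# occupation-year, numeric features, and a binary "shortage" label.
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

df = pd.read_csv("occupation_year_features.csv")  # hypothetical file
X = df.drop(columns=["occupation", "year", "shortage"])
y = df["shortage"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Illustrative hyperparameters, not the ones tuned in the study.
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("macro-F1:", f1_score(y_test, pred, average="macro"))

# Feature importances give a first look at which inputs drive predictions.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```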
To contain the coronavirus (COVID-19) pandemic in Mainland China, the authorities put in place a series of measures, including quarantines, social distancing, and travel restrictions. While these strategies effectively dealt with the critical situations of outbreaks, the combination of the pandemic and mobility controls has slowed China's economic growth, resulting in the first quarterly decline of Gross Domestic Product (GDP) since quarterly GDP records began in 1992. To characterize the potential shrinkage of the domestic economy from the perspective of mobility, we propose two new economic indicators, the New Venues Created (NVC) and the Volumes of Visits to Venue (V^3), as complementary measures of domestic investment and consumption activity, using data from Baidu Maps. The historical records of these two indicators show strong correlations with past figures of Chinese GDP, while the picture has changed dramatically this year due to the pandemic. We present a quantitative analysis projecting the impact of the pandemic on the economy, using the recent trends of NVC and V^3. We find that the most affected sectors are travel-dependent businesses, such as hotels, educational institutes, and public transportation, while sectors essential to daily life, such as workplaces, residential areas, restaurants, and shopping sites, have been recovering rapidly. Analysis at the provincial level shows that self-sufficient and self-sustaining economic regions, with internal supplies, production, and consumption, have recovered faster than regions relying on global supply chains.
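As an illustration of the correlation check described above, here is a minimal sketch assuming quarterly series for the two indicators and GDP are available as a CSV with hypothetical column names; the underlying Baidu Maps data are not public.

```python
# Sketch only: hypothetical quarterly series for NVC, V^3, and GDP.
import pandas as pd

df = pd.read_csv("quarterly_indicators.csv")  # columns: quarter, nvc, v3, gdp (assumed)

# Pearson correlation of each mobility-derived indicator with GDP
print(df[["nvc", "v3", "gdp"]].corr(method="pearson")["gdp"])
```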
Various measures have been taken in different countries to mitigate the Covid-19 epidemic. But, throughout the world, many citizens don't understand well how these measures are chosen and even question the decisions taken by their government. Should the measures be more (or less) restrictive? Are they imposed for too long (or too short) a period of time? To provide some quantitative elements of response to these questions, we consider the well-known SEIR model for the Covid-19 epidemic propagation and propose a pragmatic model of the government decision-making operation. Although simple and obviously improvable, the proposed model allows us to study the tradeoff between health and economic aspects in a pragmatic and insightful way. Assuming a given number of phases for the epidemic and a desired tradeoff between health and economic aspects, it is then possible to determine the optimal duration of each phase and the optimal severity level for each of them. The numerical analysis is performed for the case of France, but the adopted approach can be applied to any country. One of the takeaway messages of this analysis is that implementing the optimal 4-phase epidemic management strategy in France would have led to 1.05 million infected people and a GDP loss of 231 billion euros, instead of 6.88 million infected and a loss of 241 billion euros. This indicates that, seen from the proposed model's perspective, the effectively implemented epidemic management strategy was good economically, whereas substantial improvements might have been obtained in terms of health impact. Our analysis indicates that the lockdown/severe phase should have been more severe but shorter, and that the adjustment phase should have occurred earlier. Due to the natural tendency of people to deviate from official rules, updating the measures every month over the whole epidemic episode seems more appropriate.
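For concreteness, a minimal sketch of the standard SEIR dynamics referenced above, with illustrative parameter values; beta, sigma, gamma, and the initial conditions are assumptions for demonstration, not the paper's calibration for France.

```python
# Standard SEIR compartmental model, integrated numerically.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N          # new exposures
    dE = beta * S * I / N - sigma * E  # exposed become infectious at rate sigma
    dI = sigma * E - gamma * I      # infectious recover at rate gamma
    dR = gamma * I
    return [dS, dE, dI, dR]

N = 67e6                       # rough population of France (assumption)
y0 = [N - 100, 50, 50, 0]      # illustrative initial S, E, I, R
t = np.linspace(0, 365, 366)   # one year, daily resolution

# Lowering beta mimics a more severe phase (stronger restrictions).
sol = odeint(seir, y0, t, args=(0.3, 1 / 5.2, 1 / 10))
print("cumulative infected:", N - sol[-1, 0])
```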
Humanity has been fascinated by the pursuit of fortune since time immemorial, and many successful outcomes benefit from strokes of luck. But success is subject to complexity, uncertainty, and change, and at times it is becoming increasingly unequally distributed. This leads to tension and confusion over the extent to which people actually get what they deserve (i.e., fairness/meritocracy). Moreover, in many fields, humans are over-confident and pervasively confuse luck for skill ("I win, it's skill; I lose, it's bad luck"). In some fields there is too much risk-taking; in others, not enough. Where success derives in large part from luck, and especially where bailouts skew the incentives ("heads, I win; tails, you lose"), it follows that luck is rewarded too much. This incentivizes a culture of gambling, while downplaying the importance of productive effort. Moreover, short-term success is often rewarded irrespective of, and potentially to the detriment of, long-term system fitness. However, much success is truly meritocratic, and the problem is to discern and reward based on merit. We call this the fair reward problem. To address it, we propose three different measures to assess merit: (i) raw outcome; (ii) risk-adjusted outcome; and (iii) prospective merit. We emphasize the need, in many cases, for the deductive prospective approach, which considers the potential of a system to adapt and mutate in novel futures. This is formalized within an evolutionary system comprising five processes that handle, inter alia, the exploration-exploitation trade-off. Several human endeavors, including finance, politics, and science, are analyzed through these lenses, and concrete solutions are proposed to support a prosperous and meritocratic society.
Consider a random variable X with a determined outcome, X = x0, so that p(x0) = 1. Let x0 have a discrete uniform distribution over the integer interval [1, s], where in the initial state the size of the sample space is s = 1, such that p(x0) = 1. What are the probability of x0 and the associated information entropy (H) as s increases by means of different functional forms of expansion? We characterise such a process in the case of (1) a mono-exponential expansion of the sample space; (2) a power-function expansion; and (3) a double-exponential expansion. The double-exponential expansion of the sample space with time (from a natural-log relationship between t and n) describes a hyperinflationary process. Over the period from the middle of 1920 to the end of 1923, the purchasing power of the Weimar Republic paper Mark relative to the gold Mark fell close to zero (1 paper Mark = 10^-12 gold Mark). From the purchasing power of the paper Mark to purchase one gold Mark, the information entropy of this hyperinflationary process was determined.
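As a worked illustration, the entropy of a discrete uniform distribution over s outcomes is H = log2(s) bits. The short sketch below evaluates this at s = 1 (the determined initial state) and at s = 10^12, the order of magnitude implied by the end-of-1923 exchange rate; treating that exchange rate as the sample-space size is an interpretive assumption, not necessarily the paper's exact calculation.

```python
# Entropy (in bits) of a discrete uniform distribution over s outcomes.
import math

def uniform_entropy_bits(s):
    """Shannon entropy H = log2(s) for a uniform distribution over s outcomes."""
    return math.log2(s)

print(uniform_entropy_bits(1))       # 0.0 bits: the initial determined state (s = 1)
print(uniform_entropy_bits(10**12))  # ~39.9 bits, with s taken as 10^12 (assumption)
```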