We analyze properties of apportionment functions in the context of the problem of allocating seats in the European Parliament. Necessary and sufficient conditions for apportionment functions are investigated. Some exemplary families of apportionment functions are specified, and the corresponding partitions of the seats in the European Parliament among the Member States of the European Union are presented. Although the choice of allocation functions is theoretically unlimited, we show that the constraints are so strong that the acceptable functions lead to rather similar solutions.
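To make the central constraint concrete, the sketch below checks whether a candidate allocation function is degressively proportional, i.e. seats never decrease with population while seats per capita never increase. The function names, the concave power-law example family, and all parameter values are our own illustrative assumptions, not the paper's.

```python
def is_degressively_proportional(alloc, populations):
    """Check the two defining conditions on a list of state populations:
    seats non-decreasing in population, seats per capita non-increasing
    (more populous states are relatively underrepresented)."""
    pops = sorted(populations)
    seats = [alloc(p) for p in pops]
    monotone = all(a <= b for a, b in zip(seats, seats[1:]))
    degressive = all(a / p >= b / q
                     for (a, p), (b, q) in zip(zip(seats, pops),
                                               zip(seats[1:], pops[1:])))
    return monotone and degressive

# One exemplary family: a fixed base plus a concave power of population
# (populations in millions; coefficients are illustrative only).
alloc = lambda pop: 5 + 4.0 * pop ** 0.5
print(is_degressively_proportional(alloc, [0.5, 8, 38, 83]))  # True
```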
The rapid spread of radical ideologies has led to a world-wide succession of terrorist attacks in recent years. Understanding how extremist tendencies germinate, develop, and drive individuals to action is important from a cultural standpoint, but also to help formulate response and prevention strategies. Demographic studies, interviews with radicalized subjects, and analyses of terrorist databases reveal that the path to radicalization occurs along progressive steps, where age, social context and peer-to-peer exchanges play major roles. To execute terrorist attacks, radicals must efficiently communicate with one another while maintaining secrecy; they are also subject to pressure from counter-terrorism agencies, public opinion and the need for material resources. Similarly, government entities must gauge which intervention methods are most effective. While a complete understanding of the processes that lead to extremism and violence, and of which deterrents are optimal, is still lacking, mathematical modelers have contributed to the discourse by using tools from statistical mechanics and applied mathematics to describe existing and novel paradigms, and to propose novel counter-terrorism strategies. We review some of their approaches in this work, including compartment models for populations of increasingly extreme views, continuous-time models for age-structured radical populations, radicalization as a social contagion process on lattices and social networks, agent-based models, and game-theoretic formulations. We highlight the useful insights offered by analyzing radicalization and terrorism through quantitative frameworks. Finally, we discuss the role of institutional intervention and the stages at which de-radicalization strategies might be most effective.
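To illustrate the compartment-model approach mentioned above, here is a minimal three-compartment sketch of our own generic construction (not any specific model from the review): susceptibles S are recruited by contact with radicals, radicalized individuals R either progress to violent extremists E or de-radicalize back to S; the rate names beta, gamma, delta are illustrative assumptions.

```python
from scipy.integrate import solve_ivp

def rhs(t, y, beta, gamma, delta):
    """Minimal radicalization compartments: S (susceptible), R (radicalized),
    E (violent extremist). Contact-driven recruitment, progression R -> E,
    and de-radicalization R -> S."""
    S, R, E = y
    recruit = beta * S * (R + E)      # contagion-style recruitment by contact
    dS = -recruit + delta * R         # de-radicalized individuals return to S
    dR = recruit - (gamma + delta) * R
    dE = gamma * R                    # progression to violent extremism
    return [dS, dR, dE]

sol = solve_ivp(rhs, (0.0, 100.0), [0.99, 0.01, 0.0],
                args=(0.4, 0.02, 0.1), max_step=0.5)
print(sol.y[:, -1])                   # final compartment fractions
```

Raising delta (the de-radicalization rate) relative to beta shrinks the long-run extremist fraction, which is the kind of intervention question the reviewed models are built to address.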
(shortened version) Religions and languages are social variables, like age, sex, wealth or political opinions, to be studied like any other organizational parameter. In fact, religiosity is one of the most important sociological aspects of populations. Languages are likewise a characteristic of humankind. New religions and new languages appear, while others disappear. All religions and languages evolve as they adapt to societal developments. On the other hand, the number of adherents of a given religion, or the number of persons speaking a language, is not fixed. Several questions can be raised. From a macroscopic point of view: How many religions/languages exist at a given time? What is their distribution? What is their lifetime? How do they evolve? From a microscopic viewpoint: Can one invent agent-based models to describe the macroscopic aspects? Do simple evolution equations exist? It is intuitively accepted, but also found through statistical analysis of the frequency distribution, that an attachment process is the primary cause of the distribution evolution: usually the initial religion/language is that of the mother. Later on, changes can occur either due to heterogeneous agent interaction processes or due to external field constraints, or both. Such cases can be illustrated with historical facts and data. It is stressed that the characteristic time scales are different, and recalled that external fields are very relevant in the case of religions, rendering the study more interesting within a mechanistic approach.
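A minimal sketch of the attachment process named in the abstract (all names and parameter values are our illustrative assumptions): each newcomer adopts the religion/language of a uniformly chosen existing member ("the mother"), or founds a new one with small probability. Choosing a uniform member makes adoption proportional to group size, a mechanism known to generate heavy-tailed group-size distributions.

```python
import random
from collections import Counter

def attachment_process(n_agents=100_000, p_new=0.001, seed=42):
    """Each new agent copies the group of a uniformly chosen existing agent
    (size-proportional attachment, 'the mother's religion/language'),
    or founds a new group with probability p_new."""
    rng = random.Random(seed)
    group_of = [0]                      # agent 0 founds group 0
    n_groups = 1
    for _ in range(n_agents - 1):
        if rng.random() < p_new:
            group_of.append(n_groups)   # a new religion/language is born
            n_groups += 1
        else:
            mother = rng.randrange(len(group_of))
            group_of.append(group_of[mother])
    return Counter(group_of)            # group -> number of adherents

sizes = sorted(attachment_process().values(), reverse=True)
print(sizes[:5])                        # a few dominant groups emerge
```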
The primary law of the European Union demands that the allocation of the seats of the European Parliament among the Member States obey the principle of degressive proportionality. The principle embodies the political aim that the more populous states agree to be underrepresented in order to allow the less populous states to be better represented. This paper reviews four allocation methods achieving this goal: the Cambridge Compromise, the Power Compromise, the Modified Cambridge Compromise, and the 0.5-DPL Method. After a year of committee deliberations, Parliament decreed on 7 February 2018 an allocation of seats for the 2019 elections that realizes degressive proportionality but otherwise lacks methodological grounding. The allocation emerged from haggling and bargaining behind closed doors.
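Of the four methods, the Cambridge Compromise is the simplest to state: every state receives a fixed base of seats plus its population divided by a common divisor, rounded upward and capped at the Treaty maximum of 96, with the divisor tuned so the seats sum to the house size. The sketch below implements that scheme; the bisection tolerance, the omission of tie handling, and the toy populations are our own assumptions.

```python
import math

def cambridge_compromise(populations, house_size=751, base=5, cap=96):
    """Base + proportional apportionment: base seats plus population/divisor,
    rounded upward and capped; the divisor is found by bisection so that
    seats sum to house_size (ties and exact feasibility glossed over)."""
    def seats(divisor):
        return {s: min(cap, base + math.ceil(p / divisor))
                for s, p in populations.items()}
    lo, hi = 1.0, float(max(populations.values()))
    for _ in range(100):
        mid = (lo + hi) / 2
        if sum(seats(mid).values()) > house_size:
            lo = mid              # too many seats: the divisor must grow
        else:
            hi = mid
    return seats(hi)

demo = {"A": 83_000_000, "B": 38_000_000, "C": 8_000_000, "D": 500_000}
print(cambridge_compromise(demo, house_size=96))
```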
We use quantum graphs as a model to study various mathematical aspects of the vacuum energy, such as convergence of periodic path expansions, consistency among different methods (trace formulae versus method of images) and the possible connection with the underlying classical dynamics. We derive an expansion for the vacuum energy in terms of periodic paths on the graph and prove its convergence and smooth dependence on the bond lengths of the graph. For an important special case of graphs with equal bond lengths, we derive a simpler explicit formula. The main results are derived using the trace formula. We also discuss an alternative approach using the method of images and prove that the results are consistent. This may have important consequences for other systems, since the method of images, unlike the trace formula, includes a sum over special "bounce paths". We succeed in showing that in our model bounce paths do not contribute to the vacuum energy. Finally, we discuss the proposed possible link between the magnitude of the vacuum energy and the type (chaotic vs. integrable) of the underlying classical dynamics. Within a random matrix model we calculate the variance of the vacuum energy over several ensembles and find evidence that level repulsion leads to suppression of the vacuum energy.
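For orientation, the objects involved can be written schematically as follows; the notation is ours, constants and signs are suppressed, and this is not the paper's exact expansion. The vacuum energy is the regularized half-sum of the spectral frequencies, and the trace formula recasts it as a smooth term plus an oscillatory sum over periodic paths:

```latex
% Schematic only: k_n are the square roots of the graph eigenvalues,
% \mathcal{P} the set of periodic paths, \ell_p the length of path p,
% and A_p an amplitude set by the vertex scattering matrices.
\[
  E_v \;=\; \frac{1}{2}\sum_{n} k_n \bigg|_{\mathrm{reg}}
  \;=\; E_{\mathrm{smooth}} \;+\; \sum_{p\in\mathcal{P}} \frac{A_p}{\ell_p^{\,2}} .
\]
```

The 1/\ell_p^2 decay of the path contributions is what makes the convergence and smoothness questions studied in the paper tractable.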
We study empirically how the fame of WWI fighter-pilot aces, measured in numbers of web pages mentioning them, is related to their achievement, measured in numbers of opponent aircraft destroyed. We find that on average fame grows exponentially with achievement; the correlation coefficient between achievement and the logarithm of fame is 0.72. The number of people with a particular level of achievement decreases exponentially with the level, leading to a power-law distribution of fame. We propose a stochastic model that can explain the exponential growth of fame with achievement. Next, we hypothesize that the same functional relation between achievement and fame that we found for the aces holds for other professions. This allows us to estimate achievement for professions where an unquestionable and universally accepted measure of achievement does not exist. We apply the method to Nobel Prize winners in Physics. For example, we obtain that Paul Dirac, who is a hundred times less famous than Einstein, contributed to physics only two times less. We compare our results with Landau's ranking.
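The power law claimed in the abstract follows from the two stated exponential relations by a one-line change of variables (notation ours): if fame grows as F ∝ e^{βA} with achievement A, and achievement is exponentially distributed, P(A) ∝ e^{−αA}, then

```latex
\[
  A = \tfrac{1}{\beta}\ln F
  \quad\Longrightarrow\quad
  P(F) \;=\; P\!\big(A(F)\big)\,\frac{dA}{dF}
        \;\propto\; e^{-\frac{\alpha}{\beta}\ln F}\cdot\frac{1}{\beta F}
        \;\propto\; F^{-\left(1+\alpha/\beta\right)},
\]
% i.e. a power-law fame distribution with exponent 1 + alpha/beta.
```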