Recent studies have established that extreme dwarf galaxies, whether satellites or field objects, suffer from the so-called too big to fail (TBTF) problem. Put simply, the TBTF problem is the difficulty of simultaneously explaining both the measured kinematics of dwarfs and their observed number density within the $\Lambda$CDM framework. The most popular proposed solutions to the problem involve baryonic feedback processes. For example, reionization and baryon depletion can decrease the abundance of halos that are expected to host dwarf galaxies. Moreover, feedback related to star formation can alter the dark matter density profile in the central regions of low-mass halos. In this article we assess the TBTF problem for field dwarfs, explicitly taking into account the baryonic effects mentioned above. We find that 1) reionization feedback cannot resolve the TBTF problem on its own, because the halos in question are too massive to be affected by it, and that 2) the degree to which profile modification can be invoked as a solution to the TBTF problem depends on the radius at which galactic kinematics are measured. Based on a literature sample of about 90 dwarfs with interferometric observations in the 21cm line of atomic hydrogen (HI), we conclude that the TBTF problem persists despite baryonic effects. However, the preceding statement assumes that the sample under consideration is representative of the general population of field dwarfs. In addition, the unexplained excess of dwarf galaxies in $\Lambda$CDM could be as small as a factor of $\sim 1.8$, given the current uncertainties in the measurement of the galactic velocity function. Both of these caveats highlight the importance of upcoming uniform surveys with HI interferometers for advancing our understanding of the issue.
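The radius dependence in point 2) can be made concrete with a toy calculation: for an NFW halo described by $(V_{\rm max}, R_{\rm max})$, one can evaluate the predicted circular velocity at the outermost measured HI radius and compare it with the observed rotation velocity. The sketch below is a minimal illustration assuming a pure NFW profile and invented numbers; it is not the article's actual analysis.

```python
import numpy as np

# Peak of the NFW rotation curve occurs at x = r/r_s ~ 2.163.
X_PEAK = 2.163

def v_circ_nfw(r_kpc, v_max_kms, r_max_kpc):
    """Circular velocity of an NFW halo at radius r, parameterized by
    (V_max, R_max): V^2(r) = V_max^2 * [mu(x)/x] / [mu(x_peak)/x_peak],
    with mu(x) = ln(1+x) - x/(1+x) and x = X_PEAK * r / R_max."""
    x = X_PEAK * r_kpc / r_max_kpc
    mu = np.log1p(x) - x / (1.0 + x)
    mu_peak = np.log1p(X_PEAK) - X_PEAK / (1.0 + X_PEAK)
    return v_max_kms * np.sqrt((mu / x) / (mu_peak / X_PEAK))

# A dwarf with V_rot = 20 km/s measured at r = 1.5 kpc, versus a
# candidate host with V_max = 40 km/s peaking at R_max = 10 kpc:
v_pred = v_circ_nfw(1.5, 40.0, 10.0)   # ~29 km/s, well above 20
```

If the kinematics were instead measured at a larger radius, the predicted and observed velocities would differ even more, which is why the measurement radius matters for the comparison.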
We use a semi-analytical model for the substructure of dark matter haloes to assess the too-big-to-fail (TBTF) problem. The model accurately reproduces the average subhalo mass and velocity functions, as well as their halo-to-halo variance, in N-body simulations. We construct thousands of realizations of Milky Way (MW) size host haloes, allowing us to investigate the TBTF problem with unprecedented statistical power. We examine the dependence on host halo mass and cosmology, and explicitly demonstrate that a reliable assessment of TBTF requires large samples of hundreds of host haloes. We argue that previous statistics used to address TBTF suffer from the look-elsewhere effect and/or disregard certain aspects of the data on the MW satellite population. We devise a new statistic that is not hampered by these shortcomings, and, using only data on the 9 known MW satellite galaxies with $V_{\rm max}>15\,{\rm km\,s^{-1}}$, demonstrate that $1.4^{+3.3}_{-1.1}\%$ of MW-size host haloes have a subhalo population in statistical agreement with that of the MW. However, when using data on the MW satellite galaxies down to $V_{\rm max}=8\,{\rm km\,s^{-1}}$, this MW-consistent fraction plummets to $<5\times10^{-4}$ (at 68% CL). Hence, if it turns out that the inventory of MW satellite galaxies is complete down to 8 km/s, then the maximum circular velocities of MW satellites are utterly inconsistent with $\Lambda$CDM predictions, unless baryonic effects can drastically increase the spread in $V_{\rm max}$ values of satellite galaxies compared to that of their subhaloes.
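The "MW-consistent fraction" can be illustrated schematically: draw a subhalo $V_{\rm max}$ population for each of many mock hosts and count the hosts whose ranked values agree with the observed satellite values. In the sketch below, the satellite velocities, the power-law subhalo draw, and the tolerance criterion are all illustrative placeholders, not the semi-analytical model or the statistic of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder MW satellite V_max values (km/s) above 15 km/s; these
# are illustrative numbers, not the measured ones.
v_mw = np.array([18, 20, 25, 30, 35, 40, 50, 60, 80], dtype=float)

def mock_subhalo_vmax(n_sub, v_min=15.0, slope=3.0):
    """Draw subhalo V_max values from a power-law cumulative function
    N(>V) ~ V^-slope, a rough stand-in for a subhalo model."""
    u = rng.random(n_sub)
    return v_min * u ** (-1.0 / slope)

def consistent_with_mw(v_sub, v_obs, tol=0.2):
    """Toy consistency test: the host's k-th ranked subhalo V_max must
    match the k-th ranked observed value to within fractional tol."""
    v_sub = np.sort(v_sub)[::-1][: len(v_obs)]
    if len(v_sub) < len(v_obs):
        return False
    v_obs = np.sort(v_obs)[::-1]
    return np.all(np.abs(v_sub - v_obs) / v_obs < tol)

# Fraction of mock hosts passing the toy test:
n_hosts, n_sub = 2000, 150
frac = np.mean([consistent_with_mw(mock_subhalo_vmax(n_sub), v_mw)
                for _ in range(n_hosts)])
```

The paper's point about statistical power follows directly: a fraction at the percent level cannot be estimated reliably from a handful of simulated hosts, hence the need for thousands of realizations.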
We use the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey to measure the number density of galaxies as a function of their rotational velocity, $V_\mathrm{rot,HI}$ (as inferred from the width of their 21cm emission line). Based on the measured velocity function we statistically connect galaxies with their host halo, via abundance matching. In a Lambda cold dark matter ($\Lambda$CDM) cosmology, dwarf galaxies are expected to be hosted by halos that are significantly more massive than indicated by the measured galactic velocity; if smaller halos were allowed to host galaxies, then ALFALFA would measure a much higher galactic number density. We then seek observational verification of this predicted trend by analyzing the kinematics of a literature sample of gas-rich dwarf galaxies. We find that galaxies with $V_\mathrm{rot,HI} \lesssim 25\ \mathrm{km\,s^{-1}}$ are kinematically incompatible with their predicted $\Lambda$CDM host halos, in the sense that the hosts are too massive to be accommodated within the measured galactic rotation curves. This issue is analogous to the too big to fail problem faced by the bright satellites of the Milky Way, but here it concerns extreme dwarf galaxies in the field. Consequently, solutions based on satellite-specific processes are not applicable in this context. Our result confirms the findings of previous studies based on optical survey data and addresses a number of observational systematics present in those works. Furthermore, we point out the assumptions and uncertainties that could strongly affect our conclusions. We show that the two most important among them, namely baryonic effects on the abundances of halos and on their rotation curves, do not seem capable of resolving the reported discrepancy.
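Abundance matching as used here equates cumulative number densities, $n_\mathrm{gal}(>V_\mathrm{rot}) = n_\mathrm{halo}(>V_\mathrm{max})$, to assign each galaxy a host halo. A minimal sketch, assuming toy power-law velocity functions rather than the measured ALFALFA one:

```python
import numpy as np

def abundance_match(n_gal_at_v, v_max_halo, n_halo_cum):
    """Return the halo V_max whose cumulative number density equals
    the galaxy's cumulative number density n_gal(>V_rot).
    n_halo_cum is tabulated on v_max_halo (decreasing with V)."""
    # np.interp needs ascending x, so flip the descending arrays.
    return np.interp(n_gal_at_v, n_halo_cum[::-1], v_max_halo[::-1])

# Toy power-law velocity functions (arbitrary normalization): halos
# outnumber galaxies at fixed velocity, so matching places galaxies
# in hosts with larger V_max than their measured V_rot.
v = np.linspace(10.0, 200.0, 400)
n_halo = (v / 10.0) ** -3.0
# A galaxy at V_rot = 25 km/s whose cumulative abundance is only
# 30% of the halo abundance at that velocity:
v_host = abundance_match(0.3 * (25.0 / 10.0) ** -3.0, v, n_halo)
# v_host exceeds 25 km/s: the predicted host is "too big".
```

With these toy slopes the assigned host has $V_\mathrm{max} \approx 0.3^{-1/3} \times 25 \approx 37$ km/s, illustrating how a suppressed galaxy velocity function forces dwarfs into more massive halos.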
The faintness of satellite systems in galaxy groups has contributed to the widely discussed missing satellite and too big to fail issues. Using techniques based on Tremaine & Richstone (1977), we show that there is no problem with the luminosity function computed from modern codes per se, but that the gap between the first and second brightest systems is too big \emph{given} the luminosity function; that the same large gap is found in modern, large-scale baryonic $\Lambda$CDM simulations such as EAGLE and IllustrisTNG, and is even greater in dark matter only simulations; and, finally, that this is most likely due to gravitationally induced merging caused by classical dynamical friction. Quantitatively, the gap is larger in the computed simulations than in the randomized ones by $1.79 \pm 1.04$, $1.51 \pm 0.93$, $3.43 \pm 1.44$ and $3.33 \pm 1.35$ magnitudes in the EAGLE, IllustrisTNG, and the dark matter only simulations of EAGLE and IllustrisTNG, respectively. Furthermore, the anomalous gaps in the simulated systems are even larger than in the real data by over half a magnitude, and are larger still in the dark matter only simulations. Briefly stated, $\Lambda$CDM does not have a problem with an absence of too big to fail galaxies: statistically significant large gaps between the first and second brightest galaxies are to be expected.
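The Tremaine & Richstone (1977) approach asks whether the observed first-to-second-ranked magnitude gap is larger than expected for random draws from the luminosity function. A minimal sketch, with an invented toy group and bootstrap resampling standing in for draws from a fitted luminosity function:

```python
import numpy as np

rng = np.random.default_rng(0)

def gap_statistic(mags):
    """Magnitude gap between the first- and second-ranked members
    (brighter = more negative magnitude)."""
    m = np.sort(mags)
    return m[1] - m[0]

def randomized_gaps(mags, n_trials=5000):
    """Null distribution of the gap: redraw group members at random
    and recompute. Bootstrap resampling of the observed magnitudes
    is used here as a crude stand-in for sampling a fitted
    luminosity function."""
    n = len(mags)
    draws = rng.choice(mags, size=(n_trials, n), replace=True)
    return np.array([gap_statistic(d) for d in draws])

# Toy group: one bright central and several fainter satellites.
group = np.array([-22.5, -20.1, -19.8, -19.5, -19.0, -18.7])
obs_gap = gap_statistic(group)          # 2.4 mag
null = randomized_gaps(group)
excess = obs_gap - null.mean()          # positive: gap too big
```

A positive excess, as in this toy example, is the signature discussed above: the observed gap is anomalously large given the luminosity function alone.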
N-body dark matter simulations of structure formation in the $\Lambda$CDM model predict a population of subhalos within Galactic halos that have higher central densities than inferred for satellites of the Milky Way, a tension known as the `too big to fail' problem. Proposed solutions include baryonic effects, a smaller mass for the Milky Way halo, and warm dark matter. We test these three possibilities using a semi-analytic model of galaxy formation to generate luminosity functions for Milky Way halo-analogue satellite populations, the results of which are then coupled to the Jiang & van den Bosch model of subhalo stripping to predict the subhalo $V_\mathrm{max}$ functions for the 10 brightest satellites. We find that selecting the brightest satellites (as opposed to the most massive) and modelling the expulsion of gas by supernovae at early times increases the likelihood of generating the observed Milky Way satellite $V_\mathrm{max}$ function. The preferred halo mass is $6\times10^{11}\,M_{\odot}$, which has a 14 percent probability to host a $V_\mathrm{max}$ function like that of the Milky Way satellites. This probability is reduced to 8 percent for a $1.0\times10^{12}\,M_{\odot}$ halo and to 3 percent for a $1.4\times10^{12}\,M_{\odot}$ halo. We conclude that the Milky Way satellite $V_\mathrm{max}$ function is compatible with a CDM cosmology, as previously found by Sawala et al. using hydrodynamic simulations. Sterile neutrino-warm dark matter models achieve a higher degree of agreement with the observations, with a maximum 35 percent chance of generating the observed Milky Way satellite $V_\mathrm{max}$ function. However, more work is required to check that the semi-analytic stripping model is calibrated correctly in the sterile neutrino cosmology, and to check if our sterile neutrino models produce sufficient numbers of faint satellites.
Solomon and Golo [1] have recently proposed an autocatalytic (self-reinforcing) feedback model which couples a macroscopic system parameter (the interest rate), a microscopic parameter that measures the distribution of the states of the individual agents (the number of firms in financial difficulty), and a peer-to-peer network effect (contagion across supply chain financing). In this model, each financial agent is characterized by its resilience to the interest rate. Above a certain rate, the interest due on the firm's financial costs exceeds its earnings and the firm becomes susceptible to failure (ponzi). For interest rates below a certain threshold, the firm's loans are smaller than its earnings and the firm becomes hedge. In this paper, we fit historical interest rate data (2002-2009) into our model in order to predict the number of ponzi firms. We compare the prediction with data taken from a large panel of Italian firms over a period of 9 years. We then use trade credit linkages to discuss the connection between the ponzi density and network percolation. We find that the top-down-bottom-up positive feedback loop accounts for most of the Minsky crisis accelerator dynamics. Contagion among peer-to-peer ponzi companies becomes significant only in the last stage of the crisis, when the ponzi density is above a critical value. Moreover, the ponzi contagion is limited to companies that were not dynamic enough to substitute their distressed clients with new ones. In this respect, the data support a view in which the success of the economy depends on replacing the static supply-network picture with one of interacting dynamic agents.
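The threshold (resilience) mechanism and the macro-micro feedback loop can be sketched schematically as follows. The lognormal resilience distribution, the base rate, and the risk-premium coupling are all invented illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Each firm has a resilience threshold: the highest interest rate it
# can service from earnings. Drawn here from a lognormal purely for
# illustration (median 5%, not the paper's fitted distribution).
thresholds = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=10_000)

def ponzi_fraction(interest_rate, thresholds):
    """Fraction of firms whose resilience lies below the prevailing
    rate: their interest due exceeds earnings, so they are ponzi."""
    return np.mean(thresholds < interest_rate)

# Feedback loop sketch: a higher ponzi fraction pushes up the risk
# premium, raising the effective rate, which in turn creates more
# ponzi firms (the top-down-bottom-up loop).
rate = 0.04
for _ in range(20):
    f = ponzi_fraction(rate, thresholds)
    rate = 0.04 + 0.05 * f    # base rate + premium proportional to f
```

With these toy parameters the loop self-reinforces toward a fixed point where a large majority of firms are ponzi, illustrating the accelerator dynamics described in the text.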