
Statistics of galaxy mergers: bridging the gap between theory and observation

Published by Filip Huško
Publication date: 2021
Research field: Physics
Paper language: English





We present a study of galaxy mergers up to $z=10$ using the Planck Millennium cosmological dark matter simulation and the \texttt{GALFORM} semi-analytical model of galaxy formation. Utilising the full $(800\,\mathrm{Mpc})^3$ volume of the simulation, we studied the statistics of galaxy mergers in terms of merger rates and close pair fractions. We predict that merger rates begin to drop rapidly for high-mass galaxies ($M_*>10^{11.3}$–$10^{10.5}\,M_\odot$ for $z=0$–$4$), as a result of the exponential decline in the galaxy stellar mass function. The predicted merger rates increase and then turn over with increasing redshift, in disagreement with the Illustris and EAGLE hydrodynamical simulations. In agreement with most other models and observations, we find that close pair fractions flatten or turn over at some redshift (dependent on the mass selection). We conduct an extensive comparison of close pair fractions, and highlight inconsistencies among models, but also between different observations. We provide a fitting formula for the major merger timescale for close galaxy pairs, in which the slope of the stellar mass dependence is redshift dependent. This is in disagreement with previous theoretical results that implied a constant slope. Instead we find a weak redshift dependence only for massive galaxies ($M_*>10^{10}\,M_\odot$): in this case the merger timescale varies approximately as $M_*^{-0.55}$. We find that close pair fractions and merger timescales depend on the maximum projected separation as $r_\mathrm{max}^{1.35}$. This is in agreement with observations of the small-scale clustering of galaxies, but is at odds with the linear dependence on projected separation that is often assumed.
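The two power laws quoted above can be combined into an illustrative scaling for the merger timescale of close pairs. The sketch below is a minimal stand-in, assuming a pure power-law form $T \propto M_*^{-0.55}\, r_\mathrm{max}^{1.35}$ for massive galaxies; the normalisation t0 and the pivot values are placeholders, since the paper's actual fitting formula (with its redshift-dependent mass slope) is not given in the abstract.

import numpy as np

def merger_timescale(m_star, r_max, t0=1.0, m_pivot=1e10, r_pivot=0.05):
    """Illustrative merger-timescale scaling for close galaxy pairs.

    Applies the power-law dependences quoted in the abstract:
    T ~ M_*^{-0.55} (massive galaxies, M_* > 1e10 Msun) and
    T ~ r_max^{1.35} (maximum projected separation).

    t0, m_pivot and r_pivot are placeholder normalisations; the
    paper's fitted coefficients are not reproduced here.
    """
    return t0 * (m_star / m_pivot) ** -0.55 * (r_max / r_pivot) ** 1.35

# Example: timescale (in units of t0) for a 10^10.5 Msun pair
# selected with a maximum projected separation of 0.05 Mpc.
print(merger_timescale(10**10.5, 0.05))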




Read also

We study the radial and azimuthal mass distribution of the lensing galaxy in WFI2033-4723. Mindful of the fact that modeling results depend on modeling assumptions, we examine two very different recent models: simply parametrized (SP) models from the H0LiCOW collaboration, and pixelated free-form (FF) GLASS models. In addition, we fit our own models which are a compromise between the astrophysical grounding of SP, and the flexibility of FF approaches. Our models consist of two offset parametric mass components, and generate many solutions, all fitting the quasar point image data. Among other results, we show that to reproduce point image properties the lensing mass must be lopsided, but the origin of this asymmetry can reside in the main lens plane or along the line of sight. We also show that there is a degeneracy between the slope of the density profile and the magnitude of external shear, and that the models from various modeling approaches are connected not by the mass sheet degeneracy, but by a more generalized transformation. Finally, we discuss interpretation degeneracy which afflicts all mass modeling: inability to correctly assign mass to the main lensing galaxy vs. nearby galaxies or line of sight structures. While this may not be a problem for the determination of $H_0$, interpretation degeneracy may become a major issue for the detailed study of galaxy structure.
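As background for the mass sheet degeneracy mentioned above (a standard lensing result, not something stated in this abstract): rescaling the convergence as $\kappa \to \kappa_\lambda = \lambda\kappa + (1-\lambda)$ leaves all image positions and flux ratios unchanged while rescaling the predicted time delays by $\lambda$, so the $H_0$ inferred from a fixed observed delay rescales by the same factor. The abstract's point is that its model families are related by a transformation more general than this one.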
Our daily human life is filled with a myriad of joint action moments, be it children playing, adults working together (i.e., team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together, in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and conversely, joint action research has not yet found a way to include emotion as one of the key parameters in modeling socio-motor interaction. In this review, we first identify the gap and then stockpile evidence showing strong entanglement between emotion and acting together from various branches of science. We propose an integrative approach to bridge the gap, highlight five research avenues to do so in behavioral neuroscience and digital sciences, and address some of the key challenges in the area faced by modern societies.
Despite recent advances in its theoretical understanding, there still remains a significant gap in the ability of existing PAC-Bayesian theories on meta-learning to explain performance improvements in the few-shot learning setting, where the number of training examples in the target tasks is severely limited. This gap originates from an assumption in the existing theories which supposes that the number of training examples in the observed tasks and the number of training examples in the target tasks follow the same distribution, an assumption that rarely holds in practice. By relaxing this assumption, we develop two PAC-Bayesian bounds tailored for the few-shot learning setting and show that two existing meta-learning algorithms (MAML and Reptile) can be derived from our bounds, thereby bridging the gap between practice and PAC-Bayesian theories. Furthermore, we derive a new computationally-efficient PACMAML algorithm, and show it outperforms existing meta-learning algorithms on several few-shot benchmark datasets.
We demonstrate that the choice of optimizer, neural network architecture, and regularizer significantly affect the adversarial robustness of linear neural networks, providing guarantees without the need for adversarial training. To this end, we revisit a known result linking maximally robust classifiers and minimum norm solutions, and combine it with recent results on the implicit bias of optimizers. First, we show that, under certain conditions, it is possible to achieve both perfect standard accuracy and a certain degree of robustness, simply by training an overparametrized model using the implicit bias of the optimization. In that regime, there is a direct relationship between the type of the optimizer and the attack to which the model is robust. To the best of our knowledge, this work is the first to study the impact of optimization methods such as sign gradient descent and proximal methods on adversarial robustness. Second, we characterize the robustness of linear convolutional models, showing that they resist attacks subject to a constraint on the Fourier-$\ell_\infty$ norm. To illustrate these findings we design a novel Fourier-$\ell_\infty$ attack that finds adversarial examples with controllable frequencies. We evaluate Fourier-$\ell_\infty$ robustness of adversarially-trained deep CIFAR-10 models from the standard RobustBench benchmark and visualize adversarial perturbations.
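The abstract does not specify how the Fourier-$\ell_\infty$ attack is constructed, but the constraint it names admits a simple projection step: clip the magnitude of every DFT coefficient of the perturbation to $\epsilon$ and transform back. A minimal NumPy sketch of that projection follows; the function name and the choice of a complex-magnitude bound on 2-D DFT coefficients are assumptions, not the paper's definition.

import numpy as np

def project_fourier_linf(delta, eps):
    """Project a real 2-D perturbation onto the Fourier-l_inf ball
    {delta : max_k |DFT(delta)(k)| <= eps}: clip each DFT coefficient's
    magnitude to eps while keeping its phase, then invert the transform."""
    d = np.fft.fft2(delta)
    mag = np.abs(d)
    scale = np.minimum(1.0, eps / np.maximum(mag, 1e-12))
    # Conjugate symmetry is preserved, so the inverse transform is real
    # up to numerical noise.
    return np.real(np.fft.ifft2(d * scale))

# Sanity check: after projection no Fourier coefficient exceeds eps.
rng = np.random.default_rng(0)
delta = rng.normal(size=(32, 32))
proj = project_fourier_linf(delta, eps=1.0)
assert np.abs(np.fft.fft2(proj)).max() <= 1.0 + 1e-6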
Sampling is a critical operation in the training of Graph Neural Networks (GNNs) that helps reduce the cost. Previous works have explored improving sampling algorithms through mathematical and statistical methods. However, there is a gap between sampling algorithms and hardware. Without considering hardware, algorithm designers merely optimize sampling at the algorithm level, missing the great potential of improving the efficiency of existing sampling algorithms by leveraging hardware features. In this paper, we first propose a unified programming model for mainstream sampling algorithms, termed GNNSampler, covering the key processes of sampling algorithms in various categories. Second, we explore the data locality among nodes and their neighbors (i.e., a hardware feature) in real-world datasets to alleviate irregular memory access during sampling (a generic sketch of this idea follows below). Third, we implement locality-aware optimizations in GNNSampler for diverse sampling algorithms to optimize the general sampling process in GNN training. Finally, we conduct experiments on large graph datasets to analyze the relevance between training time, model accuracy, and hardware-level metrics, which helps achieve a good trade-off between time and accuracy in GNN training. Extensive experimental results show that our method is universal to mainstream sampling algorithms and reduces GNN training time (by 4.83% with layer-wise sampling up to 44.92% with subgraph-based sampling) with comparable accuracy.
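As an illustration of the kind of locality-aware reordering the abstract describes, here is a minimal node-wise neighbour sampler over a CSR graph that visits seeds and returns the frontier in sorted ID order, so the subsequent feature gather is closer to sequential memory access. This is a generic sketch under assumed data structures, not GNNSampler's actual implementation.

import numpy as np

def sample_neighbors_locality_aware(indptr, indices, seeds, k, seed=None):
    """Uniformly sample up to k neighbours per seed node from a CSR graph.

    Visiting seeds in ascending ID order and sorting the resulting
    frontier makes the feature gather that follows touch memory in
    (mostly) ascending order, reducing irregular access.
    """
    rng = np.random.default_rng(seed)
    frontier = []
    for s in np.sort(np.asarray(seeds)):
        nbrs = indices[indptr[s]:indptr[s + 1]]
        if len(nbrs) > k:
            nbrs = rng.choice(nbrs, size=k, replace=False)
        frontier.append(nbrs)
    if not frontier:
        return np.array([], dtype=indices.dtype)
    # De-duplicate and sort so feature lookups are closer to sequential.
    return np.unique(np.concatenate(frontier))

# Tiny CSR graph: node i's neighbours are indices[indptr[i]:indptr[i+1]].
indptr = np.array([0, 2, 4, 5])
indices = np.array([1, 2, 0, 2, 0])
print(sample_neighbors_locality_aware(indptr, indices, seeds=[2, 0], k=2))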