We investigate the formation history of massive disk galaxies in the IllustrisTNG hydrodynamical simulation to study why massive disk galaxies survive through cosmic time. We select 83 galaxies in the simulation with M$_{*,z=0} > 8\times 10^{10}$ M$_\odot$ and a kinematic bulge-to-total ratio below $0.3$. We find that 8.4 percent of these massive disk galaxies have quiet merger histories and preserve their disk morphology since formation; 54.2 percent develop a significant bulge component at some point in their history and then become disks again by the present time; the remaining 37.3 percent experience prominent mergers yet survive as disks. Since mergers, and even major mergers, do not always turn disk galaxies into ellipticals, we study the relations between various properties of mergers and the morphology of merger remnants. We find a strong dependence of remnant morphology on the orbit type of major mergers. Specifically, major mergers on a spiral-in infall orbit mostly lead to disk-dominated remnants, while major mergers involving head-on galaxy-galaxy collisions mostly form ellipticals. This dependence of remnant morphology on orbit type is much stronger than the previously studied dependence on cold gas fraction or on the orbital configuration of the merger system.
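The two selection cuts above (stellar mass and kinematic bulge-to-total ratio) can be sketched as a simple filter. This is a minimal illustration, assuming a hypothetical catalog of `(stellar mass in solar masses, B/T)` pairs; it is not the IllustrisTNG analysis pipeline itself.

```python
# Sketch of the sample selection described above, with illustrative thresholds
# taken directly from the abstract.
M_STAR_MIN = 8e10   # stellar mass threshold at z=0, in solar masses
BT_MAX = 0.3        # kinematic bulge-to-total ratio cut for "disk" galaxies

def is_massive_disk(m_star, bulge_to_total):
    """Return True if a galaxy passes both selection cuts."""
    return m_star > M_STAR_MIN and bulge_to_total < BT_MAX

# Hypothetical catalog rows: (stellar mass, B/T)
catalog = [(1.2e11, 0.15), (9.0e10, 0.45), (5.0e10, 0.10), (2.0e11, 0.28)]
disks = [g for g in catalog if is_massive_disk(*g)]
```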
The Kostka semigroup consists of pairs of partitions with at most $r$ parts that have positive Kostka coefficient. For this semigroup, Hilbert basis membership is an NP-complete problem. We introduce KGR graphs and conservative subtrees, through the Gale-Ryser theorem on contingency tables, as a criterion for membership. In our main application, we show that if a partition pair is in the Hilbert basis, then the partitions are at most $r$ wide. We also classify the extremal rays of the associated polyhedral cone; these rays correspond to a (strict) subset of the Hilbert basis. In an appendix, the second and third authors show that our main result on the Kostka semigroup cannot be extended to the Littlewood-Richardson semigroup. This furthermore gives a counterexample to a recent speculation of P. Belkale concerning the semigroup controlling nonvanishing of conformal blocks.
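Positivity of a single Kostka coefficient has a classical characterization: $K_{\lambda\mu}>0$ exactly when $\lambda$ and $\mu$ are partitions of the same size and $\lambda$ dominates $\mu$ (every partial sum of $\lambda$ is at least the corresponding partial sum of $\mu$). A minimal sketch of this membership test for the semigroup's defining condition:

```python
def kostka_positive(lam, mu):
    """K_{lam,mu} > 0 iff |lam| == |mu| and lam dominates mu."""
    if sum(lam) != sum(mu):
        return False
    s_lam = s_mu = 0
    for i in range(max(len(lam), len(mu))):
        s_lam += lam[i] if i < len(lam) else 0
        s_mu += mu[i] if i < len(mu) else 0
        if s_lam < s_mu:      # dominance fails at this partial sum
            return False
    return True

# (3,1) dominates (2,2), so K_{(3,1),(2,2)} > 0; the reverse fails.
```

Note that this tests positivity of one coefficient only; Hilbert basis membership for the semigroup is the NP-complete problem addressed in the abstract.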
We provide a non-recursive, combinatorial classification of multiplicity-free skew Schur polynomials. These polynomials are $GL_n$ (and $SL_n$) characters of the skew Schur modules. Our result extends work of H. Thomas--A. Yong and of C. Gutschwager, in which they classify the multiplicity-free skew Schur functions.
The Newell-Littlewood numbers $N_{\mu,\nu,\lambda}$ are tensor product multiplicities of Weyl modules for classical Lie groups, in the stable limit. For which triples of partitions $(\mu,\nu,\lambda)$ does $N_{\mu,\nu,\lambda}>0$ hold? The Littlewood-Richardson coefficient case is solved by the Horn inequalities (in work of A. Klyachko and A. Knutson-T. Tao). We extend these celebrated linear inequalities to a much larger family, suggesting a general solution.
Junyi Jia, Liang Gao, Yan Qu (2020)
We perform a set of non-radiative hydrodynamical (NHD) simulations of a rich-cluster-sized dark matter halo from the Phoenix project at 3 different numerical resolutions, to investigate the effect of hydrodynamics alone on the subhalo population in the halo. Compared to dark-matter-only (DMO) simulations of the same halo, subhaloes in the NHD simulations are less abundant at relatively high masses ($M_{sub} > 2.5\times 10^9\,h^{-1}M_\odot$, or $V_{max} > 70\,{\rm km\,s^{-1}}$) but more abundant at lower masses. This results in different shapes of the subhalo mass/$V_{max}$ functions in the two sets of simulations. At a given subhalo mass, subhaloes less massive than $10^{10}\,h^{-1}M_\odot$ have larger $V_{max}$ in the NHD than in the DMO simulations, while $V_{max}$ is similar for subhaloes above this mass. This is mainly because the progenitors of present-day low-mass subhaloes have larger concentration parameters in the NHD than in the DMO simulations. The surviving number fraction of the accreted low-mass progenitors of the main halo at redshift 2 is about 50 percent higher in the NHD than in the DMO simulations.
As an important component of multimedia analysis tasks, audio classification aims to discriminate between different audio signal types and has received intensive attention due to its wide applications. Generally speaking, the raw signal can be transformed into various representations (such as the Short-Time Fourier Transform and Mel-Frequency Cepstral Coefficients), and the information implied by different representations can be complementary. Ensembling models trained on different representations can greatly boost classification performance; however, making inferences with a large number of models is cumbersome and computationally expensive. In this paper, we propose a novel end-to-end collaborative learning framework for the audio classification task. The framework takes multiple representations as input to train the models in parallel, and the complementary information provided by different representations is shared by knowledge distillation. Consequently, the performance of each model can be significantly improved without increasing the computational overhead in the inference stage. Extensive experimental results demonstrate that the proposed approach improves classification performance and achieves state-of-the-art results on both acoustic scene classification and general audio tagging tasks.
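The abstract does not spell out the distillation step; a common formulation transfers one model's softened predictions to another via a temperature-scaled KL divergence. A minimal numpy sketch under that assumption (the temperature value and the teacher/student naming are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=3.0):
    """KL(teacher || student) between temperature-softened distributions."""
    p = softmax(teacher_logits, T)   # teacher's soft targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# The loss vanishes when the two models agree and grows with disagreement,
# which is what lets each representation-specific model absorb the others'
# complementary information during training.
```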
The thermal history of cosmic gas in the Dark Ages remains largely unknown. It is important to quantify the impact of the relevant physics on the IGM temperature between $z=10$ and $z\sim 30$ in order to interpret recent and upcoming observations, including the results reported by EDGES. We revisit the gas heating due to structure formation shocks in this era, using a set of fixed-grid cosmological hydrodynamical simulations performed with three different codes. In all our simulations, the cosmic gas is predicted to be in a multiphase state from $z>30$ onward. From $z=30$ to $z=11$, the gas surrounding high-density peaks gradually develops a temperature-density relation steeper than $T\propto\rho^{2/3}$, approximately $T\propto\rho^{2}$, likely due to shock heating. Meanwhile, the gas in void regions tends to have a large local Mach number, and its thermal state varies significantly from code to code. In the redshift range $11-20$, the mass fraction of gas shock-heated above the CMB temperature in our simulations is larger than previous semi-analytical results by a factor of 2 to 8. At $z=15$, this fraction varies from $\sim 19\%$ to $\sim 52\%$ among the codes. Between $z=11$ and $z=20$, the gas temperature $\langle 1/T_{\rm K}\rangle_M^{-1}$ is predicted to be $\sim 10-20$ K by two of the codes, much higher than in the adiabatic cooling model and in some previous works. However, in our simulations performed with RAMSES, $\langle 1/T_{\rm K}\rangle_M^{-1}$ is predicted to be even below the temperature required to explain the EDGES result. Given that different codes give different predictions, it currently seems challenging to make solid predictions for the gas temperature at $z\sim 17$ from simulations.
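The statistic quoted above, $\langle 1/T_{\rm K}\rangle_M^{-1}$, is the mass-weighted harmonic mean of the gas kinetic temperature. A minimal sketch of its computation over gas elements (the masses and temperatures below are illustrative, not simulation outputs):

```python
import numpy as np

def harmonic_mean_temperature(mass, temp):
    """Mass-weighted harmonic mean <1/T_K>_M^{-1} over gas elements."""
    mass = np.asarray(mass, dtype=float)
    temp = np.asarray(temp, dtype=float)
    return mass.sum() / np.sum(mass / temp)

# Hypothetical gas cells: equal masses, a wide spread of temperatures (K).
m = np.array([1.0, 1.0, 1.0, 1.0])
t = np.array([5.0, 10.0, 100.0, 1000.0])
t_hm = harmonic_mean_temperature(m, t)
# The harmonic mean is weighted toward the coldest gas, so it sits well
# below the arithmetic mean -- which is why this average is sensitive to
# cold void gas and why the codes disagree so strongly on it.
```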
We investigate the formation of ultra-diffuse galaxies (UDGs) using the Auriga high-resolution cosmological magneto-hydrodynamical simulations of Milky Way-sized galaxies. We identify a sample of $92$ UDGs in the simulations that match a wide range of observables such as sizes, central surface brightness, Sérsic indices, colors, spatial distribution and abundance. Auriga UDGs have dynamical masses similar to those of normal dwarfs. In the field, the key to their origin is a strong correlation, present in low-mass dark matter haloes, between galaxy size and halo spin parameter. Field UDGs form in dark matter haloes with larger spins than normal field dwarfs, in agreement with previous semi-analytical models. Satellite UDGs, on the other hand, have two different origins: $\sim 55\%$ of them formed as field UDGs before they were accreted; the remaining $\sim 45\%$ were normal field dwarfs that subsequently turned into UDGs as a result of tidal interactions.
Background: Metabolomics datasets are becoming increasingly large and complex, with multiple types of algorithms and workflows needed to process and analyse the data. A cloud infrastructure with portable software tools can provide much-needed resources, enabling faster processing of much larger datasets than would be possible at any individual lab. The PhenoMeNal project has developed such an infrastructure, allowing users to run analyses on local or commercial cloud platforms. We have examined the computational scaling behaviour of the PhenoMeNal platform using four different implementations across 1-1000 virtual CPUs and two common metabolomics tools. Results: Our results show that data which takes up to 4 days to process on a standard desktop computer can be processed in just 10 min on the largest cluster. Improved runtimes come at the cost of decreased efficiency, with all platforms falling below 80% efficiency above approximately 1/3 of the maximum number of vCPUs. An economic analysis revealed that running on large-scale cloud platforms is cost-effective compared to traditional desktop systems. Conclusions: Overall, cloud implementations of PhenoMeNal show excellent scalability for standard metabolomics computing tasks on a range of platforms, making them a compelling choice for research computing in metabolomics.
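The efficiency figure quoted above follows the standard definition for parallel scaling: speedup relative to a single worker, divided by the number of workers. A minimal sketch using the abstract's own example (a 4-day desktop job versus 10 minutes on the largest cluster; the assumption that the largest cluster uses all 1000 vCPUs is illustrative):

```python
def parallel_efficiency(t_serial, t_parallel, n_workers):
    """Speedup / worker count: 1.0 means perfect linear scaling."""
    return (t_serial / t_parallel) / n_workers

# 4 days = 5760 minutes on a desktop, 10 minutes on (assumed) 1000 vCPUs.
eff = parallel_efficiency(5760.0, 10.0, 1000)
# A 576x speedup on 1000 workers is ~58% efficiency, consistent with the
# reported drop below 80% efficiency well before the maximum vCPU count.
```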
Chen Sun, Ye Tian, Liang Gao (2019)
Calibration models have been developed for the determination of trace elements, silver for instance, in soil using laser-induced breakdown spectroscopy (LIBS). The major concern is the matrix effect. Although it affects the accuracy of LIBS measurements in a general way, the effect is accentuated for soil because of the large variation in chemical and physical properties among different soils. The purpose is to reduce its influence in such a way that an accurate and soil-independent calibration model can be constructed. At the same time, the developed model should efficiently reduce the experimental fluctuations that affect measurement precision. A univariate model first reveals an obvious influence of the matrix effect and important experimental fluctuations. A multivariate model has then been developed. A key point is the introduction of a generalized spectrum, in which variables representing the soil type are explicitly included. Machine learning has been used to develop the model. After a necessary pretreatment, in which a feature selection process reduces the dimension of the raw spectrum according to the number of available spectra, the data have been fed into a back-propagation neural network (BPNN) to train and validate the model. The resulting soil-independent calibration model achieves an average relative error of calibration (REC) and an average relative error of prediction (REP) within the range of 5-6%.
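The two figures of merit quoted above, REC and REP, are averages of relative errors over the calibration and prediction samples respectively. A minimal sketch of how such a metric is typically computed (the concentration values are illustrative, not data from the study):

```python
def average_relative_error(predicted, reference):
    """Mean of |pred - ref| / ref over paired samples, as a percentage."""
    errs = [abs(p - r) / r for p, r in zip(predicted, reference)]
    return 100.0 * sum(errs) / len(errs)

# Hypothetical predicted vs. reference silver concentrations.
rec = average_relative_error([10.5, 19.0, 31.5], [10.0, 20.0, 30.0])
```

Applied to the calibration set this gives REC; applied to held-out samples it gives REP.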