This work presents new parallelizable numerical schemes for the integration of Dissipative Particle Dynamics with Energy conservation (DPDE). To date, no numerical scheme in the literature has been able to preserve the energy over long times and produce small errors on average properties at moderately small timesteps while being straightforwardly parallelizable. We present in this article two new methods, both straightforwardly parallelizable, which correctly preserve the total energy of the system. We illustrate the accuracy and performance of these new schemes on both equilibrium and nonequilibrium parallel simulations.
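The constraint that DPDE adds over standard DPD is that each particle carries an internal energy which must absorb exactly the kinetic energy removed by the pairwise dissipative forces. The following Python fragment is a minimal sketch of that bookkeeping only, under simplifying assumptions (equal masses, friction-only pair update, no fluctuation or conservative terms); the function and parameter names are illustrative and do not reproduce the schemes of the paper.

```python
import numpy as np

def dpde_pair_update(v, eps, m, gamma, dt, pairs):
    """Sketch of DPDE energy bookkeeping: the kinetic energy removed from a
    pair by the dissipative update is deposited into the two particles'
    internal energies, so total energy is conserved by construction."""
    for i, j in pairs:
        ke_before = 0.5 * m * (v[i] @ v[i] + v[j] @ v[j])
        dv = -gamma * (v[i] - v[j]) * dt / m    # symmetric friction
        v[i] += dv                              # momentum is conserved:
        v[j] -= dv                              # equal and opposite kicks
        ke_after = 0.5 * m * (v[i] @ v[i] + v[j] @ v[j])
        eps[i] += 0.5 * (ke_before - ke_after)  # split the lost kinetic
        eps[j] += 0.5 * (ke_before - ke_after)  # energy between the pair
    return v, eps
```

Because each pair update is local, pairs in disjoint spatial domains can be processed concurrently, which is what makes such schemes amenable to parallelization.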
We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance makes it possible to directly compare atomic environments with arbitrary numbers of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant under permutations of atoms, avoiding the time-consuming minimization over permutations required by other common criteria (such as the Root Mean Square Distance). Finally, invariance under global rotations is accounted for by a minimization procedure in the space of rotations, solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the proposed distance satisfies the properties of a metric on the space of atomic configurations. Two example applications are proposed. The first consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves efficient in discriminating different local structures and even in classifying their degree of similarity.
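One concrete functional representation with this property (an illustrative choice, not necessarily the exact construction of the paper) maps each configuration to a sum of Gaussians centered on the atomic positions; the L2 distance between two such fields has a closed form that is manifestly permutation invariant:

```python
import numpy as np

def overlap(A, B, sigma):
    """L2 inner product (up to a constant factor) between sums of unit
    Gaussians of width sigma centred at the rows of A and B."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.sum(np.exp(-d2 / (4.0 * sigma ** 2)))

def distance(A, B, sigma=0.5):
    """Permutation-invariant distance ||f_A - f_B||: reordering the rows
    of A or B (i.e. relabelling atoms) leaves the value unchanged."""
    sq = overlap(A, A, sigma) + overlap(B, B, sigma) - 2.0 * overlap(A, B, sigma)
    return np.sqrt(max(sq, 0.0))
```

Rotation invariance would still require minimizing `distance(A, B @ R.T)` over rotation matrices R, e.g. by the simulated annealing mentioned above.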
Radiation damage to space-based Charge-Coupled Device (CCD) detectors creates defects which result in an increasing Charge Transfer Inefficiency (CTI) that causes spurious image trailing. Most of the trailing can be corrected during post-processing, by modelling the charge trapping and moving electrons back to where they belong. However, such correction is not perfect -- and damage continues to accumulate in orbit. To aid future development, we quantify the limitations of current approaches, and determine where imperfect knowledge of model parameters most degrades measurements of photometry and morphology. As a concrete application, we simulate $1.5\times10^{9}$ worst-case galaxy and $1.5\times10^{8}$ star images to test the performance of the Euclid visual instrument detectors. There are two separable challenges. First, even if the model used to correct CTI is identical to that used to add CTI, only $99.68\%$ of the spurious ellipticity is corrected in our setup; the residual arises because readout noise is not subject to CTI, yet gets over-corrected during correction. Second, assuming the first issue to be solved, the charge trap density will need to be known to within $\Delta\rho/\rho=(0.0272\pm0.0005)\%$, and the characteristic release time of the dominant trap species to within $\Delta\tau/\tau=(0.0400\pm0.0004)\%$. This work presents the next level of definition of in-orbit CTI calibration procedures for Euclid.
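The correction scheme alluded to here works by inverting a forward model of charge trapping. The sketch below (a toy single-species tail model with illustrative parameter names, not the Euclid pipeline) shows the standard fixed-point idea: search for an image that, pushed through the forward model, reproduces the trailed observation. It also makes the readout-noise problem visible: noise added after readout is fed through the inverse model as if it had been trailed.

```python
import numpy as np

def add_cti(column, rho=0.01, tau=2.0, n_pix=5):
    """Toy forward model: each pixel loses a fraction rho of its charge,
    re-emitted into the next n_pix pixels with an exponential release
    profile of characteristic time tau (in pixel-transfer units)."""
    kernel = np.exp(-np.arange(1, n_pix + 1) / tau)
    kernel /= kernel.sum()
    trailed = column * (1.0 - rho)
    for k, w in enumerate(kernel, start=1):
        trailed[k:] += rho * w * column[:-k]
    return trailed

def correct_cti(observed, n_iter=5, **model):
    """Fixed-point correction: iterate until add_cti(estimate) matches
    the observed, trailed column."""
    estimate = observed.astype(float)
    for _ in range(n_iter):
        estimate += observed - add_cti(estimate, **model)
    return estimate
```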
162 - Frederic Bournaud 2015
The role of disk instabilities, such as bars and spiral arms, and of the associated resonances, in growing bulges in the inner regions of disk galaxies has long been studied in the low-redshift nearby Universe, where it has been probed observationally, in particular through peanut-shaped bulges. This secular growth of bulges in modern disk galaxies is driven by weak, non-axisymmetric instabilities: it mostly produces pseudo-bulges at slow rates and with long star-formation timescales. Disk instabilities at high redshift (z>1) in moderate-mass to massive galaxies (10^10 to a few 10^11 Msun of stars) are very different from those found in modern spiral galaxies. High-redshift disks are globally unstable and fragment into giant clumps containing 10^8-10^9 Msun of gas and stars each, which results in highly irregular galaxy morphologies. The clumps and other features associated with the violent instability drive disk evolution and bulge growth through various mechanisms, on short timescales. The giant clumps can migrate inward and coalesce into the bulge in a few 10^8 yr. The instability in the very turbulent media drives intense gas inflows toward the bulge and nuclear region. Thick disks and supermassive black holes can grow concurrently as a result of the violent instability. This chapter reviews the properties of high-redshift disk instabilities, the evolution of giant clumps and other features associated with the instability, and the resulting growth of bulges and associated sub-galactic components.
In this paper, we extend some results proved in previous references for the three-dimensional Navier-Stokes equations. We show that when the norm of the velocity field is small enough in $L^3(\mathbb{R}^3)$, a global smooth solution of the Navier-Stokes equations is ensured. We show that a similar result holds when the norm of the velocity field is small enough in $H^{\frac{1}{2}}(\mathbb{R}^3)$. The scale invariance of these two norms is discussed.
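The scale invariance in question can be made explicit; the following standard computation (stated for the homogeneous norm $\dot H^{1/2}$, the scale-invariant version of $H^{1/2}$) shows why both norms are critical for the equation.

```latex
% If u solves the Navier-Stokes equations, so does the rescaled field
u_\lambda(x,t) = \lambda\, u(\lambda x, \lambda^2 t), \qquad \lambda > 0.
% The L^3 norm is invariant (substitute y = \lambda x):
\|u_\lambda(\cdot,t)\|_{L^3}^3
  = \lambda^3 \int_{\mathbb{R}^3} |u(\lambda x,\lambda^2 t)|^3 \, dx
  = \|u(\cdot,\lambda^2 t)\|_{L^3}^3 .
% So is \dot H^{1/2}: since \widehat{u_\lambda}(\xi,t)
%   = \lambda^{-2}\,\widehat{u}(\xi/\lambda,\lambda^2 t),
% substituting \xi = \lambda \xi' gives
\|u_\lambda(\cdot,t)\|_{\dot H^{1/2}}^2
  = \int_{\mathbb{R}^3} |\xi|\, \lambda^{-4}\,
    |\widehat{u}(\xi/\lambda,\lambda^2 t)|^2 \, d\xi
  = \|u(\cdot,\lambda^2 t)\|_{\dot H^{1/2}}^2 .
```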
Cosmic shear is the distortion of images of distant galaxies due to weak gravitational lensing by the large-scale structure in the Universe. Such images are coherently deformed by the tidal field of matter inhomogeneities along the line of sight. By measuring galaxy shape correlations, we can study the properties and evolution of structure on large scales as well as the geometry of the Universe. Thus, cosmic shear has become a powerful probe of the nature of dark matter and the origin of the current accelerated expansion of the Universe. In recent years, cosmic shear has evolved into a reliable and robust cosmological probe, providing measurements of the expansion history of the Universe and the growth of its structure. We review here the principles of weak gravitational lensing and show how cosmic shear is interpreted in a cosmological context. We then give an overview of weak-lensing measurements, and present the main observational cosmic-shear results since its discovery 15 years ago, as well as their implications for cosmology. We conclude with an outlook on the various future surveys and missions, for which cosmic shear is one of the main science drivers, and discuss promising new weak-lensing techniques for future observations.
297 - S. Lotti, D. Cea, C. Macculi 2014
Methods. There are no experimental data about the background experienced by microcalorimeters in the L2 orbit, so the particle background levels were calculated by means of Monte Carlo simulations: we considered the original design configuration and an improved configuration aimed at reducing the unrejected background, and tested them in the L2 orbit and in low Earth orbit, comparing the results with experimental data reported by other X-ray instruments. To show the results obtainable with the improved configuration, we simulated the observation of a faint, high-redshift point source (F[0.5-10 keV]~6.4E-16 erg cm-2 s-1, z=3.7) and of a hot galaxy cluster at R200 (Sb[0.5-2 keV]=8.61E-16 erg cm-2 s-1 arcmin-2, T=6.6 keV). Results. First, we confirm that implementing an active cryogenic anticoincidence reduces the particle background by an order of magnitude and brings it close to the required level. The implementation and testing of several design solutions can reduce the particle background level by a further factor of 6 with respect to the original configuration. The best background level achievable in the L2 orbit with the implementation of ad-hoc passive shielding for secondary particles is similar to that measured in the more favorable LEO environment without the passive shielding, allowing us to exploit the advantages of the L2 orbit. We define a reference model for the diffuse background and collect all the available information on its variation with epoch and pointing direction. With this background level, the ATHENA mission with the X-IFU instrument will be able to detect ~4100 new obscured AGNs with F>6.4E-16 erg cm-2 s-1 during three years, and to characterize clusters of galaxies with Sb(0.5-2 keV)>9.4E-16 erg cm-2 s-1 sr-1 on timescales of 50 ks (500 ks) with errors <40% (<12%) on metallicity, <16% (4.8%) on temperature, 2.6% (0.72%) on gas density, and several single-element abundances.
84 - G. Chabrier 2014
We examine variations of the stellar initial mass function (IMF) in extreme environments within the formalism derived by Hennebelle & Chabrier. We focus on conditions encountered in progenitors of massive early-type galaxies and starburst regions. We show that, when applying the concept of turbulent Jeans mass as the characteristic mass for fragmentation in a turbulent medium, instead of the standard thermal Jeans mass for purely gravitational fragmentation, the peak of the IMF in such environments is shifted towards smaller masses, leading to a bottom-heavy IMF, as suggested by various observations. In very dense and turbulent environments, we predict that the high-mass tail of the IMF can become even steeper than the standard Salpeter IMF, with a limit for the power-law exponent $\alpha\simeq -2.7$, in agreement with recent observational determinations. This steepening is a direct consequence of the high densities and Mach numbers in such regions, but also of the time dependence of the fragmentation process, as incorporated in the Hennebelle-Chabrier theory. We provide analytical parametrizations of these IMFs in such environments, to be used in galaxy evolution calculations. We also calculate the star formation rates and the mass-to-light ratios expected under such extreme conditions and show that they agree well with the values inferred in starburst environments and massive high-redshift galaxies. This reinforces the paradigm of star formation as a universal process, i.e. the direct outcome of gravitationally unstable fluctuations in a density field initially generated by large-scale, shock-dominated turbulence. This globally enables us to infer the variations of the stellar IMF and related properties for atypical galactic conditions.
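To see schematically why turbulence shifts the fragmentation scale (order-unity constants omitted; the exact expressions of the Hennebelle-Chabrier theory are more involved):

```latex
% Thermal Jeans mass: smaller at higher density and lower temperature,
M_J \sim \frac{c_s^3}{\sqrt{G^3 \rho}} \;\propto\; \rho^{-1/2}\, T^{3/2} .
% Supersonic turbulence produces a roughly lognormal density PDF whose
% width grows with the Mach number \mathcal{M}:
\sigma_{\ln \rho}^2 \simeq \ln\!\left(1 + b^2 \mathcal{M}^2\right),
\qquad b \sim 0.25\text{--}1 .
% A broader PDF places more mass at high density, where M_J is small,
% shifting the IMF peak to lower masses (a bottom-heavy IMF).
```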
67 - John A. Tomsick 2014
Here we report on Swift and Suzaku observations near the end of an outburst from the black hole transient 4U 1630-47 and Chandra observations when the source was in quiescence. 4U 1630-47 made a transition from a soft state to the hard state ~50 d after the main outburst ended. During this unusual delay, the flux continued to drop, and one Swift measurement found the source with a soft spectrum at a 2-10 keV luminosity of L = 1.07e35 erg/s for an estimated distance of 10 kpc. While such transients usually make a transition to the hard state at L/Ledd = 0.3-3%, where Ledd is the Eddington luminosity, the 4U 1630-47 spectrum remained soft at L/Ledd = 0.008/M10% (as measured in the 2-10 keV band), where M10 is the mass of the black hole in units of 10 solar masses. An estimate of the luminosity in the broader 0.5-200 keV bandpass gives L/Ledd = 0.03/M10%, which is still an order of magnitude lower than typical. We also measured an exponential decay of the X-ray flux in the hard state with an e-folding time of 3.39+/-0.06 d, which is much less than previous measurements of 12-15 d during decays by 4U 1630-47 in the soft state. With the ~100 ks Suzaku observation, we do not see evidence for a reflection component, and the 90% confidence limits on the equivalent width of an iron K-alpha emission line are <40 eV for a narrow line and <100 eV for a line of any width, which is consistent with a change of geometry (either a truncated accretion disk or a change in the location of the hard X-ray source) in the hard state. Finally, we report a 0.5-8 keV luminosity upper limit of <2e32 erg/s in quiescence, which is the lowest value measured for 4U 1630-47 to date.
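For scale, the quoted e-folding times imply very different decay speeds (a simple plug-in of the numbers above, taking 13.5 d as the midpoint of the soft-state range):

```latex
% Exponential decay with e-folding time \tau:
F(t) = F_0\, e^{-t/\tau} .
% Hard state, \tau = 3.39 d: over 10 days,
F/F_0 = e^{-10/3.39} \approx 0.05 \quad (\text{a factor of} \sim 19),
% versus the earlier soft-state decays with \tau \approx 13.5 d:
F/F_0 = e^{-10/13.5} \approx 0.48 \quad (\text{only a factor of} \sim 2).
```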
44 - Samuel Mimram 2014
String rewriting systems have proved very useful for studying monoids. In good cases, they give finite presentations of monoids, allowing computations on them and their manipulation by a computer. Even better, when the presentation is confluent and terminating, they provide a notion of canonical representative for the elements of the presented monoid. Polygraphs are a higher-dimensional generalization of this notion of presentation, from the setting of monoids to the much more general setting of n-categories. One of the main purposes of this article is to give a progressive introduction to the notion of higher-dimensional rewriting system provided by polygraphs, and to describe its links with classical rewriting theory, string and term rewriting systems in particular. After introducing the general setting, we will be interested in proving local confluence for polygraphs presenting 2-categories, and we introduce a framework in which a finite 3-dimensional rewriting system admits a finite number of critical pairs.
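In the classical, 1-dimensional case that polygraphs generalize, critical pairs arise from overlapping left-hand sides of rules; the following Python sketch (proper-overlap case only; inclusion overlaps omitted for brevity) computes them for a string rewriting system given as a list of (lhs, rhs) pairs.

```python
from itertools import product

def critical_pairs(rules):
    """Find critical pairs of a string rewriting system: whenever a proper
    suffix of one left-hand side equals a prefix of another, the overlap
    word can be rewritten in two different ways."""
    pairs = []
    for (l1, r1), (l2, r2) in product(rules, repeat=2):
        for k in range(1, min(len(l1), len(l2))):
            if l1[-k:] == l2[:k]:
                w = l1 + l2[k:]                     # the overlap word
                pairs.append((w, r1 + l2[k:], l1[:-k] + r2))
    return pairs

# The single rule ba -> 1 (empty word) has no overlaps; adding aa -> a
# creates the overlap word "baa", rewritable to both "a" and "ba":
print(critical_pairs([("ba", ""), ("aa", "a")]))
```

Local confluence amounts to checking that the two sides of every such pair rewrite to a common word; here "a" and "ba" do not (since "ba" rewrites to the empty word), so this system is not confluent.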