116 - Andrew F. Nelson 2012
(shortened) We perform 3D hydrodynamic simulations of gas flowing around a planetary core of mass $m_{\rm plan} = 10\,m_\oplus$ embedded in a near-Keplerian background flow, using a modified shearing box approximation. We employ a nested grid hydrodynamic code with as many as six nested grids, providing spatial resolution on the finest grid comparable to the present-day diameters of Neptune and Uranus. We find that a strongly dynamically active flow develops, such that no static envelope can form. The activity is not sensitive to plausible variations in the rotation curve of the underlying disk. It is sensitive to the thermodynamic treatment of the gas, as modeled by prescribed equations of state (either `locally isothermal' or `locally isentropic') and to the temperature of the background disk material. The activity is also sensitive to the shape and depth of the core's gravitational potential, through its mass and gravitational softening coefficient. The varying flow pattern gives rise to large, irregular eruptions of matter from the region around the core which return matter to the background flow: mass in the envelope at one time may not be found in the envelope at any later time. The angular momentum of material in the envelope, relative to the core, varies both in magnitude and in sign, on time scales of days to months near the core and on time scales of a few years at distances comparable to the Hill radius. We show that material entering the dynamically active environment may suffer intense heating and cooling events whose durations are as short as a few hours to a few days. Peak temperatures in these events range from $T \sim 1000$ K to as high as $T \sim 3000$-$4000$ K, with densities $\rho \sim 10^{-9}$-$10^{-8}$ g/cm$^3$. These time scales, densities and temperatures span a range consistent with those required for chondrule formation in the nebular shock model.
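As a rough illustration of the quantities this abstract refers to, the following Python sketch (not part of the original paper) computes the Hill radius of a $10\,m_\oplus$ core and writes out the standard functional forms usually meant by `locally isothermal' ($P = c_s^2\rho$) and `locally isentropic' ($P = K\rho^\gamma$) prescribed equations of state. The orbital radius of 5.2 AU, the solar-mass central star, and $\gamma = 1.4$ are illustrative assumptions, not values taken from the paper.

    # Minimal sketch, assuming standard definitions; not the paper's code.
    import math

    M_SUN   = 1.989e33      # g
    M_EARTH = 5.972e27      # g
    AU      = 1.496e13      # cm

    def hill_radius(m_core, a, m_star=M_SUN):
        """Hill radius r_H = a * (m_core / (3 m_star))**(1/3)."""
        return a * (m_core / (3.0 * m_star)) ** (1.0 / 3.0)

    def pressure_isothermal(rho, cs):
        """'Locally isothermal': P = cs**2 * rho, cs fixed by the background disk."""
        return cs * cs * rho

    def pressure_isentropic(rho, K, gamma=1.4):
        """'Locally isentropic': P = K * rho**gamma, K fixed by the background disk."""
        return K * rho ** gamma

    if __name__ == "__main__":
        # Assumed orbital radius of 5.2 AU for illustration only.
        r_h = hill_radius(10.0 * M_EARTH, 5.2 * AU)
        print(f"Hill radius of a 10 M_earth core at 5.2 AU: {r_h:.3e} cm "
              f"({r_h / AU:.3f} AU)")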
We continue our presentation of VINE. We begin with a description of the relevant architectural properties of the serial and shared memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data, both for gravitational force calculations and for searches for SPH neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special purpose `GRAPE' hardware. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using OpenMP compiler directives on large scale, shared memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve its overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to the maximum machine size available to us (120 processors on an Origin 3000). At similar accuracy, VINE run in GRAPE-tree mode is approximately a factor of two slower than VINE run in host-only mode. Optimizations of the GRAPE/host communications could improve the speed by as much as a factor of three, but have not yet been implemented in VINE.
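The particle data layout optimizations mentioned above are not spelled out in the abstract; the hypothetical Python sketch below illustrates the general technique: sorting particle arrays into a spatially coherent (Morton/Z-order) ordering so that tree walks and SPH neighbour searches touch nearly contiguous memory. It is an illustration of the idea only, not VINE's actual Fortran 95 implementation; the bit depth and random test data are arbitrary choices.

    # Hypothetical sketch of a cache-friendly particle reordering; not VINE's code.
    import numpy as np

    def morton_key(ix, iy, iz, bits=10):
        """Interleave the low `bits` of three integer grid coordinates into one key."""
        key = np.zeros_like(ix, dtype=np.uint64)
        for b in range(bits):
            key |= ((ix >> b) & 1).astype(np.uint64) << np.uint64(3 * b)
            key |= ((iy >> b) & 1).astype(np.uint64) << np.uint64(3 * b + 1)
            key |= ((iz >> b) & 1).astype(np.uint64) << np.uint64(3 * b + 2)
        return key

    def reorder_particles(pos, mass, bits=10):
        """Return positions and masses sorted along a Z-order space-filling curve."""
        lo, hi = pos.min(axis=0), pos.max(axis=0)
        grid = ((pos - lo) / (hi - lo + 1e-30) * ((1 << bits) - 1)).astype(np.int64)
        key = morton_key(grid[:, 0], grid[:, 1], grid[:, 2], bits)
        order = np.argsort(key)
        return pos[order], mass[order]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pos = rng.random((100000, 3))
        mass = np.full(100000, 1.0 / 100000)
        pos_sorted, mass_sorted = reorder_particles(pos, mass)
        print("first few sorted positions:\n", pos_sorted[:3])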
35 - M. Wetzstein 2009
We present a Fortran 95 code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is designed to be versatile, flexible and extensible, with modular options that can be selected either at compile time or at run time. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community. The code can be used as an N-body code to evolve a set of particles in two or three dimensions using either a Leapfrog or a Runge-Kutta-Fehlberg integrator, with or without individual timesteps for each particle. Particles may interact gravitationally as $N$-body particles, and all or any subset may also interact hydrodynamically, using the Smoothed Particle Hydrodynamics (SPH) method. Massive point particles (`stars'), which may accrete nearby SPH or $N$-body particles, may also be included. The default free boundary conditions can be replaced by a module to include periodic boundaries. Cosmological expansion may also be included. An interface with special purpose `GRAPE' hardware may also be selected. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the default tree based calculation. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large scale, shared memory parallel machines. In comparison to the Gadget-2 code of Springel (2005), the gravitational force calculation is $\approx 3.5$-$4.8$ times faster with VINE when run on 8 Itanium 2 processors in an SGI Altix, while producing nearly identical outcomes in our test problems. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles.
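As a rough illustration of the Leapfrog option mentioned above, the following Python sketch implements one kick-drift-kick step for softened N-body gravity. Direct summation stands in for VINE's tree (or GRAPE) force calculation, and the softening length, timestep, and particle count are illustrative choices, not values taken from the code.

    # Minimal sketch of a kick-drift-kick leapfrog step; not VINE's implementation.
    import numpy as np

    def accelerations(pos, mass, eps=1e-2):
        """Plummer-softened direct-sum gravitational accelerations (G = 1)."""
        dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]        # (N, N, 3)
        r2 = (dx ** 2).sum(axis=-1) + eps ** 2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                             # no self-force
        return (dx * (mass[np.newaxis, :, None] * inv_r3[:, :, None])).sum(axis=1)

    def leapfrog_step(pos, vel, mass, dt, eps=1e-2):
        """One kick-drift-kick step, second-order accurate in dt."""
        vel = vel + 0.5 * dt * accelerations(pos, mass, eps)      # half kick
        pos = pos + dt * vel                                      # drift
        vel = vel + 0.5 * dt * accelerations(pos, mass, eps)      # half kick
        return pos, vel

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n = 256
        pos = rng.normal(size=(n, 3))
        vel = np.zeros((n, 3))
        mass = np.full(n, 1.0 / n)
        for _ in range(100):
            pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
        print("centre of mass after 100 steps:", (mass[:, None] * pos).sum(axis=0))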