
Adaptive Mesh Refinement Computation of Solidification Microstructures using Dynamic Data Structures

Published by: Nikolas Provatas
Publication date: 1998
Research field: Physics
Language: English





We study the evolution of solidification microstructures using a phase-field model computed on an adaptive, finite element grid. We discuss the details of our algorithm and show that it greatly reduces the computational cost of solving the phase-field model at low undercooling. In particular, we show that the computational complexity of solving any phase-boundary problem scales with the interface arclength when using an adapting mesh. Moreover, the use of dynamic data structures allows us to simulate system sizes corresponding to experimental conditions, which would otherwise require lattices of more than $2^{17} \times 2^{17}$ elements. We examine the convergence properties of our algorithm. We also present two-dimensional, time-dependent calculations of dendritic evolution, with and without surface tension anisotropy. We benchmark our results for dendritic growth against microscopic solvability theory, finding them to be in good agreement with theory at high undercoolings. At low undercooling, however, where transients dominate, we obtain higher velocities than solvability theory predicts, in accord with a heuristic criterion which we derive.
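To illustrate the arclength-scaling argument, here is a minimal sketch, not the authors' algorithm, of quadtree-style refinement driven by a toy signed-distance phase field. All names (`Cell`, `phi`, `refine`) are hypothetical, and the refinement criterion is chosen only for this toy field.

```python
# Hypothetical sketch (not the paper's code): quadtree refinement that
# concentrates cells along a phase boundary, so the leaf count grows with
# interface arclength rather than with system area.
import math

class Cell:
    def __init__(self, x, y, size, depth):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

def phi(x, y):
    """Toy phase field: signed distance to a circular seed of radius 0.3."""
    return 0.3 - math.hypot(x - 0.5, y - 0.5)

def may_contain_interface(cell):
    """For this signed-distance field, the interface can pass through the cell
    only if |phi| at the cell centre is at most half the cell diagonal."""
    cx, cy = cell.x + cell.size / 2.0, cell.y + cell.size / 2.0
    return abs(phi(cx, cy)) <= cell.size * math.sqrt(2.0) / 2.0

def refine(cell, max_depth):
    """Recursively split only cells that may contain the interface."""
    if cell.depth >= max_depth or not may_contain_interface(cell):
        return
    h = cell.size / 2.0
    cell.children = [Cell(cell.x + i * h, cell.y + j * h, h, cell.depth + 1)
                     for i in (0, 1) for j in (0, 1)]
    for child in cell.children:
        refine(child, max_depth)

def count_leaves(cell):
    if not cell.children:
        return 1
    return sum(count_leaves(c) for c in cell.children)

if __name__ == "__main__":
    for max_depth in (6, 7, 8):
        root = Cell(0.0, 0.0, 1.0, 0)
        refine(root, max_depth)
        print(f"depth {max_depth}: adaptive leaves = {count_leaves(root):6d}, "
              f"uniform cells = {4 ** max_depth}")
```

Running the sketch, the adaptive leaf count grows roughly like $2^{d}$ with refinement depth $d$ (interface arclength divided by cell size), while a uniform grid grows like $4^{d}$, which is the essence of the cost reduction described in the abstract.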




Read also

Nikolas Provatas, 1997
We study dendritic microstructure evolution using an adaptive grid, finite element method applied to a phase-field model. The computational complexity of our algorithm, per unit time, scales linearly with system size, rather than the quadratic variation given by standard uniform mesh schemes. Time-dependent calculations in two dimensions are in good agreement with the predictions of solvability theory, and can be extended to three dimensions and small undercoolings.
Wildland fires are complex multi-physics problems that span wide spatial scale ranges. Capturing this complexity in computationally affordable numerical simulations for process studies and outer-loop techniques (e.g., optimization and uncertainty quantification) is a fundamental challenge in reacting flow research. Further complications arise for propagating fires where a priori knowledge of the fire spread rate and direction is typically not available. In such cases, static mesh refinement at all possible fire locations is a computationally inefficient approach to bridging the wide range of spatial scales relevant to wildland fire behavior. In the present study, we address this challenge by incorporating adaptive mesh refinement (AMR) in fireFoam, an OpenFOAM solver for simulations of complex fire phenomena. The AMR functionality in the extended solver, called wildFireFoam, allows us to dynamically track regions of interest and to avoid inefficient over-resolution of areas far from a propagating flame. We demonstrate the AMR capability for fire spread on vertical panels and for large-scale fire propagation on a variable-slope surface that is representative of real topography. We show that the AMR solver reproduces results obtained using much larger statically refined meshes, at a substantially reduced computational cost.
Large-scale finite element simulations of complex physical systems governed by partial differential equations crucially depend on adaptive mesh refinement (AMR) to allocate computational budget to regions where higher resolution is required. Existing scalable AMR methods make heuristic refinement decisions based on instantaneous error estimation and thus do not aim for long-term optimality over an entire simulation. We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning (RL) to train refinement policies directly from simulation. AMR poses a new problem for RL in that both the state dimension and available action set changes at every step, which we solve by proposing new policy architectures with differing generality and inductive bias. The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations. We demonstrate in comprehensive experiments on static function estimation and the advection of different fields that RL policies can be competitive with a widely-used error estimator and generalize to larger, more complex, and unseen test problems.
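The variable action-set issue described in the abstract above can be made concrete with a small sketch. This is not the paper's policy architecture, just an illustration under the assumption that each mesh element is summarized by a fixed-length feature vector (e.g., a local error estimate, element size, refinement depth); the class and feature names are hypothetical.

```python
# Hypothetical sketch (not the paper's model): a refinement policy whose
# parameters are shared across elements, so the model size is independent of
# the mesh size even though the number of elements (and hence the number of
# available actions) changes at every step.
import numpy as np

rng = np.random.default_rng(0)

class ElementwisePolicy:
    """Scores every element with the same small MLP, then samples one to refine."""

    def __init__(self, n_features, hidden=16):
        self.w1 = rng.normal(scale=0.1, size=(n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.1, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def action_probs(self, element_features):
        """element_features: (n_elements, n_features); n_elements may vary per step."""
        h = np.tanh(element_features @ self.w1 + self.b1)
        logits = (h @ self.w2 + self.b2).ravel()   # one logit per element
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                     # softmax over a variable action set

    def sample_refinement(self, element_features):
        p = self.action_probs(element_features)
        return rng.choice(len(p), p=p), p

# Example: the mesh grows from 8 to 20 elements between steps, but the policy
# object (and its parameter count) stays the same.
policy = ElementwisePolicy(n_features=3)
for n_elements in (8, 20):
    feats = rng.normal(size=(n_elements, 3))  # e.g. error estimate, size, depth
    idx, probs = policy.sample_refinement(feats)
    print(f"{n_elements} elements -> refine element {idx} (p = {probs[idx]:.3f})")
```

Because the same scorer is applied to each element, the softmax naturally adapts to whatever action set the current mesh exposes, which is one way to realize a mesh-size-independent policy of the kind the abstract describes.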
We have explored the evolution of gas distributions from cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, to examine the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of initial conditions and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low mass structures. This will strongly affect the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of initial conditions is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution, and to mimic the natural evolution of dark matter only simulations. Although the impact of hold back falls with increasing spatial and initial-condition resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of order of 10-20%, which is below the uncertainty in the implemented physics but are expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold back on the star formation rate.
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial many-boxes-in-many-boxes mesh hierarchies and massive parallelism through the Message Passing Interface (MPI). GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3+1 setting, whilst also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.