
Efficient Mesh Optimization Using the Gradient Flow of the Mean Volume

Published by: Dimitris Vartziotis
Publication date: 2013
Paper language: English





The signed volume function for polyhedra can be generalized to a mean volume function for volume elements by averaging over the triangulations of the underlying polyhedron. If we consider these up to translation and scaling, the resulting quotient space is diffeomorphic to a sphere. The mean volume function restricted to this sphere is a quality measure for volume elements. We show that the gradient ascent of this map regularizes the building blocks of hybrid meshes consisting of tetrahedra, hexahedra, prisms, pyramids and octahedra; that is, the optimization process converges to regular polyhedra. We show that the (normalized) gradient flow of the mean volume yields a fast and efficient optimization scheme for the finite element method known as the geometric element transformation method (GETMe). Furthermore, we shed some light on the dynamics of this method and the resulting smoothing procedure both theoretically and experimentally.
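The abstract describes the scheme only in words. As a minimal illustrative sketch, the snippet below runs a normalized gradient ascent of the signed volume for a single tetrahedron, the one case where the mean volume coincides with the ordinary signed volume (a tetrahedron has a unique triangulation); the quotient by translation and scaling is realized by centering the element and rescaling it to the unit sphere. The function names and the step size `tau` are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def signed_volume_grad(x):
    """Signed volume of the tetrahedron with vertex rows x[0..3]
    and its gradient with respect to all twelve coordinates."""
    a, b, c = x[1] - x[0], x[2] - x[0], x[3] - x[0]
    vol = np.dot(a, np.cross(b, c)) / 6.0
    g = np.zeros_like(x)
    g[1] = np.cross(b, c) / 6.0
    g[2] = np.cross(c, a) / 6.0
    g[3] = np.cross(a, b) / 6.0
    g[0] = -(g[1] + g[2] + g[3])      # rows sum to zero: translation invariance
    return vol, g

def normalize(x):
    """Quotient out translation and scaling: center the element at its
    centroid and rescale to unit Frobenius norm (a point on the sphere)."""
    y = x - x.mean(axis=0)
    return y / np.linalg.norm(y)

def regularize(x, steps=500, tau=0.05):
    """Normalized gradient ascent of the signed volume on the shape
    sphere; the element is driven toward the regular tetrahedron."""
    x = normalize(x)
    for _ in range(steps):
        _, g = signed_volume_grad(x)
        g -= np.sum(g * x) * x        # project onto the sphere's tangent space
        gn = np.linalg.norm(g)
        if gn < 1e-12:                # critical point: stop
            break
        x = normalize(x + tau * g / gn)
    return x

# A strongly distorted but positively oriented tetrahedron:
x0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
               [0.9, 0.1, 0.0], [0.5, 0.4, 0.05]])
y = regularize(x0)
edges = [np.linalg.norm(y[i] - y[j]) for i in range(4) for j in range(i + 1, 4)]
print(np.round(edges, 4))             # all six edge lengths agree at convergence
```

At convergence all six edge lengths coincide, which is the regularizing behavior the abstract claims for the general hybrid-mesh building blocks.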




Read also

Training sparse networks to converge to the same performance as dense neural architectures has proven to be elusive. Recent work suggests that initialization is the key. However, while this direction of research has had some success, focusing on initialization alone appears to be inadequate. In this paper, we take a broader view of training sparse networks and consider the role of regularization, optimization, and architecture choices on sparse models. We propose a simple experimental framework, Same Capacity Sparse vs Dense Comparison (SC-SDC), that allows for a fair comparison of sparse and dense networks. Furthermore, we propose a new measure of gradient flow, Effective Gradient Flow (EGF), that better correlates to performance in sparse networks. Using top-line metrics, SC-SDC and EGF, we show that default choices of optimizers, activation functions and regularizers used for dense networks can disadvantage sparse networks. Based upon these findings, we show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime. Our work suggests that initialization is only one piece of the puzzle and taking a wider view of tailoring optimization to sparse networks yields promising results.
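The abstract does not define EGF, so the following is only a hypothetical sketch of what a mask-aware gradient-flow statistic can look like: it averages squared gradients over the active (unmasked) weights so that pruned weights do not dilute the measurement. The function name, the statistic, and the mask layout are all assumptions, not the paper's formula.

```python
import torch
import torch.nn as nn

def masked_gradient_flow(model, masks, loss):
    """Hypothetical gradient-flow statistic: mean squared gradient taken
    over the active (unmasked) weights only, so pruned weights do not
    dilute the measurement. NOT the paper's EGF definition."""
    loss.backward()
    total, count = 0.0, 0
    for name, p in model.named_parameters():
        if p.grad is None or name not in masks:
            continue
        m = masks[name]
        total += (p.grad * m).pow(2).sum().item()
        count += int(m.sum().item())
    return total / max(count, 1)

# Toy usage: a linear layer with roughly half of its weights pruned.
net = nn.Linear(8, 2)
masks = {"weight": (torch.rand_like(net.weight) > 0.5).float()}
loss = net(torch.randn(4, 8)).pow(2).mean()
print(masked_gradient_flow(net, masks, loss))
```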
This paper concerns closed hypersurfaces of dimension $n \geq 2$ in the hyperbolic space $\mathbb{H}_{\kappa}^{n+1}$ of constant sectional curvature $\kappa$ evolving in the direction of the normal vector, where the speed is given by a power $\beta \geq 1/m$ of the $m$-th mean curvature plus a volume-preserving term, including the case of powers of the mean curvature and of the Gauss curvature. The main result is that if the initial hypersurface satisfies that the ratio of the biggest and smallest principal curvatures is everywhere close enough to 1, depending only on $n$, $m$, $\beta$ and $\kappa$, then this is maintained under the flow, there exists a unique, smooth solution of the flow for all times, and the evolving hypersurfaces converge exponentially to a geodesic sphere of $\mathbb{H}_{\kappa}^{n+1}$ enclosing the same volume as the initial hypersurface.
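The abstract does not display the evolution equation; as a hedged sketch (the notation $X$, $\nu$, $E_m$, $\phi(t)$ is assumed here, not quoted from the paper), a volume-preserving flow of this type has the schematic form:

```latex
% X(.,t) parametrizes the evolving hypersurface, nu its unit normal,
% E_m the m-th mean curvature; the global term phi(t) is the average
% speed, which makes the time derivative of the enclosed volume vanish.
\[
  \partial_t X = \bigl(\phi(t) - E_m^{\beta}\bigr)\,\nu,
  \qquad
  \phi(t) = \frac{1}{|M_t|}\int_{M_t} E_m^{\beta}\, d\mu_t .
\]
```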
Ben Andrews, Yong Wei (2017)
We consider the flow of closed convex hypersurfaces in Euclidean space $\mathbb{R}^{n+1}$ with speed given by a power of the $k$-th mean curvature $E_k$ plus a global term chosen to impose a constraint involving the enclosed volume $V_{n+1}$ and the mixed volume $V_{n+1-k}$ of the evolving hypersurface. We prove that if the initial hypersurface is strictly convex, then the solution of the flow exists for all time and converges to a round sphere smoothly. No curvature pinching assumption is required on the initial hypersurface.
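Schematically this flow has the same structure as the previous sketch (again with assumed notation; the abstract does not give the exact constraint functional):

```latex
% phi(t) is now chosen at each time so that a prescribed functional of
% the enclosed volume V_{n+1} and the mixed volume V_{n+1-k} of the
% evolving hypersurface remains constant along the flow.
\[
  \partial_t X = \bigl(\phi(t) - E_k^{\alpha}\bigr)\,\nu .
\]
```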
Wildland fires are complex multi-physics problems that span wide spatial scale ranges. Capturing this complexity in computationally affordable numerical simulations for process studies and outer-loop techniques (e.g., optimization and uncertainty quantification) is a fundamental challenge in reacting flow research. Further complications arise for propagating fires where a priori knowledge of the fire spread rate and direction is typically not available. In such cases, static mesh refinement at all possible fire locations is a computationally inefficient approach to bridging the wide range of spatial scales relevant to wildland fire behavior. In the present study, we address this challenge by incorporating adaptive mesh refinement (AMR) in fireFoam, an OpenFOAM solver for simulations of complex fire phenomena. The AMR functionality in the extended solver, called wildFireFoam, allows us to dynamically track regions of interest and to avoid inefficient over-resolution of areas far from a propagating flame. We demonstrate the AMR capability for fire spread on vertical panels and for large-scale fire propagation on a variable-slope surface that is representative of real topography. We show that the AMR solver reproduces results obtained using much larger statically refined meshes, at a substantially reduced computational cost.
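The abstract does not reproduce the solver configuration. For orientation only, here is a hypothetical `dynamicMeshDict` fragment using OpenFOAM's standard `dynamicRefineFvMesh` machinery, refining around a flame front tracked via the temperature field; wildFireFoam's actual refinement criteria and thresholds may differ.

```
// Hypothetical constant/dynamicMeshDict sketch (stock OpenFOAM
// dynamicRefineFvMesh keywords; not taken from wildFireFoam itself).
dynamicFvMesh   dynamicRefineFvMesh;

dynamicRefineFvMeshCoeffs
{
    refineInterval   10;      // re-evaluate the refinement every 10 steps
    field            T;       // track the flame via temperature
    lowerRefineLevel 400;     // refine cells with 400 K < T < 3000 K
    upperRefineLevel 3000;
    unrefineLevel    350;     // coarsen cells once they cool below 350 K
    nBufferLayers    2;       // keep a buffer of refined cells around the front
    maxRefinement    3;       // up to 3 halvings of the base cell size
    maxCells         2000000; // hard cap on the total cell count
    correctFluxes    ( (phi none) );
    dumpLevel        false;
}
```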