
A fast moving least squares approximation with adaptive Lagrangian mesh refinement for large scale immersed boundary simulations

Added by Vamsi Spandan
Publication date: 2018
Field: Physics
Language: English





In this paper we propose and test the validity of simple, easy-to-implement algorithms within the immersed boundary framework, geared towards large scale simulations involving thousands of deformable bodies in highly turbulent flows. First, we introduce a fast moving least squares (fast-MLS) approximation technique that speeds up the construction of transfer functions during the simulations, leading to considerable reductions in computational time. We compare the accuracy of fast-MLS against the exact moving least squares (MLS) for the standard problem of uniform flow over a sphere. To overcome the restrictions set by the resolution coupling of the Lagrangian and Eulerian meshes in this particular immersed boundary method, we present an adaptive Lagrangian mesh refinement procedure capable of drastically reducing the number of nodes required by the basic Lagrangian mesh when the immersed boundaries move and deform. Finally, we present a coarse-grained collision detection algorithm that can detect collision events between Lagrangian markers residing on separate complex geometries with minimal computational overhead.
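The transfer functions that MLS builds reduce, for each Lagrangian marker, to a set of interpolation weights over nearby Eulerian grid nodes. A minimal sketch of one such weight computation, assuming a linear basis and a Gaussian window (the kernel shape and basis order here are illustrative assumptions, not the paper's exact choices):

```python
import numpy as np

def mls_weights(x_m, grid_pts, h):
    """Moving least squares transfer weights for one Lagrangian marker.

    x_m      : (3,) marker position
    grid_pts : (N, 3) positions of nearby Eulerian grid nodes
    h        : kernel width
    Returns (N,) weights phi such that u(x_m) ~ phi @ u(grid_pts).
    """
    d = grid_pts - x_m                               # local coordinates
    w = np.exp(-np.sum(d ** 2, axis=1) / h ** 2)     # Gaussian window
    P = np.hstack([np.ones((len(grid_pts), 1)), d])  # linear basis [1, x, y, z]
    A = P.T @ (w[:, None] * P)                       # moment matrix P^T W P
    # phi = p(x_m)^T A^{-1} P^T W; in local coordinates p(x_m) = [1, 0, 0, 0],
    # so the weight vector is the first row of A^{-1} P^T W.
    phi = np.linalg.solve(A, (w[:, None] * P).T)[0]
    return phi
```

By construction these weights reproduce constant and linear fields exactly, which is the consistency property the immersed boundary transfer step relies on.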




The physical characteristics and evolution of a large-scale helium plume are examined through a series of numerical simulations with increasing physical resolution using adaptive mesh refinement (AMR). The five simulations each model a 1 m diameter circular helium plume exiting into a (4 m)$^3$ domain, and differ solely with respect to the smallest scales resolved using the AMR, spanning resolutions from 15.6 mm down to 0.976 mm. As the physical resolution becomes finer, the helium-air shear layer and subsequent Kelvin-Helmholtz instability are better resolved, leading to a shift in the observed plume structure and dynamics. In particular, a critical resolution is found between 3.91 mm and 1.95 mm, below which the mean statistics and frequency content of the plume are altered by the development of a Rayleigh-Taylor instability near the centerline in close proximity to the base of the plume. This shift corresponds to a plume puffing frequency that is slightly higher than would be predicted using empirical relationships developed for buoyant jets. Ultimately, the high-fidelity simulations performed here are intended as a new validation dataset for the development of subgrid-scale models used in large eddy simulations of real-world buoyancy-driven flows.
Barak Sober, David Levin (2016)
In order to avoid the curse of dimensionality, frequently encountered in Big Data analysis, recent years have seen vast development in the field of linear and nonlinear dimension reduction techniques. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower dimensional manifold, so the high dimensionality problem can be overcome by learning the lower dimensional behavior. However, in real life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located near the lower dimensional manifold and suggest a non-linear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high dimensional data in a computationally efficient manner. This way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
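The two-stage idea behind the non-linear MLS projection (a local coordinate system from weighted PCA, then a weighted polynomial fit over it) can be sketched for the simplest case of a 1-dimensional manifold in $\mathbb{R}^2$. All concrete choices below (Gaussian weights, `np.polyfit`, the degree) are simplifying assumptions for illustration, not the paper's full construction:

```python
import numpy as np

def mls_project(q, data, h, degree=2):
    """Project a point q near a curve onto an MLS approximation of the curve.

    q    : (2,) query point near the sampled 1-D manifold
    data : (M, 2) noisy samples lying near the manifold
    h    : locality parameter of the Gaussian weight
    """
    w = np.exp(-np.sum((data - q) ** 2, axis=1) / h ** 2)
    mu = (w[:, None] * data).sum(0) / w.sum()        # weighted centroid
    C = (w[:, None] * (data - mu)).T @ (data - mu)   # weighted covariance
    _, eigvec = np.linalg.eigh(C)                    # ascending eigenvalues
    t, n = eigvec[:, 1], eigvec[:, 0]                # tangent / normal dirs
    s = (data - mu) @ t                              # local parameter
    f = (data - mu) @ n                              # local height over tangent
    coef = np.polyfit(s, f, degree, w=np.sqrt(w))    # weighted local polynomial
    s_q = (q - mu) @ t
    return mu + s_q * t + np.polyval(coef, s_q) * n
```

The weighted PCA step replaces the analytic knowledge of the manifold that the method deliberately avoids assuming; everything downstream is ordinary weighted least squares in the local chart.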
Edge bundling methods can effectively alleviate visual clutter and reveal high-level graph structures in large graph visualization. Researchers have devoted significant effort to improving edge bundling according to different metrics. As the edge bundling family evolves rapidly, the quality of edge bundles has accordingly received increasing attention in the literature. In this paper, we present MLSEB, a novel method to generate edge bundles based on moving least squares (MLS) approximation. In comparison with previous edge bundling methods, we argue that our MLSEB approach can generate better results based on a quantitative quality metric, while also ensuring scalability and efficiency for visualizing large graphs.
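The bundling idea can be sketched in its crudest form: sample points along all edges, then repeatedly move each sample toward a locally weighted fit of nearby samples. The sketch below uses a zeroth-order MLS fit (a kernel-weighted average), which is a deliberate simplification of the published method's local approximation:

```python
import numpy as np

def bundle_step(samples, h, step=0.5):
    """One bundling iteration over flattened edge sample points.

    samples : (N, 2) sample points taken along all polyline edges
    h       : Gaussian kernel width controlling bundling locality
    step    : fraction of the way each point moves toward its local fit
    """
    diff = samples[:, None, :] - samples[None, :, :]        # (N, N, 2)
    w = np.exp(-np.sum(diff ** 2, axis=2) / h ** 2)         # Gaussian weights
    # Zeroth-order MLS fit at each sample = kernel-weighted centroid.
    centroids = (w[:, :, None] * samples[None, :, :]).sum(1) / w.sum(1)[:, None]
    return samples + step * (centroids - samples)
```

Iterating this step pulls samples of nearby, similarly routed edges together into bundles; the O(N^2) pairwise weights here are where a scalable implementation would substitute spatial indexing.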
The large time and length scales and, not least, the vast number of particles involved in industrial-scale simulations inflate the computational costs of the Discrete Element Method (DEM) excessively. Coarse grain models can help to lower the computational demands significantly. However, for effects that intrinsically depend on particle size, coarse grain models fail to correctly predict the behaviour of the granular system. To solve this problem we have developed a new technique based on the efficient combination of fine-scale and coarse grain DEM models. The method is designed to capture the details of the granular system in spatially confined sub-regions while keeping the computational benefits of the coarse grain model where a lower resolution is sufficient. To this end, our method establishes two-way coupling between the resolved and coarse grain parts of the system by volumetric passing of boundary conditions. Moreover, multiple levels of coarse-graining may be combined to achieve an optimal balance between accuracy and speedup. This approach enables us to reach large time and length scales while retaining the specifics of crucial regions. Furthermore, the presented model can be extended to coupled CFD-DEM simulations, where the resolution of the CFD mesh may be adapted as well.
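The scaling rule that multi-level coarse-grain DEM schemes build on can be stated compactly. The sketch below uses the common textbook rule (one coarse particle with k-times the diameter replaces a parcel of k^3 fine particles at equal density); this is an assumed standard convention for illustration, not necessarily the exact scheme of the paper:

```python
import math

def coarse_grain(d_fine, rho, k):
    """Coarse-grain a parcel of k**3 fine DEM particles (assumed textbook
    rule): the coarse particle keeps the material density rho and takes
    diameter k * d_fine, so total parcel mass is conserved exactly
    (sphere mass scales with diameter cubed).
    Returns (coarse diameter, particles replaced, coarse mass, parcel mass).
    """
    d_cg = k * d_fine
    n_replaced = k ** 3
    m_fine = rho * math.pi / 6.0 * d_fine ** 3   # mass of one fine particle
    m_cg = rho * math.pi / 6.0 * d_cg ** 3       # mass of the coarse particle
    return d_cg, n_replaced, m_cg, n_replaced * m_fine
```

The particle-count reduction by k^3 is what delivers the speedup, and size-dependent effects breaking under this scaling is precisely why the resolved sub-regions in the method above are needed.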
We present an algorithm for approximating a function defined over a $d$-dimensional manifold utilizing only noisy function values at locations sampled from the manifold with noise. To produce the approximation we do not require any knowledge regarding the manifold other than its dimension $d$. We use the Manifold Moving Least-Squares approach of Sober and Levin (2016) to reconstruct the atlas of charts, and the approximation is built on top of those charts. The resulting approximant is shown to be a function defined over a neighborhood of a manifold, approximating the originally sampled manifold. In other words, given a new point located near the manifold, the approximation can be evaluated directly at that point. We prove that our construction yields a smooth function, and in the case of noiseless samples the approximation order is $\mathcal{O}(h^{m+1})$, where $h$ is a local density-of-samples parameter (i.e., the fill distance) and $m$ is the degree of the local polynomial approximation used in our algorithm. In addition, the proposed algorithm has linear time complexity with respect to the ambient space's dimension. Thus, we are able to avoid the computational complexity commonly encountered in high dimensional approximations without having to perform non-linear dimension reduction, which inevitably introduces distortions to the geometry of the data. Additionally, numerical experiments show that the proposed approach compares favorably to statistical approaches for regression over manifolds and demonstrate its potential.
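Inside each reconstructed chart, the workhorse is a degree-$m$ weighted polynomial fit, which is what drives the stated $\mathcal{O}(h^{m+1})$ rate. A deliberately simplified one-dimensional (non-manifold) illustration of that local step, with my own choice of kernel and test function:

```python
import numpy as np

def local_poly_fit(x0, xs, ys, h, m=2):
    """Degree-m weighted least-squares fit of (xs, ys) localized around x0.

    h sets the Gaussian locality; np.polyfit's w multiplies residuals
    before squaring, so sqrt(w) realizes the weights w in the LSQ sense.
    Returns the value of the local fit at x0.
    """
    w = np.exp(-((xs - x0) / h) ** 2)
    coef = np.polyfit(xs - x0, ys, m, w=np.sqrt(w))
    return np.polyval(coef, 0.0)
```

In the manifold setting the same fit runs over local chart coordinates instead of a global axis, which is why only the dimension $d$, and no analytic description of the manifold, is required.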