
Wavelet Adaptive Proper Orthogonal Decomposition for Large Scale Flow Data

Published by Philipp Krah
Publication date: 2020
Research language: English





The proper orthogonal decomposition (POD) is a powerful classical tool in fluid mechanics used, for instance, for model reduction and extraction of coherent flow features. However, its applicability to high-resolution data, as produced by three-dimensional direct numerical simulations, is limited owing to its computational complexity. Here, we propose a wavelet-based adaptive version of the POD (the wPOD) to overcome this limitation. The amount of data to be analyzed is reduced by compressing them using biorthogonal wavelets, yielding a sparse representation while conveniently providing control of the compression error. Numerical analysis shows how the distinct error contributions of wavelet compression and POD truncation can be balanced under certain assumptions, allowing us to efficiently process high-resolution data from three-dimensional simulations of flow problems. Using a synthetic academic test case, we compare our algorithm with the randomized singular value decomposition. Furthermore, we demonstrate the ability of our method by analyzing data of a 2D wake flow and a 3D flow generated by a flapping insect, both computed with direct numerical simulation.
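The classical POD that the wPOD accelerates can be sketched as a truncated SVD of a snapshot matrix, with the truncation rank chosen from an energy criterion. This is a minimal illustration of that baseline only, not the authors' wavelet-adaptive algorithm; the toy data and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is one flow-field snapshot
# (here low-rank random toy data standing in for simulation output).
n_points, n_snapshots = 1000, 50
X = rng.standard_normal((n_points, 5)) @ rng.standard_normal((5, n_snapshots))

# POD modes are the left singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Truncate at rank r so the retained singular values capture a
# prescribed fraction of the total energy (sum of squared singular values).
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1

# Rank-r reconstruction; the relative error is controlled by the truncation,
# analogous to how the wPOD additionally controls the wavelet compression error.
X_r = U[:, :r] * s[:r] @ Vt[:r]
err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
```

The cost of the full SVD is what becomes prohibitive for high-resolution 3D data, which is precisely the bottleneck the paper's wavelet compression addresses.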




Read also

An extension of Proper Orthogonal Decomposition is applied to the wall layer of a turbulent channel flow (Re_τ = 590), so that empirical eigenfunctions are defined in both space and time. Due to the statistical symmetries of the flow, the eigenfunctions are associated with individual wavenumbers and frequencies. Self-similarity of the dominant eigenfunctions, consistent with wall-attached structures transferring energy into the core region, is established. The most energetic modes are characterized by a fundamental time scale in the range 200-300 viscous wall units. The full spatio-temporal decomposition provides a natural measure of the convection velocity of structures, with a characteristic value of 12 u_τ in the wall layer. Finally, we show that the energy budget can be split into specific contributions for each mode, which provides a closed-form expression for nonlinear effects.
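A space-time POD of the kind described above can be sketched by Fourier-transforming realizations in time and then performing a POD (SVD) separately at each frequency, so that each mode carries its own frequency. This is only an illustrative sketch under that interpretation; the sizes and random data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ensemble: n_blocks realizations of an (n_x, n_t) space-time field.
n_x, n_t, n_blocks = 64, 32, 8
blocks = rng.standard_normal((n_blocks, n_x, n_t))

# FFT in time: one complex spatial snapshot per realization per frequency.
Q = np.fft.rfft(blocks, axis=-1)           # shape (n_blocks, n_x, n_f)

modes = []
for f in range(Q.shape[-1]):
    # Snapshot matrix at this frequency: columns are realizations.
    Xf = Q[:, :, f].T                      # shape (n_x, n_blocks)
    U, s, _ = np.linalg.svd(Xf, full_matrices=False)
    modes.append((U[:, 0], s[0]))          # leading spatial mode and amplitude
```

Because the decomposition is done frequency by frequency, each eigenfunction is naturally tagged with a time scale, mirroring the frequency-resolved modes discussed in the abstract.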
In the present study, we propose a new surrogate model, called common kernel-smoothed proper orthogonal decomposition (CKSPOD), to efficiently emulate the spatiotemporal evolution of fluid flow dynamics. The proposed surrogate model integrates and extends recent developments in Gaussian process learning, high-fidelity simulations, projection-based model reduction, uncertainty quantification, and experimental design, rendering a systematic, multidisciplinary framework. The novelty of the CKSPOD emulation lies in the construction of a common Gram matrix, which results from the Hadamard product of Gram matrices of all observed design settings. The Gram matrix is a spatially averaged temporal correlation matrix and contains the temporal dynamics of the corresponding sampling point. The common Gram matrix synthesizes the temporal dynamics by transferring POD modes into spatial functions at each observed design setting, which remedies the phase-difference issue encountered in the kernel-smoothed POD (KSPOD) emulation, a recent fluid flow emulator proposed in Chang et al. (2020). The CKSPOD methodology is demonstrated through a model study of flow dynamics of swirl injectors with three design parameters. A total of 30 training design settings and 8 validation design settings are included. Both qualitative and quantitative results show that the CKSPOD emulation outperforms the KSPOD emulation for all validation cases, and is capable of capturing small-scale wave structures on the liquid-film surface faithfully. The turbulent kinetic energy prediction using CKSPOD reveals lower predictive uncertainty than KSPOD, thereby allowing for more accurate and precise flow predictions. The turnaround time of the CKSPOD emulation is about 5 orders of magnitude faster than the corresponding high-fidelity simulation, which enables an efficient and scalable framework for design exploration and optimization.
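The common Gram matrix construction described above can be sketched directly: build one spatially averaged temporal Gram matrix per design setting, take their element-wise (Hadamard) product, and extract common temporal modes from it. The toy data, sizes, and names below are hypothetical, and the full CKSPOD emulator involves much more than this step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy snapshot matrices for three hypothetical design settings
# (n_points spatial points, n_t time steps each).
n_points, n_t = 200, 30
settings = [rng.standard_normal((n_points, n_t)) for _ in range(3)]

# Per-setting Gram matrix: spatially averaged temporal correlation (n_t x n_t).
grams = [X.T @ X / n_points for X in settings]

# Common Gram matrix: Hadamard product over all observed settings.
G_common = np.ones((n_t, n_t))
for G in grams:
    G_common *= G

# Common temporal modes from the eigendecomposition of the common Gram matrix;
# spatial functions at each setting would follow by projecting onto these modes.
eigvals, eigvecs = np.linalg.eigh(G_common)
order = np.argsort(eigvals)[::-1]
temporal_modes = eigvecs[:, order]
```

Since a Hadamard product of symmetric positive semidefinite matrices is again symmetric positive semidefinite (Schur product theorem), the common Gram matrix admits a well-defined eigendecomposition shared across all settings.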
M. K. Riahi, M. Ali, Y. Addad (2021)
The present study deals with the finite element discretization of nanofluid convective transport in an enclosure with variable properties. We study the Buongiorno model, which couples the Navier-Stokes equations for the base fluid, an advective-diffusion equation for the heat transfer, and an advection-dominated nanoparticle fraction concentration subject to thermophoresis and Brownian motion forces. We develop an iterative numerical scheme that combines Newton's method (dedicated to the resolution of the momentum and energy equations) with the transport equation that governs the nanoparticle concentration in the enclosure. We show that the Streamline Upwind Petrov-Galerkin (SUPG) regularization approach is required to properly solve the ill-posed Buongiorno transport model, tackled as a variational problem under a mean-value constraint. Non-trivial numerical computations are reported to show the effectiveness of our proposed numerical approach in its ability to provide reasonably good agreement with the experimental results available in the literature. The numerical experiments demonstrate that by accounting for only the thermophoresis and Brownian motion forces in the concentration transport equation, the model is not able to reproduce the heat transfer impairment due to the presence of suspended nanoparticles in the base fluid. It reveals, however, the significant role that these two terms play in the vicinity of the hot and cold walls.
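The SUPG stabilization invoked above can be illustrated on the simplest possible model problem: steady 1D advection-diffusion with linear finite elements, where the standard Galerkin scheme oscillates at high Peclet number and the SUPG term restores monotonicity. This is a minimal sketch of the stabilization idea only; the coupled Buongiorno system in the paper is far richer, and all parameter values below are illustrative.

```python
import numpy as np

# Steady 1D problem a*u_x - nu*u_xx = 0 on (0,1), u(0)=0, u(1)=1,
# discretized with linear finite elements plus SUPG stabilization.
a, nu, n = 1.0, 1e-3, 50          # advection speed, diffusivity, elements
h = 1.0 / n
Pe = a * h / (2 * nu)             # element Peclet number (here >> 1)
tau = h / (2 * a) * (1 / np.tanh(Pe) - 1 / Pe)   # classical SUPG parameter

# On a uniform mesh the SUPG term acts as added streamline diffusion tau*a^2.
diff = (nu + tau * a**2) / h      # effective diffusion coefficient
lower = -diff - a / 2             # coefficient of u_{i-1}
upper = -diff + a / 2             # coefficient of u_{i+1}
mainc = 2 * diff                  # coefficient of u_i

A_mat = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
for i in range(1, n):
    A_mat[i, i - 1] = lower
    A_mat[i, i] = mainc
    A_mat[i, i + 1] = upper
A_mat[0, 0] = A_mat[n, n] = 1.0   # Dirichlet boundary conditions
b[n] = 1.0
u = np.linalg.solve(A_mat, b)     # monotone boundary-layer profile
```

With this classical choice of tau the discrete solution is free of the spurious oscillations the unstabilized Galerkin method would produce at Pe = 10.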
Li-Zhi Fang, Jesus Pando (1997)
We present a detailed review of large-scale structure (LSS) study using the discrete wavelet transform (DWT). After describing how one constructs a wavelet decomposition, we show how this basis can be used as a complete statistical description of LSS. Among the topics studied are the DWT estimation of the probability distribution function; the reconstruction of the power spectrum; the regularization of complex geometry in observational samples; cluster identification; extraction and identification of coherent structures; and scale decomposition of non-Gaussianity, such as spectra of skewness and kurtosis and scale-scale correlations. These methods are applied to both observational and simulated samples of the QSO Lyman-alpha forests. It is clearly demonstrated that the statistical measures developed using the DWT are needed to distinguish between competing models of structure formation. The DWT also reveals physical features in these distributions not detected before. We conclude with a look towards the future of the use of the DWT in LSS.
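The scale decomposition underlying the DWT methods above can be illustrated with a single Haar analysis step: the signal splits into coarse averages and detail coefficients while preserving energy exactly. This is a generic sketch of the transform, not the authors' code; the sample signal is hypothetical.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: coarse averages and detail coefficients."""
    x = np.asarray(x, dtype=float).reshape(-1, 2)
    coarse = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)
    return coarse, detail

def inverse_haar_step(coarse, detail):
    """Invert haar_step; the transform is orthogonal, so this is exact."""
    out = np.empty(2 * len(coarse))
    out[0::2] = (coarse + detail) / np.sqrt(2.0)
    out[1::2] = (coarse - detail) / np.sqrt(2.0)
    return out

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
coarse, detail = haar_step(signal)
recon = inverse_haar_step(coarse, detail)
```

Iterating `haar_step` on the coarse part yields the full multiscale hierarchy; statistics computed on the detail coefficients at each level are what give scale-resolved measures such as the skewness and kurtosis spectra mentioned in the review.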
Tensor decomposition is a well-known tool for multiway data analysis. This work proposes using stochastic gradients for efficient generalized canonical polyadic (GCP) tensor decomposition of large-scale tensors. GCP tensor decomposition is a recently proposed version of tensor decomposition that allows for a variety of loss functions such as Bernoulli loss for binary data or Huber loss for robust estimation. The stochastic gradient is formed from randomly sampled elements of the tensor and is efficient because it can be computed using the sparse matricized-tensor-times-Khatri-Rao product (MTTKRP) tensor kernel. For dense tensors, we simply use uniform sampling. For sparse tensors, we propose two types of stratified sampling that give precedence to sampling nonzeros. Numerical results demonstrate the advantages of the proposed approach and its scalability to large-scale problems.
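The element-sampling idea above (uniform sampling for dense tensors) can be sketched for a rank-R CP model with squared loss: sample entries, form residuals, and scatter the per-sample gradients back to the factor rows, a sampled analogue of the MTTKRP kernel. All sizes, the learning rate, and the toy tensor are hypothetical, and GCP generalizes the loss beyond the squared loss used here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dense 3-way tensor and small random rank-R CP factors.
I, J, K, R = 20, 25, 30, 4
T = rng.random((I, J, K))
A, B, C = (rng.standard_normal((n, R)) * 0.3 for n in (I, J, K))

def sampled_sgd_step(A, B, C, n_samples=500, lr=0.5):
    # Uniformly sample tensor entries (the dense-tensor strategy).
    i = rng.integers(I, size=n_samples)
    j = rng.integers(J, size=n_samples)
    k = rng.integers(K, size=n_samples)
    # Model values at sampled entries: sum_r A[i,r]*B[j,r]*C[k,r].
    m = np.sum(A[i] * B[j] * C[k], axis=1)
    resid = m - T[i, j, k]                    # dL/dm for squared loss
    # Scatter-add per-sample gradients to the touched factor rows.
    gA, gB, gC = np.zeros_like(A), np.zeros_like(B), np.zeros_like(C)
    np.add.at(gA, i, resid[:, None] * B[j] * C[k])
    np.add.at(gB, j, resid[:, None] * A[i] * C[k])
    np.add.at(gC, k, resid[:, None] * A[i] * B[j])
    A -= lr * gA / n_samples
    B -= lr * gB / n_samples
    C -= lr * gC / n_samples
    return A, B, C

def full_loss(A, B, C):
    M = np.einsum('ir,jr,kr->ijk', A, B, C)
    return np.sum((M - T) ** 2)

loss0 = full_loss(A, B, C)
for _ in range(200):
    A, B, C = sampled_sgd_step(A, B, C)
loss1 = full_loss(A, B, C)
```

Each step touches only the sampled entries, so the cost per iteration is independent of the full tensor size, which is what makes the approach scale to large tensors; the paper's stratified sampling for sparse tensors refines the sampling stage only.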