Despite being a cost-effective option in practical engineering, Reynolds-averaged Navier-Stokes simulations face an ever-growing demand for more accurate turbulence models. Recently, emerging machine learning techniques have made a promising impact on turbulence modeling, but they remain in their infancy with respect to widespread industrial adoption. Toward this end, this work proposes a universal, inherently interpretable machine learning framework for turbulence modeling, consisting mainly of two parallel machine-learning-based modules that infer the integrity basis and the closure coefficients, respectively. At every phase of model development, both data representing the evolution dynamics of turbulence and domain knowledge representing prior physical considerations are properly fed in and converted into modeling knowledge. The developed model is thus both data- and knowledge-driven. Specifically, a version with a pre-constrained integrity basis is provided to demonstrate in detail how to integrate domain knowledge, how to design a fair and robust training strategy, and how to evaluate the data-driven model. A plain neural network and a residual neural network are compared as the building blocks of each module. Emphasis is placed on three aspects: (i) a compact input feature parameterizing the newly proposed turbulent timescale is introduced to resolve nonunique mappings between conventional input arguments and the output Reynolds stress; (ii) a realizability limiter is developed to overcome the under-constraint of the modeled stress; and (iii) fairness and noise-sensitivity constraints are included in the training procedure for the first time. Through these efforts, an invariant, realizable, unbiased, and robust data-driven turbulence model is achieved, and it generalizes well across channel flows at different Reynolds numbers and duct flows with various aspect ratios.
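The integrity-basis/closure-coefficient structure described above follows the general form of a tensor-basis expansion, in which the anisotropy tensor is written as a sum of basis tensors built from the mean strain-rate and rotation-rate tensors, weighted by learned scalar coefficients. The following is a minimal NumPy sketch of that expansion, not the paper's actual implementation: the basis shown is the first four tensors of Pope's classical integrity basis, and the coefficient values are illustrative placeholders standing in for the output of a trained closure-coefficient module.

```python
import numpy as np

def tensor_basis(S, R):
    """First four of Pope's integrity-basis tensors built from the
    non-dimensionalized mean strain-rate S and rotation-rate R."""
    I = np.eye(3)
    return [
        S,
        S @ R - R @ S,
        S @ S - np.trace(S @ S) / 3.0 * I,
        R @ R - np.trace(R @ R) / 3.0 * I,
    ]

def anisotropy(S, R, g):
    """Anisotropy tensor b = sum_n g_n T_n; in a data-driven model the
    coefficients g would come from the learned module (here fixed numbers)."""
    return sum(gn * Tn for gn, Tn in zip(g, tensor_basis(S, R)))

# Plane shear dU/dy, normalized by a turbulent timescale (illustrative).
S = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
R = np.array([[0.0, 0.5, 0.0], [-0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
b = anisotropy(S, R, g=[-0.09, 0.02, 0.01, 0.01])
```

By construction each basis tensor is symmetric and traceless, so the resulting anisotropy tensor inherits those properties regardless of the learned coefficients, which is one way such frameworks build physical constraints in rather than hoping training recovers them.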
Thermal plumes are the energy-containing eddy motions that carry heat and momentum in a convective boundary layer. A detailed understanding of their structure is of fundamental interest for a range of applications, from wall-bounded engineering flows to quantifying surface-atmosphere flux exchanges. We address the aspect of Reynolds stress anisotropy associated with the intermittent nature of heat transport in thermal plumes by performing an invariant analysis of the Reynolds stress tensor in an unstable atmospheric surface layer flow, using a field-experimental dataset. Given the intermittent and asymmetric nature of the turbulent heat flux, we formulate this problem in an event-based framework. In this approach, we provide structural descriptions of warm-updraft and cold-downdraft events and investigate the degree of isotropy of the Reynolds stress tensor within these events of different sizes. We discover that only a subset of these events is associated with the least anisotropic turbulence in highly convective conditions. Additionally, intermittent large heat flux events are found to contribute substantially to turbulence anisotropy under unstable stratification. Moreover, we find that the sizes related to the maximum value of the degree of isotropy do not correspond to the peak positions of the heat flux distributions. This is because the vertical velocity fluctuations at the sizes associated with the maximum heat flux transport a significant amount of streamwise momentum. A preliminary investigation shows that the sizes of the least anisotropic events probably scale with a mixed length scale ($z^{0.5}\lambda^{0.5}$, where $z$ is the measurement height and $\lambda$ is the large-eddy length scale).
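The invariant analysis mentioned above rests on the Reynolds stress anisotropy tensor and scalar measures of its departure from isotropy. As a hedged illustration of the kind of computation involved (not the paper's event-based procedure), the sketch below builds the anisotropy tensor from synthetic fluctuating velocities and evaluates the barycentric-map weights of Banerjee et al. (2007), whose isotropic weight is one common definition of a "degree of isotropy"; the velocity samples are random placeholders for field data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic fluctuations (u', v', w') standing in for anemometer data;
# the u-w correlation mimics a shear-driven Reynolds stress.
u = rng.standard_normal(10000)
v = rng.standard_normal(10000)
w = 0.3 * u + rng.standard_normal(10000)
uvw = np.stack([u, v, w])

Rij = uvw @ uvw.T / uvw.shape[1]          # Reynolds stress tensor <u_i u_j>
k = 0.5 * np.trace(Rij)                   # turbulent kinetic energy
b = Rij / (2.0 * k) - np.eye(3) / 3.0     # anisotropy tensor (traceless)

lam = np.sort(np.linalg.eigvalsh(b))[::-1]   # eigenvalues, descending
# Barycentric-map weights: C_iso -> 1 for fully isotropic turbulence.
C1 = lam[0] - lam[1]                      # one-component limit weight
C2 = 2.0 * (lam[1] - lam[2])              # two-component limit weight
C_iso = 3.0 * lam[2] + 1.0                # isotropic limit weight
```

The three weights sum to one for any realizable anisotropy tensor, so tracking `C_iso` across events of different sizes gives a bounded, single-number anisotropy diagnostic.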
Despite their well-known limitations, RANS models remain the most commonly employed tool for modeling turbulent flows in engineering practice. RANS models are predicated on the solution of the RANS equations, but these equations involve an unclosed term, the Reynolds stress tensor, which must be modeled. The Reynolds stress tensor is often modeled as an algebraic function of mean flow field variables and turbulence variables. This introduces a discrepancy between the Reynolds stress tensor predicted by the model and the exact Reynolds stress tensor. This discrepancy can result in inaccurate mean flow field predictions. In this paper, we introduce a data-informed approach for arriving at Reynolds stress models with improved predictive performance. Our approach relies on learning the components of the Reynolds stress discrepancy tensor associated with a given Reynolds stress model in the mean strain-rate tensor eigenframe. These components are typically smooth and hence simple to learn using state-of-the-art machine learning strategies and regression techniques. Our approach automatically yields Reynolds stress models that are symmetric, and that are both Galilean and frame invariant provided the inputs are themselves Galilean and frame invariant. To arrive at computable models of the discrepancy tensor, we employ feed-forward neural networks and an input space spanning the integrity basis of the mean strain-rate tensor, the mean rotation-rate tensor, the mean pressure gradient, and the turbulent kinetic energy gradient, and we introduce a framework for dimensional reduction of the input space to further reduce computational cost. Numerical results illustrate the effectiveness of the proposed approach for data-informed Reynolds stress closure for a suite of turbulent flow problems of increasing complexity.
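The key geometric step in the approach above is expressing the discrepancy tensor in the eigenframe of the mean strain-rate tensor, which is what makes the learned components frame invariant. The sketch below is a minimal illustration of that rotation only, with a hypothetical plane-shear strain rate and a made-up discrepancy tensor standing in for the difference between a baseline model's prediction and reference data; it is not the paper's full pipeline.

```python
import numpy as np

def eigenframe_components(S, delta):
    """Rotate a symmetric Reynolds-stress discrepancy tensor `delta` into
    the eigenframe of the mean strain-rate tensor S and return its six
    independent components, which serve as regression targets."""
    _, Q = np.linalg.eigh(S)            # columns of Q: eigenvectors of S
    d = Q.T @ delta @ Q                 # discrepancy in the S eigenframe
    return d[np.triu_indices(3)]        # upper triangle (6 components)

# Plane-shear strain rate and an illustrative (made-up) discrepancy.
S = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
delta = np.array([[0.02, -0.01, 0.0],
                  [-0.01, -0.01, 0.0],
                  [0.0, 0.0, -0.01]])
comps = eigenframe_components(S, delta)
```

Because the rotation is orthogonal, the six components fully determine the discrepancy: rotating them back with the same eigenvector matrix recovers `delta`, and invariants such as its trace are preserved.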
Reynolds-averaged Navier-Stokes (RANS) equations are presently one of the most popular models for simulating turbulence. Performing a RANS simulation requires additional modeling for the anisotropic Reynolds stress tensor, but traditional Reynolds stress closure models lead to only partially reliable predictions. Recently, data-driven turbulence models for the Reynolds anisotropy tensor involving novel machine learning techniques have garnered considerable attention and have been rapidly developed. Focusing on modeling the Reynolds stress closure for the specific case of turbulent channel flow, this paper proposes three modifications to a standard neural network to account for the no-slip boundary condition of the anisotropy tensor, the Reynolds number dependence, and spatial non-locality. The modified models are shown to provide increased predictive accuracy compared to the standard neural network when they are trained and tested on channel flow at different Reynolds numbers. The best performance is yielded by the model combining the boundary condition enforcement and Reynolds number injection. This model also outperforms the Tensor Basis Neural Network (Ling et al., 2016) on the turbulent channel flow dataset.
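Two of the modifications named above, boundary condition enforcement and Reynolds number injection, can be illustrated generically. In the sketch below (my illustration, not the paper's architecture), a tiny untrained network takes the wall distance and the friction Reynolds number as inputs, and its output for the shear component of the anisotropy is multiplied by a factor that vanishes at the wall, so the wall value is exact by construction; the feature scalings and network size are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, W1, b1, W2, b2):
    """Tiny fully connected network standing in for a trained model."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Hypothetical features per wall-normal point: scaled y+ and log(Re_tau).
y_plus = np.linspace(0.0, 100.0, 51)
re_tau = 550.0                          # Reynolds number injection
X = np.stack([y_plus / 100.0,
              np.full_like(y_plus, np.log(re_tau) / 10.0)], axis=1)

W1, b1 = 0.1 * rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = 0.1 * rng.standard_normal((8, 1)), np.zeros(1)

# Boundary-condition enforcement: the y+ prefactor guarantees the shear
# anisotropy component vanishes at the wall regardless of the weights.
b12 = (y_plus[:, None] / 100.0) * mlp(X, W1, b1, W2, b2)
```

Baking the boundary condition into the output form, rather than penalizing violations in the loss, is what makes the constraint hold exactly at every stage of training.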
A new scaling is derived that yields a Reynolds number independent profile for all components of the Reynolds stress in the near-wall region of wall bounded flows, including channel, pipe and boundary layer flows. The scaling demonstrates the important role played by the wall shear stress fluctuations and how the large eddies determine the Reynolds number dependence of the near-wall turbulence behavior.