Turbulence closure modeling with data-driven techniques: Existence of generalizable deep neural networks under the assumption of full data


Abstract

The generalizability of machine-learning (ML) based turbulence closures, i.e., their ability to accurately predict unseen practical flows, remains an important challenge. It is well recognized that the neural network architecture and training protocol profoundly influence generalizability. The objective of this work is to identify the unique challenges, arising from the inherent complexity of turbulence, in finding suitable hyperparameters for ML closure networks. Three proxy-physics turbulence surrogates of varying degrees of complexity (yet significantly simpler than actual turbulence physics) are employed. The proxy-physics models mimic some of the key features of turbulence and provide training/testing data at low computational expense. The focus is on the following turbulence features: the high dimensionality of the flow-physics parameter space, nonlinear effects, and bifurcations in emergent behavior. A standard fully-connected neural network is used to reproduce the data of the simplified proxy-physics turbulence surrogates. In the absence of a rigorous procedure for finding globally optimal neural network hyperparameters, a brute-force parameter-space sweep is performed to examine the existence of locally optimal solutions. Even for this simple case, it is demonstrated that the choice of optimal hyperparameters for a fully-connected neural network is not straightforward when the network is trained with only partially available data in parameter space. Overall, specific issues to be addressed are identified, and the findings provide a realistic perspective on the utility of ML turbulence closures for practical applications.
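To illustrate the methodology summarized above, the following is a minimal sketch (not the authors' code) of a brute-force hyperparameter sweep for a fully-connected network trained on surrogate data. The function `proxy_physics` is a hypothetical stand-in with a bifurcation-like switch, not one of the paper's three proxy-physics models, and the grid of widths, depths, and learning rates is illustrative only.

```python
# Sketch: brute-force hyperparameter sweep of a fully-connected network
# trained on data from a hypothetical proxy-physics surrogate.
import itertools
import numpy as np
import torch
import torch.nn as nn

def proxy_physics(x):
    # Hypothetical nonlinear surrogate with a bifurcation-like regime switch;
    # an illustrative stand-in for the paper's proxy-physics models.
    return np.where(x[:, :1] > 0.0, np.sin(3.0 * x[:, :1]), np.tanh(x[:, 1:2]))

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(2000, 2)).astype(np.float32)
y = proxy_physics(x).astype(np.float32)
x_train, y_train = torch.from_numpy(x[:1500]), torch.from_numpy(y[:1500])
x_val, y_val = torch.from_numpy(x[1500:]), torch.from_numpy(y[1500:])

def make_fcnn(width, depth):
    # Standard fully-connected network: `depth` hidden layers of `width` units.
    layers, n_in = [], 2
    for _ in range(depth):
        layers += [nn.Linear(n_in, width), nn.ReLU()]
        n_in = width
    layers.append(nn.Linear(n_in, 1))
    return nn.Sequential(*layers)

def train_and_score(width, depth, lr, epochs=200):
    # Full-batch training; returns validation MSE for this hyperparameter choice.
    torch.manual_seed(0)
    model = make_fcnn(width, depth)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(x_val), y_val).item()

# Brute-force sweep over a small grid; the sweep in the paper is far larger.
grid = itertools.product([16, 64, 256], [2, 4, 8], [1e-2, 1e-3, 1e-4])
results = {(w, d, lr): train_and_score(w, d, lr) for w, d, lr in grid}
best = min(results, key=results.get)
print("best (width, depth, lr):", best, "val MSE:", results[best])
```

Restricting the training set to part of the parameter space (e.g., sampling `x_train` from only one side of the bifurcation) reproduces, in miniature, the difficulty the abstract describes: the hyperparameters that score best on the available data need not be the ones that generalize.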
