The Wagner function in classical unsteady aerodynamic theory represents the lift response of an airfoil subject to a sudden change in conditions. While it plays a fundamental role in the development and application of unsteady aerodynamic methods, explicit expressions for this function are difficult to obtain. The Wagner function requires computation of an inverse Laplace transform, or a similar inversion, of a non-rational function in the Laplace domain that is closely related to the Theodorsen function. This has led to numerous proposed approximations to the Wagner function, which facilitate convenient and rapid computations. While these approximations can be sufficient for many purposes, their behavior often differs noticeably from that of the true Wagner function, especially in the long-time asymptotic regime. In particular, while many approximations have small maximum absolute error across all times, the relative error of the asymptotic behavior can be substantial. As well as documenting this error, we propose an alternative approximation methodology that is accurate for all times, under a variety of accuracy measures. This methodology casts the Wagner function as the solution of a nonlinear scalar ordinary differential equation, which is identified using a variant of the sparse identification of nonlinear dynamics (SINDy) algorithm. We show that this approach can give accurate approximations using either first- or second-order differential equations. We additionally show that this method can be applied to model the analogous lift response of a more realistic aerodynamic system, featuring a finite-thickness airfoil and a nonplanar wake.
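As a rough illustration of the identification step described above (not the authors' code), the sketch below fits a scalar ODE $\mathrm{d}W/\mathrm{d}s = f(W)$ to samples of R. T. Jones' classical two-exponential approximation of the Wagner function, using sequentially thresholded least squares, the core regression in SINDy. The sample range, polynomial library, and threshold are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): identify a scalar ODE dW/ds = f(W)
# for the Wagner function via sequentially thresholded least squares (SINDy-style).
# The "data" are samples of R. T. Jones' two-exponential approximation, used
# purely as a stand-in for the true Wagner function.
import numpy as np

s = np.linspace(0.0, 200.0, 4001)                               # time in semichords
W = 1.0 - 0.165*np.exp(-0.0455*s) - 0.335*np.exp(-0.3*s)        # Jones' approximation
dWds = np.gradient(W, s)                                        # numerical derivative

# Polynomial feature library in W: [1, W, W^2, W^3]
Theta = np.column_stack([W**k for k in range(4)])

def stlsq(Theta, dXdt, threshold=1e-4, n_iter=10):
    """Sequentially thresholded least squares (the core SINDy regression)."""
    xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dXdt, rcond=None)[0]
    return xi

xi = stlsq(Theta, dWds)
print("Identified dW/ds ≈", " + ".join(f"{c:.3e}*W^{k}" for k, c in enumerate(xi) if c != 0.0))
```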
We perform a sparse identification of nonlinear dynamics (SINDy) for low-dimensionalized complex flow phenomena. We first apply SINDy with two regression methods, the thresholded least squares algorithm (TLSA) and the adaptive Lasso (Alasso), which showed reasonable performance over a wide range of the sparsity constant in our preliminary tests, to a two-dimensional single-cylinder wake at $Re_D=100$, its transient process, and the wake of two parallel cylinders, as examples of high-dimensional fluid data. To handle these high-dimensional data with SINDy, whose library matrix is best suited to low-dimensional combinations of variables, a convolutional neural network-based autoencoder (CNN-AE) is utilized. The CNN-AE is employed to map the high-dimensional dynamics into a low-dimensional latent space. SINDy then seeks a governing equation for the mapped low-dimensional latent vector. The temporal evolution of the high-dimensional dynamics can be obtained by combining the latent vector predicted by SINDy with the CNN decoder, which remaps the low-dimensional latent vector to the original dimension. SINDy can provide a stable solution as the governing equation of the latent dynamics, and the CNN-SINDy-based modeling reproduces high-dimensional flow fields successfully, although more terms are required to represent the transient flow and the two-parallel-cylinder wake than the periodic shedding. A nine-equation turbulent shear flow model is finally considered to examine the applicability of SINDy to turbulence, although without using the CNN-AE. The present results suggest that the proposed scheme, with an appropriate parameter choice, enables us to analyze high-dimensional nonlinear dynamics with interpretable low-dimensional manifolds.
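A minimal sketch of the CNN-AE plus latent-space SINDy pipeline is given below. It is not the authors' implementation: the snapshot size, network layers, latent dimension, training schedule, and placeholder random data are all illustrative assumptions, and the regression shown is a plain least-squares step standing in for TLSA/Alasso.

```python
# Sketch of the CNN-AE + latent SINDy idea: an autoencoder compresses snapshots
# to a low-dimensional latent vector, then a sparse regression is fitted to the
# latent trajectory. Shapes and hyperparameters are illustrative only.
import numpy as np
import torch
import torch.nn as nn

class CNNAE(nn.Module):
    def __init__(self, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(16*16*16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16*16*16), nn.ReLU(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),     # 16x16 -> 32x32
            nn.ConvTranspose2d(8, 1, 2, stride=2),                 # 32x32 -> 64x64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Placeholder snapshots (batch, channel, ny, nx); real data would be flow fields.
snapshots = torch.randn(100, 1, 64, 64)
model = CNNAE(latent_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                  # a few illustrative epochs
    recon, _ = model(snapshots)
    loss = nn.functional.mse_loss(recon, snapshots)
    opt.zero_grad(); loss.backward(); opt.step()

# Fit dz/dt = Theta(z) xi to the latent trajectory (least-squares step shown;
# thresholding or adaptive Lasso would be applied on top of this in SINDy).
with torch.no_grad():
    z = model.encoder(snapshots).numpy()            # latent trajectory (time x latent_dim)
dt = 0.1                                            # assumed sampling interval
dzdt = np.gradient(z, dt, axis=0)
Theta = np.column_stack([np.ones(len(z)), z, z[:, :1]*z[:, 1:2], z**2])
xi = np.linalg.lstsq(Theta, dzdt, rcond=None)[0]
print("latent model coefficients:\n", xi)
```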
We reconstruct the velocity field of incompressible flows given a finite set of measurements. For the spatial approximation, we introduce the Sparse Fourier divergence-free (SFdf) approximation based on a discrete $L^2$ projection. Within this physics-informed type of statistical learning framework, we adaptively build a sparse set of Fourier basis functions with corresponding coefficients by solving a sequence of minimization problems in which the set of basis functions is greedily augmented from one problem to the next. We regularize our minimization problems with the seminorm of the fractional Sobolev space in a Tikhonov fashion. In the Fourier setting, the incompressibility (divergence-free) constraint becomes a finite set of linear algebraic equations. We couple our spatial approximation with the truncated Singular Value Decomposition (SVD) of the flow measurements for temporal compression. Our computational framework thus combines supervised and unsupervised learning techniques. We assess the capabilities of our method in various numerical examples arising in fluid mechanics.
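For illustration (not the paper's SFdf code), the sketch below shows how the divergence-free constraint acts on a sparse set of Fourier coefficients: each retained wavenumber $\mathbf{k}$ must satisfy $\mathbf{k}\cdot\hat{\mathbf{u}}(\mathbf{k})=0$, a linear algebraic condition that can be enforced by projection. The mode set and coefficients here are random placeholders.

```python
# Minimal sketch: in a Fourier basis, div u = 0 reduces to k · u_hat(k) = 0 for
# every retained wavenumber k. The projection below removes the component of
# each coefficient along k, enforcing the constraint exactly.
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = rng.integers(-4, 5, size=(10, 2))                 # a sparse set of 2D modes
u_hat = rng.standard_normal((10, 2)) + 1j*rng.standard_normal((10, 2))

for i, k in enumerate(wavenumbers):
    k = k.astype(float)
    if np.dot(k, k) > 0:
        u_hat[i] -= k * (k @ u_hat[i]) / (k @ k)                # project onto k·u_hat = 0

# Residual divergence of each retained mode (should be at machine precision).
print(np.max(np.abs(np.einsum('ij,ij->i', wavenumbers.astype(float), u_hat))))
```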
We report far-field approximations to the derivatives and integrals of the Green's function for the Ffowcs Williams and Hawkings equation in the frequency domain. The approximations are based on the far-field asymptotics of the Green's function. The details of the derivations of the proposed formulations are provided.
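For context, the frequency-domain Green's function in question is the free-space Helmholtz Green's function, and a standard far-field expansion (with an assumed time convention; the paper's notation and exact formulation may differ) reads
\[
  G(\mathbf{x},\mathbf{y};k) = \frac{e^{-\mathrm{i}k|\mathbf{x}-\mathbf{y}|}}{4\pi|\mathbf{x}-\mathbf{y}|}
  \;\approx\; \frac{e^{-\mathrm{i}k|\mathbf{x}|}}{4\pi|\mathbf{x}|}\, e^{\,\mathrm{i}k\,\hat{\mathbf{x}}\cdot\mathbf{y}},
  \qquad |\mathbf{x}|\gg|\mathbf{y}|, \quad \hat{\mathbf{x}}=\mathbf{x}/|\mathbf{x}|,
\]
where $\mathbf{x}$ is the observer position and $\mathbf{y}$ the source position; to leading order, derivatives of $G$ with respect to the source coordinates then act only on the phase factor.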
A generalised quasilinear (GQL) approximation (Marston \emph{et al.}, \emph{Phys. Rev. Lett.}, vol. 116, 104502, 2016) is applied to turbulent channel flow at $Re_\tau \simeq 1700$ ($Re_\tau$ is the friction Reynolds number), with emphasis on the energy transfer in the streamwise wavenumber space. The flow is decomposed into low and high streamwise wavenumber groups, the former of which is solved by considering the full nonlinear equations, whereas the latter is obtained from the equations linearised around the former. The performance of the GQL approximation is subsequently compared with that of a QL model (Thomas \emph{et al.}, \emph{Phys. Fluids}, vol. 26, no. 10, 105112, 2014), in which the low-wavenumber group contains only the zero streamwise wavenumber. It is found that the QL model exhibits considerably reduced multi-scale behaviour at the given moderately high Reynolds number. This is improved significantly by the GQL approximation, which incorporates only a few more streamwise Fourier modes into the low-wavenumber group and recovers reasonably well the distance-from-the-wall scaling in the turbulence statistics and spectra. Finally, it is proposed that the energy transfer from the low- to the high-wavenumber group in the GQL approximation, referred to as the `scattering' mechanism, depends on the neutrally stable leading Lyapunov spectrum of the linearised equations for the high-wavenumber group. In particular, it is shown that if the threshold wavenumber distinguishing the two groups is sufficiently high, the scattering mechanism can be completely absent due to the linear nature of the equations for the high-wavenumber group.
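As a toy illustration of the GQL triad filtering (not the channel-flow solver used in the paper), the sketch below applies the GQL rule to a quadratic nonlinearity in one-dimensional Fourier space: interactions low$\times$low$\to$low, high$\times$high$\to$low, and low$\times$high$\to$high are retained, while low$\times$low$\to$high and high$\times$high$\to$high are discarded, which is what keeps the high-group equations linear in the high modes. The mode count and cutoff are illustrative.

```python
# Toy GQL filtering of a quadratic nonlinearity u*u in 1D Fourier space.
# Modes with |k| <= k_c form the "low" group; the rest form the "high" group.
import numpy as np

N, k_c = 32, 3                                    # number of modes, cutoff wavenumber
k = np.fft.fftfreq(N, d=1.0/N)                    # integer wavenumbers
low = np.abs(k) <= k_c

def gql_nonlinear_term(u_hat):
    """GQL-filtered Fourier transform of u*u for a toy 1D field."""
    u_low  = np.fft.ifft(np.where(low,  u_hat, 0.0))
    u_high = np.fft.ifft(np.where(~low, u_hat, 0.0))
    ll = np.fft.fft(u_low * u_low)                # low*low interactions
    lh = np.fft.fft(2.0 * u_low * u_high)         # low*high interactions
    hh = np.fft.fft(u_high * u_high)              # high*high interactions
    return np.where(low, ll + hh, lh)             # keep ll,hh -> low and lh -> high

u_hat = np.fft.fft(np.random.default_rng(1).standard_normal(N))
print(gql_nonlinear_term(u_hat)[:5])
```

Setting `k_c = 0` recovers the QL limit in which the low group contains only the zero wavenumber.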
The idea of unfolding iterative algorithms as deep neural networks has been widely applied to sparse coding problems, providing both solid theoretical convergence-rate analysis and superior empirical performance. However, for sparse nonlinear regression problems, a similar idea is rarely exploited due to the complexity of the nonlinearity. In this work, we bridge this gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA), which attains linear convergence under suitable conditions. Experiments on synthetic data corroborate our theoretical results and show that our method outperforms state-of-the-art methods.
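The sketch below illustrates the unfolding idea for the linear case only (it is not the paper's NLISTA architecture): each layer is one ISTA step with learnable matrices and threshold, trained end-to-end; NLISTA extends this construction to nonlinear measurements $y = f(Ax)$. The problem sizes, initialization, and training loop are illustrative assumptions.

```python
# Minimal sketch: unfolding ISTA as a feed-forward network for sparse coding
# y = A x + noise, with learnable per-layer weights and thresholds.
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

class UnfoldedISTA(nn.Module):
    def __init__(self, A, n_layers=8):
        super().__init__()
        m, n = A.shape
        L = torch.linalg.matrix_norm(A, ord=2) ** 2            # Lipschitz constant of A^T A
        self.W1 = nn.ParameterList([nn.Parameter(A.t() / L) for _ in range(n_layers)])
        self.W2 = nn.ParameterList([nn.Parameter(torch.eye(n) - A.t() @ A / L)
                                    for _ in range(n_layers)])
        self.theta = nn.ParameterList([nn.Parameter(torch.tensor(0.1))
                                       for _ in range(n_layers)])

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.W2[0].shape[0])
        for W1, W2, theta in zip(self.W1, self.W2, self.theta):
            x = soft_threshold(y @ W1.t() + x @ W2.t(), theta)  # one unfolded ISTA step
        return x

# Usage on synthetic data: recover sparse x from y = A x.
m, n = 64, 256
A = torch.randn(m, n) / m ** 0.5
x_true = torch.zeros(32, n)
x_true[:, :5] = torch.randn(32, 5)                              # 5 nonzeros per sample
y = x_true @ A.t()
model = UnfoldedISTA(A)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                                             # a few illustrative steps
    loss = nn.functional.mse_loss(model(y), x_true)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```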