
Identification of hydrodynamic instability by convolutional neural networks

Added by Wuyue Yang
Publication date: 2020
Language: English





The onset of hydrodynamic instabilities is of great importance in both industry and daily life, owing to the dramatic mechanical and thermodynamic changes that accompany different types of flow motion. In this paper, modern machine learning techniques, especially convolutional neural networks (CNNs), are applied to identify the transition between different flow motions raised by hydrodynamic instability, as well as the critical non-dimensionalized parameters characterizing this transition. The CNN not only correctly predicts the critical transition values for both Taylor-Couette (TC) flow and Rayleigh-Bénard (RB) convection under various setups and conditions, but also shows outstanding robustness and noise tolerance. In addition, the key spatial features used for classifying the different flow patterns are revealed by principal component analysis.
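As a concrete illustration, here is a minimal PyTorch sketch of the kind of classifier the abstract describes: a small CNN that maps flow-field snapshots to discrete flow regimes. The input resolution, layer widths, and two-regime labeling are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch (assumptions: 64x64 single-channel flow snapshots, two
# regimes such as laminar vs. Taylor-vortex flow; sizes are illustrative).
import torch
import torch.nn as nn

class FlowRegimeCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FlowRegimeCNN()
snapshots = torch.randn(8, 1, 64, 64)   # a batch of flow-field snapshots
logits = model(snapshots)               # per-regime scores, shape (8, 2)
```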



Related research

We propose contextual convolution (CoConv) for visual recognition. CoConv is a direct replacement for the standard convolution, which is the core component of convolutional neural networks. CoConv is implicitly equipped with the capability of incorporating contextual information while maintaining a similar number of parameters and a similar computational cost compared to the standard convolution. CoConv is inspired by neuroscience studies indicating that (i) neurons, even in the primary visual cortex (V1 area), are involved in the detection of contextual cues and that (ii) the activity of a visual neuron can be influenced by stimuli placed entirely outside its theoretical receptive field. On the one hand, we integrate CoConv into the widely used residual networks and show improved recognition performance over baselines on the core tasks and benchmarks for visual recognition, namely image classification on the ImageNet data set and object detection on the MS COCO data set. On the other hand, we introduce CoConv in the generator of a state-of-the-art Generative Adversarial Network, showing improved generative results on CIFAR-10 and CelebA. Our code is available at https://github.com/iduta/coconv.
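The authoritative implementation is in the linked repository; as a rough sketch of the idea only, the module below assumes context is gathered by splitting the output channels across parallel 3x3 convolutions with increasing dilation, which widens the receptive field while keeping the parameter count of a single standard 3x3 convolution.

```python
# Hedged sketch in the spirit of CoConv (details are assumptions, not the
# paper's exact formulation): output channels are split across parallel
# dilated branches, so total parameters match a standard 3x3 convolution.
import torch
import torch.nn as nn

class ContextualConv(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 3)):
        super().__init__()
        split = out_ch // len(dilations)
        sizes = [split] * (len(dilations) - 1) + [out_ch - split * (len(dilations) - 1)]
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, s, 3, padding=d, dilation=d)
            for s, d in zip(sizes, dilations)
        )

    def forward(self, x):
        # Each branch sees a different receptive field; concatenating keeps
        # the output shape identical to a standard conv with out_ch channels.
        return torch.cat([b(x) for b in self.branches], dim=1)

layer = ContextualConv(64, 64)
print(layer(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```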
Machine learning algorithms have been available since the 1990s, but it is only much more recently that they have come into use in the physical sciences. While these algorithms have already proven useful in uncovering new properties of materials and in simplifying experimental protocols, their use in liquid crystal research is still limited. This is surprising because optical imaging techniques are often applied in this line of research, and it is precisely with images that machine learning algorithms have achieved major breakthroughs in recent years. Here we use convolutional neural networks to probe several properties of liquid crystals directly from their optical images, without manual feature engineering. By optimizing simple architectures, we find that convolutional neural networks can predict physical properties of liquid crystals with exceptional accuracy. We show that these deep neural networks identify liquid crystal phases and predict the order parameter of simulated nematic liquid crystals almost perfectly. We also show that convolutional neural networks identify the pitch length of simulated samples of cholesteric liquid crystals and the sample temperature of an experimental liquid crystal with very high precision.
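A minimal sketch of such a property-prediction setup, assuming 64x64 grayscale optical textures and a scalar nematic order parameter in [0, 1] as the regression target; the layer sizes are illustrative, not those of the paper.

```python
# Minimal regression sketch (assumptions: 64x64 grayscale textures; the
# target is an order parameter S in [0, 1]; sizes are illustrative).
import torch
import torch.nn as nn

class OrderParameterCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),
            nn.Sigmoid(),          # constrains the prediction to (0, 1)
        )

    def forward(self, img):
        return self.net(img).squeeze(1)

model = OrderParameterCNN()
images, targets = torch.randn(4, 1, 64, 64), torch.rand(4)
loss = nn.MSELoss()(model(images), targets)  # standard regression loss
```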
We present a new nonlinear mode decomposition method for visualizing decomposed flow fields, named the mode-decomposing convolutional neural network autoencoder (MD-CNN-AE). The proposed method is applied to the flow around a circular cylinder at $Re_D=100$ as a test case. The flow attributes are mapped into two modes in the latent space, and these two modes are then visualized in physical space. Because MD-CNN-AEs with nonlinear activation functions show lower reconstruction errors than proper orthogonal decomposition (POD), the nonlinearity contained in the activation function is considered the key to improving the capability of the model. By applying POD to each field decomposed using the MD-CNN-AE with hyperbolic tangent activation, we find that a single nonlinear MD-CNN-AE mode contains multiple orthogonal bases, in contrast to the linear methods, i.e., POD and the MD-CNN-AE with linear activation. We further assess the proposed MD-CNN-AE by applying it to the transient process of a circular cylinder wake in order to examine its capability for flows containing high-order spatial modes. The present results suggest great potential for the nonlinear MD-CNN-AE to be used for feature extraction of flow fields in a lower dimension than POD, while retaining interpretable relationships with the conventional POD modes.
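A minimal sketch of the MD-CNN-AE structure as described: an encoder compresses a field to two latent variables, each latent variable has its own decoder, and the reconstruction is the sum of the two decoded fields, which is what makes each mode visualizable in physical space. The field size and layer widths here are assumptions.

```python
# Hedged sketch of the mode-decomposing autoencoder idea (assumptions:
# 1x64x64 fields; tanh activations as in the paper's nonlinear variant).
import torch
import torch.nn as nn

def decoder():
    # One decoder per latent mode, mapping a scalar back to a full field.
    return nn.Sequential(
        nn.Linear(1, 16 * 16 * 16), nn.Tanh(),
        nn.Unflatten(1, (16, 16, 16)),
        nn.Upsample(scale_factor=4),
        nn.Conv2d(16, 1, 3, padding=1),
    )

class MDCNNAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.Tanh(), nn.MaxPool2d(4),
            nn.Flatten(), nn.Linear(8 * 16 * 16, 2),
        )
        self.dec1, self.dec2 = decoder(), decoder()

    def forward(self, x):
        z = self.encoder(x)              # two latent modes
        m1 = self.dec1(z[:, :1])         # field of mode 1
        m2 = self.dec2(z[:, 1:])         # field of mode 2
        return m1 + m2, (m1, m2)         # reconstruction plus each mode

recon, (mode1, mode2) = MDCNNAE()(torch.randn(2, 1, 64, 64))
```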
Convolutional Neural Networks (CNNs) have proven extremely successful at solving computer vision tasks. State-of-the-art methods favor such deep network architectures for their accuracy, at the cost of a massive number of parameters and high weight redundancy. Previous works have studied how to prune such CNN weights. In this paper, we go to the other extreme and analyze the performance of a network stacked with a single convolution kernel shared across layers, as well as other weight-sharing techniques. We name it the Deep Anchored Convolutional Neural Network (DACNN). Sharing the same kernel weights across layers reduces the model size tremendously; more precisely, the network is compressed in memory by a factor of L, where L is the desired depth of the network, disregarding the fully connected layer used for prediction. The number of parameters in a DACNN barely increases as the network grows deeper, which allows us to build deep DACNNs without any concern about memory costs. We also introduce a partially shared weights network (DACNN-mix) as well as an easy plug-in module, coined regulators, to boost the performance of our architecture. We validated our idea on three datasets: CIFAR-10, CIFAR-100 and SVHN. Our results show that our model saves massive amounts of memory while maintaining high accuracy.
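A minimal sketch of the anchoring idea, assuming its simplest variant: one 3x3 convolution whose weights are reused at every depth, so the parameter count is independent of the depth L. The channel width, depth, and classification head below are illustrative choices.

```python
# Hedged sketch of weight anchoring: the same conv kernel is applied L times,
# so parameters do not grow with depth (sizes are illustrative assumptions).
import torch
import torch.nn as nn

class DACNN(nn.Module):
    def __init__(self, channels=32, depth=8, n_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.shared = nn.Conv2d(channels, channels, 3, padding=1)  # one kernel
        self.depth = depth
        self.head = nn.Linear(channels, n_classes)

    def forward(self, x):
        x = torch.relu(self.stem(x))
        for _ in range(self.depth):       # the same weights applied L times
            x = torch.relu(self.shared(x))
        x = x.mean(dim=(2, 3))            # global average pooling
        return self.head(x)

model = DACNN(depth=8)
print(sum(p.numel() for p in model.parameters()))  # same total for any depth
```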
The Fermionic Neural Network (FermiNet) is a recently developed neural network architecture that can be used as a wavefunction Ansatz for many-electron systems, and it has already demonstrated high accuracy on small systems. Here we present several improvements to the FermiNet that allow us to set new records for speed and accuracy on challenging systems. We find that increasing the size of the network is sufficient to reach chemical accuracy on atoms as large as argon. Through a combination of implementing the FermiNet in JAX and simplifying several parts of the network, we are able to reduce the number of GPU hours needed to train the FermiNet on large systems by an order of magnitude. This enables us to run the FermiNet on the challenging transition of bicyclobutane to butadiene and to compare against the PauliNet on the automerization of cyclobutadiene, achieving results near the state of the art for both.
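FermiNet itself is far more elaborate, but the determinant-based wavefunction Ansatz it builds on can be illustrated with a toy JAX snippet (JAX being the framework the abstract names): the amplitude for n electrons is det(Phi), where Phi[i, j] = phi_j(r_i). The Gaussian orbitals below are placeholders, not FermiNet's learned, permutation-equivariant orbitals.

```python
# Toy illustration only: log-amplitude of a Slater determinant in JAX.
# The orbitals are simple Gaussians, an assumption for demonstration.
import jax.numpy as jnp
from jax import random

def orbital_matrix(r, centers):
    # Phi[i, j] = exp(-|r_i - c_j|^2): one placeholder orbital per electron.
    d2 = jnp.sum((r[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-d2)

def log_abs_psi(r, centers):
    # Antisymmetry comes from the determinant; its log is what variational
    # Monte Carlo methods typically optimize.
    sign, logdet = jnp.linalg.slogdet(orbital_matrix(r, centers))
    return logdet

key = random.PRNGKey(0)
r = random.normal(key, (4, 3))   # 4 electrons in 3D
centers = jnp.eye(4, 3)          # 4 placeholder orbital centers
print(log_abs_psi(r, centers))
```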


