Background: Floods are the most common natural disaster in the world, affecting the lives of hundreds of millions of people. Flood forecasting is therefore a vitally important endeavor, typically carried out using physical water-flow simulations, which rely on accurate terrain elevation maps. However, such simulations, based on solving partial differential equations, are computationally prohibitive on a large scale. This scalability issue is commonly alleviated by using a coarse-grid representation of the elevation map, though this representation may distort crucial terrain details, leading to significant inaccuracies in the simulation. Contributions: We train a deep neural network to perform physics-informed downsampling of the terrain map: we optimize the coarse-grid representation of the terrain so that the resulting flood prediction matches the fine-grid solution. For the learning process to succeed, we configure a dataset specifically for this task. We demonstrate that this method achieves a significant reduction in computational cost while maintaining an accurate solution. A reference implementation accompanies the paper, along with documentation and code for reproducing the dataset.
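To make the idea concrete, the following is a minimal sketch of such a training loop, assuming PyTorch and a differentiable coarse-grid flood solver; `DownsampleNet`, `coarse_flood_solver`, and the downsampling factor are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of physics-informed downsampling (assumptions: PyTorch,
# a differentiable `coarse_flood_solver`, and an illustrative CNN; none of
# these details are specified by the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownsampleNet(nn.Module):
    """Maps a fine-grid elevation map to an optimized coarse-grid one."""
    def __init__(self, factor=8):
        super().__init__()
        self.factor = factor
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, dem_fine):
        return F.avg_pool2d(self.features(dem_fine), self.factor)

def training_step(model, dem_fine, flood_fine, coarse_flood_solver, opt):
    """Optimize the coarse terrain so the coarse simulation matches the
    (coarsened) fine-grid flood solution."""
    dem_coarse = model(dem_fine)
    flood_pred = coarse_flood_solver(dem_coarse)        # differentiable solver
    target = F.avg_pool2d(flood_fine, model.factor)     # fine solution, coarsened
    loss = F.mse_loss(flood_pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```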
Data modeling and reduction are important for in situ analysis. Feature-driven methods for in situ data analysis and reduction are a priority for future exascale machines, as very few such methods currently exist. We investigate a deep-learning-based workflow that targets in situ data processing using autoencoders. We propose a Residual Autoencoder that integrates the Residual-in-Residual Dense Block (RRDB) to obtain better performance. Our proposed framework compresses each 3D volume timestep of our test data from 2.1 MB to 66 KB.
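As a sketch of the building block, here is an ESRGAN-style Residual-in-Residual Dense Block in PyTorch; the channel counts, depth, and the choice of 3D convolutions for volumetric timesteps are assumptions, not the authors' exact configuration.

```python
# Hedged sketch of an RRDB for volumetric (3D) data; layer sizes are
# illustrative assumptions. An encoder for in situ compression could stack
# such blocks between strided convolutions.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, ch=32, growth=16):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv3d(ch + i * growth, growth, 3, padding=1) for i in range(4)])
        self.fuse = nn.Conv3d(ch + 4 * growth, ch, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))  # dense links
        return x + 0.2 * self.fuse(torch.cat(feats, dim=1))          # local residual

class RRDB(nn.Module):
    """Residual-in-Residual Dense Block: dense blocks inside a long skip."""
    def __init__(self, ch=32):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(ch), DenseBlock(ch), DenseBlock(ch))

    def forward(self, x):
        return x + 0.2 * self.blocks(x)  # outer residual over the inner residuals
```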
When humans observe a physical system, they can easily locate objects, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions. For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem. In this paper, we present STOVE, a novel state-space model for videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in a compositional manner and improves on previous work by reusing the dynamics model for inference, thereby accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over hundreds of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample-efficient model-based control in a task with heavily interacting objects.
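A conceptual sketch of the generative rollout implied by such a state-space factorization is given below; `dynamics.sample` and `image_model.decode` are hypothetical interfaces, not STOVE's published API.

```python
# Conceptual rollout for a compositional state-space video model: latent
# object states z_t (positions, velocities) evolve under the dynamics model,
# and frames are decoded from them. Interfaces are illustrative assumptions.
import torch

def rollout(dynamics, image_model, z0, steps):
    z, frames = z0, []
    for _ in range(steps):
        z = dynamics.sample(z)                 # p(z_t | z_{t-1})
        frames.append(image_model.decode(z))   # p(x_t | z_t)
    return torch.stack(frames)
```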
Recent works have explored the potential of machine learning as data-driven turbulence closures for RANS and LES techniques. Beyond these advances, the high expressivity and agility of physics-informed neural networks (PINNs) make them promising candidates for full fluid-flow PDE modeling. An important question is whether this new paradigm, exempt from the traditional notion of discretizing the underlying operators, which is closely tied to the resolution of the flow scales, is capable of sustaining high levels of turbulence characterized by multi-scale features. We investigate the use of PINN surrogate modeling for turbulent Rayleigh-Bénard (RB) convection flows in rough and smooth rectangular cavities, relying mainly on DNS temperature data from the fluid bulk. We carefully quantify the computational requirements under which the formulation is capable of accurately recovering the hidden flow quantities. We then propose a new padding technique that distributes some of the scattered coordinates, at which the PDE residuals are minimized, around the region of labeled data acquisition. We show how it acts as a regularizer close to the training boundaries, which are zones of poor accuracy for standard PINNs, and results in a noticeable global accuracy improvement at iso-budget. Finally, we propose for the first time to relax the incompressibility condition in such a way that it drastically benefits the optimization search and results in a much improved convergence of the composite loss function. The RB results obtained at high Rayleigh number $Ra = 2 \cdot 10^9$ are particularly impressive: the predictive accuracy of the surrogate over the entire half-billion DNS coordinates yields errors between 0.3% and 4% in the relative $L_2$ norm for all flow variables, with a training relying on only 1.6% of the DNS data points.
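The composite loss can be sketched as follows, with the continuity equation entering as a relaxed penalty rather than a hard constraint; the network output layout, the weight `w_div`, and the omission of the momentum and energy residuals are simplifying assumptions, not the paper's exact formulation.

```python
# Sketch of a composite PINN loss with relaxed incompressibility (assumption:
# the network maps (x, y, z, t) to (u, v, w, T, p); momentum and energy
# residuals, which enter analogously, are omitted for brevity).
import torch

def pinn_loss(net, coords_data, T_data, coords_colloc, w_div=0.1):
    # Data term: match DNS temperature measurements in the fluid bulk.
    pred = net(coords_data)
    loss_data = torch.mean((pred[:, 3] - T_data) ** 2)

    # Relaxed continuity residual at (padded) collocation points.
    x = coords_colloc.requires_grad_(True)
    u = net(x)[:, :3]
    div = sum(torch.autograd.grad(u[:, i].sum(), x, create_graph=True)[0][:, i]
              for i in range(3))                  # du/dx + dv/dy + dw/dz
    loss_div = torch.mean(div ** 2)               # penalty, not a hard constraint
    return loss_data + w_div * loss_div
```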
The rapid developments in advanced sensing and imaging bring about a data-rich environment, facilitating the effective modeling, monitoring, and control of complex systems. For example, a body-sensor network captures multi-channel information pertinent to the electrical activity of the heart (i.e., electrocardiograms (ECG)), which enables medical scientists to monitor and detect abnormal cardiac conditions. However, high-dimensional sensing data generally have a complex structure, and realizing their full potential depends to a great extent on advanced analytical and predictive methods. This paper presents a physics-constrained deep learning (P-DL) framework for high-dimensional inverse ECG modeling. This method integrates the physical laws of the complex system with an advanced deep learning infrastructure for effective prediction of the system dynamics. The proposed P-DL approach is implemented to solve the inverse ECG problem and predict the time-varying distribution of electric potentials in the heart from the ECG data measured by the body-surface sensor network. Experimental results show that the proposed P-DL method significantly outperforms existing methods that are commonly used in current practice.
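In spirit, the P-DL objective couples a data-fidelity term with a residual of the governing dynamics; the sketch below assumes a known heart-to-body transfer matrix `A` and a generic reaction term `f`, both illustrative stand-ins for the paper's formulation.

```python
# Illustrative physics-constrained loss for inverse ECG modeling (assumptions:
# the network maps time samples to heart-surface potentials; `A` is a linear
# heart-to-body transfer matrix; `f` stands in for the cardiac cell model).
import torch

def p_dl_loss(net, t_grid, ecg_body, A, f, w_phys=1.0):
    u = net(t_grid)                                      # (N_t, N_heart) potentials
    loss_data = torch.mean((u @ A.T - ecg_body) ** 2)    # fit body-surface ECG
    # Physics residual via finite differences: du/dt - f(u) should vanish.
    dt = t_grid[1] - t_grid[0]
    dudt = (u[1:] - u[:-1]) / dt
    res = dudt - f(u[:-1])
    return loss_data + w_phys * torch.mean(res ** 2)
```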
Learning graph generative models is a challenging task for deep learning and has wide applicability to a range of domains such as chemistry, biology, and social science. However, current deep neural methods suffer from limited scalability: for a graph with $n$ nodes and $m$ edges, existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix. On the other hand, many real-world graphs are actually sparse in the sense that $m \ll n^2$. Based on this, we develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix, and importantly reduces the graph generation time complexity to $O((n + m)\log n)$. Furthermore, during training this autoregressive model can be parallelized with $O(\log n)$ synchronization stages, which makes it much more efficient than other autoregressive models that require $\Omega(n)$. Experiments on several benchmarks show that the proposed approach not only scales to orders-of-magnitude larger graphs than previously possible with deep autoregressive graph generative models, but also yields better graph generation quality.
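The source of the $O((n + m)\log n)$ bound can be illustrated with the interval-splitting recursion below; the real BiGG model drives these decisions with learned tree-structured states, so `has_edge_in` here is a hypothetical stand-in for a learned Bernoulli over an interval of candidate neighbors.

```python
# Conceptual sketch of sparsity-aware neighbor generation: one binary decision
# can prune an entire interval of candidate neighbors, so each node costs
# O((deg(u) + 1) log n) decisions instead of O(n).
def gen_neighbors(u, lo, hi, has_edge_in, out):
    if not has_edge_in(u, lo, hi):      # no edge from u into [lo, hi): prune
        return
    if hi - lo == 1:
        out.append((u, lo))             # interval narrowed to a concrete edge
        return
    mid = (lo + hi) // 2
    gen_neighbors(u, lo, mid, has_edge_in, out)
    gen_neighbors(u, mid, hi, has_edge_in, out)
```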