Chemical kinetics provides the phenomenological framework for disentangling reaction mechanisms, optimizing reaction performance, and rationally designing chemical processes. Here, we use feed-forward artificial neural networks as basis functions to construct surrogate models that solve the ordinary differential equations (ODEs) describing microkinetic models (MKMs). We present an algebraic framework for the mathematical description and classification of reaction networks, types of elementary reactions, and chemical species. Within this framework, we demonstrate that simultaneously training the neural networks and the kinetic model parameters in a regularized multiobjective optimization setting solves the inverse problem, i.e., it recovers kinetic parameters from synthetic experimental data. We probe the limits at which kinetic parameters can be retrieved as a function of the available knowledge about the chemical system's states over time, and we assess the robustness of the methodology with respect to statistical noise. This surrogate approach to inverse kinetic ODEs can assist in the elucidation of reaction mechanisms based on transient data.
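To make the approach concrete, the following is a minimal sketch (our own illustration, not the authors' code; PyTorch and the toy mechanism are assumptions): a feed-forward network maps time to species concentrations for a reversible reaction A <-> B, and log-parameterized rate constants are optimized jointly with the network weights against a data-mismatch loss, an ODE-residual loss at collocation points, and an initial-condition penalty. All identifiers (net, k_log, loss_ode) are illustrative, and the multiobjective regularization weights are set to one for brevity.

# Sketch of joint surrogate/parameter training for a toy MKM A <-> B.
# Assumed framework: PyTorch. True parameters k1=1.0, k2=0.5 are only
# used to generate the synthetic "experimental" data.
import torch

torch.manual_seed(0)

# Synthetic data from the analytic solution of A <-> B with
# c_A(0)=1, c_B(0)=0: c_A(t) = c_eq + (1 - c_eq) * exp(-(k1+k2) t).
k1_true, k2_true = 1.0, 0.5
t_data = torch.linspace(0.0, 5.0, 20).unsqueeze(1)
s = k1_true + k2_true
cA_eq = k2_true / s
cA_data = cA_eq + (1.0 - cA_eq) * torch.exp(-s * t_data)
cB_data = 1.0 - cA_data  # mass balance closes the system

# Surrogate: small MLP mapping time t to the two concentrations.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),
)
# Trainable kinetic parameters, log-parameterized to stay positive.
k_log = torch.nn.Parameter(torch.zeros(2))

opt = torch.optim.Adam(list(net.parameters()) + [k_log], lr=1e-3)
t_col = torch.linspace(0.0, 5.0, 100).unsqueeze(1).requires_grad_(True)

for step in range(5000):
    opt.zero_grad()
    k1, k2 = torch.exp(k_log)

    # Data loss: surrogate trajectories vs. synthetic measurements.
    c_pred = net(t_data)
    loss_data = ((c_pred[:, 0:1] - cA_data) ** 2
                 + (c_pred[:, 1:2] - cB_data) ** 2).mean()

    # Physics loss: residual of dc/dt = S r(c; k) at collocation points,
    # with time derivatives obtained by automatic differentiation.
    c = net(t_col)
    cA, cB = c[:, 0:1], c[:, 1:2]
    dcA = torch.autograd.grad(cA.sum(), t_col, create_graph=True)[0]
    dcB = torch.autograd.grad(cB.sum(), t_col, create_graph=True)[0]
    r = k1 * cA - k2 * cB            # net forward rate of A -> B
    loss_ode = ((dcA + r) ** 2 + (dcB - r) ** 2).mean()

    # Initial-condition penalty pins the trajectory at t = 0.
    c0 = net(torch.zeros(1, 1))
    loss_ic = (c0[0, 0] - 1.0) ** 2 + (c0[0, 1] - 0.0) ** 2

    loss = loss_data + loss_ode + loss_ic
    loss.backward()
    opt.step()

print("recovered k1, k2:", torch.exp(k_log).detach().numpy())

In this setting the inverse problem is solved as a by-product of fitting the surrogate: the same gradient descent that shapes the network to satisfy the ODE residual also drives k_log toward the rate constants that generated the data.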