
Physics Informed Topology Learning in Networks of Linear Dynamical Systems

Published by Deepjyoti Deka
Publication date: 2018
Language: English





Learning the influence pathways of a network of dynamically related processes from observations is of considerable importance in many disciplines. In this article, influence networks of agents that interact dynamically via linear dependencies are considered. An algorithm for reconstructing the topology of interaction based on multivariate Wiener filtering is analyzed. It is shown that for a broad and important class of interactions that respect flow conservation, the topology of the interactions can be recovered exactly. The class of problems where reconstruction is guaranteed to be exact includes power distribution networks, dynamic thermal networks, and consensus networks. The efficacy of the approach is illustrated through simulations and experiments on consensus networks, IEEE power distribution networks, and the thermal dynamics of buildings.
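
As a rough sketch of the reconstruction step the abstract describes, the snippet below fits, for each node, a causal finite-impulse-response predictor from the time series of all other nodes (a simple stand-in for the multivariate Wiener filter, which in general is non-causal) and declares an edge wherever the fitted filter carries significant energy. The function name, lag count, and threshold are illustrative choices, not taken from the paper.

```python
import numpy as np

def wiener_topology(X, lags=2, thresh=0.1):
    """Sketch: recover an interaction graph from node time series.

    X: (n_nodes, T) array of observations. For each node i, fit a
    lagged least-squares predictor of x_i from all other nodes, then
    keep links whose filter energy exceeds `thresh`.
    """
    n, T = X.shape
    A = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        # Lagged regressor matrix built from all other nodes.
        rows = [np.concatenate([X[others, t - k] for k in range(1, lags + 1)])
                for t in range(lags, T)]
        Phi = np.asarray(rows)                       # (T - lags, (n-1)*lags)
        y = X[i, lags:]
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # Wiener-style LS fit
        W = w.reshape(lags, n - 1)                   # taps per candidate link
        energy = np.sqrt((W ** 2).sum(axis=0))
        for j, e in zip(others, energy):
            A[i, j] = e
    # Symmetrize: keep links supported in both directions.
    return (np.minimum(A, A.T) > thresh).astype(int)
```

Note that a plain threshold like this can retain spurious two-hop links; the paper's contribution is to show that, for flow-conserving dynamics, a suitably refined version of this filtering step recovers the true topology exactly.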




Read also

Sungyong Seo, Yan Liu (2019)
While physics conveys knowledge of nature built from an interplay between observations and theory, it has received comparatively little attention in deep neural networks. In particular, few works leverage physical behavior when the knowledge is given only implicitly. In this work, we propose a novel architecture called Differentiable Physics-informed Graph Networks (DPGN) that incorporates implicit physics knowledge, supplied by domain experts, by imposing it in the latent space. Using DPGN, we demonstrate that climate prediction tasks are significantly improved. Beyond these experimental results, we validate the effectiveness of the proposed module and present further applications of DPGN, such as inductive learning and multi-step prediction.
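
A minimal sketch of what "informing physics in latent space" could look like is given below: a graph-Laplacian smoothness penalty on latent node states, added to a task loss. This is an illustrative stand-in for whatever implicit physics a domain expert supplies, not the DPGN formulation itself.

```python
import torch

def latent_physics_penalty(Z, L):
    """Sketch: a physics-motivated regularizer in latent space.

    Z: (num_nodes, d) latent node states from a graph network.
    L: (num_nodes, num_nodes) graph Laplacian. The quadratic form
    trace(Z^T L Z) encourages latent states to vary smoothly across
    edges, mimicking a diffusion-like physical prior.
    """
    return torch.einsum('nd,nm,md->', Z, L, Z) / Z.shape[0]
```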
Effective inclusion of physics-based knowledge in deep neural network models of dynamical systems can greatly improve data efficiency and generalization. Such a priori knowledge might arise from physical principles (e.g., conservation laws) or from the system's design (e.g., the Jacobian matrix of a robot), even if large portions of the system dynamics remain unknown. We develop a framework to learn dynamics models from trajectory data while incorporating a priori system knowledge as an inductive bias. More specifically, the proposed framework uses physics-based side information to inform the structure of the neural network itself and to place constraints on the values of the outputs and the internal states of the model. It represents the system's vector field as a composition of known and unknown functions, the latter of which are parametrized by neural networks. The physics-informed constraints are enforced via the augmented Lagrangian method during the model's training. We experimentally demonstrate the benefits of the proposed approach on a variety of dynamical systems, including a benchmark suite of robotics environments featuring large state spaces, nonlinear dynamics, external forces, contact forces, and control inputs. By exploiting a priori system knowledge during training, the proposed approach learns to predict the system dynamics two orders of magnitude more accurately than a baseline approach that does not include prior knowledge, given the same training dataset.
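
The decomposition into known and unknown functions can be sketched as follows, with a single constraint penalty standing in for one term of the augmented Lagrangian (the multiplier-update schedule is omitted). The damping prior, network sizes, and equilibrium constraint are hypothetical, chosen only to make the pattern concrete.

```python
import torch
import torch.nn as nn

class GreyBoxDynamics(nn.Module):
    """Sketch: vector field as known physics plus a learned residual."""
    def __init__(self, dim):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def f_known(self, x):
        # Hypothetical prior: linear damping, standing in for any
        # physics-derived term (conservation law, robot Jacobian, ...).
        return -0.1 * x

    def forward(self, x):
        return self.f_known(x) + self.residual(x)

def constrained_loss(model, x, dxdt, lam=1.0):
    fit = ((model(x) - dxdt) ** 2).mean()
    # Example side-information constraint (an assumption for
    # illustration): the learned residual vanishes at equilibrium.
    c = model.residual(torch.zeros_like(x[:1])).pow(2).sum()
    return fit + lam * c
```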
In this article, we present a method to learn, in a non-invasive manner, the interaction topology of a network of agents undergoing linear consensus updates. Our approach is based on multivariate Wiener filtering, which is known to recover spurious edges in addition to the true edges of the topology. The main contribution of this work is to show that, in the case of undirected consensus networks, all spurious links obtained using Wiener filtering can be identified using the frequency response of the Wiener filters. Thus, the exact interaction topology of the agents is unveiled. The method requires time-series measurements of the agents' states and does not require any knowledge of the link weights. To the best of our knowledge, this is the first approach that provably reconstructs the structure of undirected consensus networks with correlated noise. We illustrate the effectiveness of the method through numerical simulations as well as experiments on a five-node network of Raspberry Pis.
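
A hedged sketch of the pruning step is shown below: it computes the frequency response of each candidate Wiener-filter link and keeps an edge only if a phase test passes. The paper's precise phase criterion for undirected consensus networks is not reproduced here; the low-frequency phase check and the tolerance `phase_tol` are illustrative stand-ins.

```python
import numpy as np

def prune_by_phase(W_taps, phase_tol=np.pi / 2):
    """Sketch: flag spurious Wiener-filter links via frequency response.

    W_taps: dict mapping candidate edge (i, j) -> 1-D array of filter
    taps from node j toward node i (e.g., from a least-squares fit).
    """
    kept = []
    for (i, j), taps in W_taps.items():
        H = np.fft.rfft(taps, n=256)   # frequency response of the taps
        phase = np.angle(H[1])         # phase at the lowest nonzero bin
        if abs(phase) < phase_tol:     # illustrative criterion only
            kept.append((i, j))
    return kept
```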
Recent advances show that neural networks embedded with physics-informed priors significantly outperform vanilla neural networks in learning and predicting the long-term dynamics of complex physical systems from noisy data. Despite this success, there has been only limited study of how to optimally combine physics priors to improve predictive performance. To tackle this problem, we unpack and generalize recent innovations into individual inductive-bias segments. As such, we are able to systematically investigate all possible combinations of inductive biases, of which existing methods are a natural subset. Using this framework, we introduce Variational Integrator Graph Networks, a novel method that unifies the strengths of existing approaches by combining an energy constraint, high-order symplectic variational integrators, and graph neural networks. We demonstrate, across an extensive ablation, that the proposed unifying framework outperforms existing methods in data-efficient learning and predictive accuracy, across both single- and many-body problems studied in the recent literature. We empirically show that the improvements arise because high-order variational integrators, combined with a potential-energy constraint, induce coupled learning of generalized position and momentum updates, which can be formalized via the partitioned Runge-Kutta method.
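
The symplectic-integration ingredient can be illustrated with a single Störmer-Verlet (leapfrog) step on a learned potential; the higher-order partitioned Runge-Kutta schemes the abstract mentions follow the same kick-drift-kick pattern with more internal stages. The unit-mass assumption and step size here are illustrative.

```python
import torch

def leapfrog_step(q, p, potential, dt=0.01):
    """Sketch: one symplectic update for a differentiable potential V(q).

    Assumes unit mass, so dq/dt = p. `potential` can be any
    differentiable callable, e.g. a learned potential-energy network.
    """
    q = q.detach().requires_grad_(True)
    grad_V = torch.autograd.grad(potential(q).sum(), q)[0]
    p_half = p - 0.5 * dt * grad_V                             # half kick
    q_next = (q + dt * p_half).detach().requires_grad_(True)   # drift
    grad_V_next = torch.autograd.grad(potential(q_next).sum(), q_next)[0]
    p_next = p_half - 0.5 * dt * grad_V_next                   # half kick
    return q_next.detach(), p_next.detach()
```

For example, `potential = lambda q: 0.5 * (q ** 2).sum()` yields a harmonic oscillator whose energy this update approximately conserves over long rollouts, which is the property such models exploit.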
We introduce conditional PINNs (physics-informed neural networks) for estimating the solutions of classes of eigenvalue problems. The concept of PINNs is expanded to learn not only the solution of one particular differential equation but the solutions of an entire class of problems. We demonstrate this idea by estimating the coercive field of permanent magnets, which depends on the width and strength of local defects. When the neural network incorporates the physics of magnetization reversal, training can be achieved in an unsupervised way; there is no need to generate labeled training data. The presented test cases have been rigorously studied in the past, which allows a detailed and straightforward comparison with analytical solutions. We show that a single deep neural network can learn the solutions of partial differential equations for an entire class of problems.
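
The conditioning idea can be sketched as a network that receives the problem parameters alongside the coordinate, trained only on a differential-equation residual at collocation points. The toy residual below is a generic second-order equation, not the magnetization-reversal model of the paper; "defect width" and "strength" appear purely as extra inputs.

```python
import torch
import torch.nn as nn

class ConditionalPINN(nn.Module):
    """Sketch: one network for a whole class of problems. The input
    concatenates the coordinate x with problem parameters, so a single
    set of weights covers every parameter combination."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 1))

    def forward(self, x, width, strength):
        return self.net(torch.cat([x, width, strength], dim=-1))

def residual_loss(model, x, width, strength):
    """Unsupervised physics loss on collocation points: penalize the
    residual of a placeholder equation u'' = s * sin(u)."""
    x = x.detach().requires_grad_(True)
    u = model(x, width, strength)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return ((d2u - strength * torch.sin(u)) ** 2).mean()
```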