Machine learning models have been used successfully in many scientific and engineering fields. However, it remains difficult for a single model to exploit domain knowledge and experimental observation data simultaneously. Knowledge-based symbolic AI, represented by expert systems, is limited by the expressive ability of the model, while data-driven connectionist AI, represented by neural networks, is prone to producing predictions that violate physical mechanisms. In order to fully integrate domain knowledge with observations, and to make full use of prior information and the strong fitting ability of neural networks, this study proposes theory-guided hard constraint projection (HCP). The proposed method converts physical constraints, such as governing equations, into a form that is easy to handle through discretization, and then implements hard constraint optimization through projection. Based on rigorous mathematical proofs, theory-guided HCP ensures that model predictions strictly conform to physical mechanisms within the constraint patch. The performance of theory-guided HCP is verified by experiments on a heterogeneous subsurface flow problem. Owing to the hard constraints, theory-guided HCP requires less data than fully connected neural networks and soft constraint models, such as theory-guided neural networks and physics-informed neural networks, while achieving higher prediction accuracy and stronger robustness to noisy observations.
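To make the projection step concrete, the following is a minimal sketch, not the paper's implementation: it assumes the discretized governing equations on a constraint patch can be written as an affine system A u = b, and projects raw network outputs onto the set of states that satisfy that system exactly. The function name `project_onto_constraints`, the toy diffusion stencil, and all shapes are illustrative assumptions.

```python
import numpy as np

def project_onto_constraints(u_pred, A, b):
    """Project predictions u_pred onto the affine set {u : A u = b}.

    A      : (m, n) matrix of discretized governing-equation constraints
    b      : (m,)   right-hand side of the discretized constraints
    u_pred : (n,)   raw network predictions on the constraint patch

    Returns the Euclidean-closest point to u_pred that satisfies the
    constraints exactly: u* = u - A^T (A A^T)^{-1} (A u - b).
    """
    residual = A @ u_pred - b
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return u_pred - correction

# Illustrative usage with a toy 1D steady-state diffusion constraint
# (second-difference stencil u_{i-1} - 2 u_i + u_{i+1} = 0 at interior nodes).
n = 8
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i:i + 3] = [1.0, -2.0, 1.0]
b = np.zeros(n - 2)

u_raw = np.random.rand(n)               # stand-in for a neural network output
u_hard = project_onto_constraints(u_raw, A, b)
print(np.max(np.abs(A @ u_hard - b)))   # ~0: the discretized constraint holds exactly
```

Unlike soft-constraint approaches that only penalize constraint residuals in the loss, a projection of this kind enforces the discretized physics exactly on the patch, which is the property the hard-constraint formulation relies on.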