
Efficient hinging hyperplanes neural network and its application in nonlinear system identification

Posted by Jun Xu
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





In this paper, the efficient hinging hyperplanes (EHH) neural network is proposed based on the model of hinging hyperplanes (HH). The EHH neural network is a distributed representation whose training involves solving several convex optimization problems and is therefore fast. It is proved that for every EHH neural network there is an equivalent adaptive hinging hyperplanes (AHH) tree, which was also proposed based on the HH model and has found good applications in system identification. The construction of the EHH neural network consists of two stages. First, the initial structure of the network is determined randomly and Lasso regression is used to select an appropriate network. Second, to alleviate the impact of this randomness, a stacking strategy is employed to form a more general network structure. Unlike many other neural networks, the EHH neural network is interpretable: its interpretation can be obtained directly from its ANOVA decomposition (or interaction matrix). This interpretability can then be used to guide input variable selection. The EHH neural network is applied to nonlinear system identification; simulation results show that the selected regression vector is reasonable and identification is fast, while the simulation accuracy remains satisfactory.
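As a rough illustration of the two-stage idea described in the abstract (a randomly determined candidate structure followed by Lasso-based selection), the sketch below generates random hinge-type features and lets Lasso keep a sparse subset. This is not the paper's exact EHH construction; the feature form, toy data, and all names are illustrative assumptions.

```python
# Simplified sketch: (1) random hinge-type candidate features max(0, w'x + b),
# (2) Lasso selects a sparse subset of them. Not the paper's EHH construction.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def random_hinge_features(X, n_features=100, rng=rng):
    """Random hinge basis: phi_j(x) = max(0, w_j . x + b_j)."""
    n, d = X.shape
    W = rng.standard_normal((d, n_features))
    b = rng.standard_normal(n_features)
    return np.maximum(0.0, X @ W + b)

# Toy nonlinear regression data standing in for an identification data set.
X = rng.uniform(-1, 1, size=(400, 2))
y = np.maximum(X[:, 0], 0.5 * X[:, 1]) + 0.05 * rng.standard_normal(400)

# Stage 1: random candidate structure + Lasso-based selection.
Phi = random_hinge_features(X)
lasso = Lasso(alpha=1e-3).fit(Phi, y)
kept = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(f"{kept.size} of {Phi.shape[1]} candidate hinge features kept")

# Stage 2 (stacking) would repeat stage 1 over several random draws and
# combine the resulting sub-models; omitted here for brevity.
```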




Read also

Xinglong Liang, Jun Xu (2020)
The ReLU (rectified linear unit) neural network has received significant attention since its emergence. In this paper, a univariate ReLU (UReLU) neural network is proposed both to model a nonlinear dynamic system and to reveal insights about the system. Specifically, the network consists of neurons with linear and UReLU activation functions, where the UReLU functions are defined as ReLU functions with respect to each input dimension. The UReLU neural network is a single-hidden-layer network, so its structure is relatively simple. The network is initialized with the decoupling method, which provides a good starting point and some insight into the nonlinear system. Compared with a standard ReLU neural network, the UReLU network has fewer parameters but still provides a good approximation of the nonlinear dynamic system. The performance of the UReLU neural network is shown on a hysteretic benchmark system, the Bouc-Wen system. Simulation results verify the effectiveness of the proposed method.
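A minimal sketch of the UReLU idea, assuming hidden units of the form max(0, x_i - b) applied per input dimension alongside linear terms, and fitted here by ordinary least squares rather than the decoupling-based initialization mentioned in the abstract; knot placement and all names are assumptions.

```python
# Single-hidden-layer model with linear terms and univariate-ReLU (UReLU) units,
# i.e. max(0, x_i - b) per input dimension; illustrative least-squares fit only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(500)

def urelu_features(X, knots_per_dim=8):
    """Stack a bias, linear terms, and per-dimension hinges max(0, x_i - b)."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    for i in range(X.shape[1]):
        knots = np.linspace(X[:, i].min(), X[:, i].max(), knots_per_dim)
        cols += [np.maximum(0.0, X[:, i] - b) for b in knots]
    return np.column_stack(cols)

Phi = urelu_features(X)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output-layer weights
print("training RMSE:", np.sqrt(np.mean((Phi @ theta - y) ** 2)))
```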
We study the problem of sparse nonlinear model recovery of high-dimensional compositional functions. Our study is motivated by emerging opportunities in neuroscience to recover fine-grained models of biological neural circuits using collected measurement data. Guided by available domain knowledge in neuroscience, we explore conditions under which one can recover the underlying biological circuit that generated the training data. Our results suggest insights of both theoretical and practical interest. Most notably, we find that a sign constraint on the weights is a necessary condition for system recovery, which we establish both theoretically with an identifiability guarantee and empirically on simulated biological circuits. We conclude with a case study on retinal ganglion cell circuits using data collected from mouse retina, showcasing the practical potential of this approach.
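As a toy illustration of fitting under a sign constraint on the weights (the necessary condition highlighted above), the snippet below compares unconstrained and non-negative least squares on synthetic data; it is not the paper's circuit model, and the data and dimensions are made up.

```python
# Toy sign-constrained fit: non-negative least squares recovers non-negative
# weights from noisy linear measurements. Illustration only, not the paper's model.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 10))
w_true = np.abs(rng.standard_normal(10))          # ground-truth non-negative weights
b = A @ w_true + 0.01 * rng.standard_normal(200)

w_unconstrained, *_ = np.linalg.lstsq(A, b, rcond=None)
w_signed, _ = nnls(A, b)                          # enforce w >= 0
print("max recovery error with sign constraint:", np.abs(w_signed - w_true).max())
print("max recovery error without constraint:  ", np.abs(w_unconstrained - w_true).max())
```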
Various phenomena in biology, physics, and engineering are modeled by differential equations. These differential equations, including partial differential equations and ordinary differential equations, can be converted to and represented as integral equations. In particular, Volterra-Fredholm-Hammerstein integral equations are a major class of such integral equations, and researchers are interested in investigating and solving them. In this paper, we propose the Legendre Deep Neural Network (LDNN) for solving nonlinear Volterra-Fredholm-Hammerstein integral equations (VFHIEs). LDNN uses Legendre orthogonal polynomials as the activation functions of its deep structure. We present how LDNN can be used to solve nonlinear VFHIEs and show that combining the Gaussian quadrature collocation method with LDNN yields a novel numerical solution for nonlinear VFHIEs. Several examples are given to verify the performance and accuracy of LDNN.
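A rough sketch, under stated assumptions, of the two ingredients named in the abstract: Legendre polynomials used as an activation-style expansion, and Gauss-Legendre quadrature nodes for approximating an integral term. This is not the LDNN architecture itself; the degree, the simple kernel, and all names are chosen only for illustration.

```python
# Legendre-polynomial "activation" expansion plus Gauss-Legendre quadrature
# for an integral term of Hammerstein type. Sketch only, not the LDNN model.
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def legendre_activation(z, degree=5):
    """Evaluate P_0..P_degree at z (z assumed roughly in [-1, 1])."""
    eye = np.eye(degree + 1)
    return np.column_stack([legval(z, eye[k]) for k in range(degree + 1)])

# Gauss-Legendre nodes/weights approximate an integral such as
# int_{-1}^{1} k(x, t) u(t) dt appearing in the integral equation.
nodes, weights = leggauss(16)

def integral_term(x, u_vals, kernel=lambda x, t: np.exp(-np.abs(x - t))):
    return np.sum(weights * kernel(x, nodes) * u_vals)

u_vals = np.sin(np.pi * nodes)                    # placeholder solution values at the nodes
print(legendre_activation(nodes[:3]).shape, integral_term(0.0, u_vals))
```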
Ningrui Zhao, Jinwei Lu (2021)
Distillation is a complex process of mass transfer and heat conduction, mainly characterized as follows: the mechanism is complex and changeable with uncertainty; the process is multivariable and strongly coupled; and the system is nonlinear, hysteretic, and time-varying. Neural networks can learn effectively from corresponding samples, do not rely on fixed mechanisms, can approximate arbitrary nonlinear mappings, and can be used to build input-output models of the system. The temperature system of the rectification tower has a complicated structure and high accuracy requirements, and neural networks are used to control the temperature of the system so that it satisfies the requirements of the production process. This article briefly describes the basic concepts of and research progress in neural networks and distillation tower temperature control, and systematically summarizes the application of neural networks in distillation tower control, aiming to provide a reference for the development of related industries.
Model instability and poor prediction of long-term behavior are common problems when modeling dynamical systems using nonlinear black-box techniques. Direct optimization of the long-term predictions, often called simulation error minimization, leads to optimization problems that are generally non-convex in the model parameters and suffer from multiple local minima. In this work we present methods which address these problems through convex optimization, based on Lagrangian relaxation, dissipation inequalities, contraction theory, and semidefinite programming. We demonstrate the proposed methods with a model order reduction task for electronic circuit design and the identification of a pneumatic actuator from experiment.
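To make the notion of simulation error concrete, the small example below compares one-step-ahead prediction error with free-run simulation error for a least-squares AR(1) estimate; the paper's convex machinery (Lagrangian relaxation, dissipation inequalities, contraction, SDPs) is not reproduced here, and the model and data are assumptions for illustration.

```python
# One-step prediction error vs. free-run simulation error for an estimated
# linear AR(1) model; illustrates the quantity that simulation error
# minimization targets, not the paper's convex identification methods.
import numpy as np

rng = np.random.default_rng(3)
a_true = 0.95
y = np.zeros(300)
for t in range(1, 300):                           # simulate a stable AR(1) process
    y[t] = a_true * y[t - 1] + 0.1 * rng.standard_normal()

# Least-squares (equation-error) estimate of the AR coefficient.
a_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

one_step = a_hat * y[:-1]                         # one-step-ahead predictions
sim = np.zeros_like(y)
sim[0] = y[0]
for t in range(1, len(y)):                        # free-run: feed back the model's own output
    sim[t] = a_hat * sim[t - 1]

print("one-step RMSE  :", np.sqrt(np.mean((one_step - y[1:]) ** 2)))
print("simulation RMSE:", np.sqrt(np.mean((sim - y) ** 2)))
```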
