
Fourier-domain Variational Formulation and Its Well-posedness for Supervised Learning

Posted by Zhiwei Wang
Publication date: 2020
Research field: Information Engineering
Paper language: English





A supervised learning problem is to find a function in a hypothesis function space given its values on isolated data points. Inspired by the frequency principle in neural networks, we propose a Fourier-domain variational formulation for the supervised learning problem. This formulation circumvents the difficulty of imposing the constraints of given values on isolated data points in continuum modelling. Under a necessary and sufficient condition within our unified framework, we establish the well-posedness of the Fourier-domain variational problem by identifying a critical exponent depending on the data dimension. In practice, a neural network can be a convenient way to implement our formulation, one that automatically satisfies the well-posedness condition.
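To make the idea concrete, here is a minimal 1-D discretization in the same spirit: fit the given values at isolated points while penalizing a frequency-weighted Fourier norm. This is an illustrative sketch, not the paper's exact formulation; the weight exponent `s`, the regularization strength `lam`, the grid, and the sample data are all hypothetical choices.

```python
import numpy as np

# Toy discrete analogue of a Fourier-domain variational problem:
#   minimize  sum_i (f(x_i) - y_i)^2 + lam * sum_k w(k) |fhat(k)|^2
# with frequency weight w(k) = (1 + |k|^2)^s.

n = 64                                   # grid points on [0, 1)
x = np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)         # integer frequencies
s, lam = 1.0, 1e-3
w = (1.0 + np.abs(k) ** 2) ** s          # frequency weight

# Penalty operator A = F^H diag(w) F, assembled as a dense matrix
# (w is even in k, so A is real symmetric; /n normalizes Parseval).
F = np.fft.fft(np.eye(n), axis=0)
A = (F.conj().T @ np.diag(w) @ F).real / n

# Isolated data points: a few samples of sin(2*pi*x).
idx = np.array([5, 20, 40, 55])
y = np.sin(2 * np.pi * x[idx])
S = np.zeros((len(idx), n))
S[np.arange(len(idx)), idx] = 1.0        # sampling operator

# Normal equations of the regularized least-squares problem.
f = np.linalg.solve(S.T @ S + lam * A, S.T @ y)
fit_err = np.max(np.abs(f[idx] - y))
```

Because the penalty is imposed on the whole frequency content of `f`, no pointwise constraint needs to be enforced in the continuum model; the data term alone pins the function down at the isolated points.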


Read also

116 - Suchuan Dong, Zongwei Li 2020
We present a neural network-based method for solving linear and nonlinear partial differential equations, by combining the ideas of extreme learning machines (ELM), domain decomposition and local neural networks. The field solution on each sub-domain is represented by a local feed-forward neural network, and $C^k$ continuity is imposed on the sub-domain boundaries. Each local neural network consists of a small number of hidden layers, while its last hidden layer can be wide. The weight/bias coefficients in all hidden layers of the local neural networks are pre-set to random values and are fixed, and only the weight coefficients in the output layers are training parameters. The overall neural network is trained by a linear or nonlinear least squares computation, not by the back-propagation type algorithms. We introduce a block time-marching scheme together with the presented method for long-time dynamic simulations. The current method exhibits a clear sense of convergence with respect to the degrees of freedom in the neural network. Its numerical errors typically decrease exponentially or nearly exponentially as the number of degrees of freedom increases. Extensive numerical experiments have been performed to demonstrate the computational performance of the presented method. We compare the current method with the deep Galerkin method (DGM) and the physics-informed neural network (PINN) in terms of the accuracy and computational cost. The current method exhibits a clear superiority, with its numerical errors and network training time considerably smaller (typically by orders of magnitude) than those of DGM and PINN. We also compare the current method with the classical finite element method (FEM). The computational performance of the current method is on par with, and oftentimes exceeds, the FEM performance.
Supervised models in NLP rely on large collections of text that closely resemble the intended testing setting. Unfortunately, matching text is often not available in sufficient quantity and, moreover, within any domain of text the data is often highly heterogeneous. In this paper we propose a method to distill the important domain signal as part of a multi-domain learning system, using a latent variable model in which parts of a neural model are stochastically gated based on the inferred domain. We compare the use of discrete versus continuous latent variables, operating in a domain-supervised or a domain semi-supervised setting, where the domain is known only for a subset of training inputs. We show that our model leads to substantial performance improvements over competitive benchmark domain adaptation methods, including methods using adversarial learning.
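The gating mechanism described above can be sketched in miniature: a softmax "domain posterior" mixes per-domain expert parameters, so parts of the model are (softly) gated by the inferred domain. The shapes, the two-expert setup, and the random data below are all hypothetical; the paper infers the domain with a latent variable model and also studies stochastic discrete gating, which this deterministic soft version only approximates.

```python
import numpy as np

# Soft domain gating: each input's inferred domain distribution
# mixes per-domain weight matrices into an input-specific layer.

rng = np.random.default_rng(1)
d_in, d_out, n_domains = 8, 4, 2

x = rng.normal(size=(5, d_in))                  # batch of inputs
V = rng.normal(size=(d_in, n_domains))          # toy domain classifier
q = np.exp(x @ V)
q /= q.sum(axis=1, keepdims=True)               # domain posterior q(z|x)

experts = rng.normal(size=(n_domains, d_in, d_out))  # per-domain weights
# Each example uses its own convex mixture of the expert matrices.
W_mix = np.einsum('bz,zio->bio', q, experts)
y = np.einsum('bi,bio->bo', x, W_mix)
```

In the semi-supervised setting of the paper, `q` would be trained against the known domain labels where available and inferred elsewhere.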
74 - Jongho Park 2020
In this paper, we propose a novel overlapping domain decomposition method that can be applied to various problems in variational imaging such as total variation minimization. Most of recent domain decomposition methods for total variation minimization adopt the Fenchel--Rockafellar duality, whereas the proposed method is based on the primal formulation. Thus, the proposed method can be applied not only to total variation minimization but also to those with complex dual problems such as higher order models. In the proposed method, an equivalent formulation of the model problem with parallel structure is constructed using a custom overlapping domain decomposition scheme with the notion of essential domains. As a solver for the constructed formulation, we propose a decoupled augmented Lagrangian method for untying the coupling of adjacent subdomains. Convergence analysis of the decoupled augmented Lagrangian method is provided. We present implementation details and numerical examples for various model problems including total variation minimizations and higher order models.
We analyse a PDE system modelling poromechanical processes (formulated in mixed form using the solid deformation, fluid pressure, and total pressure) interacting with diffusing and reacting solutes in the medium. We investigate the well-posedness of the nonlinear set of equations using fixed-point theory, Fredholm's alternative, a priori estimates, and compactness arguments. We also propose a mixed finite element method and rigorously demonstrate the stability of the scheme. Error estimates are derived in suitable norms, and numerical experiments are conducted to illustrate the mechano-chemical coupling and to verify the theoretical rates of convergence.
Various phenomena in biology, physics, and engineering are modeled by differential equations. These differential equations, including partial differential equations and ordinary differential equations, can be converted and represented as integral equations. In particular, Volterra-Fredholm-Hammerstein integral equations are the main type of these integral equations, and researchers are interested in investigating and solving these equations. In this paper, we propose a Legendre Deep Neural Network (LDNN) for solving nonlinear Volterra-Fredholm-Hammerstein integral equations (VFHIEs). LDNN utilizes Legendre orthogonal polynomials as activation functions of the deep structure. We present how LDNN can be used to solve nonlinear VFHIEs. We show that using the Gaussian quadrature collocation method in combination with LDNN results in a novel numerical solution for nonlinear VFHIEs. Several examples are given to verify the performance and accuracy of LDNN.
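The quadrature-collocation step above can be illustrated on a linear Fredholm equation, where replacing the integral by Gauss-Legendre quadrature turns the equation into a small linear system (the LDNN itself replaces the unknown with a network; a plain linear solve keeps this sketch self-contained). The kernel, right-hand side, and exact solution are hypothetical: u(x) = 2x/3 + integral_0^1 x*t*u(t) dt has exact solution u(x) = x.

```python
import numpy as np

# Nystrom/collocation sketch for a linear Fredholm integral equation
# using Gauss-Legendre quadrature: discretize the integral at the
# quadrature nodes and solve (I - K W) u = g.

n = 8
nodes, weights = np.polynomial.legendre.leggauss(n)   # on [-1, 1]
t = 0.5 * (nodes + 1.0)                               # map nodes to [0, 1]
w = 0.5 * weights                                     # rescaled weights

K = np.outer(t, t)                        # kernel K(x, t) = x * t at nodes
g = 2.0 * t / 3.0                         # right-hand side g(x) = 2x/3
u = np.linalg.solve(np.eye(n) - K * w, g) # collocation linear system
err = np.max(np.abs(u - t))               # compare to exact u(x) = x
```

Because the integrand here is a low-degree polynomial, an 8-point Gauss-Legendre rule integrates it exactly and the collocation solution matches the exact solution to machine precision; for the nonlinear VFHIEs of the paper the linear solve would be replaced by training the network.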
