
Gradient-Based Training and Pruning of Radial Basis Function Networks with an Application in Materials Physics

 Posted by Jyri Kimari
 Publication date: 2020
 Research language: English





Many applications, especially in physics and other sciences, call for easily interpretable and robust machine learning techniques. We propose a fully gradient-based technique for training radial basis function networks with an efficient and scalable open-source implementation. We derive novel closed-form optimization criteria for pruning the models for continuous as well as binary data, which arise in a challenging real-world materials physics problem. The pruned models are optimized to provide compact and interpretable…
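The fully gradient-based training described above can be pictured with a minimal NumPy sketch that jointly optimizes the weights, centers, and (log-)widths of the basis functions by plain gradient descent on a squared loss. The toy data, learning rate, and number of basis functions below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative stand-in for the materials-physics data).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

K = 8                                    # number of basis functions (assumed)
centers = rng.uniform(-3, 3, size=(K, 1))
log_gamma = np.zeros(K)                  # widths kept in log-space so they stay positive
weights = 0.1 * rng.normal(size=K)

def forward(X, centers, log_gamma, weights):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    phi = np.exp(-np.exp(log_gamma) * d2)                      # RBF activations
    return phi @ weights, phi, d2

N = len(X)
mse_init = np.mean((forward(X, centers, log_gamma, weights)[0] - y) ** 2)

# Gradient descent on L = (1/2N) * sum(err^2), w.r.t. ALL parameters at once.
lr = 0.1
for _ in range(3000):
    pred, phi, d2 = forward(X, centers, log_gamma, weights)
    err = pred - y
    gamma = np.exp(log_gamma)
    coef = (err[:, None] * weights[None, :] * phi) / N         # shared chain-rule factor
    g_w = phi.T @ err / N                                      # dL/dw
    g_lg = (coef * (-d2) * gamma[None, :]).sum(0)              # dL/d(log gamma)
    g_c = (coef[:, :, None] * 2 * gamma[None, :, None]
           * (X[:, None, :] - centers[None, :, :])).sum(0)     # dL/dc
    weights -= lr * g_w
    log_gamma -= lr * g_lg
    centers -= lr * g_c

mse_final = np.mean((forward(X, centers, log_gamma, weights)[0] - y) ** 2)
print(mse_init, mse_final)
```

Because every parameter is updated by the same backpropagated gradient, the same machinery extends to penalized losses such as the pruning criteria the paper derives.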



Read also

We investigate the benefits of feature selection, nonlinear modelling and online learning when forecasting in financial time series. We consider the sequential and continual learning sub-genres of online learning. The experiments we conduct show that there is a benefit to online transfer learning, in the form of radial basis function networks, beyond the sequential updating of recursive least-squares models. We show that the radial basis function networks, which make use of clustering algorithms to construct a kernel Gram matrix, are more beneficial than treating each training vector as separate basis functions, as occurs with kernel Ridge regression. We demonstrate quantitative procedures to determine the very structure of the radial basis function networks. Finally, we conduct experiments on the log returns of financial time series and show that the online learning models, particularly the radial basis function networks, are able to outperform a random walk baseline, whereas the offline learning models struggle to do so.
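The clustering-based construction this abstract contrasts with kernel ridge regression can be sketched as follows: k-means selects a compact set of centers, so the design matrix has one column per cluster rather than one per training vector. Everything here (data, cluster count, width, ridge penalty) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def kmeans(X, k, iters=20):
    # Plain Lloyd's algorithm: centers start at random training points.
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            pts = X[lbl == j]
            if len(pts):
                C[j] = pts.mean(0)
    return C

def design(X, C, gamma=1.0):
    # Tall-thin (N x k) RBF design matrix instead of the full N x N Gram matrix.
    d2 = ((X[:, None] - C[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

k = 10
C = kmeans(X, k)
Phi = design(X, C)
# Ridge solve in the k-dimensional feature space (penalty value is assumed).
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(k), Phi.T @ y)
mse = np.mean((Phi @ w - y) ** 2)
print(mse)
```

The normal-equations solve here costs O(k³) instead of the O(N³) of kernel ridge regression with every training vector as a basis function, which is the efficiency argument the abstract makes.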
We introduce and investigate matrix approximation by decomposition into a sum of radial basis function (RBF) components. An RBF component is a generalization of the outer product between a pair of vectors, where an RBF function replaces the scalar multiplication between individual vector elements. Even though the RBF functions are positive definite, the summation across components is not restricted to convex combinations and allows us to compute the decomposition for any real matrix that is not necessarily symmetric or positive definite. We formulate the problem of seeking such a decomposition as an optimization problem with a nonlinear and non-convex loss function. Several mode…
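A minimal sketch of such a decomposition, assuming Gaussian RBFs and a signed weight per component (so the sum is not restricted to convex combinations, matching the abstract), fitted by gradient descent on a squared loss. The component count and step size are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: an arbitrary real matrix, neither symmetric nor positive definite.
M = rng.normal(size=(15, 10))

R = 4                                    # number of RBF components (assumed)
U = rng.normal(size=(15, R))             # "row" vectors, one per component
V = rng.normal(size=(10, R))             # "column" vectors, one per component
W = rng.normal(size=R)                   # signed component weights

def reconstruct(U, V, W):
    # A_ij = sum_r W_r * exp(-(U_ir - V_jr)^2): an RBF replaces U_ir * V_jr.
    D = U[:, None, :] - V[None, :, :]    # (m, n, R) pairwise differences
    Phi = np.exp(-D ** 2)
    return (Phi * W).sum(-1), D, Phi

def loss(A):
    return 0.5 * ((A - M) ** 2).sum()

loss_init = loss(reconstruct(U, V, W)[0])

lr = 2e-3
for _ in range(1000):
    A, D, Phi = reconstruct(U, V, W)
    E = A - M
    g_uv = E[:, :, None] * W[None, None, :] * Phi * (-2 * D)  # chain rule through exp
    W -= lr * (E[:, :, None] * Phi).sum((0, 1))
    U -= lr * g_uv.sum(1)
    V -= lr * (-g_uv).sum(0)

loss_final = loss(reconstruct(U, V, W)[0])
print(loss_init, loss_final)
```

The loss is non-convex in U and V, as the abstract notes, so this plain gradient descent only finds a local optimum; the initialization matters.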
Changpeng Shao, 2019
Radial basis function (RBF) network is a three-layer neural network that is widely used in function approximation and data classification. Here we propose a quantum model of the RBF network. Similar to the classical case, we still use the radial basis functions as the activation functions. Quantum linear algebraic techniques and coherent states can be applied to implement these functions. Differently, we define the state of the weights as a tensor product of single-qubit states. This gives a simple approach to implementing the quantum RBF network in quantum circuits. Theoretically, we prove that the training is almost quadratically faster than the classical one. Numerically, we demonstrate that the quantum RBF network can solve binary classification problems as well as the classical RBF network, while the time used for training is much shorter.
Atomic-scale materials synthesis via layer deposition techniques presents a unique opportunity to control material structures and yield systems that display unique functional properties that cannot be stabilized using traditional bulk synthetic routes. However, the deposition process itself presents a large, multidimensional space that is traditionally optimized via intuition and trial and error, slowing down progress. Here, we present an application of deep reinforcement learning to a simulated materials synthesis problem, utilizing the Stein variational policy gradient (SVPG) approach to train multiple agents to optimize a stochastic policy to yield desired functional properties. Our contributions are (1) a fully open-source simulation environment for layered materials synthesis problems, utilizing a kinetic Monte Carlo engine and implemented in the OpenAI Gym framework, (2) an extension of the Stein variational policy gradient approach to deal with both image and tabular input, and (3) a parallel (synchronous) implementation of SVPG using Horovod, distributing multiple agents across GPUs and individual simulation environments on CPUs. We demonstrate the utility of this approach in optimizing for a material surface characteristic, surface roughness, and explore the strategies used by the agents as compared with a traditional actor-critic (A2C) baseline. Further, we find that SVPG stabilizes the training process over traditional A2C. Such trained agents can be useful to a variety of atomic-scale deposition techniques, including pulsed laser deposition and molecular beam epitaxy, if the implementation challenges are addressed.
We present the remote stochastic gradient (RSG) method, which computes the gradients at configurable remote observation points, in order to improve the convergence rate and suppress gradient noise at the same time for different curvatures. RSG is further combined with adaptive methods to construct ARSG for acceleration. The method is efficient in computation and memory, and is straightforward to implement. We analyze the convergence properties by modeling the training process as a dynamic system, which provides a guideline to select the configurable observation factor without grid search. ARSG yields an $O(1/\sqrt{T})$ convergence rate in non-convex settings, which can be further improved to $O(\log(T)/T)$ in strongly convex settings. Numerical experiments demonstrate that ARSG achieves both faster convergence and better generalization, compared with popular adaptive methods such as ADAM, NADAM, AMSGRAD, and RANGER for the tested problems. In particular, for training ResNet-50 on ImageNet, ARSG outperforms ADAM in convergence speed while surpassing SGD in generalization.
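One way to picture a "remote observation point" is a Nesterov-style lookahead along the momentum direction: the gradient is evaluated at a shifted point rather than at the current iterate. The sketch below is an illustrative interpretation of that idea, not the paper's exact update rule; `k` stands in for the configurable observation factor, and the step size and momentum values are assumptions.

```python
def rsg_step(theta, m, grad_fn, lr=0.02, beta=0.9, k=1.0):
    # Evaluate the gradient at a remote observation point shifted along the
    # momentum direction (illustrative form; k is the assumed observation factor).
    obs = theta - lr * k * m
    g = grad_fn(obs)              # gradient computed remotely, not at theta
    m = beta * m + g              # momentum accumulation
    return theta - lr * m, m

# Toy usage: minimize f(x) = x^2, whose gradient is 2x.
grad = lambda x: 2.0 * x
theta, m = 5.0, 0.0
for _ in range(300):
    theta, m = rsg_step(theta, m, grad)
print(theta)
```

Evaluating the gradient ahead of the iterate damps the oscillations that plain heavy-ball momentum exhibits on curved directions, which is the intuition behind suppressing noise "for different curvatures."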

