
A classical density functional from machine learning and a convolutional neural network

Added by Shang-Chun Lin
Publication date: 2018
Field: Physics
Language: English





We use machine learning methods to approximate a classical density functional. As a study case, we choose the model problem of a Lennard-Jones fluid in one dimension, where no exact solution is available and training data sets must be obtained from simulations. After separating the excess free energy functional into a repulsive and an attractive part, machine learning finds a functional in weighted density form for the attractive part. The density profile at a hard wall shows good agreement for thermodynamic conditions beyond those of the training set. The same holds for the equation of state if it is evaluated near the training temperature. We discuss the applicability to problems in higher dimensions.
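To make the weighted density form concrete, here is a minimal sketch of how such a functional could be evaluated on a grid. The function name, the Gaussian weight, and the quadratic free-energy density are illustrative assumptions standing in for the paper's trained model:

```python
import numpy as np

def weighted_density_functional(rho, w, f, dx):
    """Evaluate F_att[rho] = integral of f(nbar(x)) dx, where
    nbar = w * rho is a weighted (convolved) density.

    rho : density profile on a uniform grid
    w   : weight function sampled on the same grid
    f   : free-energy density (in the paper, learned from simulation data)
    dx  : grid spacing
    """
    nbar = np.convolve(rho, w, mode="same") * dx  # weighted density nbar(x)
    return np.sum(f(nbar)) * dx                   # quadrature of f(nbar)

# Toy usage: a Gaussian weight and a quadratic stand-in for the learned f
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
rho = np.exp(-x**2)                          # some density profile
w = np.exp(-x**2 / 0.5); w /= w.sum() * dx   # normalized weight function
f = lambda n: -0.5 * n**2                    # placeholder free-energy density
print(weighted_density_functional(rho, w, f, dx))
```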

Related research

We explore the feasibility of using machine learning methods to obtain an analytic form of the classical free energy functional for two model fluids, hard rods and Lennard-Jones, in one dimension. The Equation Learning Network proposed in Ref. 1 is suitably modified to construct free energy densities which are functions of a set of weighted densities and which are built from a small number of basis functions with flexible combination rules. This setup considerably enlarges the functional space used in the machine learning optimization compared to previous work (Ref. 2), where the functional is limited to a simple polynomial form. As a result, we find a good approximation for the exact hard rod functional and its direct correlation function. For the Lennard-Jones fluid, we let the network learn (i) the full excess free energy functional and (ii) the excess free energy functional related to interparticle attractions. Both functionals show good agreement with simulated density profiles for thermodynamic parameters inside and outside the training region.
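For illustration, a minimal sketch of an equation-learning layer in PyTorch, in the spirit of the Equation Learning Network described above: a linear map feeding a small set of analytic basis functions plus product units, so the learned expression stays interpretable. The particular basis set (identity, sine, cosine, sigmoid, one product unit per group) and all names are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class EqlLayer(nn.Module):
    """One equation-learning layer: a linear map followed by analytic
    basis functions and pairwise products with flexible combinations."""
    def __init__(self, in_dim, units):
        super().__init__()
        # 4 unary basis inputs + 2 inputs for one product unit, per group
        self.lin = nn.Linear(in_dim, 6 * units)
        self.units = units

    def forward(self, x):
        z = self.lin(x)
        u = self.units
        id_, sin_, cos_, sig_ = (z[:, 0*u:1*u], z[:, 1*u:2*u],
                                 z[:, 2*u:3*u], z[:, 3*u:4*u])
        a, b = z[:, 4*u:5*u], z[:, 5*u:6*u]
        return torch.cat([id_, torch.sin(sin_), torch.cos(cos_),
                          torch.sigmoid(sig_), a * b], dim=1)

# Toy usage: map three weighted densities to a free-energy density
net = nn.Sequential(EqlLayer(3, 4), nn.Linear(5 * 4, 1))
print(net(torch.randn(8, 3)).shape)  # torch.Size([8, 1])
```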
We present a modification to our recently published SAFT-based classical density functional theory for water. We have recently developed and tested a functional for the averaged radial distribution function at contact of the hard-sphere fluid that is dramatically more accurate at interfaces than earlier approximations. We now incorporate this improved functional into the association term of our free energy functional for water, improving its description of hydrogen bonding. We examine the effect of this improvement by studying two hard solutes: a hard hydrophobic rod and a hard sphere. The improved functional leads to a moderate change in the density profile and a large decrease in the number of hydrogen bonds broken in the vicinity of the solutes.
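For orientation, a minimal sketch of where a contact-value functional enters a SAFT-like association term, assuming Wertheim's first-order perturbation theory (TPT1). The site-counting factors and all parameter names here are assumptions that depend on the bonding model, not the authors' specific functional:

```python
import numpy as np

def unbonded_fraction(rho, delta, m=2.0, iters=200):
    """Solve X = 1 / (1 + m * rho * X * delta) by fixed-point iteration.
    m counts complementary bonding sites per molecule; m = 2 is a common
    choice for a four-site water model (two donors, two acceptors)."""
    X = np.ones_like(rho)
    for _ in range(iters):
        X = 1.0 / (1.0 + m * rho * X * delta)
    return X

def association_free_energy(rho, g_contact, kappa, f_mayer, n_sites=4, kT=1.0):
    """TPT1 association free energy per volume,
    f/kT = n_sites * rho * (ln X - X/2 + 1/2).
    The hard-sphere contact value g_contact enters through the bond
    strength Delta = kappa * g_contact * f_mayer, which is exactly where
    an improved contact-value functional changes the result."""
    delta = kappa * g_contact * f_mayer
    X = unbonded_fraction(rho, delta)
    return kT * n_sites * rho * (np.log(X) - X / 2.0 + 0.5)
```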
Hang Lu, Xin Wei, Ning Lin (2018)
Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
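As a software illustration of the idea (not the Tetris hardware, and omitting the bit-packing across weights that kneading performs), a split-and-accumulate dot product on integer weights might look like this:

```python
def sac_dot(activations, weights):
    """Split-and-accumulate stand-in for a MAC: each weight is split into
    its essential (set) bit positions, and the activation is accumulated
    shifted by each position. Zero-value weights and zero bits inside
    non-zero weights contribute no work at all."""
    acc = 0
    for a, w in zip(activations, weights):
        if w == 0:                 # skip zero-value weights entirely
            continue
        sign = -1 if w < 0 else 1
        w = abs(w)
        p = 0
        while w:                   # iterate only over essential bits
            if w & 1:
                acc += sign * (a << p)   # shift-add replaces the multiply
            w >>= 1
            p += 1
    return acc

# Matches an ordinary dot product on integer (quantized) weights
acts, wts = [3, 5, 7], [0, 6, -2]
assert sac_dot(acts, wts) == sum(a * w for a, w in zip(acts, wts))
print(sac_dot(acts, wts))  # 16
```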
We propose a Molecular Hypergraph Convolutional Network (MolHGCN) that predicts the properties of a molecule using atom and functional group information as inputs. Molecules can contain many types of functional groups, which affect the properties of the molecules. For example, the toxicity of a molecule is associated with toxicophores such as nitroaromatic groups and thiourea. Conventional graph-based methods that consider only pair-wise interactions between nodes cannot flexibly express the complex relationships among multiple nodes in a graph, and applying multi-hop aggregation may result in oversmoothing and overfitting. Hence, we propose MolHGCN to capture the substructural differences between molecules using atom and functional group information. MolHGCN constructs a hypergraph representation of a molecule using functional group information extracted from the input SMILES strings, computes hidden representations via a two-stage message passing process (atom and functional group message passing), and predicts the properties of the molecule from the extracted hidden representations. We evaluate the performance of our model on the Tox21, ClinTox, SIDER, BBBP, BACE, ESOL, FreeSolv and Lipophilicity datasets. We show that our model outperforms other baseline methods on most of the datasets. In particular, we show that incorporating functional group information along with atom information results in better separability in the latent space, thus increasing the accuracy of molecular property prediction.
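As a sketch of the atom-to-functional-group-and-back message flow, here is a generic hypergraph convolution on an incidence matrix (in the style of Feng et al.); the actual MolHGCN update rule may differ:

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One generic hypergraph convolution:
    X' = ReLU(Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta).

    X : (n_atoms, d) node features
    H : (n_atoms, n_edges) incidence matrix; column j marks the atoms
        belonging to functional group (hyperedge) j
    """
    Dv = np.clip(H.sum(axis=1), 1, None)   # node degrees
    De = np.clip(H.sum(axis=0), 1, None)   # hyperedge sizes
    Xn = X / np.sqrt(Dv)[:, None]          # Dv^{-1/2} X
    E = (H.T @ Xn) / De[:, None]           # aggregate atoms -> groups
    out = (H @ E) / np.sqrt(Dv)[:, None]   # scatter groups -> atoms
    return np.maximum(out @ Theta, 0.0)    # linear map + ReLU

# Toy molecule: 4 atoms, 2 functional groups
H = np.array([[1, 0], [1, 0], [0, 1], [1, 1]], float)
X = np.random.randn(4, 3)
print(hypergraph_conv(X, H, np.random.randn(3, 5)).shape)  # (4, 5)
```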
Classical machine learning (ML) pipelines often comprise multiple ML models, and the models within a pipeline are trained in isolation. Conversely, when training neural network models, the layers composing the neural models are trained simultaneously using backpropagation. We argue that the isolated training scheme of ML pipelines is sub-optimal, since it cannot jointly optimize multiple components. To this end, we propose a framework that translates a pre-trained ML pipeline into a neural network and fine-tunes the ML models within the pipeline jointly using backpropagation. Our experiments show that fine-tuning the translated pipelines is a promising technique that can increase the final accuracy.
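A minimal sketch of the translate-then-fine-tune idea, assuming a scikit-learn pipeline of a StandardScaler and a LogisticRegression mapped onto equivalent PyTorch linear layers. The mapping shown is an illustration under these assumptions, not the authors' framework:

```python
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# 1) Train a classical pipeline in isolation: scaler, then classifier.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

# 2) Translate each stage into an equivalent differentiable layer.
scale = nn.Linear(10, 10)   # (x - mean) / std as a diagonal affine map
head = nn.Linear(10, 1)     # logistic regression as a linear head
with torch.no_grad():
    scale.weight.copy_(torch.diag(torch.tensor(1 / scaler.scale_,
                                               dtype=torch.float32)))
    scale.bias.copy_(torch.tensor(-scaler.mean_ / scaler.scale_,
                                  dtype=torch.float32))
    head.weight.copy_(torch.tensor(clf.coef_, dtype=torch.float32))
    head.bias.copy_(torch.tensor(clf.intercept_, dtype=torch.float32))
net = nn.Sequential(scale, head)

# 3) Fine-tune all stages jointly with backpropagation.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
Xt = torch.tensor(X, dtype=torch.float32)
yt = torch.tensor(y, dtype=torch.float32).unsqueeze(1)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(net(Xt), yt)
    loss.backward()
    opt.step()
```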