
Solving Poisson's Equation using Deep Learning in Particle Simulation of PN Junction

Posted by Zhongyang Zhang
Publication date: 2018
Paper language: English
Author: Zhongyang Zhang





Simulating the dynamic characteristics of a PN junction at the microscopic level requires solving Poisson's equation at every time step. Solving it at every time step is necessary but time-consuming with the traditional finite-difference method (FDM). Deep learning is a powerful technique for fitting complex functions. In this work, deep learning is used to accelerate the solution of Poisson's equation in a PN junction. The boundary condition is emphasized in the loss function to ensure a better fit. The I-V curve of the PN junction obtained with the deep-learning solver presented in this work matches the I-V curve obtained with the finite-difference method, with the advantage of being 10 times faster at every time step.
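The abstract does not give implementation details, but the boundary-weighted loss it describes is in the spirit of physics-informed neural networks. The sketch below is a minimal, assumed illustration of that idea for a 1-D Poisson problem in PyTorch: the network size, the charge-density profile `rho`, the boundary potentials, and the weight `lambda_bc` are illustrative choices, not the authors' setup.

```python
# Minimal sketch (not the authors' code): a small network phi(x) trained so that
# phi''(x) = -rho(x)/eps inside a 1-D domain, with the boundary-condition term
# given extra weight in the loss, as the abstract emphasizes.
import torch

torch.manual_seed(0)
eps = 1.0                      # permittivity (illustrative units)
lambda_bc = 10.0               # assumed extra weight on the boundary-condition loss

def rho(x):                    # assumed charge-density profile of a 1-D PN junction
    return torch.where(x < 0.5, torch.ones_like(x), -torch.ones_like(x))

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_bc = torch.tensor([[0.0], [1.0]])          # domain boundaries
phi_bc = torch.tensor([[0.0], [0.3]])        # assumed boundary potentials

for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)            # interior collocation points
    phi = net(x)
    dphi = torch.autograd.grad(phi, x, torch.ones_like(phi), create_graph=True)[0]
    d2phi = torch.autograd.grad(dphi, x, torch.ones_like(dphi), create_graph=True)[0]
    loss_pde = ((d2phi + rho(x) / eps) ** 2).mean()        # residual of Poisson's equation
    loss_bc = ((net(x_bc) - phi_bc) ** 2).mean()           # Dirichlet boundary mismatch
    loss = loss_pde + lambda_bc * loss_bc                  # boundary term up-weighted
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, evaluating such a network is a single forward pass, which is where a per-time-step speed advantage over an iterative finite-difference solve would come from.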




Read also

Although deep learning has been successfully applied to a variety of science and engineering problems owing to its strong high-dimensional nonlinear mapping capability, it has seen limited use in scientific knowledge discovery. In this work, we propose a deep-learning-based framework to discover the macroscopic governing equation of a viscous gravity current from high-resolution microscopic simulation data, without prior knowledge of the underlying terms. For two typical scenarios with different viscosity ratios, the deep-learned equations capture exactly the same dominant terms as the theoretically derived equations describing long-term asymptotic behavior, which validates the proposed framework. Previously unknown macroscopic equations are then obtained for describing short-term behavior, and hidden mechanisms are eventually discovered through deep-learned explainable compensation terms and their corresponding coefficients. Consequently, the presented deep-learning framework shows considerable potential for discovering unrevealed intrinsic laws in scientific semantic space from raw experimental or simulation results in data space.
The direct simulation of the dynamics of second sound in graphitic materials remains a challenging task owing to the lack of a methodology for solving the phonon Boltzmann equation in such a stiff hydrodynamic regime. In this work, we aim to tackle this challenge by developing a multiscale numerical scheme for the transient phonon Boltzmann equation under Callaway's dual relaxation model, which captures the collective phonon kinetics well. Compared to traditional numerical methods, the present multiscale scheme is efficient, accurate, and stable in all transport regimes because it avoids time and spatial steps smaller than the relaxation time and mean free path of phonons. The formation, propagation, and composition of ballistic pulses and second sound in a graphene ribbon are investigated via the multiscale scheme in two classical paradigms for experimental detection. The second sound is found to be contributed mainly by ZA phonon modes, whereas the ballistic pulses are contributed mainly by LA and TA phonon modes. The influence of temperature, isotope abundance, and ribbon size on the propagation of second sound is also explored. The speed of second sound in the observation window is found to be at most 20 percent smaller than the theoretical value in the hydrodynamic limit, owing to finite Umklapp, isotope, and edge resistive scattering. The present study contributes not only to the solution methodology of the phonon Boltzmann equation but also to the physics of transient hydrodynamic phonon transport, providing guidance for future experimental detection.
Suraj Pawar, Romit Maulik (2020)
Several applications in the scientific simulation of physical systems can be formulated as control/optimization problems. The computational models for such systems generally contain hyperparameters that control solution fidelity and computational expense. Tuning these parameters is non-trivial, and the usual approach is to manually 'spot-check' for good combinations, because searching for the optimal hyperparameter configuration becomes impractical when the parameter space is large and the parameters may vary dynamically. To address this issue, we present a framework based on deep reinforcement learning (RL) to train a deep neural network agent that controls a model solve by varying parameters dynamically. First, we validate our RL framework on the problem of controlling chaos in chaotic systems by dynamically changing the parameters of the system. Subsequently, we illustrate the capabilities of our framework for accelerating the convergence of a steady-state CFD solver by automatically adjusting the relaxation factors of the discretized Navier-Stokes equations at run time. The results indicate that run-time control of the relaxation factors by the learned policy leads to a significant reduction in the number of iterations to convergence compared with random selection of the relaxation factors. Our results point to potential benefits of learning adaptive hyperparameter strategies across different geometries and boundary conditions, with implications for reduced computational campaign expense. Data and codes are available at https://github.com/Romit-Maulik/PAR-RL.
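As a much smaller illustration of the idea of adapting a relaxation factor from observed solver behavior (not the paper's deep RL framework, whose code is at the linked repository), the toy sketch below lets an epsilon-greedy bandit pick the SOR relaxation factor for a 2-D Laplace solve, rewarded by how much the residual drops per block of sweeps. The grid size, candidate factors, and the bandit itself are assumptions for illustration only.

```python
# Toy illustration (not the paper's framework): an epsilon-greedy bandit picks the
# SOR relaxation factor for a 2-D Laplace solve, rewarded by the residual reduction.
import numpy as np

rng = np.random.default_rng(0)
n = 32
u = np.zeros((n, n))
u[0, :] = 1.0                                   # fixed (Dirichlet) boundary value on one edge

omegas = np.array([1.0, 1.3, 1.6, 1.9])         # candidate relaxation factors
value = np.zeros(len(omegas))                   # running reward estimate per factor
count = np.zeros(len(omegas))

def residual(u):
    r = u[1:-1, 1:-1] - 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])
    return np.abs(r).max()

for episode in range(100):
    # epsilon-greedy choice of the relaxation factor
    a = rng.integers(len(omegas)) if rng.random() < 0.1 else int(np.argmax(value))
    w = omegas[a]
    r_before = residual(u)
    for _ in range(10):                         # a block of SOR sweeps with the chosen factor
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1])
                u[i, j] = (1 - w) * u[i, j] + w * gs
    r_after = residual(u)
    reward = np.log(r_before / max(r_after, 1e-300))   # reward: log residual reduction
    count[a] += 1
    value[a] += (reward - value[a]) / count[a]         # incremental mean of the reward

print(dict(zip(omegas, value)))                 # higher values indicate better factors
```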
A. A. Skorupski (2006)
Three pseudospectral algorithms (Euler, leapfrog, and trapezoidal) are described for solving numerically the time-dependent nonlinear Schrödinger equation in one, two, or three dimensions. Numerical stability regions in the parameter space are determined for the cubic nonlinearity, and the analysis can easily be extended to other nonlinearities. For the first two algorithms, the maximal time steps for stability are calculated in terms of the maximal Fourier harmonics admitted by the spectral method used to compute the space derivatives. The formulas are directly applicable when the discrete Fourier transform is used, i.e., for periodic boundary conditions. These formulas were used in the relevant numerical programs developed in our group.
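For concreteness, a minimal sketch of a pseudospectral leapfrog step for the 1-D focusing cubic nonlinear Schrödinger equation is shown below; the grid, time step, and soliton initial condition are illustrative assumptions and are not taken from the paper, which derives the stability limits analytically.

```python
# Minimal sketch: leapfrog pseudospectral stepping for i u_t = -1/2 u_xx - |u|^2 u,
# with space derivatives computed by FFT (periodic boundary conditions).
import numpy as np

N, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # spectral wavenumbers (periodic domain)

def rhs(u):
    # u_t = i * (0.5 * u_xx + |u|^2 u), with u_xx evaluated spectrally via FFT
    u_xx = np.fft.ifft(-(k ** 2) * np.fft.fft(u))
    return 1j * (0.5 * u_xx + np.abs(u) ** 2 * u)

dt = 1e-3                                       # well below the leapfrog limit dt < 2 / k_max^2
u_prev = (1.0 / np.cosh(x)).astype(complex)     # sech soliton initial condition
u_curr = u_prev + dt * rhs(u_prev)              # one Euler step to bootstrap the leapfrog

for _ in range(2000):
    u_next = u_prev + 2 * dt * rhs(u_curr)      # leapfrog: u^{n+1} = u^{n-1} + 2 dt f(u^n)
    u_prev, u_curr = u_curr, u_next

# the soliton should be preserved; check the (conserved) L2 norm, ~2.0 for sech(x)
print((np.abs(u_curr) ** 2).sum() * (L / N))
```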
Accurate numerical solutions of the Schrödinger equation are of utmost importance in quantum chemistry. However, the computational cost of current high-accuracy methods scales poorly with the number of interacting particles. Combining Monte Carlo methods with unsupervised training of neural networks has recently been proposed as a promising approach to overcome the curse of dimensionality in this setting and to obtain accurate wavefunctions for individual molecules at a moderately scaling computational cost. These methods currently do not exploit the regularity that wavefunctions exhibit with respect to the molecular geometry. Inspired by recent successful applications of deep transfer learning in machine translation and computer vision, we attempt to leverage this regularity by introducing a weight-sharing constraint when optimizing neural-network-based models for different molecular geometries. That is, we restrict the optimization so that up to 95 percent of the weights in a neural network model are equal across varying molecular geometries. We find that this technique can accelerate optimization by an order of magnitude when considering sets of nuclear geometries of the same molecule, and that it opens a promising route towards pre-trained neural network wavefunctions that yield high accuracy even across different molecules.
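The abstract does not specify the architecture, so the sketch below only illustrates the weight-sharing idea in PyTorch: a large trunk shared across geometries with a small per-geometry head, so that almost all weights are identical across geometries. The layer sizes and the trunk/head split are assumptions, not the authors' wavefunction ansatz.

```python
# Rough sketch of weight sharing across molecular geometries (not the authors' model):
# one large trunk is shared by every geometry, each geometry only trains a tiny head,
# so the vast majority of weights are identical across geometries.
import torch

trunk = torch.nn.Sequential(            # shared across all geometries
    torch.nn.Linear(12, 256), torch.nn.SiLU(),
    torch.nn.Linear(256, 256), torch.nn.SiLU(),
)
n_geometries = 8
heads = torch.nn.ModuleList(            # one small head per geometry
    [torch.nn.Linear(256, 1) for _ in range(n_geometries)]
)

def log_psi(geom_idx, features):
    # features: per-sample descriptors for geometry `geom_idx` (hypothetical input)
    return heads[geom_idx](trunk(features))

shared = sum(p.numel() for p in trunk.parameters())
per_geom = sum(p.numel() for p in heads[0].parameters())
print(shared / (shared + per_geom))     # fraction of weights shared, ~0.996 here
```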
