
Deep neural network Grad-Shafranov solver constrained with measured magnetic signals

Published by Semin Joung
Publication date: 2019
Paper language: English





A neural network that solves the Grad-Shafranov equation constrained with measured magnetic signals is developed to reconstruct magnetic equilibria in real time. The database created to optimize the neural network's free parameters contains off-line EFIT results, used as the network's target output, from $1,118$ KSTAR experimental discharges across two different campaigns. The input data to the network consist of magnetic signals measured by a Rogowski coil (plasma current), magnetic pick-up coils (normal and tangential components of the magnetic field) and flux loops (poloidal magnetic fluxes). The developed neural networks fully reconstruct not only the poloidal flux function $\psi\left( R, Z\right)$ but also the toroidal current density function $j_\phi\left( R, Z\right)$ with off-line EFIT quality. To keep the networks robust against a few missing input signals, an imputation scheme is used, eliminating the need for additional training sets covering the large number of possible combinations of missing inputs.
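The imputation idea can be illustrated with a minimal sketch. This is not the authors' scheme; it shows one common approach (mean imputation, an assumption here), where a missing magnetic channel is replaced by its training-set mean so the trained network never needs retraining for each combination of missing inputs:

```python
import numpy as np

def impute_missing(signals, train_means):
    """Replace missing (NaN) magnetic signals with their training-set
    means before feeding the network. Hypothetical sketch, not the
    scheme used in the paper."""
    signals = np.asarray(signals, dtype=float)
    return np.where(np.isnan(signals), train_means, signals)

# toy example: 4 magnetic channels, one pick-up coil reading lost
train_means = np.array([0.5, 0.1, -0.2, 0.3])
measured = np.array([0.45, np.nan, -0.25, 0.31])
filled = impute_missing(measured, train_means)
print(filled)  # the NaN channel is replaced by its training mean, 0.1
```

Valid channels pass through untouched, so the network input always has a fixed length regardless of which diagnostics drop out.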


Read also

J. W. Burby, N. Kallinikos (2020)
The structure of static MHD equilibria that admit continuous families of Euclidean symmetries is well understood. Such field configurations are governed by the classical Grad-Shafranov equation, which is a single elliptic PDE in two space dimensions. By revealing a hidden symmetry, we show that in fact all smooth solutions of the equilibrium equations with non-vanishing pressure gradients away from the magnetic axis satisfy a generalization of the Grad-Shafranov equation. In contrast to solutions of the classical Grad-Shafranov equation, solutions of the generalized equation are not automatically equilibria, but instead only satisfy force balance averaged over the one-parameter hidden symmetry. We then explain how the generalized Grad-Shafranov equation can be used to reformulate the problem of finding exact three-dimensional smooth solutions of the equilibrium equations as finding an optimal volume-preserving symmetry.
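For reference, the classical Grad-Shafranov equation mentioned above, written for the poloidal flux function $\psi(R, Z)$ with pressure profile $p(\psi)$ and poloidal current function $F(\psi)$, takes the standard form:

```latex
R \frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right)
  + \frac{\partial^{2} \psi}{\partial Z^{2}}
  = -\mu_0 R^{2} \frac{dp}{d\psi} - F \frac{dF}{d\psi}
```

The right-hand side encodes the two free flux functions, which is why solving it self-consistently with measured constraints (as in the main paper above) amounts to an equilibrium reconstruction.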
This article completes and extends a recent study of Grad-Shafranov (GS) reconstruction in toroidal geometry, as applied to two-and-a-half-dimensional configurations in space plasmas with rotational symmetry. A further application to the benchmark study of an analytic solution to the toroidal GS equation with added noise shows deviations in the reconstructed geometry of the flux rope configuration, characterized by the orientation of the rotation axis, the major radius, and the impact parameter. On the other hand, the physical properties of the flux rope, including the axial field strength and the toroidal and poloidal magnetic flux, agree between the numerical and exact GS solutions. We also present a real event study of a magnetic cloud flux rope from in situ spacecraft measurements. The devised procedures for toroidal GS reconstruction are successfully executed. Various geometrical and physical parameters are obtained with associated uncertainty estimates. The overall configuration of the flux rope from the GS reconstruction is compared with the corresponding morphological reconstruction based on white-light images. The results show overall consistency, but also discrepancy in that the inclination angle of the flux rope central axis with respect to the ecliptic plane differs by about 20-30 degrees in the plane of the sky. We also compare the results with the original straight-cylinder GS reconstruction and discuss our findings.
In this paper, we present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB). In this approach, we use deep neural networks (DNNs) to approximate the model parameters as functions of the operating conditions. This method allows the integration of the VRFB computational models as physical constraints in the parameter learning process, leading to enhanced accuracy of parameter estimation and cell voltage prediction. Using an experimental dataset, we demonstrate that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model's voltage prediction compared to a 0D model with constant, operation-condition-independent parameters estimated with traditional inverse methods. We also demonstrate that the PCDNN approach has improved generalization ability for estimating parameter values at operating conditions not used in DNN training.
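The "physics-constrained" part of such a method typically enters as a composite training loss. The following is an illustrative sketch only (the weighting and residual form are assumptions, not the paper's implementation): a data-misfit term on predicted voltage plus a penalty on the 0D-model residual.

```python
import numpy as np

def pcdnn_loss(v_pred, v_meas, physics_residual, weight=1.0):
    """Composite loss sketch for physics-constrained training:
    mean-squared data misfit plus a weighted mean-squared penalty
    on the physical model's residual. Illustrative only."""
    data_term = np.mean((v_pred - v_meas) ** 2)
    physics_term = np.mean(physics_residual ** 2)
    return data_term + weight * physics_term

# toy numbers: two voltage samples, small model residuals
loss = pcdnn_loss(np.array([1.40, 1.50]), np.array([1.45, 1.48]),
                  np.array([0.01, -0.02]), weight=10.0)
print(loss)
```

Raising `weight` pushes the learned parameters toward values the physical model can reproduce, at the cost of fitting the raw measurements less tightly.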
Encoding domain knowledge into the prior over the high-dimensional weight space of a neural network is challenging but essential in applications with limited data and weak signals. Two types of domain knowledge are commonly available in scientific applications: 1. feature sparsity (the fraction of features deemed relevant); 2. signal-to-noise ratio, quantified, for instance, as the proportion of variance explained (PVE). We show how to encode both types of domain knowledge into the widely used Gaussian scale mixture priors with Automatic Relevance Determination. Specifically, we propose a new joint prior over the local (i.e., feature-specific) scale parameters that encodes knowledge about feature sparsity, and a Stein gradient optimization to tune the hyperparameters such that the distribution induced on the model's PVE matches the prior distribution. We show empirically that the new prior improves prediction accuracy, compared to existing neural network priors, on several publicly available datasets and in a genetics application where signals are weak and sparse, often outperforming even computationally intensive cross-validation for hyperparameter tuning.
Gaining insight into how deep convolutional neural network models perform image classification, and how to explain their outputs, has been a concern for computer vision researchers and decision makers. These deep models are often referred to as black boxes due to low comprehension of their internal workings. As an effort toward developing explainable deep learning models, several methods have been proposed, such as finding gradients of the class output with respect to the input image (sensitivity maps), class activation maps (CAM), and gradient-based class activation maps (Grad-CAM). These methods underperform when localizing multiple occurrences of the same class and do not work for all CNNs. In addition, Grad-CAM does not capture the entire object completely when used on single-object images, which affects performance on recognition tasks. With the intention of creating an enhanced visual explanation in terms of visual sharpness, object localization and explaining multiple occurrences of objects in a single image, we present Smooth Grad-CAM++ (simple demo: http://35.238.22.135:5000/), a technique that combines methods from two other recent techniques, SMOOTHGRAD and Grad-CAM++. Our Smooth Grad-CAM++ technique provides the capability of visualizing a layer, a subset of feature maps, or a subset of neurons within a feature map at each instance at the inference level (model prediction process). After experimenting with a few images, Smooth Grad-CAM++ produced more visually sharp maps with better localization of objects in the given input images when compared with other methods.
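The SMOOTHGRAD ingredient of such a combination can be sketched in a few lines. This is an illustrative toy, not the Smooth Grad-CAM++ implementation: the input is perturbed with Gaussian noise several times and the resulting saliency maps are averaged, which is what smooths out noisy gradient-based maps. Here `cam_fn` is a hypothetical stand-in for a real Grad-CAM++ computation.

```python
import numpy as np

def smooth_map(cam_fn, image, n_samples=8, sigma=0.1, seed=0):
    """SMOOTHGRAD-style averaging sketch: evaluate a saliency map
    on several noise-perturbed copies of the input and average.
    `cam_fn` is a placeholder for a Grad-CAM++-like function."""
    rng = np.random.default_rng(seed)
    maps = [cam_fn(image + rng.normal(0.0, sigma, image.shape))
            for _ in range(n_samples)]
    return np.mean(maps, axis=0)

# toy saliency: absolute pixel value stands in for a real CAM
img = np.ones((2, 2))
out = smooth_map(np.abs, img, n_samples=4, sigma=0.05)
print(out.shape)
```

More samples and smaller `sigma` trade compute time for map stability; the averaged map has the same spatial shape as the input.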
