
Design of the monodomain model by artificial neural networks

Posted by Sebastien Court
Publication date: 2021
Language: English

We propose an optimal control approach to identify the nonlinearity in the monodomain model from given data. This data-driven approach answers the problem of model selection when studying phenomena related to cardiac electrophysiology. Instead of determining the coefficients of a prescribed model (such as the FitzHugh-Nagumo model) from empirical observations, we design the model itself, in the form of an artificial neural network. The relevance of this approach relies on the approximation capacity of neural networks. We formulate this inverse problem as an optimal control problem, and provide mathematical analysis and a derivation of optimality conditions. One of the difficulties comes from the lack of smoothness of the activation functions classically used for training neural networks. Numerical simulations demonstrate the feasibility of the strategy proposed in this work.
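To make the idea concrete, here is a minimal sketch of the setup the abstract describes: a small feedforward network standing in for the unknown reaction nonlinearity f(v) inside a 1-D monodomain-type equation dv/dt = D d²v/dx² + f(v), integrated by explicit Euler. The network sizes, weights, and discretization parameters are illustrative assumptions, not taken from the paper; a smooth tanh activation is used, sidestepping the non-smooth activations whose analysis the abstract highlights.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative (untrained) weights for a 1-8-1 network approximating f(v).
W1, b1 = rng.standard_normal((8, 1)) * 0.5, np.zeros((8, 1))
W2, b2 = rng.standard_normal((1, 8)) * 0.5, np.zeros((1, 1))

def f_nn(v):
    """Neural-network surrogate for the ionic nonlinearity f(v)."""
    h = np.tanh(W1 @ v.reshape(1, -1) + b1)
    return (W2 @ h + b2).ravel()

def step(v, dt=1e-3, dx=0.1, D=1.0):
    """One explicit Euler step of dv/dt = D v_xx + f_nn(v),
    with homogeneous Neumann boundary conditions."""
    lap = np.empty_like(v)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    return v + dt * (D * lap + f_nn(v))

v = np.exp(-np.linspace(-3, 3, 61) ** 2)  # initial Gaussian pulse
for _ in range(100):
    v = step(v)
```

In the paper's optimal-control formulation, the network weights would be the control variables, fitted so that the simulated v matches observed data; here they are left random purely to show where the learnable nonlinearity sits in the dynamics.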


Read also

We consider a nonlinear reaction-diffusion system of parabolic type known as the monodomain equations, which models the interaction of the electric current in a cell. Together with the FitzHugh-Nagumo model for the nonlinearity, these equations represent defibrillation processes of the human heart. We study a fairly general type with co-located inputs and outputs describing both boundary and distributed control and observation. The control objective is output trajectory tracking with prescribed performance. To achieve this we employ the funnel controller, which is model-free and of low complexity. The controller introduces a nonlinear and time-varying term in the closed-loop system, for which we prove existence and uniqueness of solutions. Additionally, exploiting the parabolic nature of the problem, we obtain Hölder continuity of the state, inputs and outputs. We illustrate our results by a simulation of a standard test example for the termination of reentry waves.
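The funnel controller mentioned above admits a very short illustration. Below it is applied to a scalar toy system dy/dt = y + u (an assumption for illustration, not the monodomain PDE of the abstract): the gain k(t) grows without bound as the tracking error approaches a prescribed, shrinking funnel boundary φ(t), which forces the error to stay inside the funnel. The boundary φ and all parameters are illustrative choices.

```python
import math

def phi(t):
    """Prescribed performance funnel boundary (illustrative choice)."""
    return 2.0 * math.exp(-2.0 * t) + 0.1

def simulate(y0=1.0, y_ref=0.0, dt=1e-4, T=3.0):
    """Euler simulation of dy/dt = y + u under the funnel control law."""
    y, t = y0, 0.0
    for _ in range(int(T / dt)):
        e = y - y_ref
        # Model-free funnel gain: blows up as |e| approaches phi(t).
        k = 1.0 / (1.0 - (e / phi(t)) ** 2)
        u = -k * e
        y += dt * (y + u)
        t += dt
    return y

y_final = simulate()
```

The controller uses no model knowledge beyond the sign of the input gain, which is the "model-free and of low complexity" property the abstract emphasizes; by construction the error remains below φ(t) for all time.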
In this paper we propose a new computational method for designing optimal regulators for high-dimensional nonlinear systems. The proposed approach leverages physics-informed machine learning to solve high-dimensional Hamilton-Jacobi-Bellman equations arising in optimal feedback control. Concretely, we augment linear quadratic regulators with neural networks to handle nonlinearities. We train the augmented models on data generated without discretizing the state space, enabling application to high-dimensional problems. We use the proposed method to design a candidate optimal regulator for an unstable Burgers equation, and through this example, demonstrate improved robustness and accuracy compared to existing neural network formulations.
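The idea of augmenting an LQR with a neural network can be sketched in a few lines. As an assumed toy instance (not the Burgers example of the abstract), take the scalar system dx/dt = x + u with cost ∫(x² + u²)dt: the Riccati equation 2p − p² + 1 = 0 gives p = 1 + √2, hence the exact LQR gain K = p, and a tiny untrained network serves as a placeholder for the learned nonlinear correction.

```python
import math
import numpy as np

K = 1.0 + math.sqrt(2.0)  # exact scalar LQR gain for dx/dt = x + u
rng = np.random.default_rng(2)
# Illustrative (untrained) 1-4-1 correction network.
W1, b1 = rng.standard_normal((4, 1)) * 0.1, np.zeros(4)
W2 = rng.standard_normal((1, 4)) * 0.1

def u(x):
    """Augmented feedback: linear LQR term plus a neural correction."""
    h = np.tanh(W1.ravel() * x + b1)
    return -K * x + float(W2 @ h)

# Closed-loop rollout of dx/dt = x + u(x) with explicit Euler.
x, dt = 1.0, 1e-3
for _ in range(5000):
    x += dt * (x + u(x))
```

In the paper's setting the correction network is trained on trajectory data generated without discretizing the state space; here the point is only the structure of the augmented control law.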
Yin Zhang, Yueyao Yu (2021)
What makes an artificial neural network easier to train and more likely to produce desirable solutions than other comparable networks? In this paper, we provide a new angle to study such issues under the setting of a fixed number of model parameters, which in general is the most dominant cost factor. We introduce a notion of variability and show that it correlates positively with the activation ratio and negatively with a phenomenon called Collapse to Constants (C2C), which is closely related but not identical to the phenomenon commonly known as vanishing gradients. Experiments on a stylized model problem empirically verify that variability is indeed a key performance indicator for fully connected neural networks. The insights gained from this variability study will help the design of new and effective neural network architectures.
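The activation ratio named in the abstract is easy to compute for a ReLU network: the fraction of hidden units with positive pre-activation over a batch. The paper's precise definition of variability may differ; this sketch, with made-up weights and data, only shows the simpler companion quantity (a ratio near zero would indicate the collapse-to-constants regime).

```python
import numpy as np

rng = np.random.default_rng(1)

def activation_ratio(weights, biases, x):
    """Per-layer fraction of active ReLU units over a batch."""
    ratios = []
    h = x
    for W, b in zip(weights, biases):
        z = h @ W + b
        ratios.append(float((z > 0).mean()))  # share of active units
        h = np.maximum(z, 0.0)                # ReLU forward pass
    return ratios

x = rng.standard_normal((128, 16))
Ws = [rng.standard_normal((16, 32)) / 4.0,
      rng.standard_normal((32, 32)) / (32 ** 0.5)]
bs = [np.zeros(32), np.zeros(32)]
r = activation_ratio(Ws, bs, x)
```

With symmetric random weights and zero biases, roughly half the units fire in the first layer; a network whose ratios drift toward 0 across layers is losing expressive capacity.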
Particles may sometimes have energy outside the range of radiation detection hardware, so that the signal is saturated and useful information is lost. We have therefore investigated the possibility of using an Artificial Neural Network (ANN) to restore the saturated waveforms of $\gamma$ signals. Several ANNs were tested, namely the Back Propagation (BP), Simple Recurrent (Elman), Radial Basis Function (RBF) and Generalized Radial Basis Function (GRBF) neural networks (NNs), and compared with the fitting method based on the Marrone model. The GRBF NN was found to perform best.
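A toy version of the restoration task conveys the idea: fit a Gaussian RBF model to the unsaturated samples of a clipped pulse, then evaluate it across the saturated region. The data, centers, and widths below are invented for illustration; the paper's networks (BP, Elman, RBF, GRBF) are trained models, whereas this is a plain least-squares RBF fit.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
true = np.exp(-((t - 0.5) ** 2) / 0.01)  # synthetic pulse shape
sat = np.minimum(true, 0.7)              # detector saturates at 0.7
mask = sat < 0.7                         # samples that survived clipping

centers = np.linspace(0.0, 1.0, 20)

def design(x):
    """Gaussian RBF design matrix (illustrative width)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.005)

# Least-squares fit on the unsaturated samples only.
w, *_ = np.linalg.lstsq(design(t[mask]), sat[mask], rcond=None)
restored = design(t) @ w                 # evaluate over the full record
```

The fit reproduces the waveform where data exist and interpolates through the clipped window, which is the mechanism behind the RBF-type restorations compared in the paper.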
The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual. The predominant approach to differentially private training of neural networks is DP-SGD, which relies on norm-based gradient clipping as a method for bounding sensitivity, followed by the addition of appropriately calibrated Gaussian noise. In this work we propose NeuralDP, a technique for privatising activations of some layer within a neural network, which by the post-processing properties of differential privacy yields a differentially private network. We experimentally demonstrate on two datasets (MNIST and Pediatric Pneumonia Dataset (PPD)) that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
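The DP-SGD sensitivity-bounding step that this abstract contrasts itself with can be sketched directly: clip each per-example gradient to norm C, sum, add Gaussian noise scaled by a noise multiplier σ, and average. The gradients below are random stand-ins and no privacy accounting is performed; NeuralDP itself (privatising layer activations) is not shown.

```python
import numpy as np

def dp_sgd_update(per_example_grads, clip_norm=1.0, sigma=1.1, rng=None):
    """Norm-clip each per-example gradient, then add calibrated
    Gaussian noise to the sum (the core DP-SGD mechanism)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Hypothetical batch of 32 per-example gradients of dimension 10.
grads = [np.random.default_rng(i).standard_normal(10) for i in range(32)]
g_priv = dp_sgd_update(grads)
```

Clipping bounds each example's contribution (the sensitivity), so the Gaussian noise yields a differentially private gradient; the privacy loss per step is then tracked by a separate accountant, omitted here.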