Spatial symmetries and invariances play an important role in the description of materials. When modelling material properties, it is important to be able to respect such invariances. Here we discuss how to model and generate random ensembles of tensors where one wants to be able to prescribe certain classes of spatial symmetries and invariances for the whole ensemble, while at the same time demanding that the mean or expected value of the ensemble be subject to a possibly higher spatial invariance class. Our special interest is in the class of physically symmetric and positive definite tensors, as they appear often in the description of materials. As the set of positive definite tensors is not a linear space, but rather an open convex cone in the linear vector space of physically symmetric tensors, it may be advantageous to widen the notion of mean to the so-called Fréchet mean, which is based on distance measures between positive definite tensors other than the usual Euclidean one. For the sake of simplicity, as well as to expose the main idea as clearly as possible, we limit ourselves here to second order tensors. It is shown how the random ensemble can be modelled and generated, with fine control of the spatial symmetry or invariance of the whole ensemble, as well as its Fréchet mean, independently in its scaling and directional aspects. As an example, a 2D and a 3D model of steady-state heat conduction in a human proximal femur, a bone with high material anisotropy, is explored. It is modelled with a random thermal conductivity tensor, and the numerical results show the distinct impact of incorporating into the constitutive model different material uncertainties (scaling, orientation, and prescribed material symmetry) on the desired quantities of interest, such as temperature distribution and heat flux.
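The Fréchet mean mentioned in this abstract can be made concrete with a small sketch. Under the log-Euclidean metric, one common non-Euclidean choice on the SPD cone (the paper considers such metrics in general; the function below is illustrative, not the authors' construction), the Fréchet mean has the closed form exp(mean(log A_i)), which is guaranteed to stay positive definite:

```python
import numpy as np

def _apply_spectral(a, fn):
    # Apply a scalar function to a symmetric matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * fn(w)) @ v.T

def log_euclidean_frechet_mean(tensors):
    """Fréchet mean of SPD tensors under the log-Euclidean metric:
    the minimizer of the sum of squared log-Euclidean distances,
    available in closed form as expm(mean(logm(A_i)))."""
    logs = [_apply_spectral(a, np.log) for a in tensors]
    return _apply_spectral(np.mean(logs, axis=0), np.exp)

# Two SPD conductivity-like 2x2 tensors (illustrative values only).
a = np.array([[2.0, 0.3], [0.3, 1.0]])
b = np.array([[1.0, -0.2], [-0.2, 3.0]])
mean = log_euclidean_frechet_mean([a, b])  # remains inside the SPD cone
```

Unlike the Euclidean average, this mean respects the cone geometry: averaging logarithms can never produce a non-positive-definite result.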
Partial Differential Equations (PDEs) are notoriously difficult to solve. In general, closed-form solutions are not available and numerical approximation schemes are computationally expensive. In this paper, we propose to approach the solution of PDEs with a novel technique that combines the advantages of two recently emerging machine learning-based approaches. First, physics-informed neural networks (PINNs) learn continuous solutions of PDEs and can be trained with little to no ground truth data. However, PINNs do not generalize well to unseen domains. Second, convolutional neural networks provide fast inference and generalize, but either require large amounts of training data or a physics-constrained loss based on finite differences that can lead to inaccuracies and discretization artifacts. We leverage the advantages of both of these approaches by using Hermite spline kernels to continuously interpolate a grid-based state representation that can be handled by a CNN. This allows for training without any precomputed training data, using only a physics-informed loss function, and provides fast, continuous solutions that generalize to unseen domains. We demonstrate the potential of our method on the examples of the incompressible Navier-Stokes equations and the damped wave equation. Our models are able to learn several intriguing phenomena such as Kármán vortex streets, the Magnus effect, the Doppler effect, interference patterns and wave reflections. Our quantitative assessment and an interactive real-time demo show that we are narrowing the gap in accuracy between unsupervised ML-based methods and industrial CFD solvers while being orders of magnitude faster.
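The core trick here, querying a grid-based CNN output at continuous coordinates, can be illustrated in one dimension. The sketch below uses a generic cubic Hermite interpolant with finite-difference node derivatives (the Catmull-Rom choice; not necessarily the authors' exact kernel) to turn grid values into a C^1 function that can be evaluated and differentiated for a physics-informed loss:

```python
import numpy as np

def hermite_interp(values, x):
    """Cubic Hermite interpolation of uniformly spaced grid values.

    Node derivatives are estimated with finite differences (the
    Catmull-Rom choice), giving a C^1-continuous function of the
    continuous coordinate x (in grid-index units).
    """
    values = np.asarray(values, dtype=float)
    d = np.gradient(values)  # central differences, one-sided at the ends
    i = int(np.clip(np.floor(x), 0, len(values) - 2))
    t = x - i
    p0, p1, m0, m1 = values[i], values[i + 1], d[i], d[i + 1]
    # The four cubic Hermite basis polynomials on [0, 1].
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Because the interpolant reproduces node values exactly and is smooth between nodes, derivatives needed by a PDE residual can be taken analytically rather than by finite differences on the grid.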
Simulation experiments are typically conducted repeatedly during the model development process, for example, to re-validate whether a behavioral property still holds after several model changes. Approaches for automatically reusing and generating simulation experiments can support modelers in conducting simulation studies in a more systematic and effective manner. They rely on explicit experiment specifications and, so far, on user interaction for initiating the reuse; consequently, they are constrained to supporting the reuse of simulation experiments in a specific setting. Our approach goes one step further by automatically identifying and adapting the experiments to be reused for a variety of scenarios. To achieve this, we exploit provenance graphs of simulation studies, which provide valuable information about previous modeling and experimenting activities and contain meta-information about the different entities that were used or produced during the simulation study. We define provenance patterns and associate them with a semantics, which allows us to interpret the different activities and to construct transformation rules for provenance graphs. Our approach is implemented in a Reuse and Adapt framework for Simulation Experiments (RASE), which can interface with various modeling and simulation tools. In two case studies, we demonstrate the utility of our framework for a) the repeated sensitivity analysis of an agent-based model of migration routes, and b) the cross-validation of two models of a cell signaling pathway.
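To make the idea of provenance patterns concrete, here is a minimal sketch. The entity names, relations and the pattern itself are invented for illustration; RASE's actual provenance schema and transformation rules are richer. A study is recorded as used/generated triples, and the pattern finds experiment specifications that were applied to an ancestor of a newly generated model version, i.e. the candidates for automatic reuse:

```python
# A toy provenance graph as (activity, relation, entity) triples; names,
# relations and the pattern below are illustrative, not RASE's schema.
provenance = [
    ("calibrate_v1", "used", "model_v1"),
    ("calibrate_v1", "generated", "params_v1"),
    ("sensitivity_v1", "used", "model_v1"),
    ("sensitivity_v1", "used", "experiment_spec_sa"),
    ("sensitivity_v1", "generated", "sa_results_v1"),
    ("revise_model", "used", "model_v1"),
    ("revise_model", "generated", "model_v2"),
]

def experiments_to_reuse(triples, new_model):
    """Pattern: experiment specs applied to any ancestor of `new_model`
    are candidates for automatic reuse on the new model version."""
    ancestors, frontier = set(), {new_model}
    while frontier:  # walk generated/used edges backwards in time
        entity = frontier.pop()
        for act, rel, ent in triples:
            if rel == "generated" and ent == entity:
                used = {e for a, r, e in triples if a == act and r == "used"}
                frontier |= used - ancestors
                ancestors |= used
    specs = set()
    for act, rel, ent in triples:
        if rel == "used" and ent in ancestors:
            specs |= {e for a, r, e in triples
                      if a == act and r == "used" and e.startswith("experiment_spec")}
    return specs
```

Here `experiments_to_reuse(provenance, "model_v2")` traces `model_v2` back to `model_v1` and recovers the sensitivity-analysis specification as a reuse candidate.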
Financial time series analysis is an important means of probing the complex laws of financial markets. Among the many goals of financial time series analysis, one is to construct a model that can extract information about the future return from known historical stock data, such as stock prices and financial news. To design such a model, prior knowledge of how the future return is correlated with historical stock prices is needed. In this work, we focus on the question of in what mode the future return is correlated with the historical stock prices. We manually design several financial time series in which the future return is correlated with the historical stock prices in pre-designed modes, namely the curve-shape-feature (CSF) and the non-curve-shape-feature (NCSF) modes. In the CSF mode, the future return can be extracted from the curve shapes of the historical stock prices. By applying various existing algorithms to these pre-designed time series and to real financial time series, we show that: (1) the major information about the future return is not contained in the curve-shape features of historical stock prices; that is, the future return is not mainly correlated with the historical stock prices in the CSF mode. (2) Various existing machine learning algorithms are good at extracting the curve-shape features of historical stock prices and are thus inappropriate for financial time series analysis, despite their success in image recognition and natural language processing. That is, new models handling NCSF series are needed in financial time series analysis.
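The distinction between the two modes can be sketched with toy generators. The constructions and parameters below are illustrative stand-ins, not the paper's designed series: in the CSF series the next return is a deterministic function of the recent curve shape, while in the NCSF series it is driven by a factor invisible in the price curve:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_csf_series(n, k=5, c=0.01, noise=0.002):
    """CSF mode: the next return is a function of the shape of the last
    k prices (here simply their momentum sign). The construction and
    all parameters are illustrative stand-ins, not the paper's series.
    """
    log_price = np.zeros(n)
    for t in range(1, n):
        window = log_price[max(0, t - k):t]
        shape = np.sign(window[-1] - window[0]) if len(window) > 1 else 0.0
        log_price[t] = log_price[t - 1] + c * shape + noise * rng.standard_normal()
    return np.exp(log_price)

def make_ncsf_series(n, noise=0.01):
    """NCSF mode: returns driven by a hidden factor (e.g. news flow)
    that leaves no exploitable trace in the curve shape itself."""
    hidden = rng.standard_normal(n)
    returns = 0.01 * hidden + noise * rng.standard_normal(n)
    return np.exp(np.cumsum(returns))
```

A curve-shape learner can in principle succeed on the first series but not on the second, which is the contrast the paper's experiments probe.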
Motivation: Omics data, such as transcriptomics or phosphoproteomics, are broadly used to get a snapshot of the molecular status of cells. In particular, changes in omics can be used to estimate the activity of pathways, transcription factors and kinases based on known regulated targets, which we call footprints. The molecular paths driving these activities can then be estimated using causal reasoning on large signaling networks. Results: We have developed FUNKI, a FUNctional toolKIt for footprint analysis. It provides a user-friendly interface for easy and fast analysis of several types of omics data, either from bulk or single-cell experiments. FUNKI also features different options to visualise the results and run post-analyses, and is mirrored as a scripted version in R. Availability: FUNKI is a free and open-source application built on R and Shiny, available on GitHub at https://github.com/saezlab/ShinyFUNKI under the GNU v3.0 license, and also accessible at https://saezlab.shinyapps.io/funki/ Contact: [email protected] Supplementary information: We provide data examples within the app, as well as extensive information about the different variables to select, the results, and the different plots on the help page.
Variational phase-field methods have proven powerful for modeling complex crack propagation without a priori knowledge of the crack path or ad hoc criteria. However, phase-field models suffer from their energy functional being non-linear and non-convex, while requiring a very fine mesh to capture the damage gradient. This implies a high computational cost, limiting concrete engineering applications of the method. In this work, we propose an efficient and robust fully monolithic solver for phase-field fracture using a modified Newton method with inertia correction and an energy line search. To illustrate the gains in efficiency obtained with our approach, we compare it to two popular methods for phase-field fracture, namely the alternating minimization and quasi-monolithic schemes. To facilitate the evaluation of the time-step-dependent quasi-monolithic scheme, we couple the latter with an extrapolation correction loop controlled by a damage-based criterion. Finally, we show through four benchmark tests that the modified Newton method we propose is straightforward, robust, and leads to identical solutions, while offering a reduction in computation time by factors of up to 12 and 6 when compared to the alternating minimization and quasi-monolithic schemes, respectively.
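The solver's two key ingredients, inertia correction of an indefinite Hessian and a line search on the energy, can be sketched generically. This is a textbook-style modified Newton method on a plain function, not the authors' phase-field implementation:

```python
import numpy as np

def modified_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Modified Newton sketch: inertia correction + energy line search.

    If the Hessian fails a Cholesky test (i.e. is not positive definite),
    a growing multiple of the identity is added until the Newton step is
    a guaranteed descent direction; a backtracking (Armijo) line search
    on the energy f then ensures sufficient decrease at every iteration.
    """
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        h, tau = hess(x), 0.0
        while True:  # inertia correction: shift the spectrum until SPD
            try:
                np.linalg.cholesky(h + tau * np.eye(n))
                break
            except np.linalg.LinAlgError:
                tau = 2.0 * tau if tau > 0 else 1e-4
        p = np.linalg.solve(h + tau * np.eye(n), -g)
        alpha, fx = 1.0, f(x)
        while f(x + alpha * p) > fx + 1e-4 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5  # backtrack along the energy
        x = x + alpha * p
    return x

# Demo on the Rosenbrock function (a standard non-convex test problem).
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
rosen_g = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                              200 * (z[1] - z[0]**2)])
rosen_h = lambda z: np.array([[2 - 400 * z[1] + 1200 * z[0]**2, -400 * z[0]],
                              [-400 * z[0], 200.0]])
x_star = modified_newton(rosen, rosen_g, rosen_h, np.array([-1.2, 1.0]))
```

In phase-field fracture the same scheme is applied to the coupled displacement-damage system, where the non-convex energy makes the plain Newton step unreliable without these two safeguards.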
The Linac Coherent Light Source (LCLS) is an X-ray free electron laser (XFEL) facility enabling the study of the structure and dynamics of single macromolecules. A major upgrade will bring the repetition rate of the X-ray source from 120 to 1 million pulses per second. Exascale high performance computing (HPC) capabilities will be required to process the corresponding data rates. We present SpiniFEL, an application used for the structure determination of proteins from single-particle imaging (SPI) experiments. An emerging technique for imaging individual proteins and other large molecular complexes by outrunning radiation damage, SPI breaks free from the need for crystallization (which is difficult for some proteins) and allows for imaging molecular dynamics at near-ambient conditions. SpiniFEL is being developed to run on supercomputers in near real-time while an experiment is taking place, so that feedback about the data can guide the data collection strategy. We describe here how we reformulated the mathematical framework for a parallelizable implementation and accelerated the most compute-intensive parts of the application. We also describe our use of Pygion, a Python interface for the Legion task-based programming model, and compare it to our existing MPI+GPU implementation.
High-resolution crop type maps are an important tool for improving food security, and remote sensing is increasingly used to create such maps in regions that possess ground truth labels for model training. However, these labels are absent in many regions, and models trained in other regions on typical satellite features, such as those from optical sensors, often exhibit low performance when transferred. Here we explore the use of NASA's Global Ecosystem Dynamics Investigation (GEDI) spaceborne lidar instrument, combined with Sentinel-2 optical data, for crop type mapping. Using data from three major cropped regions (in China, France, and the United States), we first demonstrate that GEDI energy profiles are capable of reliably distinguishing maize, a crop typically above 2 m in height, from shorter crops like rice and soybean. We further show that these GEDI profiles provide much more invariant features across geographies compared to spectral and phenological features detected by passive optical sensors. GEDI is able to distinguish maize from other crops within each region with accuracies higher than 84%, and can transfer across regions with accuracies higher than 82%, compared to 64% for the transfer of optical features. Finally, we show that GEDI profiles can be used to generate training labels for models based on optical imagery from Sentinel-2, thereby enabling the creation of 10 m wall-to-wall maps of tall versus short crops in label-scarce regions. As maize is the second most widely grown crop in the world and often the only tall crop grown within a landscape, we conclude that GEDI offers great promise for improving global crop type maps.
Hybrid modeling, the combination of first-principles and machine learning models, is an emerging research field that is attracting more and more attention. Even though hybrid models produce formidable results for academic examples, different technical challenges still hinder the use of hybrid modeling in real-world applications. By presenting NeuralFMUs, the fusion of an FMU, a numerical ODE solver and an ANN, we are paving the way for the use of a variety of first-principles models from different modeling tools as parts of hybrid models. This contribution addresses the hybrid modeling of a complex, real-world example: starting with a simplified 1D-fluid model of the human cardiovascular system (arterial side), the aim is to learn neglected physical effects such as arterial elasticity from data. We show that the hybrid modeling process is more convenient, needs less system knowledge and is therefore less error-prone than modeling based solely on first principles. Further, the resulting hybrid model offers improved computational performance compared to a pure first-principles white-box model, while still fulfilling the accuracy requirements for the considered hemodynamic quantities. The use of the presented techniques is explained in a general manner, and the considered use case can serve as an example for other modeling and simulation applications in and beyond the medical domain.
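The structure of such a fusion can be sketched in a few lines. Everything here is a stand-in: the "FMU" is a hand-written damped oscillator, the ANN is an untrained toy perceptron, and the solver is explicit Euler; a real NeuralFMU couples an actual FMU export, a trainable network and a proper ODE solver:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the three ingredients of a NeuralFMU. In practice the
# right-hand side comes from a real FMU export, the network is trained,
# and a proper ODE solver replaces explicit Euler.
def fmu_rhs(x):
    """'First-principles' part: a simplified damped oscillator."""
    return np.array([x[1], -x[0] - 0.1 * x[1]])

w1 = 0.01 * rng.standard_normal((8, 2))
w2 = 0.01 * rng.standard_normal((2, 8))

def ann_correction(x):
    """Learnable closure term for neglected physics (untrained here)."""
    return w2 @ np.tanh(w1 @ x)

def simulate(x0, dt=0.01, steps=1000):
    """Fuse FMU dynamics, ANN correction and the solver in one loop."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (fmu_rhs(x) + ann_correction(x))
        traj.append(x.copy())
    return np.array(traj)
```

The design point is that the ANN only has to learn the discrepancy between the first-principles right-hand side and reality, not the full dynamics.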
This article proposes an open-source implementation of a phase-field model for brittle fracture using a recently developed finite element toolbox, Gridap, in Julia. The present work exploits the advantages of both the phase-field model and the Gridap toolbox for simulating fracture in brittle materials. On the one hand, the use of the phase-field model, a continuum approach that uses a diffuse representation of sharp cracks, enables the proposed implementation to overcome well-known drawbacks of the discrete approach for predicting complex crack paths, such as the need for re-meshing, enrichment of finite element shape functions, and explicit tracking of the crack surfaces. On the other hand, the use of Gridap makes the proposed implementation very compact and user-friendly, requiring little memory, and provides a high degree of flexibility to users in defining weak forms of partial differential equations. A test on a notched beam under symmetric three-point bending and a set of tests on a notched beam with three holes under asymmetric three-point bending are considered to demonstrate how the proposed Gridap-based phase-field Julia code can be used to simulate fracture in brittle materials.
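For orientation, the energy functional that such phase-field implementations minimize typically has the following form, written here in the widely used AT2 variant with a quadratic degradation function; the article's exact functional (e.g. its treatment of the tension-compression energy split) may differ in detail:

```latex
E(\mathbf{u}, \phi) = \int_{\Omega} (1 - \phi)^2 \, \psi\!\left(\boldsymbol{\varepsilon}(\mathbf{u})\right) \mathrm{d}\Omega
  + \frac{G_c}{2} \int_{\Omega} \left( \frac{\phi^2}{\ell} + \ell \, \lvert \nabla \phi \rvert^2 \right) \mathrm{d}\Omega ,
```

where \(\psi\) is the elastic energy density, \(\phi \in [0,1]\) the damage field (0 intact, 1 fully broken), \(G_c\) the critical energy release rate, and \(\ell\) the regularization length that controls the diffuse crack width and hence the required mesh resolution.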