
An Artifact-based Workflow for Finite-Element Simulation Studies

Added by Andreas Ruscheinski
Publication date: 2020
Language: English





Workflow support typically focuses on single simulation experiments. This is also the case for simulation based on finite element methods. If entire simulation studies are to be supported, flexible means are required for intertwining the revision of the model, the collection of data, and the execution and analysis of experiments. Artifact-based workflows are one means to support entire simulation studies, as has been shown for stochastic discrete-event simulation. To adapt the approach to finite element methods, the set of artifacts, i.e., conceptual model, requirement, simulation model, and simulation experiment, and the constraints that apply to them are extended by new artifacts, such as the geometrical model, input data, and simulation data. Artifacts, their life cycles, and constraints are revisited, revealing both the features the two types of simulation studies share and those in which they differ. The potential benefits of exploiting an artifact-based workflow approach are also demonstrated in a concrete simulation study. These benefits include guidance for conducting simulation studies systematically, a reduction of effort through the automatic execution of specific steps, e.g., generating and executing convergence tests, and support for the automatic reporting of provenance.
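As a rough illustration of the idea, and not the authors' implementation, the following sketch shows how artifacts of a finite-element simulation study with simple life-cycle states and one constraint on when an experiment may be executed could be represented. All class names, state names, and the particular constraint are assumptions made for illustration.

```python
# Minimal sketch (illustrative only): artifacts of an FEM simulation study,
# a three-stage life cycle, and a constraint that gates experiment execution.
from dataclasses import dataclass, field

LIFE_CYCLE = ("declared", "in_progress", "completed")

@dataclass
class Artifact:
    name: str                 # e.g. "conceptual model", "geometrical model"
    state: str = "declared"

    def advance(self):
        # Move the artifact to the next stage of its life cycle.
        i = LIFE_CYCLE.index(self.state)
        self.state = LIFE_CYCLE[min(i + 1, len(LIFE_CYCLE) - 1)]

@dataclass
class Study:
    artifacts: dict = field(default_factory=dict)

    def add(self, name):
        self.artifacts[name] = Artifact(name)

    def can_run_experiment(self):
        # Illustrative constraint: the simulation experiment may only be
        # executed once simulation model, geometrical model, and input data
        # have been completed.
        required = ("simulation model", "geometrical model", "input data")
        return all(name in self.artifacts
                   and self.artifacts[name].state == "completed"
                   for name in required)

if __name__ == "__main__":
    study = Study()
    for name in ("conceptual model", "requirement", "simulation model",
                 "simulation experiment", "geometrical model", "input data",
                 "simulation data"):
        study.add(name)
    for name in ("simulation model", "geometrical model", "input data"):
        study.artifacts[name].advance()   # declared -> in_progress
        study.artifacts[name].advance()   # in_progress -> completed
    print("experiment may be executed:", study.can_run_experiment())
```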



Related research

Finite element models without simplifying assumptions can accurately describe the spatial and temporal distribution of heat in machine tools as well as the resulting deformation. In principle, this makes it possible to correct for displacements of the Tool Centre Point and enables high-precision manufacturing. However, the computational cost of FEM models and the restriction to generic algorithms in commercial tools like ANSYS prevent their operational use, since simulations have to run faster than real time. For the case where heat diffusion is slow compared to machine movement, we introduce a tailored implicit-explicit multi-rate time stepping method of higher order based on spectral deferred corrections. Using the open-source FEM library DUNE, we show that fully coupled simulations of the temperature field are possible in real time for a machine consisting of a stock sliding up and down on rails attached to a stand.
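The sketch below only illustrates the implicit-explicit splitting idea mentioned in this abstract; it is neither the multi-rate spectral-deferred-correction scheme nor the DUNE code used in the work. It performs first-order IMEX Euler steps for a semi-discrete heat equation, treating a stiff diffusion operator implicitly and a slow source term explicitly; the operator, source term, and parameters are made up for illustration.

```python
# Minimal sketch (assumed problem data): IMEX Euler steps for du/dt = A u + g(t, u),
# implicit in the stiff diffusion operator A, explicit in the slow source g.
import numpy as np

def imex_euler(u, A, g, t, dt):
    # Solve (I - dt*A) u_new = u + dt * g(t, u).
    n = u.size
    return np.linalg.solve(np.eye(n) - dt * A, u + dt * g(t, u))

if __name__ == "__main__":
    n = 50
    dx = 1.0 / n
    # 1D finite-difference Laplacian as a stand-in for the FEM diffusion operator.
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    # Slowly varying heat source acting on the first few nodes.
    heat_source = lambda t, u: np.where(np.arange(n) < 5, np.sin(t), 0.0)
    u, t, dt = np.zeros(n), 0.0, 1e-3
    for _ in range(1000):
        u = imex_euler(u, A, heat_source, t, dt)
        t += dt
    print("max temperature:", u.max())
```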
Unfitted finite element techniques are valuable tools in applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical use of unfitted methods in realistic large-scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.
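The following is a sketch of the cell-aggregation idea only, assuming a simple cut-fraction criterion; it is not the paper's implementation. Each badly cut cell is attached to the aggregate of a neighbouring well-posed cell, to which its degrees of freedom would then be constrained, removing the small-cut-cell source of ill conditioning.

```python
# Minimal sketch (illustrative data): aggregate badly cut cells onto
# neighbouring interior/well-cut cells via a breadth-first sweep.
from collections import deque

def aggregate_cells(cut_fraction, neighbours, threshold=0.5):
    # cut_fraction[c]: fraction of cell c lying inside the physical domain.
    # neighbours[c]: indices of cells adjacent to c.
    root = {c: c for c, f in cut_fraction.items() if f >= threshold}
    queue = deque(root)                      # start from well-cut cells
    while queue:
        c = queue.popleft()
        for nb in neighbours[c]:
            if nb not in root:               # badly cut cell not yet aggregated
                root[nb] = root[c]           # attach it to the same aggregate
                queue.append(nb)
    return root                              # cell -> root cell of its aggregate

if __name__ == "__main__":
    # Four cells in a row; cells 2 and 3 barely intersect the domain.
    cut = {0: 1.0, 1: 0.8, 2: 0.05, 3: 0.02}
    nbs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(aggregate_cells(cut, nbs))  # {0: 0, 1: 1, 2: 1, 3: 1}
```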
This work introduces an innovative parallel, fully distributed finite element framework for growing geometries and its application to metal additive manufacturing. It is well known that virtual part design and qualification in additive manufacturing require highly accurate multiscale and multiphysics analyses. Only high-performance computing tools are able to handle such complexity in time frames compatible with time-to-market. However, efficiency without loss of accuracy has rarely held centre stage in the numerical community. Here, in contrast, the framework is designed to adequately exploit the resources of high-end distributed-memory machines. It is grounded on three building blocks: (1) hierarchical adaptive mesh refinement with octree-based meshes; (2) a parallel strategy to model the growth of the geometry; (3) state-of-the-art parallel iterative linear solvers. Computational experiments consider the heat transfer analysis at the part scale of the printing process with powder-bed technologies. After verification against a 3D benchmark, a strong-scaling analysis assesses performance and identifies the major sources of parallel overhead. A third numerical example examines the efficiency and robustness of (2) on a curved 3D shape. Unprecedented parallelism and scalability were achieved in this work. Hence, the framework contributes to taking on higher complexity and/or accuracy, not only in part-scale simulations of metal or polymer additive manufacturing, but also in welding, sedimentation, atherosclerosis, or any other physical problem where the physical domain of interest grows in time.
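As a rough illustration of building block (2), geometry growth, the sketch below activates elements layer by layer so that only the currently printed part of the domain would enter the assembly at each step. It is not the framework's code; all names and numbers are invented for illustration.

```python
# Minimal sketch (assumed layer data): layer-by-layer element activation for a
# growing geometry, as in part-scale powder-bed additive manufacturing models.
import numpy as np

def active_mask(element_heights, build_height):
    # An element becomes active once the printed part reaches its height.
    return element_heights <= build_height

if __name__ == "__main__":
    n_layers, elems_per_layer = 10, 100
    # Layer index (height) of each element in the final part.
    heights = np.repeat(np.arange(1, n_layers + 1), elems_per_layer)
    for step, build_height in enumerate(range(1, n_layers + 1), start=1):
        mask = active_mask(heights, build_height)
        # A heat-transfer solve restricted to the active elements would go here.
        print(f"step {step}: {mask.sum()} active elements")
```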
We present a novel method for finite element analysis of inelastic structures containing Shape Memory Alloys (SMAs). Phenomenological constitutive models for SMAs lead to material nonlinearities that require substantial computational effort to resolve. Finite element analysis methods that rely on Gauss quadrature integration schemes must solve two sets of coupled differential equations: one at the global level and the other at the local, i.e., Gauss point, level. In contrast to the conventional return mapping algorithm, which solves these two sets of coupled differential equations separately using a nested Newton procedure, we propose a scheme to solve the local and global differential equations simultaneously. In the process we also derive closed-form expressions used to update the internal state variables, and unify the popular closest-point and cutting-plane methods with our formulas. Numerical testing indicates that our method allows for larger thermomechanical loading steps and provides increased computational efficiency over the standard return mapping algorithm.
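For reference, here is a minimal sketch of the conventional return-mapping update at a Gauss point, i.e., the nested local solve that the proposed scheme is contrasted with. It uses a simple one-dimensional elastoplastic model with linear isotropic hardening, not the SMA constitutive model or the simultaneous scheme of the paper, and the material parameters are invented.

```python
# Minimal sketch (illustrative material): 1D return mapping at a Gauss point
# with linear isotropic hardening; elastic predictor plus plastic corrector.
def return_mapping_1d(strain, plastic_strain, alpha,
                      E=200e3, H=10e3, sigma_y=250.0):
    # Elastic trial state.
    sigma_trial = E * (strain - plastic_strain)
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)   # trial yield function
    if f_trial <= 0.0:
        return sigma_trial, plastic_strain, alpha        # purely elastic step
    # Plastic corrector: for linear hardening the consistency condition has
    # this closed-form solution, so no local Newton iteration is needed here.
    dgamma = f_trial / (E + H)
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign
    return sigma, plastic_strain + dgamma * sign, alpha + dgamma

if __name__ == "__main__":
    eps_p, alpha = 0.0, 0.0
    for strain in (0.0005, 0.0010, 0.0015, 0.0020):      # monotonic loading
        sigma, eps_p, alpha = return_mapping_1d(strain, eps_p, alpha)
        print(f"strain={strain:.4f}  stress={sigma:7.1f}  "
              f"plastic strain={eps_p:.6f}")
```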
Motivation: Agent-based modeling is an indispensable tool for studying complex biological systems. However, existing simulators do not always take full advantage of modern hardware and often have a field-specific software design. Results: We present a novel simulation platform called BioDynaMo that alleviates both of these problems. BioDynaMo features a general-purpose and high-performance simulation engine. We demonstrate that BioDynaMo can be used to simulate use cases in: neuroscience, oncology, and epidemiology. For each use case we validate our findings with experimental data or an analytical solution. Our performance results show that BioDynaMo performs up to three orders of magnitude faster than the state-of-the-art baseline. This improvement makes it feasible to simulate each use case with one billion agents on a single server, showcasing the potential BioDynaMo has for computational biology research. Availability: BioDynaMo is an open-source project under the Apache 2.0 license and is available at www.biodynamo.org. Instructions to reproduce the results are available in supplementary information. Contact: [email protected], [email protected], [email protected], [email protected] Supplementary information: Available at https://doi.org/10.5281/zenodo.4501515
