
A Novel Topology Optimization Approach using Conditional Deep Learning

Added by Herman Shen
Publication date: 2019
Language: English





In this study, a novel topology optimization approach based on conditional Wasserstein generative adversarial networks (CWGAN) is developed to replicate conventional topology optimization algorithms at a fraction of their computational cost. The CWGAN consists of a generator and a discriminator, both deep convolutional neural networks (CNNs). The limited training data, quasi-optimal planar structures, are generated using conventional topology optimization algorithms. With CWGANs, the topology optimization conditions can be set to required values before samples are generated; the network thereby truncates the global design space through a designer-specified equality constraint. The results are validated against optimized planar structures generated by the conventional algorithms under the same settings. A proof of concept is presented which, to the authors' knowledge, is the first such illustration of fusing CWGANs and topology optimization.
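
The paper does not reproduce its implementation; below is a minimal sketch of what such a CWGAN could look like, assuming PyTorch, a 64x64 planar design domain, and a single scalar condition such as the target volume fraction. All layer sizes and names are illustrative assumptions rather than details from the paper. Training would combine this pair with the usual WGAN objective and a Lipschitz constraint on the critic (weight clipping or a gradient penalty).

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (noise, condition) to a 64x64 field of material densities."""
    def __init__(self, z_dim=100, cond_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + cond_dim, 256, 4, 1, 0),  # 1x1 -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),               # 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),                # 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),                 # 32x32
            nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1),                  # 64x64
            nn.Sigmoid(),  # element densities in [0, 1]
        )

    def forward(self, z, cond):
        # Concatenate noise and condition, then treat it as a 1x1 feature map.
        x = torch.cat([z, cond], dim=1)[:, :, None, None]
        return self.net(x)

class Critic(nn.Module):
    """WGAN critic scoring (structure, condition) pairs."""
    def __init__(self, cond_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + cond_dim, 64, 4, 2, 1), nn.LeakyReLU(0.2),  # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),           # 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),          # 8x8
            nn.Conv2d(256, 1, 8, 1, 0),                               # scalar score
        )

    def forward(self, img, cond):
        # Broadcast the scalar condition over the grid as an extra channel.
        c = cond[:, :, None, None].expand(-1, -1, img.size(2), img.size(3))
        return self.net(torch.cat([img, c], dim=1)).view(-1)
```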



Related research

Topology design optimization offers tremendous design and manufacturing freedom, since a part is designed and produced from the ground up without the meaningful initial design that conventional shape design optimization approaches require. Ideally, given an adequate problem statement, the topology design problem can be formulated and solved using a standard topology optimization process such as SIMP (Solid Isotropic Material with Penalization). In reality, thousands of design iterations are often required for just a few design variables, so the conventional approach is generally impractical or computationally unachievable for real-world applications, which has significantly slowed the development of topology optimization technology. There is therefore a need for a different approach that can optimize the initial design topology effectively and rapidly. This work presents a new topology design procedure that generates optimal structures using an integrated Generative Adversarial Network (GAN) and convolutional neural network architecture.
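
For context, the two core ingredients of a SIMP-style optimizer are the penalized stiffness interpolation and the optimality-criteria density update. The sketch below illustrates both, assuming NumPy; the finite element solve that produces the compliance sensitivities `dc` is omitted, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def simp_stiffness(rho, E0=1.0, Emin=1e-9, p=3.0):
    """Penalized Young's modulus: intermediate densities become
    uneconomical, pushing the design toward solid/void."""
    return Emin + rho**p * (E0 - Emin)

def oc_update(rho, dc, volfrac, move=0.2):
    """Optimality-criteria step: bisect the Lagrange multiplier until the
    updated densities meet the target volume fraction. Here dc is
    d(compliance)/d(rho), negative for compliance minimization."""
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        lmid = 0.5 * (lo + hi)
        step = rho * np.sqrt(np.maximum(-dc, 0.0) / lmid)
        rho_new = np.clip(step,
                          np.maximum(rho - move, 0.0),   # move limits
                          np.minimum(rho + move, 1.0))   # and box bounds
        if rho_new.mean() > volfrac:
            lo = lmid   # too much material: raise the multiplier
        else:
            hi = lmid
    return rho_new
```
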
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation and early-stopping. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinite-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce a novel algorithm, Hyperband, for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with popular Bayesian optimization methods on a suite of hyperparameter optimization problems. We observe that Hyperband can provide over an order-of-magnitude speedup over our competitor set on a variety of deep-learning and kernel-based learning problems.
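
A minimal sketch of Hyperband's structure follows, assuming Python and two user-supplied, hypothetical callables: `sample_config()` returning a random configuration and `evaluate(config, resource)` returning a validation loss after training under the given budget. Each outer bracket trades the number of sampled configurations against the resource each receives; successive halving then repeatedly promotes the best-performing fraction. The bookkeeping here is simplified relative to the published algorithm.

```python
import math

def hyperband(sample_config, evaluate, max_resource=81, eta=3):
    """Return (lowest loss seen, its configuration)."""
    s_max = int(math.log(max_resource, eta))
    best = (float("inf"), None)
    for s in range(s_max, -1, -1):      # brackets: many cheap -> few expensive
        n = int(math.ceil((s_max + 1) * eta**s / (s + 1)))
        r = max_resource * eta**(-s)    # initial resource per configuration
        configs = [sample_config() for _ in range(n)]
        for i in range(s + 1):          # successive halving within a bracket
            budget = r * eta**i
            losses = [evaluate(c, budget) for c in configs]
            ranked = sorted(zip(losses, configs), key=lambda t: t[0])
            best = min(best, ranked[0], key=lambda t: t[0])
            # Keep the top 1/eta fraction for the next, larger budget.
            configs = [c for _, c in ranked[: max(1, len(configs) // eta)]]
    return best
```
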
Topology optimization has emerged as a popular approach to refining a component's design and increasing its performance. However, current state-of-the-art topology optimization frameworks are compute-intensive, mainly due to the multiple finite element analysis iterations required to evaluate the component's performance during the optimization process. Recently, machine learning-based topology optimization methods have been explored to alleviate this issue. However, previous approaches have mainly been demonstrated on simple two-dimensional applications with low-resolution geometry. Further, current approaches rely on a single machine learning model for end-to-end prediction, which requires a large dataset for training. These challenges make it non-trivial to extend current approaches to higher resolutions. In this paper, we explore a deep learning-based framework for performing topology optimization on three-dimensional geometries at a reasonably fine (high) resolution. We achieve this by training multiple networks, each learning a different aspect of the overall topology optimization methodology. We demonstrate the application of our framework on both 2D and 3D geometries. The results show that our approach predicts the final optimized design better than current ML-based topology optimization methods.
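
The abstract does not specify the architecture; the sketch below merely illustrates the multiple-network idea, assuming PyTorch and voxelized inputs. One hypothetical network predicts a coarse 3D density field from loads and boundary conditions encoded as input channels, and a second upsamples it to a finer grid. All names and layer choices are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class CoarsePredictor(nn.Module):
    """Predicts a coarse voxel density field from problem inputs
    (e.g., load components and a support mask as channels)."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(True),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(True),
            nn.Conv3d(32, 1, 1), nn.Sigmoid(),  # densities in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class Refiner(nn.Module):
    """Second network: upsamples the coarse field to a finer grid."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(1, 16, 4, stride=2, padding=1),  # 2x upsample
            nn.ReLU(True),
            nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, coarse):
        return self.net(coarse)
```
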
Automatic evaluation of the goodness of Generative Adversarial Networks (GANs) has been a challenge for the field of machine learning. In this work, we propose a distance complementary to existing measures: Topology Distance (TD), the main idea behind which is to compare the geometric and topological features of the latent manifold of real data with those of generated data. More specifically, we build a Vietoris-Rips complex on image features and define TD based on the differences in the persistent-homology groups of the two manifolds. We compare TD with the most commonly used and relevant measures in the field, including the Inception Score (IS), Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Geometry Score (GS), in a range of experiments on various datasets. We demonstrate the unique advantage and superiority of our proposed approach over the aforementioned metrics. The combination of our empirical results and the theoretical argument we propose in favour of TD strongly supports the claim that TD is a powerful candidate metric for researchers aiming to automatically evaluate the goodness of GAN learning.
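
The sketch below only illustrates the pipeline the abstract describes, assuming the `ripser` and `persim` Python packages: compute Vietoris-Rips persistence diagrams on real and generated image features, then compare the diagrams. The bottleneck distance is a stand-in for the paper's own comparison of persistent-homology groups, and `topology_gap` is a hypothetical name.

```python
import numpy as np
from ripser import ripser          # Vietoris-Rips persistent homology
from persim import bottleneck      # distance between persistence diagrams

def topology_gap(real_feats, fake_feats, dim=1):
    """real_feats, fake_feats: (n_samples, n_features) arrays of image
    features, e.g., embeddings from a pretrained CNN."""
    dgm_real = ripser(real_feats, maxdim=dim)["dgms"][dim]
    dgm_fake = ripser(fake_feats, maxdim=dim)["dgms"][dim]
    # Smaller value: the two feature clouds have more similar topology.
    return bottleneck(dgm_real, dgm_fake)
```
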
We revisit the initialization of deep residual networks (ResNets) by introducing a novel analytical tool from free probability to the deep learning community. This tool deals with non-Hermitian random matrices, rather than the conventional Hermitian counterparts in the literature. As a consequence, it enables us to evaluate the singular value spectrum of the input-output Jacobian of a fully-connected deep ResNet in both the linear and nonlinear cases. With this powerful tool, we conduct an asymptotic analysis of the spectrum in the single-layer case, and then extend the analysis to the multi-layer case with an arbitrary number of layers. In particular, we propose rescaling the classical random initialization by the number of residual units, so that the spectrum is of order $O(1)$ even when the width and depth of the network are large. We empirically demonstrate that the proposed initialization scheme learns orders of magnitude faster than classical ones, attesting to the strong practical relevance of this investigation.
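
The abstract proposes rescaling classical random initialization by the number of residual units. The sketch below shows one way to apply such a rescaling, assuming PyTorch, He initialization as the "classical" baseline, and a 1/sqrt(L) factor; the exact scaling exponent and the set of layers to rescale are assumptions, not details from the paper.

```python
import math
import torch
import torch.nn as nn

def rescale_residual_init(residual_layers, num_blocks):
    """residual_layers: weight-bearing layers on the residual branches;
    num_blocks: L, the number of residual units in the network."""
    for layer in residual_layers:
        nn.init.kaiming_normal_(layer.weight)       # classical He init
        with torch.no_grad():
            # Shrink by sqrt(L) so the Jacobian spectrum stays O(1) in depth.
            layer.weight /= math.sqrt(num_blocks)
```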


