
Self-Directed Online Machine Learning for Topology Optimization

Posted by Changyu Deng
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Topology optimization, which optimally distributes material in a given domain, requires gradient-free optimizers to solve highly complicated problems. However, with hundreds of design variables or more involved, solving such problems would require millions of Finite Element Method (FEM) calculations, whose computational cost is huge and impractical. Here we report Self-directed Online Learning Optimization (SOLO), which integrates a Deep Neural Network (DNN) with FEM calculations. A DNN learns and substitutes the objective as a function of the design variables. A small amount of training data is generated dynamically based on the DNN's prediction of the global optimum. The DNN adapts to the new training data and gives better predictions in the region of interest until convergence. Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization. It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments. This approach enables solving large multi-dimensional optimization problems.
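
A minimal sketch of the self-directed loop described above, assuming a toy `fem_objective` stands in for the expensive FEM solve and a generic scikit-learn network plays the role of the surrogate DNN (all names, sampling rules and hyperparameters are illustrative, not the paper's implementation):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fem_objective(x):
    # Placeholder: in practice this is a full FEM compliance / flow / heat solve.
    return float(np.sum((x - 0.3) ** 2))

def solo_sketch(n_vars=20, n_init=32, n_batch=16, n_rounds=10):
    rng = np.random.default_rng(0)
    X = rng.random((n_init, n_vars))                 # initial random designs
    y = np.array([fem_objective(x) for x in X])      # expensive FEM labels
    for _ in range(n_rounds):
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
        surrogate.fit(X, y)                          # DNN learns objective(design)
        cand = rng.random((5000, n_vars))            # cheap search on the surrogate
        x_best = cand[np.argmin(surrogate.predict(cand))]
        # Generate new training data around the predicted optimum (region of interest).
        X_new = np.clip(x_best + 0.1 * rng.standard_normal((n_batch, n_vars)), 0, 1)
        y_new = np.array([fem_objective(x) for x in X_new])
        X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
    return X[np.argmin(y)], float(y.min())

best_x, best_f = solo_sketch()
print(best_f)
```

The point of the loop is that each retraining round spends the FEM budget only near the surrogate's current estimate of the optimum, rather than sampling the whole design space.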


Read also

Topology optimization (TO) is a popular and powerful computational approach for designing novel structures, materials, and devices. Two computational challenges have limited the applicability of TO to a variety of industrial applications. First, a TO problem often involves a large number of design variables to guarantee sufficient expressive power. Second, many TO problems require a large number of expensive physical model simulations, and those simulations cannot be parallelized. To address these issues, we propose a general scalable deep-learning (DL) based TO framework, referred to as SDL-TO, which utilizes parallel schemes in high performance computing (HPC) to accelerate the TO process for designing additively manufactured (AM) materials. Unlike existing studies of DL for TO, our framework accelerates TO by learning from the iterative history data and simultaneously training on the mapping between a given design and its gradient. The surrogate gradient is learned by utilizing parallel computing on multiple CPUs combined with distributed DL training on multiple GPUs. The learned TO gradient enables a fast online update scheme instead of an expensive update based on the physical simulator or solver. Using a local sampling strategy, we reduce the intrinsic high dimensionality of the design space and improve the training accuracy and scalability of the SDL-TO framework. The method is demonstrated on benchmark examples and AM materials design for heat conduction. The proposed SDL-TO framework shows competitive performance compared to the baseline methods while significantly reducing the computational cost, with a speedup of around 8.6x over the standard TO implementation.
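
The surrogate-gradient idea can be illustrated with a short, hedged sketch: a network is fit on (design, gradient) pairs collected from the iteration history, after which updates use the learned gradient instead of solver calls. The `solver_gradient` placeholder, network sizes and step size below are assumptions for illustration, not the SDL-TO implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def solver_gradient(x):
    # Placeholder for the expensive solver-based sensitivity computation.
    return 2.0 * (x - 0.5)

rng = np.random.default_rng(1)
hist_X = rng.random((200, 30))                           # designs from iteration history
hist_G = np.array([solver_gradient(x) for x in hist_X])  # their simulated gradients

grad_net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=3000)
grad_net.fit(hist_X, hist_G)                             # learn the design -> gradient map

x = rng.random(30)
for _ in range(100):                                     # fast online updates, no solver calls
    g = grad_net.predict(x.reshape(1, -1))[0]
    x = np.clip(x - 0.05 * g, 0.0, 1.0)
print(x.round(2))
```
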
Forecasting the movements of stock prices is one of the most challenging problems in financial markets analysis. In this paper, we use Machine Learning (ML) algorithms for the prediction of future price movements using limit order book data. Two different sets of features are combined and evaluated: handcrafted features based on the raw order book data and features extracted by ML algorithms, resulting in feature vectors with highly variant dimensionalities. Three classifiers are evaluated using combinations of these sets of features on two different evaluation setups and three prediction scenarios. Even though the large scale and high frequency nature of the limit order book poses several challenges, the scope of the conducted experiments and the significance of the experimental results indicate that Machine Learning highly befits this task, carving the path towards future research in this field.
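
As a toy illustration of the feature-combination step only (not the paper's pipeline), handcrafted order-book statistics and features extracted by an ML algorithm can be concatenated into a single vector before classification; the random data, labels and PCA extractor below are placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
raw_book = rng.standard_normal((1000, 40))          # stand-in for raw limit order book snapshots
labels = rng.integers(0, 3, size=1000)              # up / flat / down mid-price movement

handcrafted = np.column_stack([raw_book.mean(1), raw_book.std(1)])      # simple handcrafted statistics
learned = PCA(n_components=10, random_state=0).fit_transform(raw_book)  # ML-extracted features
features = np.hstack([handcrafted, learned])        # combined feature vector

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, features, labels, cv=3).mean())
```
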
The optimization of porous infill structures via local volume constraints has become a popular approach in topology optimization. In some design settings, however, the iterative optimization process converges only slowly, or not at all even after several hundreds or thousands of iterations. This leads to regions in which a distinct binary design is difficult to achieve. Interpreting intermediate density values by applying a threshold results in large solid or void regions, leading to sub-optimal structures. We find that this convergence issue relates to the topology of the stress tensor field that is simulated when applying the same external forces on the solid design domain. In particular, low convergence is observed in regions around so-called trisector degenerate points. Based on this observation, we propose an automatic initialization process that prescribes the topological skeleton of the stress field into the material field as solid simulation elements. These elements guide the material deposition around the degenerate points, but can also be remodelled or removed during the optimization. We demonstrate significantly improved convergence rates in a number of use cases with complex stress topologies. The improved convergence is demonstrated for infill optimization under homogeneous as well as spatially varying local volume constraints.
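
A hedged sketch of the initialization idea, simplified to seeding solid elements only around degenerate points rather than along the full topological skeleton: a 2D symmetric stress tensor is degenerate where its two principal stresses coincide, i.e. where both s_xx - s_yy and s_xy vanish (array names, tolerance and seeding radius are illustrative assumptions):

```python
import numpy as np

def seed_from_stress(s_xx, s_yy, s_xy, rho, tol=1e-2, radius=2):
    """Mark solid elements (rho = 1) in a neighbourhood of each degenerate point."""
    degenerate = (np.abs(s_xx - s_yy) < tol) & (np.abs(s_xy) < tol)
    for i, j in zip(*np.nonzero(degenerate)):
        rho[max(i - radius, 0):i + radius + 1, max(j - radius, 0):j + radius + 1] = 1.0
    return rho

# Usage with a toy stress field on a 100 x 100 element grid.
xx, yy = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
rho0 = np.full((100, 100), 0.5)              # uniform initial density field
rho0 = seed_from_stress(xx, -xx, yy, rho0)   # degenerate near xx == 0 and yy == 0
```
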
Failure in brittle materials, led by the evolution of micro- to macro-cracks under repetitive or increasing loads, is often catastrophic, with no significant plasticity to avert the onset of fracture. Early failure detection and the identification of its location are utterly important features in any practical application, both of which can be effectively addressed using artificial intelligence. In this paper, we develop a supervised machine learning (ML) framework to predict failure in an isothermal, linear elastic and isotropic phase-field model for damage and fatigue of brittle materials. Time-series data of the phase-field model is extracted from virtual sensing nodes at different locations of the geometry. A pattern recognition scheme is introduced to represent time-series data/sensor node responses as a pattern with a corresponding label, integrated with ML algorithms, used for damage classification with identified patterns. We perform an uncertainty analysis by superposing random noise on the time-series data to assess the robustness of the framework with noise-polluted data. Results indicate that the proposed framework is capable of predicting failure with acceptable accuracy even in the presence of high noise levels. The findings demonstrate satisfactory performance of the supervised ML framework, and the applicability of artificial intelligence and ML to a practical engineering problem, i.e., data-driven failure prediction in brittle materials.
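
An illustrative sketch of the pattern-based classification and the noise sweep (not the paper's phase-field setup): per-sensor window statistics serve as the pattern for a failure / no-failure classifier, and robustness is probed by superposing Gaussian noise of increasing amplitude; the synthetic signals and labels below are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sensors, n_steps = 400, 8, 200
signals = rng.standard_normal((n_samples, n_sensors, n_steps)).cumsum(axis=2)  # toy sensor time series
labels = (signals[:, :, -1].mean(axis=1) > 0).astype(int)                      # toy failure label

def to_patterns(sig):
    # Per-sensor pattern: mean, standard deviation and final value of each series.
    return np.concatenate([sig.mean(axis=2), sig.std(axis=2), sig[:, :, -1]], axis=1)

for noise in (0.0, 0.5, 2.0):                                                  # noise-robustness sweep
    noisy = signals + noise * rng.standard_normal(signals.shape)
    X_tr, X_te, y_tr, y_te = train_test_split(to_patterns(noisy), labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(noise, clf.score(X_te, y_te))
```
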
We investigate the benefits of feature selection, nonlinear modelling and online learning when forecasting in financial time series. We consider the sequential and continual learning sub-genres of online learning. The experiments we conduct show that there is a benefit to online transfer learning, in the form of radial basis function networks, beyond the sequential updating of recursive least-squares models. We show that the radial basis function networks, which make use of clustering algorithms to construct a kernel Gram matrix, are more beneficial than treating each training vector as separate basis functions, as occurs with kernel Ridge regression. We demonstrate quantitative procedures to determine the very structure of the radial basis function networks. Finally, we conduct experiments on the log returns of financial time series and show that the online learning models, particularly the radial basis function networks, are able to outperform a random walk baseline, whereas the offline learning models struggle to do so.
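
A minimal sketch of the clustering-based RBF construction mentioned above: k-means centres define the Gaussian basis functions and a ridge-regularised linear readout is fit on the resulting design matrix, instead of treating every training vector as a basis function as in kernel ridge regression. The toy return series, window length and kernel width are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def rbf_features(X, centres, gamma=1.0):
    # Gaussian kernel between each sample and each cluster centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal(600)                           # toy log-return series
X = np.stack([returns[i:i + 5] for i in range(len(returns) - 6)])   # 5-lag windows
y = returns[5:-1]                                                   # one-step-ahead targets

centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
model = Ridge(alpha=1e-3).fit(rbf_features(X, centres), y)
print(model.predict(rbf_features(X[-1:], centres)))
```
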
