
An Efficient Gradient Projection Method for Structural Topology Optimization

Published by: Zhi Zeng
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This paper presents an efficient gradient projection-based method for structural topology optimization problems characterized by a nonlinear objective function that is minimized over a feasible region defined by bilateral bounds and a single linear equality constraint. The special structure of the constraints, together with heuristic engineering experience, is exploited to improve the scaling scheme, the projection, and the search step. In particular, gradient clipping and a modified projection of the search direction under certain conditions are used to improve the efficiency of the method. In addition, an analytical solution is proposed that approximates this projection with negligible computation and memory costs, and the calculation of the search step is greatly simplified. Benchmark problems, including the MBB beam, the force-inverter mechanism, and the 3D cantilever beam, are used to validate the effectiveness of the method. The proposed method is implemented in MATLAB, and the code is open-sourced for educational use.
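
As an illustration of the setting described above, the following minimal Python sketch performs one projected gradient step on a density field subject to the bilateral bounds 0 <= x <= 1 and a single linear equality (volume) constraint mean(x) = volfrac. The bisection used for the projection, the clipping threshold, and the step size are generic choices for illustration; they are not the analytical projection or the tuned parameters of the paper.

    # Minimal sketch (not the authors' MATLAB code) of one projected gradient step
    # on the feasible set {x : 0 <= x <= 1, mean(x) = volfrac}, i.e. bilateral bounds
    # plus a single linear equality (volume) constraint with uniform element volumes.
    # The bisection below solves the projection numerically; the paper instead
    # proposes an analytical approximation of this projection, not reproduced here.
    import numpy as np

    def project(y, volfrac, tol=1e-9):
        """Euclidean projection of y onto {x : 0 <= x <= 1, mean(x) = volfrac}."""
        lo, hi = y.min() - 1.0, y.max()          # bracket for the multiplier shift
        while hi - lo > tol:
            mu = 0.5 * (lo + hi)
            if np.clip(y - mu, 0.0, 1.0).mean() > volfrac:
                lo = mu                          # too much material: shift further down
            else:
                hi = mu
        return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

    def projected_gradient_step(x, grad, volfrac, step=0.2, clip=1.0):
        """Clip the gradient, take a step, and project back onto the feasible set."""
        g = np.clip(grad, -clip, clip)           # generic gradient clipping
        return project(x - step * g, volfrac)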




Read also

A topology optimization method is presented for the design of periodic microstructured materials with prescribed homogenized nonlinear constitutive properties over finite strain ranges. The mechanical model assumes linear elastic isotropic materials, geometric nonlinearity at finite strain, and a quasi-static response. The optimization problem is solved by a nonlinear programming method and the sensitivities are computed via the adjoint method. Two-dimensional structures identified using this optimization method are additively manufactured, and their uniaxial tensile strain responses are compared with the numerically predicted behavior. The optimization approach herein enables the design and development of lattice-like materials with prescribed nonlinear effective properties, for use in myriad potential applications, ranging from stress wave and vibration mitigation to soft robotics.
This paper focuses on projection-free methods for solving smooth Online Convex Optimization (OCO) problems. Existing projection-free methods either achieve suboptimal regret bounds or have high per-iteration computational costs. To fill this gap, two efficient projection-free online methods called ORGFW and MORGFW are proposed for solving stochastic and adversarial OCO problems, respectively. By employing a recursive gradient estimator, our methods achieve optimal regret bounds (up to a logarithmic factor) while possessing low per-iteration computational costs. Experimental results demonstrate the efficiency of the proposed methods compared to state-of-the-art alternatives.
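
The core idea, a Frank-Wolfe style update driven by a recursive (variance-reduced) gradient estimate so that no projection is ever computed, can be sketched as follows. The L1-ball linear minimization oracle and the mixing parameters rho and sigma are assumptions for illustration, not the exact ORGFW/MORGFW recursions or step sizes.

    # Illustrative sketch of a projection-free (Frank-Wolfe style) update driven by
    # a recursive gradient estimate; parameters and the constraint set are assumed.
    import numpy as np

    def lmo_l1_ball(d, radius=1.0):
        """Linear minimization oracle: argmin_{||v||_1 <= radius} <d, v>."""
        i = np.argmax(np.abs(d))
        v = np.zeros_like(d)
        v[i] = -radius * np.sign(d[i])
        return v

    def recursive_fw_step(x, x_prev, d_prev, grad_fn, rho=0.5, sigma=0.1):
        """Update the gradient estimate recursively, then move toward the LMO vertex."""
        d = grad_fn(x) + (1.0 - rho) * (d_prev - grad_fn(x_prev))  # recursive estimator
        v = lmo_l1_ball(d)                                         # no projection needed
        return (1.0 - sigma) * x + sigma * v, d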
For the minimization of a nonlinear cost functional $j$ under convex constraints, the relaxed projected gradient process $\varphi_{k+1} = \varphi_{k} + \alpha_k\left(P_H\big(\varphi_{k}-\lambda_k \nabla_H j(\varphi_{k})\big)-\varphi_{k}\right)$ is a well-known method. The analysis is classically performed in a Hilbert space $H$. We generalize this method to functionals $j$ which are differentiable in a Banach space. Thus it is possible to perform, e.g., an $L^2$ gradient method if $j$ is only differentiable in $L^\infty$. We show global convergence using Armijo backtracking in $\alpha_k$ and allow the inner product and the scaling $\lambda_k$ to change in every iteration. As an application we present a structural topology optimization problem based on a phase field model, where the reduced cost functional $j$ is differentiable in $H^1\cap L^\infty$. The presented numerical results using the $H^1$ inner product and a pointwise chosen metric including second-order information show the expected mesh independence in the iteration numbers. The latter yields an additional, drastic decrease in iteration numbers as well as in computation time. Moreover, we present numerical results using a BFGS update of the $H^1$ inner product for further optimization problems based on phase field models.
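
A discretized reading of this iteration, with a box projection onto [0, 1] and the Euclidean inner product standing in for the general convex set and Hilbert-space metric, might look like the following Python sketch; j and grad_j are user-supplied callables.

    # Sketch of the relaxed projected gradient iteration
    #   phi_{k+1} = phi_k + alpha_k * (P(phi_k - lambda_k * grad j(phi_k)) - phi_k)
    # with Armijo backtracking on alpha_k; the box projection and the L2 inner
    # product are simplifying assumptions for this illustration.
    import numpy as np

    def relaxed_projected_gradient_step(phi, j, grad_j, lam=1.0, c=1e-4, beta=0.5):
        g = grad_j(phi)
        d = np.clip(phi - lam * g, 0.0, 1.0) - phi           # projected anti-gradient direction
        alpha, j0, slope = 1.0, j(phi), np.dot(g, d)          # slope < 0 by the projection property
        while j(phi + alpha * d) > j0 + c * alpha * slope:    # Armijo backtracking
            alpha *= beta
            if alpha < 1e-12:
                break
        return phi + alpha * d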
The optimization of porous infill structures via local volume constraints has become a popular approach in topology optimization. In some design settings, however, the iterative optimization process converges only slowly, or not at all even after several hundreds or thousands of iterations. This leads to regions in which a distinct binary design is difficult to achieve. Interpreting intermediate density values by applying a threshold results in large solid or void regions, leading to sub-optimal structures. We find that this convergence issue relates to the topology of the stress tensor field that is simulated when applying the same external forces on the solid design domain. In particular, low convergence is observed in regions around so-called trisector degenerate points. Based on this observation, we propose an automatic initialization process that prescribes the topological skeleton of the stress field into the material field as solid simulation elements. These elements guide the material deposition around the degenerate points, but can also be remodelled or removed during the optimization. We demonstrate significantly improved convergence rates in a number of use cases with complex stress topologies. The improved convergence is demonstrated for infill optimization under homogeneous as well as spatially varying local volume constraints.
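
The following simplified Python sketch illustrates one possible way to locate degenerate points of a 2D stress field on a grid, classify them as trisector or wedge from local derivatives, and seed solid material around the trisectors in the initial density field. The tolerance, grid convention, and seeding radius are assumptions for illustration, and the paper prescribes the full topological skeleton rather than only the degenerate points themselves.

    # Simplified sketch: degenerate points occur where sxx == syy and sxy == 0;
    # with a = d(aniso)/dx, b = d(aniso)/dy, c = d(sxy)/dx, d = d(sxy)/dy and
    # delta = a*d - b*c, delta < 0 indicates a trisector (delta > 0 a wedge).
    # Axis convention assumed: axis 0 = y (rows), axis 1 = x (columns).
    import numpy as np

    def seed_trisector_points(sxx, syy, sxy, h=1.0, radius=2):
        rho = np.full(sxx.shape, 0.5)                    # uniform initial density
        aniso = 0.5 * (sxx - syy)
        da_dy, da_dx = np.gradient(aniso, h)
        dc_dy, dc_dx = np.gradient(sxy, h)
        tol = 1e-3 * max(np.abs(aniso).max(), np.abs(sxy).max())
        ii, jj = np.where((np.abs(aniso) < tol) & (np.abs(sxy) < tol))
        for i, j in zip(ii, jj):                         # candidate degenerate points
            delta = da_dx[i, j] * dc_dy[i, j] - da_dy[i, j] * dc_dx[i, j]
            if delta < 0:                                # trisector: seed solid elements
                rho[max(i - radius, 0):i + radius + 1,
                    max(j - radius, 0):j + radius + 1] = 1.0
        return rho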
Topology optimization by optimally distributing materials in a given domain requires gradient-free optimizers to solve highly complicated problems. However, with hundreds of design variables or more involved, solving such problems would require millions of Finite Element Method (FEM) calculations, whose computational cost is huge and impractical. Here we report Self-directed Online Learning Optimization (SOLO), which integrates a Deep Neural Network (DNN) with FEM calculations. A DNN learns and substitutes the objective as a function of the design variables. A small amount of training data is generated dynamically based on the DNN's prediction of the global optimum. The DNN adapts to the new training data and gives better predictions in the region of interest until convergence. Our algorithm was tested on four types of problems, including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization. It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments. This approach enables solving large multi-dimensional optimization problems.
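
The loop below is a rough Python sketch of this self-directed online learning idea, with a cheap analytic test function standing in for the FEM solve and scikit-learn's MLPRegressor as the surrogate network; the network size, sampling rule, and iteration counts are illustrative and not those of the SOLO algorithm.

    # Sketch: train a surrogate of the expensive objective, sample new designs near
    # its predicted optimum, evaluate them with the true (expensive) objective, and
    # retrain; all hyperparameters here are placeholder assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_objective(x):                            # stand-in for an FEM evaluation
        return np.sum((x - 0.3) ** 2, axis=-1)

    rng = np.random.default_rng(0)
    dim = 5
    X = rng.uniform(0.0, 1.0, size=(40, dim))              # initial designs
    y = expensive_objective(X)

    for it in range(10):
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
        candidates = rng.uniform(0.0, 1.0, size=(2000, dim))
        best = candidates[np.argmin(surrogate.predict(candidates))]   # predicted optimum
        new_X = np.clip(best + 0.05 * rng.normal(size=(10, dim)), 0.0, 1.0)
        X = np.vstack([X, new_X])                          # a few true evaluations per round
        y = np.concatenate([y, expensive_objective(new_X)])

    print("best sampled objective value:", y.min())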