
Thermal Control of Laser Powder Bed Fusion Using Deep Reinforcement Learning

Added by Francis Ogoke
Publication date: 2021
Language: English





Powder-based additive manufacturing techniques provide tools to construct intricate structures that are difficult to manufacture using conventional methods. In Laser Powder Bed Fusion, components are built by selectively melting specific areas of the powder bed to form the two-dimensional cross-section of the part. However, the high occurrence of defects limits the adoption of this method for precision applications. Therefore, a control policy that dynamically alters process parameters to avoid the phenomena that lead to defects is necessary. A Deep Reinforcement Learning (DRL) framework that derives a versatile control strategy for minimizing the likelihood of these defects is presented. The generated control policy alters the velocity of the laser during the melting process to ensure the consistency of the melt pool and reduce overheating in the generated product. The control policy is trained and validated on efficient simulations of the continuum temperature distribution of the powder bed layer under various laser trajectories.
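
To make the control setup above concrete, the sketch below rolls out a velocity-adjustment policy in a toy thermal loop: the state is a scalar melt-pool temperature, the action is the laser scan velocity, and the reward penalizes deviation from a target temperature. The thermal surrogate, target temperature, velocity bounds, and the simple proportional policy are illustrative assumptions, not the authors' simulator or trained agent.

```python
import numpy as np

# Toy control loop: the agent picks a laser scan velocity at each step and is
# rewarded for keeping a (highly simplified) melt-pool temperature near a target.
# Target temperature, velocity bounds, and the thermal surrogate are placeholders.

T_TARGET = 1900.0        # hypothetical target melt-pool temperature [K]
V_MIN, V_MAX = 0.2, 1.2  # hypothetical scan-velocity bounds [m/s]

def melt_pool_temperature(T_prev, velocity, power=200.0):
    """Crude surrogate: slower scanning deposits more energy per unit length."""
    heat_input = power / velocity        # linear energy density proxy [J/m]
    cooling = 0.3 * (T_prev - 300.0)     # relaxation toward ambient
    return T_prev + 0.02 * heat_input - 0.02 * cooling

def reward(T):
    """Penalize deviation from the target to discourage overheating and lack of fusion."""
    return -abs(T - T_TARGET)

def rollout(policy, steps=50, T0=1700.0, seed=0):
    """Run one melting pass under a velocity policy and return the total reward."""
    rng = np.random.default_rng(seed)
    T, total = T0, 0.0
    for _ in range(steps):
        v = float(np.clip(policy(T) + 0.01 * rng.standard_normal(), V_MIN, V_MAX))
        T = melt_pool_temperature(T, v)
        total += reward(T)
    return total

# A trivial proportional "policy": scan faster when the melt pool runs hot.
linear_policy = lambda T: 0.7 + 0.002 * (T - T_TARGET)
print("episode return:", rollout(linear_policy))
```

In the framework described above, the hand-coded proportional policy would be replaced by a neural-network policy trained with a DRL algorithm against the continuum temperature simulation.
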



Related research

Quality control in additive manufacturing can be achieved through variation control of the quantity of interest (QoI). In this work we choose the microstructural microsegregation as our QoI. Microsegregation results from the spatial redistribution of a solute element across the solid-liquid interface that forms during solidification of an alloy melt pool in the laser powder bed fusion process. Since both the process and the alloy parameters contribute to the statistical variation in microstructural features, uncertainty analysis of the QoI is essential. High-throughput phase-field simulations resolve the solid-liquid interfaces that grow under the melt pool solidification conditions estimated from finite element simulations. Microsegregation was determined from the simulated interfaces for different process and alloy parameters. Correlation, regression, and surrogate model analyses were used to quantify the contribution of different sources of uncertainty to the QoI variability. We found negligible contributions of the thermal gradient and Gibbs-Thomson coefficient and considerable contributions of the solidification velocity, liquid diffusivity, and segregation coefficient to the QoI. Cumulative distribution functions and probability density functions were used to analyze the distribution of the QoI during solidification. Our approach, for the first time, identifies the uncertainty sources and frequency densities of the QoI in the solidification regime relevant to additive manufacturing.
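
As a rough illustration of the uncertainty-analysis step, the sketch below samples the process and alloy parameters mentioned above, evaluates a stand-in microsegregation measure, and ranks the inputs by their Pearson correlation with the QoI. The toy QoI function and the parameter ranges are placeholders for the phase-field/finite-element pipeline used in the study.

```python
import numpy as np

# Sample process/alloy parameters, evaluate a stand-in microsegregation QoI, and
# rank the inputs by correlation with the QoI. The QoI function is a placeholder.

rng = np.random.default_rng(42)
n_samples = 2000

params = {
    "solidification_velocity": rng.uniform(0.01, 1.0, n_samples),   # m/s
    "thermal_gradient":        rng.uniform(1e5, 1e7, n_samples),    # K/m
    "liquid_diffusivity":      rng.uniform(1e-9, 5e-9, n_samples),  # m^2/s
    "segregation_coefficient": rng.uniform(0.3, 0.9, n_samples),    # dimensionless
    "gibbs_thomson":           rng.uniform(1e-7, 5e-7, n_samples),  # K*m
}

def toy_qoi(p):
    """Placeholder microsegregation proxy; NOT the paper's phase-field model."""
    v, G, D, k, gamma = (p[name] for name in params)
    # dominated by velocity, diffusivity, and partition coefficient by construction
    return (1.0 - k) / (1.0 + D / (v * 1e-6)) + 1e-9 * G + 1e3 * gamma

qoi = toy_qoi(params)

# Rank each input by the absolute Pearson correlation with the QoI.
for name, values in sorted(params.items(),
                           key=lambda kv: -abs(np.corrcoef(kv[1], qoi)[0, 1])):
    print(f"{name:>24s}: r = {np.corrcoef(values, qoi)[0, 1]:+.2f}")
```
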
Rui Liu, Sen Liu, Xiaoli Zhang (2021)
To control part quality, it is critical to analyze pore generation mechanisms, laying the theoretical foundation for future porosity control. Current porosity analysis models use machine setting parameters, such as laser angle and part pose. However, these setting-based models are machine dependent, so they often do not transfer to porosity analysis on a different machine. To address this problem, a physics-informed, data-driven model (PIM) is proposed: instead of directly using machine setting parameters to predict the porosity levels of printed parts, it first interprets machine settings as physical effects, such as laser energy density and laser radiation pressure. These physical, machine-independent effects are then used to predict porosity levels according to pass, flag, and fail categories rather than focusing on quantitative pore size prediction. Evaluated with six learning methods, PIM achieved good performance, with a prediction error of 10-26%. Finally, pore-encouraging and pore-suppressing influences were analyzed for quality analysis.
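
The setting-to-physics interpretation step can be sketched as below: machine settings (laser power, scan speed, hatch spacing, layer thickness) are mapped to a machine-independent physical feature, here the commonly used volumetric energy density E = P/(v*h*t), which then drives a coarse pass/flag/fail label. The thresholds and the single-feature rule are illustrative assumptions, not the PIM model or its learned decision boundaries.

```python
# Map machine settings to a machine-independent physical feature (volumetric
# energy density) and apply a coarse pass/flag/fail rule. Thresholds are
# illustrative placeholders, not values learned by PIM.

def volumetric_energy_density(power_W, speed_mm_s, hatch_mm, layer_mm):
    """Common LPBF measure E = P / (v * h * t), in J/mm^3."""
    return power_W / (speed_mm_s * hatch_mm * layer_mm)

def porosity_category(energy_density, low=40.0, high=120.0):
    """Toy rule: too little energy -> lack of fusion, too much -> keyholing risk."""
    if energy_density < low:
        return "fail"   # likely lack-of-fusion porosity
    if energy_density > high:
        return "flag"   # elevated keyhole-porosity risk
    return "pass"

settings = {"power_W": 200.0, "speed_mm_s": 800.0, "hatch_mm": 0.1, "layer_mm": 0.03}
E = volumetric_energy_density(**settings)
print(f"E = {E:.1f} J/mm^3 -> {porosity_category(E)}")
```
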
The connectivity aspect of connected autonomous vehicles (CAVs) is beneficial because it facilitates the dissemination of traffic-related information to vehicles through Vehicle-to-External (V2X) communication. Onboard sensing equipment, including LiDAR and cameras, can reasonably characterize the traffic environment in the immediate locality of the CAV; however, its performance is limited by the sensor range (SR). On the other hand, longer-range information is helpful for characterizing imminent conditions downstream. By contemporaneously coalescing the short- and long-range information, the CAV can construct a comprehensive picture of its surrounding environment and thereby facilitate informed, safe, and effective movement planning in the short term (local decisions, including lane changing) and the long term (route choice). In this paper, we describe a Deep Reinforcement Learning based approach that integrates the data collected through sensing and connectivity capabilities from other vehicles located in the proximity of the CAV and from those located further downstream, and we use the fused data to guide lane changing, a specific context of CAV operations. In addition, recognizing the importance of the connectivity range (CR) to the performance of not only the algorithm but also the vehicle in the actual driving environment, the paper presents a case study. The case study demonstrates the application of the proposed algorithm and identifies the appropriate CR for each level of prevailing traffic density. It is expected that implementation of the algorithm in CAVs can enhance the safety and mobility associated with CAV driving operations. From a general perspective, its implementation can provide guidance to connectivity equipment manufacturers and CAV operators regarding default CR settings for CAVs or the recommended CR setting in a given traffic environment.
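
A minimal sketch of the sensing/connectivity fusion described above: features observed within the sensor range are concatenated with aggregated downstream features received within the connectivity range, and the fused state is scored by a placeholder Q-function to select a lane-change action. The feature layout, dimensions, and linear Q-function are assumptions for illustration, not the paper's trained network.

```python
import numpy as np

# Fuse short-range (onboard sensing) and long-range (V2X connectivity) features
# into one state vector and score lane-change actions with a stand-in Q-function.

ACTIONS = ["keep_lane", "change_left", "change_right"]

def fuse_state(local_obs, downstream_obs):
    """Concatenate features observed within SR with features received within CR."""
    return np.concatenate([local_obs, downstream_obs])

def q_values(state, weights):
    """Stand-in linear Q-function; a trained DRL policy network would go here."""
    return weights @ state

rng = np.random.default_rng(7)
local = rng.uniform(0, 1, 6)       # e.g. gaps and relative speeds in adjacent lanes
downstream = rng.uniform(0, 1, 4)  # e.g. average speed/density of segments ahead

state = fuse_state(local, downstream)
weights = rng.standard_normal((len(ACTIONS), state.size))  # untrained placeholder
action = ACTIONS[int(np.argmax(q_values(state, weights)))]
print("chosen action:", action)
```
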
In delay-sensitive industrial Internet of Things (IIoT) applications, the age of information (AoI) is employed to characterize the freshness of information. Meanwhile, emerging network function virtualization provides flexibility and agility for service providers to deliver a given network service as a sequence of virtual network functions (VNFs). However, suitable VNF placement and scheduling in these schemes is NP-hard, and finding a globally optimal solution with traditional approaches is complex. Recently, deep reinforcement learning (DRL) has emerged as a viable way to solve such problems. In this paper, we first utilize a single-agent, low-complexity compound-action actor-critic RL method that covers both discrete and continuous actions and jointly minimizes VNF cost and AoI in terms of network resources under end-to-end Quality of Service constraints. To overcome the single agent's capacity limitation for learning, we then extend our solution to a multi-agent DRL scheme in which agents collaborate with each other. Simulation results demonstrate that the single-agent schemes significantly outperform the greedy algorithm in terms of average network cost and AoI. Moreover, the multi-agent solution decreases the average cost by dividing the tasks between the agents; however, it needs more iterations to learn due to the required collaboration between agents.
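
The compound-action idea can be sketched as follows: each decision pairs a discrete choice (which node hosts the next VNF) with a continuous one (what share of that node's capacity to allocate), sampled from a categorical head and a Gaussian head respectively. The linear policy heads and the toy cost/AoI terms below are placeholders, not the paper's actor-critic networks.

```python
import numpy as np

# Sample a compound action: a discrete node choice from a softmax head and a
# continuous CPU share from a Gaussian head, then score it with a toy cost that
# mixes resource use (VNF cost) with a staleness (AoI) proxy.

rng = np.random.default_rng(0)
n_nodes = 4

def sample_compound_action(state, theta_disc, theta_cont):
    """Return (node, cpu_fraction) drawn from categorical and Gaussian policy heads."""
    logits = theta_disc @ state
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    node = int(rng.choice(n_nodes, p=probs))
    mean = theta_cont @ state
    cpu_fraction = float(np.clip(rng.normal(mean, 0.1), 0.05, 1.0))
    return node, cpu_fraction

def cost(node, cpu_fraction, node_load):
    """Toy objective: placement cost plus an AoI proxy (less compute -> staler info)."""
    aoi_proxy = 1.0 / cpu_fraction
    return node_load[node] + cpu_fraction + 0.5 * aoi_proxy

state = rng.uniform(0, 1, 5)
theta_disc = rng.standard_normal((n_nodes, 5))
theta_cont = rng.standard_normal(5) * 0.1
node_load = rng.uniform(0, 1, n_nodes)

node, frac = sample_compound_action(state, theta_disc, theta_cont)
print(f"place next VNF on node {node}, allocate {frac:.2f} of capacity, "
      f"cost = {cost(node, frac, node_load):.2f}")
```
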
This paper presents a novel hierarchical deep reinforcement learning (DRL) based design for the voltage control of power grids. DRL agents are trained for fast and adaptive selection of control actions such that the voltage recovery criterion can be met following disturbances. Existing voltage control techniques suffer from issues of operating speed, optimal coordination between different locations, and scalability. We exploit the area-wise division structure of the power system to propose a hierarchical DRL design that can be scaled to larger grid models. We employ an enhanced augmented random search algorithm that is tailored to the voltage control problem in a two-level architecture. We train area-wise decentralized RL agents to compute lower-level policies for the individual areas, and concurrently train a higher-level DRL agent that uses the updates of the lower-level policies to efficiently coordinate the control actions taken by the lower-level agents. Numerical experiments on the IEEE benchmark 39-bus model with 3 areas demonstrate the advantages and various intricacies of the proposed hierarchical approach.
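
For reference, the sketch below performs repeated updates of basic augmented random search (ARS), the derivative-free method underlying the training scheme described above: the policy parameters are perturbed in symmetric random directions, the rewards of the positive and negative perturbations are compared, and the parameters move along the reward-weighted directions. The quadratic placeholder reward stands in for a grid-simulation rollout, and the hyperparameters are illustrative.

```python
import numpy as np

# One update of basic augmented random search (ARS): evaluate the policy at
# symmetric random perturbations and step along the reward-weighted directions.
# The quadratic reward is a placeholder for a voltage-control rollout.

rng = np.random.default_rng(1)
dim = 8
theta = np.zeros(dim)  # parameters of a linear control policy

def rollout_reward(policy_params):
    """Placeholder for simulating the grid and scoring voltage recovery."""
    target = np.linspace(-1.0, 1.0, dim)
    return -float(np.sum((policy_params - target) ** 2))

def ars_step(theta, n_dirs=8, nu=0.05, alpha=0.02):
    """Perturb theta in n_dirs random directions and move along reward differences."""
    deltas = rng.standard_normal((n_dirs, theta.size))
    r_plus = np.array([rollout_reward(theta + nu * d) for d in deltas])
    r_minus = np.array([rollout_reward(theta - nu * d) for d in deltas])
    sigma_r = np.concatenate([r_plus, r_minus]).std() + 1e-8
    step = ((r_plus - r_minus)[:, None] * deltas).mean(axis=0)
    return theta + alpha / sigma_r * step

for _ in range(200):
    theta = ars_step(theta)
print("reward after training:", rollout_reward(theta))
```
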
