
Application of Neural Network Algorithm in Propylene Distillation

Published by Jinwei Lu
Publication date: 2021
Research language: English





Artificial neural network modeling does not require an explicit process mechanism: it can map the implicit relationship between inputs and outputs and predict system performance well, while offering self-learning ability and high fault tolerance. In a rectification tower, the gas and liquid phases exchange heat and mass through countercurrent contact, so the functional relationship between the product concentrations at the top and bottom of the tower and the process parameters is extremely complex. This relationship can be accurately approximated by artificial neural network algorithms. The key quality variables of the propylene distillation tower are the propane concentration at the top of the tower and the propylene concentration at the bottom; measuring them accurately plays a key role in increasing propylene yield in ethylene production enterprises. This article mainly introduces the neural network model and its application to the propylene distillation tower.
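As a minimal sketch of the kind of soft sensor the abstract describes, the following Python example trains a small feedforward network to map column operating variables to the two key compositions. The feature names (reflux ratio, feed rate, top pressure, a sensitive tray temperature), the synthetic data, and the network size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical process inputs: reflux ratio, feed rate (t/h),
# tower-top pressure (MPa), sensitive-tray temperature (deg C).
X = rng.uniform([2.0, 50.0, 1.6, 40.0], [4.0, 80.0, 2.0, 55.0], size=(n, 4))
# Synthetic stand-ins for the two outputs: propane mole fraction at the
# top and propylene mole fraction at the bottom of the tower.
y_top = 0.02 - 0.004 * (X[:, 0] - 3) + 0.0002 * (X[:, 3] - 47) + rng.normal(0, 5e-4, n)
y_bot = 0.03 + 0.003 * (X[:, 0] - 3) - 0.0001 * (X[:, 1] - 65) + rng.normal(0, 5e-4, n)
Y = np.column_stack([y_top, y_bot])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

# A small two-hidden-layer network acts as the composition soft sensor.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), Y_tr)
print("test R^2:", model.score(scaler.transform(X_te), Y_te))
```

In a real column, X would come from historical operating data and Y from laboratory analyses or online analyzers; the network then provides fast composition estimates between analyzer samples.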


Read also

Ningrui Zhao, Jinwei Lu, 2021
Distillation is a complex process of heat and mass transfer, mainly characterized as follows: the mechanism is complex, changeable, and uncertain; the process is multivariable and strongly coupled; and the system is nonlinear, hysteretic, and time-varying. Neural networks can learn effectively from corresponding samples, do not rely on a fixed mechanism, can approximate arbitrary nonlinear mappings, and can be used to establish system input-output models. The temperature system of the rectification tower has a complicated structure and high accuracy requirements; a neural network is used to control the temperature of the system, which satisfies the requirements of the production process. This article briefly describes the basic concepts and research progress of neural networks and distillation tower temperature control, and systematically summarizes the application of neural networks in distillation tower control, aiming to provide a reference for the development of related industries.
Chunli Li, Chunyu Wang, 2021
Distillation is a unit operation with multiple input and output parameters, characterized by many variables, coupling between input parameters, and nonlinear relationships with output parameters. It is therefore very difficult to control and optimize a distillation column with traditional methods. An artificial neural network (ANN) uses the interconnection of a large number of neurons to establish the functional relationship between inputs and outputs, thereby approximating any nonlinear mapping. Applied to the control and optimization of distillation towers, ANNs offer short response times, good dynamic performance, strong robustness, and a strong ability to adapt to changes in the control environment. This article mainly introduces the research progress of ANNs and their application in the modeling, control, and optimization of distillation towers.
One hidden yet important issue in developing neural network potentials (NNPs) is the choice of training algorithm. Here we compare the performance of two popular training algorithms, the adaptive moment estimation algorithm (Adam) and the extended Kalman filter algorithm (EKF), using the Behler-Parrinello neural network (BPNN) and two publicly accessible datasets of liquid water. We find that NNPs trained with EKF are more transferable and less sensitive to the value of the learning rate than those trained with Adam. In both cases, error metrics on the test set do not always serve as a good indicator of the actual performance of NNPs. Instead, we show that their performance correlates well with a Fisher-information-based similarity measure.
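The EKF treats the network weights as the state of a filter and updates them from each new measurement, in contrast to Adam's gradient steps. The sketch below shows this update for a tiny one-hidden-layer network on a toy 1-D regression; the task, network size, and noise settings are illustrative assumptions, not the paper's BPNN or water datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.sin(3 * x)                      # toy regression target

n_hidden = 8
n_w = 3 * n_hidden + 1                 # W1, b1, W2 (each n_hidden) + b2

def predict(w, xi):
    W1, b1 = w[:n_hidden], w[n_hidden:2*n_hidden]
    W2, b2 = w[2*n_hidden:3*n_hidden], w[-1]
    return W2 @ np.tanh(W1 * xi + b1) + b2

w = rng.normal(scale=0.5, size=n_w)    # filter state: all weights
P = np.eye(n_w) * 10.0                 # weight-error covariance
R = 0.1                                # assumed measurement-noise variance

for epoch in range(20):
    for xi, yi in zip(x, y):
        # Jacobian of the scalar output w.r.t. the weights (finite differences)
        f0, eps = predict(w, xi), 1e-6
        H = np.array([(predict(w + eps * np.eye(n_w)[j], xi) - f0) / eps
                      for j in range(n_w)])
        S = H @ P @ H + R              # innovation variance
        K = P @ H / S                  # Kalman gain
        w = w + K * (yi - f0)          # state (weight) update
        P = P - np.outer(K, H @ P)     # covariance update

rmse = np.sqrt(np.mean([(predict(w, xi) - yi) ** 2 for xi, yi in zip(x, y)]))
print("final RMSE:", rmse)
```

Note that each EKF step uses the full covariance P over all weights, which is why it scales poorly but can converge in far fewer passes than first-order optimizers on small networks.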
In real applications, devices with different computational resources need networks of different depths (e.g., ResNet-18/34/50) with high accuracy. Existing methods usually either design multiple networks and train them independently, or construct depth-level/width-level dynamic neural networks for which the accuracy of each sub-net is hard to guarantee. In this article, we propose an elegant Depth-Level Dynamic Neural Network (DDNN) that integrates sub-nets of different depths and similar architectures. To improve the generalization of the sub-nets, we design the Embedded-Knowledge-Distillation (EKD) training mechanism for the DDNN, which transfers knowledge from the teacher (full-net) to multiple students (sub-nets). Specifically, the Kullback-Leibler (KL) divergence is introduced to constrain the consistency of the posterior class probabilities between the full-net and the sub-nets, and self-attention distillation on same-resolution features at different depths is used to drive richer feature representations in the sub-nets. Thus, we can obtain multiple high-accuracy sub-nets simultaneously in a DDNN via online knowledge distillation in each training iteration without extra computation cost. Extensive experiments on the CIFAR-10/100 and ImageNet datasets demonstrate that sub-nets in a DDNN trained with EKD achieve better performance than individually trained networks while preserving the original performance of the full-net.
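A minimal sketch of the KL-divergence consistency term this abstract describes, applied between a full-net's and one sub-net's logits on a shared batch; the temperature value and the random toy logits are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened class posteriors."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Toy usage: a batch of 8 samples over 10 classes.
full_net_logits = torch.randn(8, 10)                      # teacher (full-net)
sub_net_logits = torch.randn(8, 10, requires_grad=True)   # student (sub-net)
loss = kd_kl_loss(sub_net_logits, full_net_logits.detach())
loss.backward()
print(loss.item())
```

In an online-distillation setting like the one described, both sets of logits come from the same forward pass of the shared backbone, so the extra cost of the loss itself is negligible.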
The rise of machine-learning-as-a-service and model-sharing platforms has raised the need for traitor-tracing of models and proof of authorship. Watermarking is the main component of existing methods for protecting the copyright of models. In this paper, we show that distillation, a widely used transformation technique, is a quite effective attack for removing watermarks embedded by existing algorithms. The fragility stems from the fact that distillation does not retain the watermark, which is redundant and independent of the main learning task. We design ingrain in response to this destructive distillation. It regularizes a neural network with an ingrainer model, which contains the watermark, and forces the network to also represent the knowledge of the ingrainer. Our extensive evaluations show that ingrain is more robust to the distillation attack, and that its robustness against other widely used transformation techniques is comparable to that of existing methods.
