
Application of Genetic Algorithm for More Efficient Multi-Layer Thickness Optimization in Solar Cells

Published by: Gwenaelle Cunha Sergio
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Thin-film solar cells are predominantly designed as a stacked structure. Optimizing the layer thicknesses in this stack is crucial to extracting the best efficiency from the solar cell. The method commonly used in optimization simulations, such as optimizing the thicknesses of the optical spacer layers, is the parameter sweep. Our simulation study shows that a meta-heuristic method such as the genetic algorithm gives a significantly faster and more accurate search than the brute-force parameter sweep in both single- and multi-layer optimization. While other sweep methods can also outperform the brute-force method, they do not consistently reach 100% accuracy in the optimized results the way our genetic algorithm does. We used a well-studied P3HT-based structure to test our algorithm. In the best case observed, our method used 60.84% fewer simulations than the brute-force method.
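As a rough illustration of the approach described above, the sketch below runs a simple genetic algorithm over a discrete thickness grid for each layer, the same grid a brute-force sweep would enumerate exhaustively. The function `simulate_efficiency` is a hypothetical placeholder for the actual optical/electrical device simulation of the P3HT stack; the GA parameters and the memoization counter are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a GA searching layer thicknesses on a discrete grid.
# simulate_efficiency() is a hypothetical placeholder for a real solar-cell
# simulation (e.g. a transfer-matrix calculation for a P3HT-based stack).
import random

def simulate_efficiency(thicknesses):
    # Placeholder objective with a known optimum near 80 nm per layer.
    return -sum((t - 80) ** 2 for t in thicknesses)

def genetic_search(grids, pop_size=20, generations=30,
                   crossover_rate=0.9, mutation_rate=0.1, seed=0):
    """grids: one list of candidate thicknesses (nm) per layer, i.e. the same
    grid a brute-force sweep would enumerate exhaustively."""
    rng = random.Random(seed)
    cache = {}  # memoize genomes so repeat evaluations cost no extra simulation

    def fitness(genome):
        if genome not in cache:
            cache[genome] = simulate_efficiency(genome)
        return cache[genome]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    pop = [tuple(rng.choice(g) for g in grids) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:   # uniform crossover
                child = tuple(rng.choice(pair) for pair in zip(p1, p2))
            else:
                child = p1
            child = tuple(rng.choice(g) if rng.random() < mutation_rate else t
                          for t, g in zip(child, grids))  # per-layer mutation
            children.append(child)
        pop = children
    best = max(cache, key=cache.get)
    return best, cache[best], len(cache)  # len(cache) = simulations actually run

# Usage: three layers, each swept from 10 nm to 200 nm in 5 nm steps.
grids = [list(range(10, 201, 5))] * 3
best, eff, n_sims = genetic_search(grids)
print(best, eff, n_sims, "simulations vs", len(grids[0]) ** 3, "for brute force")
```

Because evaluated genomes are cached, the returned simulation count can be compared directly against the size of the full grid product, which is how a saving such as the reported 60.84% would be measured.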




Read also

Hyperparameter optimization is a challenging problem in developing deep neural networks. Deciding which layers to transfer and which to train is a major task in designing transfer convolutional neural networks (CNNs). Conventional transfer CNN models are usually designed manually, based on intuition. In this paper, a genetic algorithm is applied to select the trainable layers of the transfer model. The filter criterion is constructed from the accuracy and the count of trainable layers. The results show that the method is competent at this task: the system converges to a precision of 97% on the Cats and Dogs classification dataset in no more than 15 generations. Moreover, backward inference according to the results of the genetic algorithm shows that our method can capture the gradient features in the network layers, which helps in understanding transfer AI models.
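The abstract above describes the selection step at a high level; the sketch below shows one way such a chromosome could be encoded, with one bit per backbone layer (1 = trainable, 0 = frozen) and a fitness that trades accuracy against the count of trainable layers. `train_and_evaluate` is a hypothetical stand-in for fine-tuning and validating the transfer model; the layer count, selection scheme, and weighting are illustrative assumptions, not the paper's setup.

```python
# Sketch of a GA chromosome for choosing trainable layers in a transfer CNN.
import random

N_LAYERS = 16                        # e.g. convolutional blocks of a backbone

def train_and_evaluate(mask):
    # Hypothetical stand-in: fine-tune with the masked layers unfrozen and
    # return validation accuracy (here a toy formula for illustration only).
    return 0.7 + 0.02 * sum(mask[-4:]) - 0.005 * sum(mask)

def fitness(mask, alpha=0.01):
    # Filter criterion combining accuracy and the count of trainable layers.
    return train_and_evaluate(mask) - alpha * sum(mask)

def evolve(pop_size=12, generations=15, rng=random.Random(1)):
    pop = [[rng.randint(0, 1) for _ in range(N_LAYERS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, N_LAYERS)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(N_LAYERS)] ^= 1  # single bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # 1 = layer is trainable, 0 = layer stays frozen
```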
W. B. Langdon (2020)
C++ code snippets from a multi-core, parallel, memory-efficient crossover for genetic programming are given. They may be adapted for separate-generation evolutionary algorithms where large chromosomes or limited RAM require keeping no more than M + 2 × nthreads individuals active simultaneously.
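A Python sketch (rather than the paper's C++) of the stated memory bound is given below: the parent population holds M chromosomes and each worker thread lazily allocates exactly two reusable child buffers, so at most M + 2 × nthreads chromosomes are live at once. The population size, gene count, and evaluation stand-in are assumptions for illustration.

```python
# Sketch of the M + 2*nthreads memory bound for parallel one-point crossover.
import random, threading
from concurrent.futures import ThreadPoolExecutor

M, GENES, NTHREADS = 100, 10_000, 4
population = [bytearray(random.getrandbits(8) for _ in range(GENES))
              for _ in range(M)]               # the M parent chromosomes
tls = threading.local()                        # per-thread scratch storage

def crossover_pair(i, j):
    # Each worker allocates exactly two reusable child buffers, once.
    if not hasattr(tls, "buf"):
        tls.buf = (bytearray(GENES), bytearray(GENES))
        tls.rng = random.Random()
    c1, c2 = tls.buf
    cut = tls.rng.randrange(1, GENES)          # one-point crossover, in place
    c1[:cut] = population[i][:cut]; c1[cut:] = population[j][cut:]
    c2[:cut] = population[j][:cut]; c2[cut:] = population[i][cut:]
    return sum(c1) - sum(c2)   # stand-in for evaluating the children in place

with ThreadPoolExecutor(max_workers=NTHREADS) as pool:
    pairs = [(random.randrange(M), random.randrange(M)) for _ in range(M // 2)]
    results = list(pool.map(lambda p: crossover_pair(*p), pairs))
```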
Ke Li, Renzhi Chen, Guangtao Fu (2017)
When solving constrained multi-objective optimization problems, an important issue is how to balance convergence, diversity and feasibility simultaneously. To address this issue, this paper proposes a parameter-free constraint-handling technique, the two-archive evolutionary algorithm, for constrained multi-objective optimization. It maintains two co-evolving populations simultaneously: one, denoted the convergence archive, is the driving force that pushes the population toward the Pareto front; the other, denoted the diversity archive, mainly maintains the population diversity. In particular, to complement the behavior of the convergence archive and provide as much diversified information as possible, the diversity archive explores areas under-exploited by the convergence archive, including the infeasible regions. To leverage the complementary effects of both archives, we develop a restricted mating selection mechanism that adaptively chooses appropriate mating parents from them according to their evolution status. Comprehensive experiments on a series of benchmark problems and a real-world case study demonstrate the competitiveness of our proposed algorithm compared to five state-of-the-art constrained evolutionary multi-objective optimizers.
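The following is a heavily simplified, single-variable sketch of the two-archive idea described above, not the authors' algorithm: a convergence archive updated with a feasibility-first non-dominance rule, a diversity archive that keeps solutions (including infeasible ones) far from what the convergence archive covers, and a restricted mating step that mixes in the diversity archive when the convergence archive is short on feasible solutions. The toy objectives, constraint, and thresholds are all assumptions.

```python
import random

def objectives(x):           # toy bi-objective problem on x in [0, 1] (minimize)
    return (x, (1 - x) ** 2)

def violation(x):            # toy constraint x >= 0.2; amount of violation
    return max(0.0, 0.2 - x)

def dominates(a, b):         # Pareto dominance for minimization
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

def update_ca(ca, x, cap=20):
    """Convergence archive: feasibility first, then non-dominance."""
    if violation(x) > 0 and any(violation(c) == 0 for c in ca):
        return ca
    ca = [c for c in ca if not dominates(objectives(x), objectives(c))]
    if not any(dominates(objectives(c), objectives(x)) for c in ca):
        ca.append(x)
    return ca[:cap]           # crude truncation stands in for density pruning

def update_da(da, ca, x, cap=20):
    """Diversity archive: keep x if it lies far from everything the CA covers,
    ignoring constraints, so under-exploited (even infeasible) regions survive."""
    if all(abs(x - c) > 0.05 for c in ca):
        da.append(x)
    return da[-cap:]

def restricted_mating(ca, da, rng):
    """Draw from the CA when it is largely feasible, otherwise mix in the DA."""
    feas = sum(violation(c) == 0 for c in ca) / max(len(ca), 1)
    pool = ca if rng.random() < feas else ca + da
    return rng.choice(pool), rng.choice(pool)

rng = random.Random(0)
ca, da = [rng.random()], []
for _ in range(500):
    p1, p2 = restricted_mating(ca, da, rng)
    child = min(1.0, max(0.0, (p1 + p2) / 2 + rng.gauss(0, 0.1)))  # blend + mutation
    ca = update_ca(ca, child)
    da = update_da(da, ca, child)
print(sorted(round(c, 3) for c in ca))   # approximation of the constrained front
```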
Standard neural networks trained by general back-propagation learning with the delta rule or gradient descent have several serious drawbacks, such as poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised back-propagation learning algorithm that applies a trust-region method of unconstrained optimization to the error objective function using a quasi-Newton method. This optimization leads to a more accurate weight-update scheme for minimizing the learning error during the training phase of a multi-layer perceptron [13][14][15]. An augmented line search is used to find points that satisfy the Wolfe conditions. This hybrid back-propagation algorithm has strong global convergence properties and is robust and efficient in practice.
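As a rough illustration of replacing plain gradient-descent back-propagation with a quasi-Newton update, the sketch below trains a small multi-layer perceptron by handing its flattened weights to SciPy's BFGS optimizer, which performs a Wolfe-condition line search internally. This is not the paper's trust-region algorithm; the network size, data, and squared-error loss are assumptions.

```python
# Train a tiny MLP by quasi-Newton optimization of the flattened weight vector.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)       # XOR-like target
H = 8                                           # hidden units

def unpack(w):
    # Split the flat parameter vector into layer weights and biases.
    i = 0
    W1 = w[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H]; i += H
    b2 = w[i]
    return W1, b1, W2, b2

def loss(w):
    # Squared learning error of a 2-H-1 perceptron (the objective to minimize).
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((p - y) ** 2)

w0 = rng.normal(0, 0.5, 2 * H + H + H + 1)
res = minimize(loss, w0, method="BFGS")         # quasi-Newton, Wolfe line search
print("final training loss:", res.fun)
```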
Noe Casas (2015)
In this article we provide a comprehensive review of the different evolutionary algorithm techniques used to address multimodal optimization problems, classifying them according to the nature of their approach. On the one hand, there are algorithms that address the issue of early convergence to a local optimum by partitioning the individuals of the population into groups and limiting their interaction, so that each group evolves with a high degree of independence. On the other hand, other approaches directly address the lack of genetic diversity of the population by introducing elements into the evolutionary dynamics that encourage new niches of the genotypic space to be explored. Finally, we study multi-objective optimization genetic algorithms, which handle situations where multiple criteria have to be satisfied with no penalty for any of them. A very rich literature has arisen over the years on these topics, and we aim to offer an overview of the most important techniques in each branch of the field.