
Fusion of laser diffraction and chord length distribution data for estimation of particle size distribution using multi-objective optimisation

Publication date: 2018
Field: Physics
Language: English





The in situ measurement of the particle size distribution (PSD) of a suspension of particles presents significant challenges. Various process effects can introduce noise into the data from which the PSD is estimated, and this noise, as well as limitations in the models used for the estimation, can produce artificial peaks in the estimated PSD. This poses a serious problem for in situ monitoring of particulate processes, since no independent estimate of the PSD is available against which the artificial peaks could be identified. Here, we present an algorithm capable of discriminating between artificial and true peaks in PSD estimates by fusing multiple data streams; in this case, chord length distribution (CLD) and laser diffraction (LD) data are used. The data fusion is performed by multi-objective optimisation using the weighted-sum approach. The algorithm is applied to two different particle suspensions, and the estimated PSDs are compared with offline estimates of the PSD from the Malvern Mastersizer and Morphologi G3. The results show that the algorithm eliminates an artificial peak in a PSD estimate when this peak is sufficiently displaced from the true peak; when the artificial peak lies too close to the true peak, it is suppressed but not completely eliminated.
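To make the weighted-sum fusion concrete, here is a minimal sketch in Python. It assumes linear forward models `A_cld` and `A_ld` (illustrative names, not taken from the paper) that map a discretised PSD onto the CLD and LD measurements, and it solves the combined objective as a single non-negative least-squares problem:

```python
import numpy as np
from scipy.optimize import nnls

def fuse_psd(A_cld, b_cld, A_ld, b_ld, w=0.5):
    """Estimate a discretised PSD x >= 0 by minimising the weighted sum
    J(x) = w * ||A_cld x - b_cld||^2 + (1 - w) * ||A_ld x - b_ld||^2.

    A_cld, A_ld : forward-model matrices mapping PSD bins to the chord
                  length distribution and laser diffraction data.
    b_cld, b_ld : measured (normalised) CLD and LD vectors.
    w           : scalar weight trading off the two objectives.
    """
    # Stacking the two square-root-weighted residuals turns the
    # weighted-sum objective into one non-negative least-squares problem.
    A = np.vstack([np.sqrt(w) * A_cld, np.sqrt(1 - w) * A_ld])
    b = np.concatenate([np.sqrt(w) * b_cld, np.sqrt(1 - w) * b_ld])
    x, _ = nnls(A, b)
    return x
```

Sweeping `w` between 0 and 1 traces an approximation to the Pareto front of the two objectives; a peak that persists across the sweep is more plausibly real, while one that appears only near an extreme weight is a candidate artificial peak. This is a heuristic reading of the weighted-sum approach, not the paper's stated discrimination criterion.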



Related research

Application of the multi-objective particle swarm optimisation (MOPSO) algorithm to the design of water distribution systems is described. An earlier MOPSO algorithm is augmented with (a) local search, (b) a modified strategy for assigning the leader, and (c) a modified mutation scheme. For one of the benchmark problems described in the literature, the effect of each of these features on algorithm performance is demonstrated. The augmented MOPSO algorithm (called MOPSO+) is applied to five benchmark problems, and in each case it finds non-dominated solutions not reported earlier. In addition, for the purpose of comparing Pareto fronts (sets of non-dominated solutions) obtained by different algorithms, a new criterion is suggested and its usefulness is demonstrated with an example. Finally, some suggestions regarding future research directions are made.
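As background for comparing Pareto fronts, the sketch below shows the standard dominance test used to extract a set of non-dominated solutions (minimisation assumed for every objective). It illustrates the object being compared, not the new comparison criterion proposed in the paper:

```python
import numpy as np

def non_dominated(points):
    """Return the non-dominated subset of a set of objective vectors
    (all objectives minimised)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is no worse in every
        # objective and strictly better in at least one.
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Example: [4, 4] is dominated by [3, 3] and is dropped.
print(non_dominated([[2, 5], [3, 3], [4, 4], [1, 6]]))
```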
Monte-Carlo (MC) methods, based on random updates and the trial-and-error principle, are well suited to retrieving particle size distributions from small-angle scattering patterns of dilute solutions of scatterers. The sensitivity of size determination methods in relation to the range of scattering vectors covered by the data is discussed. Improvements are presented to existing MC methods in which the particle shape is assumed to be known. The problems with the ambiguous convergence criteria of MC methods are discussed, and a convergence criterion is proposed which also allows the determination of uncertainties on the resulting size distributions.
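A minimal illustration of the random-update, trial-and-error principle, assuming the particle shape is known and supplied as a single-particle intensity function (all names here are illustrative, not the authors' implementation). Bin weights of the size distribution are perturbed at random and a move is kept only if it lowers the chi-squared misfit:

```python
import numpy as np

def mc_fit(intensity_model, q, I_obs, sigma, radii, n_steps=10_000, seed=0):
    """Trial-and-error MC retrieval of size-distribution bin weights.

    intensity_model(q, r) : scattering intensity of one particle of
                            radius r at scattering vectors q (known shape,
                            e.g. a sphere form factor).
    """
    rng = np.random.default_rng(seed)
    weights = np.ones(len(radii)) / len(radii)
    basis = np.array([intensity_model(q, r) for r in radii])  # (n_r, n_q)

    def chi2(w):
        return np.sum(((w @ basis - I_obs) / sigma) ** 2)

    best = chi2(weights)
    for _ in range(n_steps):
        trial = weights.copy()
        i = rng.integers(len(radii))
        trial[i] = max(trial[i] + rng.normal(scale=0.01), 0.0)
        trial /= trial.sum()           # keep the distribution normalised
        c = chi2(trial)
        if c < best:                   # accept only improving moves
            weights, best = trial, c
    return weights, best
```

In the spirit of the convergence criterion the abstract alludes to, the loop could instead terminate once the reduced chi-squared approaches 1, and repeating independent runs gives a spread that estimates the uncertainty on the retrieved distribution.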
We present the first world-wide inter-laboratory comparison of small-angle X-ray scattering (SAXS) for nanoparticle sizing. The measurands in this comparison are the mean particle radius, the width of the size distribution and the particle concentration. The investigated sample consists of dispersed silver nanoparticles surrounded by a stabilizing polymeric shell of poly(acrylic acid). The silver cores dominate the X-ray scattering pattern, and their radius distribution is determined using: i) Glatter's Indirect Fourier Transformation method, ii) classical model fitting using SASfit and iii) a Monte Carlo fitting approach using McSAS. The application of these three methods to the collected datasets produces consistent mean number- and volume-weighted core radii of $R_n$ = 2.76 nm and $R_v$ = 3.20 nm, respectively. The corresponding widths of the log-normal radius distribution of the particles were $\sigma_n$ = 0.65 nm and $\sigma_v$ = 0.71 nm. The particle concentration determined using this method was 3.00 $\pm$ 0.38 g/L (4.20 $\pm$ 0.73 $\times$ 10$^{-6}$ mol/L). We show that the results are slightly biased by the choice of data evaluation procedure, but that no substantial differences were found between the results from data measured on a very wide range of instruments: the participating laboratories at synchrotron SAXS beamlines, commercial and home-made instruments were all able to provide data of high quality. Our results demonstrate that SAXS is a qualified method for revealing particle size distributions in the sub-20 nm region (at least), out of reach for most other analytical methods.
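The gap between the number- and volume-weighted mean radii follows directly from the $r^3$ weighting of particle volume, which shifts weight toward larger particles. A small sketch, using an illustrative log-normal width chosen for demonstration only (not the measured value):

```python
import numpy as np

def weighted_mean_radii(r):
    """Number- and volume-weighted mean radii of a particle ensemble."""
    r = np.asarray(r, dtype=float)
    r_n = r.mean()                    # number-weighted mean
    w = r ** 3                        # sphere volume is proportional to r^3
    r_v = np.sum(w * r) / np.sum(w)   # volume-weighted mean
    return r_n, r_v

# Sample a log-normal number distribution of radii (median and width
# here are illustrative) and observe r_v > r_n.
rng = np.random.default_rng(0)
radii = rng.lognormal(mean=np.log(2.76), sigma=0.23, size=100_000)
print(weighted_mean_radii(radii))
```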
In recent years, researchers have recognised the difficulty of fitting power-law distributions properly. These difficulties are greater in Zipf's systems, due to the discreteness of the variables and to the existence of two representations for these systems, i.e., t…
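The abstract is truncated in the source, but the discreteness issue it raises is well known: fitting an integer-valued power law with the continuous-variable formula biases the exponent. A minimal discreteness-aware maximum-likelihood fit (a standard zeta-normalised construction, not necessarily the authors' method):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import zeta

def fit_discrete_powerlaw(x, xmin=1):
    """Maximum-likelihood exponent of a discrete power law
    p(x) proportional to x**(-alpha) for integer x >= xmin,
    normalised by the Hurwitz zeta function."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]

    def neg_loglik(alpha):
        # -log L = alpha * sum(log x) + n * log(zeta(alpha, xmin))
        return alpha * np.log(x).sum() + len(x) * np.log(zeta(alpha, xmin))

    res = minimize_scalar(neg_loglik, bounds=(1.01, 6.0), method="bounded")
    return res.x
```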
Performance and energy are the two most important objectives for optimisation on modern parallel platforms. Recent research demonstrated the importance of workload distribution as a decision variable in the bi-objective optimisation for performance and energy on homogeneous multicore clusters. We show in this work that bi-objective optimisation for performance and energy on heterogeneous processors results in a large number of Pareto-optimal solutions (workload distributions), even in the simple case of linear performance and energy profiles. We then study the performance and energy profiles of real-life data-parallel applications and find that their shapes are non-linear, complex and non-smooth. We therefore propose an efficient and exact global optimisation algorithm, which takes as input the most general discrete performance and dynamic energy profiles of the heterogeneous processors and solves the bi-objective optimisation problem. The algorithm is also used as a building block to solve the bi-objective optimisation problem for performance and total energy. We also propose a novel methodology to build the discrete dynamic energy profiles of individual computing devices that are input to the algorithm. The methodology is based purely on system-level measurements and addresses the fundamental challenge of accurate component-level energy modelling of a hybrid data-parallel application running on a heterogeneous platform integrating CPUs and accelerators. We experimentally validate the proposed method using two data-parallel applications: matrix multiplication and 2D fast Fourier transform (2D-FFT).
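For intuition on why discrete performance and energy profiles yield many Pareto-optimal workload distributions, here is a brute-force sketch for small problem sizes (the paper proposes an efficient exact algorithm; this enumeration is only illustrative). Execution time of a distribution is taken as the slowest processor's time, and energy as the sum of dynamic energies:

```python
from itertools import product

def pareto_workloads(n, time_profiles, energy_profiles):
    """Enumerate distributions of n workload units over the processors
    and keep the Pareto-optimal (execution time, dynamic energy) pairs.

    time_profiles[p][w], energy_profiles[p][w] : measured time and
    dynamic energy of processor p running w units, for w = 0..n.
    """
    n_procs = len(time_profiles)
    candidates = []
    for split in product(range(n + 1), repeat=n_procs):
        if sum(split) != n:
            continue
        t = max(time_profiles[p][w] for p, w in enumerate(split))
        e = sum(energy_profiles[p][w] for p, w in enumerate(split))
        candidates.append((t, e, split))
    # Pareto filter: keep points not dominated in (time, energy).
    pareto = [c for c in candidates
              if not any(o[0] <= c[0] and o[1] <= c[1] and
                         (o[0] < c[0] or o[1] < c[1])
                         for o in candidates)]
    return sorted(pareto)
```

With non-linear, non-smooth measured profiles, such an enumeration typically returns many incomparable (time, energy) pairs, which is the effect the abstract describes.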
