
Nanoparticle Size Distribution Quantification: Results of a SAXS Inter-Laboratory Comparison

Added by Brian Pauw
Publication date: 2017
Fields: Physics
Language: English





We present the first world-wide inter-laboratory comparison of small-angle X-ray scattering (SAXS) for nanoparticle sizing. The measurands in this comparison are the mean particle radius, the width of the size distribution and the particle concentration. The investigated sample consists of dispersed silver nanoparticles, surrounded by a stabilizing polymeric shell of poly(acrylic acid). The silver cores dominate the X-ray scattering pattern, leading to the determination of their radii size distribution using: i) Glatter's Indirect Fourier Transformation method, ii) classical model fitting using SASfit and iii) a Monte Carlo fitting approach using McSAS. The application of these three methods to the collected datasets produces consistent mean number- and volume-weighted core radii of R$_n$ = 2.76 nm and R$_v$ = 3.20 nm, respectively. The corresponding widths of the log-normal radii distribution of the particles were $\sigma_n$ = 0.65 nm and $\sigma_v$ = 0.71 nm. The particle concentration determined using this method was 3.00 $\pm$ 0.38 g/L (4.20 $\pm$ 0.73 $\times$ 10$^{-6}$ mol/L). We show that the results are slightly biased by the choice of data evaluation procedure, but that no substantial differences were found between the results from data measured on a very wide range of instruments: the participating laboratories, at synchrotron SAXS beamlines and on commercial and home-made instruments alike, were all able to provide data of high quality. Our results demonstrate that SAXS is a qualified method for revealing particle size distributions in the sub-20 nm region (at least), out of reach for most other analytical methods.
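The classical model-fitting route mentioned above (as in SASfit) amounts to a least-squares fit of a sphere form factor, averaged over a log-normal radius distribution, to the measured intensity. A minimal sketch of that idea follows; the integration grid, parametrisation (log-normal location `mu` and shape `sigma`, not the $\sigma_n$ quoted in the abstract) and synthetic data are assumptions for illustration, not the SASfit implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def sphere_form_factor(q, R):
    """Squared sphere form-factor amplitude P(q, R) on a q x R grid."""
    qR = np.outer(q, R)
    return (3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3) ** 2

def saxs_intensity(q, scale, mu, sigma):
    """Intensity of dilute spheres with a log-normal number-weighted
    radius distribution, integrated numerically over R (sketch)."""
    R = np.linspace(0.5, 10.0, 200)                      # nm, integration grid
    pdf = (np.exp(-(np.log(R) - mu) ** 2 / (2 * sigma ** 2))
           / (R * sigma * np.sqrt(2 * np.pi)))
    w = pdf * R ** 6                                     # spheres scatter ~ volume^2
    return scale * sphere_form_factor(q, R) @ w / w.sum()

# Synthetic "measured" curve and a classical least-squares fit:
q = np.linspace(0.05, 3.0, 120)                          # 1/nm
I_obs = saxs_intensity(q, 1.0, np.log(2.76), 0.23)
popt, _ = curve_fit(saxs_intensity, q, I_obs, p0=(0.8, 1.0, 0.3))
```

On noiseless synthetic data the fit recovers the generating parameters; with real data, resolution smearing and background terms would also have to be modelled.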





Monte Carlo (MC) methods, based on random updates and the trial-and-error principle, are well suited to retrieving particle size distributions from small-angle scattering patterns of dilute solutions of scatterers. The sensitivity of size determination methods in relation to the range of scattering vectors covered by the data is discussed. Improvements are presented to existing MC methods in which the particle shape is assumed to be known. The problems with the ambiguous convergence criteria of the MC methods are discussed, and a convergence criterion is proposed which also allows the determination of uncertainties on the retrieved size distributions.
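The trial-and-error principle can be sketched as follows: a population of candidate sphere radii is evolved by randomly replacing one radius at a time, keeping only moves that lower the misfit. This toy version (fixed grid limits, least-squares scaling, simple accept/reject rule) is an illustration of the principle, not the McSAS algorithm itself:

```python
import numpy as np

def mc_size_fit(q, I_obs, n_spheres=100, n_iter=2000, r_range=(0.5, 8.0), seed=0):
    """Toy MC size-distribution retrieval: propose a random replacement
    of one sphere radius per iteration, accept only if chi^2 improves."""
    rng = np.random.default_rng(seed)

    def P(q, R):                                   # squared sphere form factor
        qR = np.outer(q, R)
        return (3 * (np.sin(qR) - qR * np.cos(qR)) / qR**3) ** 2

    def model(R):
        I = (P(q, R) * R**6).sum(axis=1)           # volume^2 weighting
        return I * (I_obs @ I) / (I @ I)           # least-squares scale factor

    R = rng.uniform(*r_range, n_spheres)
    chi2 = np.sum((model(R) - I_obs) ** 2)
    for _ in range(n_iter):
        i = rng.integers(n_spheres)
        old = R[i]
        R[i] = rng.uniform(*r_range)               # trial move
        new_chi2 = np.sum((model(R) - I_obs) ** 2)
        if new_chi2 < chi2:
            chi2 = new_chi2                        # accept improvement
        else:
            R[i] = old                             # reject, restore
    return R, chi2

# Demo: synthetic data from monodisperse 3 nm spheres.
q = np.linspace(0.1, 2.0, 50)                      # 1/nm
qR = np.outer(q, [3.0])
I_obs = ((3 * (np.sin(qR) - qR * np.cos(qR)) / qR**3) ** 2 * 3.0**6).sum(axis=1)
R_fit, chi2 = mc_size_fit(q, I_obs)
```

A production method would, as the abstract notes, also need a principled convergence criterion and an uncertainty estimate on the retrieved distribution.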
The in situ measurement of the particle size distribution (PSD) of a suspension of particles presents huge challenges. Various effects from the process could introduce noise to the data from which the PSD is estimated. This in turn could lead to the occurrence of artificial peaks in the estimated PSD. Limitations in the models used in the PSD estimation could also lead to the occurrence of these artificial peaks. This could pose a significant challenge to in situ monitoring of particulate processes, as there will be no independent estimate of the PSD to allow a discrimination of the artificial peaks to be carried out. Here, we present an algorithm which is capable of discriminating between artificial and true peaks in PSD estimates based on fusion of multiple data streams. In this case, chord length distribution and laser diffraction data have been used. The data fusion is done by means of multi-objective optimisation using the weighted sum approach. The algorithm is applied to two different particle suspensions. The estimated PSDs from the algorithm are compared with offline estimates of PSD from the Malvern Mastersizer and Morphologi G3. The results show that the algorithm is capable of eliminating an artificial peak in a PSD estimate when this artificial peak is sufficiently displaced from the true peak. However, when the artificial peak is too close to the true peak, it is only suppressed but not completely eliminated.
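The weighted-sum approach described above scalarises the two per-instrument misfits into one objective, $J(x) = w\,J_a(x) + (1-w)\,J_b(x)$. A minimal sketch, with hypothetical placeholder misfit functions standing in for the chord-length-distribution and laser-diffraction models:

```python
from scipy.optimize import minimize

def fuse_weighted_sum(misfit_a, misfit_b, x0, w=0.5):
    """Weighted-sum scalarisation of a two-objective PSD estimation
    problem: minimise w*J_a(x) + (1-w)*J_b(x) over the parameters x.
    misfit_a/misfit_b are hypothetical stand-ins for the CLD and
    laser-diffraction misfit terms, not the authors' models."""
    return minimize(lambda x: w * misfit_a(x) + (1 - w) * misfit_b(x),
                    x0, method="Nelder-Mead")

# Demo with two toy quadratic misfits whose individual optima disagree;
# the fused estimate lands between them, weighted by w:
res = fuse_weighted_sum(lambda x: (x[0] - 2.0) ** 2,
                        lambda x: (x[0] - 4.0) ** 2,
                        x0=[0.0])
```

In the paper's setting, `x` would parametrise the PSD, and the weight `w` controls how strongly each data stream can suppress an artificial peak indicated by only one instrument.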
In recent years, researchers have realized the difficulties of fitting power-law distributions properly. These difficulties are greater in Zipf's systems, due to the discreteness of the variables and to the existence of two representations for these systems, i.e., t
Recent studies have shown that a system composed of several randomly interdependent networks is extremely vulnerable to random failure. However, real interdependent networks are usually not randomly interdependent; rather, a pair of dependent nodes is coupled according to some regularity, which we coin inter-similarity. For example, we study a system composed of an interdependent world-wide port network and a world-wide airport network and show that well-connected ports tend to couple with well-connected airports. We introduce two quantities for measuring the level of inter-similarity between networks: (i) the inter degree-degree correlation (IDDC) and (ii) the inter-clustering coefficient (ICC). We then show, both by simulation models and by analyzing the port-airport system, that as the networks become more inter-similar the system becomes significantly more robust to random failure.
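The IDDC can be sketched as the Pearson correlation between the degree of each node in one network and the degree of the node it depends on in the other. This is a minimal reading of the quantity described above, not necessarily the authors' exact estimator:

```python
import numpy as np

def iddc(deg_a, deg_b, coupling):
    """Inter degree-degree correlation: Pearson correlation between
    the degree of node i in network A and the degree of the B-node
    it is coupled to, where coupling[i] gives that B-node."""
    ka = np.asarray(deg_a, dtype=float)
    kb = np.asarray(deg_b, dtype=float)[np.asarray(coupling)]
    return np.corrcoef(ka, kb)[0, 1]

# Hubs coupled to hubs (ports to busy airports) -> IDDC near +1:
r_similar = iddc([1, 2, 3, 4], [1, 2, 3, 4], coupling=[0, 1, 2, 3])
# Hubs coupled to leaves -> IDDC near -1:
r_dissimilar = iddc([1, 2, 3, 4], [1, 2, 3, 4], coupling=[3, 2, 1, 0])
```

Positive IDDC indicates the inter-similar coupling that, per the abstract, makes the interdependent system more robust to random failure.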
Deviations from Brownian motion leading to anomalous diffusion are ubiquitously found in transport dynamics, playing a crucial role in phenomena from quantum physics to life sciences. The detection and characterization of anomalous diffusion from the measurement of an individual trajectory are challenging tasks, which traditionally rely on calculating the mean squared displacement of the trajectory. However, this approach breaks down for cases of important practical interest, e.g., short or noisy trajectories, ensembles of heterogeneous trajectories, or non-ergodic processes. Recently, several new approaches have been proposed, mostly building on the ongoing machine-learning revolution. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition, the Anomalous Diffusion challenge (AnDi). Participating teams independently applied their own algorithms to a commonly-defined dataset including diverse conditions. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, providing practical advice for users and a benchmark for developers.
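The traditional diagnostic mentioned above, the mean squared displacement, can be sketched in a few lines: compute the time-averaged MSD over a range of lags and estimate the anomalous exponent $\alpha$ in MSD $\sim t^\alpha$ from a log-log fit. The lag cutoff and fitting choices below are illustrative assumptions:

```python
import numpy as np

def msd(traj, max_lag=100):
    """Time-averaged mean squared displacement of a 1-D trajectory,
    the classical diagnostic that the text notes breaks down for
    short, noisy, or non-ergodic trajectories."""
    traj = np.asarray(traj, dtype=float)
    lags = np.arange(1, min(max_lag, len(traj) // 4) + 1)
    return lags, np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2)
                           for lag in lags])

def anomalous_exponent(traj):
    """Estimate alpha in MSD ~ t^alpha via a log-log linear fit."""
    lags, m = msd(traj)
    alpha, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return alpha

# Ballistic motion x(t) = v*t has MSD proportional to t^2, i.e. alpha = 2:
alpha_ballistic = anomalous_exponent(np.arange(1000) * 0.5)
```

The AnDi challenge methods aim to do better than this estimator precisely in the regimes (short or heterogeneous trajectories) where the log-log fit becomes unreliable.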
