
SHARP: a distributed, GPU-based ptychographic solver

Published by: Stefano Marchesini
Publication date: 2016
Research field: Physics
Paper language: English





Ever-brighter light sources, fast parallel detectors, and advances in phase retrieval methods have made ptychography a practical and popular imaging technique. Compared to previous techniques, ptychography provides superior robustness and resolution at the expense of more advanced and time-consuming data analysis. By taking advantage of massively parallel architectures, high-throughput processing can expedite this analysis and provide microscopists with immediate feedback. These advances allow real-time imaging at wavelength-limited resolution, coupled with a large field of view. Here, we introduce a set of algorithmic and computational methodologies used at the Advanced Light Source and other DOE light sources, packaged as a CUDA-based software environment named SHARP (http://camera.lbl.gov/sharp), aimed at providing state-of-the-art high-throughput ptychography reconstructions for the coming era of diffraction-limited light sources.
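To make the reconstruction step concrete, below is a minimal NumPy sketch of a single ePIE-style object update, one common ptychographic phase-retrieval step. This is illustrative only: it is not SHARP's algorithm or API (SHARP is a distributed CUDA environment), and the function name, array layout, and step size alpha are assumptions.

    # A minimal sketch of one ePIE-style ptychographic update (not SHARP's code).
    import numpy as np

    def epie_update(obj, probe, pos, measured_amp, alpha=1.0):
        """One object update at a single scan position.

        obj          : complex 2-D object estimate
        probe        : complex 2-D probe (illumination), smaller than obj
        pos          : (row, col) top-left corner of the scanned patch
        measured_amp : measured diffraction amplitude, sqrt(intensity)
        """
        r, c = pos
        h, w = probe.shape
        patch = obj[r:r+h, c:c+w]

        # Forward model: exit wave and its far-field diffraction pattern.
        exit_wave = probe * patch
        far_field = np.fft.fft2(exit_wave)

        # Fourier-magnitude constraint: keep phases, impose measured amplitudes.
        far_field = measured_amp * np.exp(1j * np.angle(far_field))
        revised = np.fft.ifft2(far_field)

        # ePIE object update, scaled by the probe's peak intensity.
        step = alpha * np.conj(probe) / np.max(np.abs(probe)) ** 2
        obj[r:r+h, c:c+w] = patch + step * (revised - exit_wave)
        return obj

Since each diffraction pattern constrains only one patch of the object, updates at well-separated scan positions are independent, which is what makes the GPU parallelization profitable.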




Read also

Xuefeng Ding (2018)
GooStats is a software framework that provides a flexible environment and common tools to implement multi-variate statistical analysis. The framework is built upon the CERN ROOT, MINUIT and GooFit packages. Running a multi-variate analysis in parallel on graphics processing units yields a huge boost in performance and opens new possibilities. The design and benchmarks of GooStats are presented in this article, along with an illustration of its application to statistical problems.
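The parallelism such frameworks exploit is that each event contributes an independent term to the likelihood. The sketch below shows that structure with a plain NumPy stand-in for the GPU; the Gaussian model, function name, and parameter values are illustrative assumptions, not GooStats's API.

    # Per-event likelihood terms are independent, so the NLL sum is one
    # vectorized (or GPU) pass. Toy Gaussian model for illustration only.
    import numpy as np

    def nll(params, events):
        """Negative log-likelihood of a Gaussian model over all events."""
        mu, sigma = params
        terms = 0.5 * ((events - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))
        return terms.sum()  # one independent term per event

    rng = np.random.default_rng(0)
    events = rng.normal(loc=2.5, scale=0.8, size=100_000)
    print(nll((2.5, 0.8), events))  # evaluated at the true parameters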
Oscillation probability calculations are becoming increasingly CPU-intensive in modern neutrino oscillation analyses. The independence of reweighting individual events in a Monte Carlo sample lends itself to parallel implementation on a Graphics Processing Unit. The library Prob3++ was ported to the GPU using the CUDA C API, allowing for large-scale parallelized calculations of neutrino oscillation probabilities through matter of constant density and decreasing the execution time by a factor of 75 compared to performance on a single CPU.
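The per-event independence is easy to see in a simplified setting: each event's weight depends only on its own baseline and energy. The sketch below uses the standard two-flavour vacuum formula in NumPy, not the constant-density matter calculation that the Prob3++ GPU port performs; the oscillation parameter values are illustrative.

    # Two-flavour vacuum oscillation probability, vectorized over events.
    import numpy as np

    def p_numu_to_nue(E_GeV, L_km, sin2_2theta=0.085, dm2_eV2=2.5e-3):
        """Appearance probability P = sin^2(2*theta) * sin^2(1.267 dm^2 L / E)."""
        return sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

    energies = np.random.default_rng(1).uniform(0.5, 5.0, size=1_000_000)  # GeV
    weights = p_numu_to_nue(energies, L_km=295.0)  # one probability per event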
We present a convex-concave reformulation of the reversible Markov chain estimation problem and outline an efficient numerical scheme for the solution of the resulting problem based on a primal-dual interior point method for monotone variational inequalities. Extensions to situations in which information about the stationary vector is available can also be solved via the convex-concave reformulation. The method can be generalized and applied to the discrete transition matrix reweighting analysis method to perform inference from independent chains with specified couplings between the stationary probabilities. The proposed approach offers a significant speed-up compared to a fixed-point iteration for a number of relevant applications.
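For context, below is a sketch of the baseline the abstract compares against: the standard fixed-point iteration for the reversible maximum-likelihood transition matrix given a count matrix C. The convex-concave interior-point scheme replaces this loop; the code here is a generic textbook iteration, not the paper's method, and the variable names and toy counts are assumptions.

    # Fixed-point iteration for the reversible MLE transition matrix.
    import numpy as np

    def reversible_mle(C, n_iter=1000):
        """Estimate a detailed-balance transition matrix from counts C."""
        C = np.asarray(C, dtype=float)
        c_i = C.sum(axis=1)            # total outgoing counts per state
        X = C + C.T                    # symmetric initial guess
        for _ in range(n_iter):
            x_i = X.sum(axis=1)
            denom = (c_i / x_i)[:, None] + (c_i / x_i)[None, :]
            X = (C + C.T) / denom      # update keeps X symmetric (reversible)
        return X / X.sum(axis=1, keepdims=True)  # row-stochastic T

    C = np.array([[90, 10, 0], [8, 80, 12], [0, 15, 85]])  # toy count matrix
    T = reversible_mle(C)

Because X stays symmetric at every step, the resulting T automatically satisfies detailed balance with stationary weights proportional to the row sums of X.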
Ian Fisk (2010)
In this presentation, the experiences of the LHC experiments using grid computing are described, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first six months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.
Keywords in scientific articles have found their significance in information filtering and classification. In this article, we empirically investigate the statistical characteristics and evolutionary properties of keywords in a prominent journal, namely the Proceedings of the National Academy of Sciences of the United States of America (PNAS), including frequency distribution, temporal scaling behavior, and decay factor. The empirical results indicate that keyword frequency in PNAS approximately follows a Zipf's law with exponent 0.86. In addition, there is a power-law correlation between the cumulative number of distinct keywords and the cumulative number of keyword occurrences. Extensive empirical analysis of data from several other journals is also presented, with the decaying trends of the most popular keywords monitored. Interestingly, top journals from various subjects share a very similar decaying tendency, while journals with low impact factors exhibit completely different behavior. These empirical characteristics may shed some light on the in-depth understanding of semantic evolutionary behaviors. In addition, the analysis of keyword-based systems is helpful for the design of corresponding recommender systems.
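A Zipf exponent like the 0.86 reported above is typically estimated by ranking keyword frequencies and fitting a line to log(frequency) versus log(rank). The sketch below shows that procedure on made-up counts; it is not the paper's analysis pipeline or data.

    # Estimating a Zipf exponent from ranked frequencies (toy data).
    import numpy as np

    counts = np.array([512, 301, 250, 180, 122, 96, 70, 55, 41, 30])
    ranks = np.arange(1, len(counts) + 1)

    # Zipf's law: frequency ~ rank^(-s), i.e. log f = -s * log r + const.
    slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    print(f"estimated Zipf exponent s = {-slope:.2f}")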