
Accurate simulation of operating system updates in neuroimaging using Monte-Carlo arithmetic

Posted by: Ali Salari
Publication date: 2021
Language: English
Author: Ali Salari





Operating system (OS) updates introduce numerical perturbations that impact the reproducibility of computational pipelines. In neuroimaging, this has important practical implications for the validity of computational results, particularly when they are obtained on systems such as high-performance computing clusters where the experimenter does not control software updates. We present a framework to reproduce the variability induced by OS updates in controlled conditions. We hypothesize that OS updates impact computational pipelines mainly through numerical perturbations originating in mathematical libraries, which we simulate using Monte-Carlo arithmetic in a framework called fuzzy libmath (FL). We applied this methodology to pre-processing pipelines of the Human Connectome Project (HCP), a flagship open-data project in neuroimaging. We found that FL-perturbed pipelines accurately reproduce the variability induced by OS updates and that this similarity is only mildly dependent on simulation parameters. Importantly, we also found that between-subject differences were preserved in both cases, although the between-run variability was of comparable magnitude for both FL and OS perturbations. We found the numerical precision in the HCP pre-processed images to be relatively low, with fewer than 8 significant bits among the 24 available, which motivates further investigation of the numerical stability of the components in the tested pipeline. Overall, our results establish that FL accurately simulates the variability of results due to OS updates and is a practical framework to quantify numerical uncertainty in neuroimaging.
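The core idea of fuzzy libmath is to perturb the results of mathematical library calls at a chosen virtual precision and then measure how many output bits remain stable across repeated runs. The sketch below is not the fuzzy libmath implementation; it is a minimal, illustrative Python analogue in which the `mca_perturb` helper, the virtual precision `t`, and the toy `np.sin` call are assumptions made for the example, and significant bits are estimated with the common -log2(sigma/|mu|) convention used in Monte-Carlo arithmetic analyses (also an assumption, not a detail from the abstract).

```python
import numpy as np

def mca_perturb(x, t=24, rng=None):
    """Add a random perturbation at the t-th significant bit of x,
    loosely mimicking Monte-Carlo arithmetic's random rounding (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=np.float64)
    exponent = np.floor(np.log2(np.abs(x), where=x != 0, out=np.zeros_like(x)))
    noise = (rng.random(x.shape) - 0.5) * 2.0 ** (exponent - t + 1)
    return np.where(x == 0, x, x + noise)

def significant_bits(samples):
    """Estimate significant bits as -log2(std/|mean|) across perturbed runs."""
    samples = np.asarray(samples, dtype=np.float64)
    mu = samples.mean(axis=0)
    sigma = samples.std(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        s = -np.log2(np.where(mu != 0, sigma / np.abs(mu), np.inf))
    return np.clip(s, 0, 53)

# Example: perturb the input of a toy "math library" call across several runs
rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 5)
runs = np.stack([np.sin(mca_perturb(x, t=24, rng=rng)) for _ in range(10)])
print(significant_bits(runs))
```

With t=24, the injected relative noise is on the order of single-precision rounding, which is the sense in which 24 bits are "available" in the abstract; a lower count of significant bits in the outputs indicates numerical instability accumulated by the pipeline.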




Read also

Linlin Xu, Giray Okten (2014)
GPU computing has become popular in computational finance and many financial institutions are moving their CPU-based applications to the GPU platform. Since most Monte Carlo algorithms are embarrassingly parallel, they benefit greatly from parallel implementations, and consequently Monte Carlo has become a focal point in GPU computing. GPU speed-up examples reported in the literature often involve Monte Carlo algorithms, and there are software tools commercially available that help migrate Monte Carlo financial pricing models to GPU. We present a survey of Monte Carlo and randomized quasi-Monte Carlo methods, and discuss existing (quasi) Monte Carlo sequences in GPU libraries. We discuss specific features of GPU architecture relevant for developing efficient (quasi) Monte Carlo methods. We introduce a recent randomized quasi-Monte Carlo method, and compare it with some of the existing implementations on GPU, when they are used in pricing caplets in the LIBOR market model and mortgage-backed securities.
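As a hedged illustration of the Monte Carlo versus randomized quasi-Monte Carlo comparison described above, the sketch below prices a plain Black-Scholes call (deliberately simpler than the caplets and mortgage-backed securities in the paper) with pseudo-random sampling and with a scrambled Sobol sequence from scipy.stats.qmc; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc

# Illustrative Black-Scholes parameters (not taken from the paper)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n = 2 ** 14  # power of two keeps the Sobol sequence balanced

def discounted_call_payoff(u):
    """Map uniforms to discounted call payoffs via the inverse normal CDF."""
    z = norm.ppf(u)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Plain Monte Carlo with pseudo-random uniforms
rng = np.random.default_rng(0)
mc_estimate = discounted_call_payoff(rng.random(n)).mean()

# Randomized quasi-Monte Carlo with a scrambled Sobol sequence
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
rqmc_estimate = discounted_call_payoff(sobol.random(n).ravel()).mean()

print(f"MC:   {mc_estimate:.4f}")
print(f"RQMC: {rqmc_estimate:.4f}")
```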
Benjamin Jourdain (2010)
Taking advantage of the recent literature on exact simulation algorithms (Beskos, Papaspiliopoulos and Roberts) and on unbiased estimation of the expectation of certain functional integrals (Wagner, Beskos et al. and Fearnhead et al.), we apply an exact-simulation-based technique for pricing continuous arithmetic average Asian options in the Black-Scholes framework. Unlike existing Monte Carlo methods, we are no longer prone to the discretization bias resulting from the approximation of continuous-time processes through discrete sampling. Numerical results of simulation studies are presented and variance reduction problems are considered.
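For context, the sketch below shows the standard discretized Monte Carlo estimator for a continuous arithmetic-average Asian call under Black-Scholes, i.e. the kind of estimator whose discretization bias the exact-simulation approach above removes. It is not the exact-simulation algorithm itself, and all parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative Black-Scholes parameters (not taken from the paper)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths, n_steps = 100_000, 50  # the discretization bias shrinks as n_steps grows

rng = np.random.default_rng(1)
dt = T / n_steps

# Simulate GBM paths on a time grid and average them over the grid points
z = rng.standard_normal((n_paths, n_steps))
log_increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_increments, axis=1))
avg = S.mean(axis=1)  # discrete approximation of the continuous average

payoff = np.exp(-r * T) * np.maximum(avg - K, 0.0)
price = payoff.mean()
stderr = payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"Discretized Asian call: {price:.4f} +/- {1.96 * stderr:.4f}")
```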
Generative adversarial networks (GANs) have shown promising results when applied to partial differential equations and financial time series generation. We investigate whether GANs can also be used to approximate one-dimensional Ito stochastic differential equations (SDEs). We propose a scheme that approximates the path-wise conditional distribution of SDEs for large time steps. Standard GANs are only able to approximate processes in distribution, yielding a weak approximation to the SDE. A conditional GAN architecture is proposed that enables strong approximation. We inform the discriminator of this GAN with the map between the prior input to the generator and the corresponding output samples, i.e. we introduce a "supervised" GAN. We compare the input-output map obtained with the standard GAN and the supervised GAN and show experimentally that the standard GAN may fail to provide a path-wise approximation. The GAN is trained on a dataset obtained with exact simulation. The architecture was tested on geometric Brownian motion (GBM) and the Cox-Ingersoll-Ross (CIR) process. The supervised GAN outperformed the Euler and Milstein schemes in strong error on a discretisation with large time steps. It also outperformed the standard conditional GAN when approximating the conditional distribution. We also demonstrate how standard GANs may give rise to non-parsimonious input-output maps that are sensitive to perturbations, which motivates the need for constraints and regularisation on GAN generators.
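A minimal sketch of how exact-simulation training data for such a conditional GAN could be assembled for GBM is given below; the GAN architecture itself is omitted, and the array names, parameters, and sampling ranges are assumptions made for illustration, not details from the paper.

```python
import numpy as np

# Exact one-step GBM transition used to build (condition, noise, sample) triples,
# i.e. the kind of dataset a conditional GAN for SDEs could be trained on.
mu, sigma, dt = 0.05, 0.2, 0.5  # large time step, as in the strong-approximation setting
n = 10_000

rng = np.random.default_rng(2)
s_t = rng.uniform(0.5, 1.5, n)   # conditioning state S_t (range is an assumption)
z = rng.standard_normal(n)       # prior noise that would be fed to the generator
s_next = s_t * np.exp((mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z)

# A "supervised" GAN, as described above, sees the (s_t, z) -> s_next map directly,
# so its discriminator can penalize generators whose input-output map differs from
# the exact one, which is what enables path-wise (strong) approximation.
dataset = np.stack([s_t, z, s_next], axis=1)
print(dataset.shape)
```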
Large, open-source consortium datasets have spurred the development of new and increasingly powerful machine learning approaches in brain connectomics. However, one key question remains: are we capturing biologically relevant and generalizable information about the brain, or are we simply overfitting to the data? To answer this, we organized a scientific challenge, the Connectomics in NeuroImaging Transfer Learning Challenge (CNI-TLC), held in conjunction with MICCAI 2019. CNI-TLC included two classification tasks: (1) diagnosis of Attention-Deficit/Hyperactivity Disorder (ADHD) within a pre-adolescent cohort; and (2) transference of the ADHD model to a related cohort of Autism Spectrum Disorder (ASD) patients with an ADHD comorbidity. In total, 240 resting-state fMRI time series averaged according to three standard parcellation atlases, along with clinical diagnosis, were released for training and validation (120 neurotypical controls and 120 ADHD). We also provided demographic information of age, sex, IQ, and handedness. A second set of 100 subjects (50 neurotypical controls, 25 ADHD, and 25 ASD with ADHD comorbidity) was used for testing. Models were submitted in a standardized format as Docker images through ChRIS, an open-source image analysis platform. Utilizing an inclusive approach, we ranked the methods based on 16 different metrics. The final rank was calculated using the rank product for each participant across all measures. Furthermore, we assessed the calibration curves of each method. Five participants submitted their model for evaluation, with one outperforming all other methods in both ADHD and ASD classification. However, further improvements are needed to reach the clinical translation of functional connectomics. We are keeping the CNI-TLC open as a publicly available resource for developing and validating new classification methodologies in the field of connectomics.
Ji Qiang (2020)
Monte Carlo simulations are widely used in many areas, including particle accelerators. In this lecture, after a short introduction and a review of some statistical background, we will discuss methods such as direct inversion, the rejection method, and Markov chain Monte Carlo for sampling a probability distribution function, as well as variance reduction methods for evaluating numerical integrals with Monte Carlo simulation. We will also briefly introduce quasi-Monte Carlo sampling at the end of this lecture.
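As a small illustration of two of the sampling methods listed above, the sketch below draws exponential variates by direct inversion of the CDF and half-normal variates by the classical rejection method with an exponential proposal; it is an illustrative example under assumed distributions, not material from the lecture.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Direct inversion: sample Exp(1) via the inverse CDF, x = -ln(1 - u)
u = rng.random(n)
exp_samples = -np.log1p(-u)

# Rejection method: sample a standard normal restricted to [0, inf)
# using an Exp(1) proposal; the optimal acceptance ratio is exp(-(x - 1)^2 / 2)
proposal = -np.log1p(-rng.random(n))
accept = rng.random(n) <= np.exp(-0.5 * (proposal - 1.0) ** 2)
half_normal = proposal[accept]

print(exp_samples.mean(), half_normal.mean())  # ~1.0 and ~sqrt(2/pi) ~= 0.798
```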