
PArthENoPE reloaded

Added by: Ofelia Pisanti
Publication date: 2017
Field: Physics
Language: English





We describe the main features of a new and updated version of the program PArthENoPE, which computes the abundances of light elements produced during Big Bang Nucleosynthesis. Like the first release in 2008, the new version, PArthENoPE 2.0, will soon be publicly available and distributed from the code site, http://parthenope.na.infn.it. Apart from minor changes, which are also detailed, the main improvements are as follows. The powerful, but not freely accessible, NAG routines have been replaced by the ODEPACK libraries, with no significant loss in precision. Moreover, we have developed a Graphical User Interface (GUI) which allows user-friendly operation of the code and a simpler setup of runs over grids of input parameters. Finally, we report the results of PArthENoPE 2.0 for a minimal BBN scenario with free radiation energy density.
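Since the abstract's headline change is swapping the commercial NAG solvers for ODEPACK, a minimal sketch of the kind of stiff integration involved can be given with SciPy's LSODA wrapper (LSODA belongs to the ODEPACK family). The toy two-species network below is an illustration only, not the PArthENoPE reaction network.

```python
# Minimal sketch, assuming a toy two-species rate network; this is
# NOT the PArthENoPE network, only an illustration of the stiff ODE
# integration that an ODEPACK solver such as LSODA performs.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Fast forward rate vs. slow backward rate: the disparity in
    # time scales is what makes the system stiff.
    k_fwd, k_bwd = 1.0e4, 1.0
    y1, y2 = y
    return [-k_fwd * y1 + k_bwd * y2,
             k_fwd * y1 - k_bwd * y2]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0],
                method="LSODA",   # SciPy's wrapper around ODEPACK
                rtol=1e-8, atol=1e-12)
print("asymptotic abundances:", sol.y[:, -1])
```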



Related research

This paper presents the main features of a new and updated version of the program PArthENoPE, which the community has been using for many years to compute the abundances of light elements produced during Big Bang Nucleosynthesis. This is the third release of the PArthENoPE code, after the 2008 and 2018 ones, and will be distributed from the code's website, http://parthenope.na.infn.it. Apart from minor changes, the main improvements in this new version include a revised implementation of the nuclear rates for the most important deuterium-destruction reactions, H2(p,gamma)He3, H2(d,n)He3 and H2(d,p)H3, and a redesigned GUI, which extends the functionality of the previous one. The new GUI, in particular, supersedes the previous tools for running over grids of parameters, with better management of parallel runs, and offers a brand-new set of functions for plotting the results.
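The grid-running facility that the new GUI supersedes can be pictured with a short driver of the following kind; `run_parthenope` is a hypothetical wrapper around the Fortran executable, not the code's actual interface.

```python
# Hedged sketch of a parameter-grid driver with parallel runs;
# run_parthenope is a hypothetical wrapper, not PArthENoPE's real API.
import itertools
from multiprocessing import Pool

def run_parthenope(params):
    eta10, dnnu = params
    # Placeholder: write an input card, invoke the executable,
    # parse the output abundances (all omitted here).
    return {"eta10": eta10, "dnnu": dnnu, "abundances": None}

# Illustrative grid over the baryon density and extra radiation.
grid = list(itertools.product((5.8, 6.1, 6.4), (-0.5, 0.0, 0.5)))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(run_parthenope, grid)
```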
We describe a program for computing the abundances of light elements produced during Big Bang Nucleosynthesis, publicly available at http://parthenope.na.infn.it/. Starting from nuclear statistical equilibrium conditions, the program solves a set of coupled ordinary differential equations, follows the departure from chemical equilibrium of the nuclear species, and determines their asymptotic abundances as functions of several input cosmological parameters, such as the baryon density, the effective number of neutrinos, the value of the cosmological constant, and the neutrino chemical potential. The program requires the commercial NAG library routines.
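The input parameters listed in this abstract can be grouped as in the sketch below; the field names and default values are illustrative assumptions, not the code's actual input card.

```python
# Illustrative grouping of the inputs named above; names and defaults
# are assumptions for this sketch, not PArthENoPE's real interface.
from dataclasses import dataclass

@dataclass
class BBNInputs:
    eta10: float = 6.1        # baryon-to-photon ratio, in units of 1e-10
    n_eff: float = 3.045      # effective number of neutrinos
    lambda_term: float = 0.0  # cosmological constant contribution
    xi_nu: float = 0.0        # neutrino chemical potential over temperature
```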
We revisit the excursion set approach to calculating void abundances in chameleon-type modified gravity theories, previously studied by Clampitt, Cai and Li (2013). We focus on properly accounting for the void-in-cloud effect, i.e., that the growth of voids sitting in over-dense regions may be restricted by the evolution of their surroundings. This effect may change the distribution function of voids and hence affect predictions of the differences between modified gravity and GR. We show that the thin-shell approximation usually used to calculate the fifth force is qualitatively good but quantitatively inaccurate; it is therefore necessary to solve for the fifth force numerically in both over-dense and under-dense regions. We then generalise the Eulerian void assignment method of Paranjape, Lam and Sheth (2012) to our modified gravity model, implement it in our Monte Carlo simulations, and compare its results with those of the original Lagrangian method. We find that the abundances of small voids are significantly reduced in both modified gravity and GR due to the restriction of environments. However, the change in void abundances over the range of void radii of interest is similar for both models, so the difference between the models remains close to the results from the Lagrangian method, especially if correlated steps of the random walks are used. Like Clampitt, Cai and Li (2013), we find that the void abundance is much more sensitive to modified gravity than halo abundances. Our method can therefore be a faster alternative to N-body simulations for studying the qualitative behaviour of a broad class of theories. We also discuss the limitations and other practical issues associated with its application.
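The void-in-cloud counting can be made concrete with a small Monte Carlo over uncorrelated (sharp-k) random walks: a walk contributes to the void abundance only if it crosses the void barrier before ever crossing the collapse barrier. The barrier values below are standard GR numbers used purely for illustration; the chameleon case replaces them with environment-dependent barriers obtained by solving for the fifth force.

```python
# Hedged Monte Carlo sketch of two-barrier (void-in-cloud) first
# crossings with uncorrelated steps; barriers are illustrative GR
# values, not the chameleon barriers computed in the paper.
import numpy as np

rng = np.random.default_rng(0)
delta_v, delta_c = -2.7, 1.686          # void and collapse barriers
n_walks, n_steps, dS = 20_000, 400, 0.05

first_crossings = []
for _ in range(n_walks):
    delta = 0.0
    for step in range(1, n_steps + 1):
        delta += rng.normal(0.0, np.sqrt(dS))
        if delta >= delta_c:            # walk sits inside a collapsing cloud:
            break                       # any void on smaller scales is crushed
        if delta <= delta_v:            # first crossing of the void barrier
            first_crossings.append(step * dS)
            break

# The histogram of first-crossing variances S approximates the
# multiplicity function from which the void abundance is built.
f_S, edges = np.histogram(first_crossings, bins=40, density=True)
```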
We present a high-performance solution to the Wiener filtering problem via a formulation that is dual to the recently developed messenger technique. This new dual messenger algorithm, like its predecessor, efficiently calculates the Wiener filter solution of large and complex data sets without preconditioning and can account for inhomogeneous noise distributions and arbitrary mask geometries. We demonstrate the capabilities of this scheme in signal reconstruction by applying it to a simulated cosmic microwave background (CMB) temperature data set. The performance of the new method is compared to that of the standard messenger algorithm and the preconditioned conjugate gradient (PCG) approach, using a series of well-known convergence diagnostics and their processing times for the particular problem under consideration. This variant of the messenger algorithm matches the performance of the PCG method in terms of the effectiveness of reconstruction of the input angular power spectrum and converges smoothly to the final solution. The dual messenger algorithm outperforms the standard messenger and PCG methods in execution time, running to completion around 2 and 3-4 times faster than the respective methods for the specific problem considered.
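For orientation, the sketch below implements the standard messenger-field iteration (the scheme to which this paper's algorithm is dual) on a 1D toy problem with inhomogeneous noise; the spectrum and Fourier conventions are illustrative assumptions, and the dual variant itself is not reproduced.

```python
# Hedged 1D sketch of the *standard* messenger-field Wiener filter;
# the paper's dual messenger variant reorganises this iteration and
# is not reproduced here. Conventions are kept deliberately simple.
import numpy as np

rng = np.random.default_rng(1)
n = 1024

# Signal prior: a red power spectrum, diagonal in the rfft basis.
k = np.fft.rfftfreq(n)
S_k = np.zeros(k.size)
S_k[1:] = 1e2 * k[1:] ** -2.0

# Draw a signal realisation and add inhomogeneous white noise.
coeff = np.sqrt(S_k / 2) * (rng.standard_normal(k.size)
                            + 1j * rng.standard_normal(k.size))
s_true = np.fft.irfft(coeff, n)
N = rng.uniform(0.5, 2.0, size=n)             # pixel noise variances
d = s_true + rng.standard_normal(n) * np.sqrt(N)

tau = N.min()                                 # messenger covariance T = tau * I
N_bar = N - tau                               # remaining noise, pixel space
safe = np.where(N_bar > 0, N_bar, 1.0)        # avoid division by zero below

s = np.zeros(n)
for _ in range(200):
    # Pixel-space step: the messenger field mixes data and current signal.
    t = np.where(N_bar > 0,
                 (d / safe + s / tau) / (1.0 / safe + 1.0 / tau),
                 d)
    # Fourier-space step: apply the signal prior; T = tau * I maps to
    # tau * n per mode under numpy's unnormalised FFT convention.
    t_k = np.fft.rfft(t)
    s_k = t_k * S_k / (S_k + tau * n)
    s = np.fft.irfft(s_k, n)
# s now approximates the Wiener filter solution S (S + N)^(-1) d.
```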
Direct photon production in hadronic collisions provides a handle on the gluon PDF by means of the QCD Compton scattering process. In this work we revisit the impact of direct photon production on a global PDF analysis, motivated by the recent availability of the next-to-next-to-leading order (NNLO) calculation for this process. We demonstrate that the inclusion of NNLO QCD and leading-logarithmic electroweak corrections leads to good quantitative agreement with the ATLAS measurements at 8 TeV and 13 TeV, except for the most forward rapidity region in the former case. By including the ATLAS 8 TeV direct photon production data in the NNPDF3.1 NNLO global analysis, we assess its impact on the medium-x gluon. We also study the constraining power of the direct photon production measurements on PDF fits based on different datasets, in particular the NNPDF3.1 no-LHC and collider-only fits. Finally, we present updated NNLO theoretical predictions for direct photon production at 13 TeV that include the constraints from the 8 TeV measurements.
