
Azimuthal jet flavor tomography with CUJET2.0 of nuclear collisions at RHIC and LHC

Submitted by Jiechen Xu
Publication date: 2014
Language: English





A perturbative QCD based jet tomographic Monte Carlo model, CUJET2.0, is presented to predict jet quenching observables in relativistic heavy ion collisions at RHIC/BNL and LHC/CERN energies. This model generalizes the DGLV theory of flavor dependent radiative energy loss by including multi-scale running strong coupling effects. It generalizes CUJET1.0 by computing jet path integrations through more realistic 2+1D transverse and longitudinally expanding viscous hydrodynamical fields constrained by fits to low $p_T$ flow data. The CUJET2.0 output depends on three control parameters, $(\alpha_{max}, f_E, f_M)$, corresponding to an assumed upper bound on the vacuum running coupling in the infrared and two chromo-electric and chromo-magnetic QGP screening mass scales $(f_E \mu(T), f_M \mu(T))$, where $\mu(T)$ is the 1-loop Debye mass. We compare numerical results as a function of $\alpha_{max}$ for pure and deformed HTL dynamically enhanced scattering cases, corresponding to $(f_E=1,2,\ f_M=0)$, to data on the nuclear modification factor $R^f_{AA}(p_T,\phi;\sqrt{s},b)$ for jet fragment flavors $f=\pi, D, B, e$ at $\sqrt{s}=0.2$-$2.76$ ATeV c.m. energies per nucleon pair and with impact parameters $b=2.4, 7.5$ fm. A $\chi^2$ analysis is presented and shows that $R^\pi_{AA}$ data from RHIC and LHC are consistent with CUJET2.0 at the $\chi^2/\mathrm{d.o.f.} < 2$ level for $\alpha_{max}=0.23$-$0.30$. The corresponding effective jet transport coefficient field $\hat{q}(E_{jet}, T)/T^3$ of this model is computed to facilitate comparison to other jet tomographic models in the literature. The predicted elliptic asymmetry, $v_2(p_T;\sqrt{s},b)$, is, however, found to significantly underestimate RHIC and LHC data. The $\chi^2_{v_2}$ analysis shows that $v_2$ is very sensitive to allowing even as little as 10% variations of the path-averaged $\alpha_{max}$ along in-plane and out-of-plane paths.
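For reference, the two observables compared above have the following conventional definitions (standard formulas given here for context, not quoted from the paper itself), where $\langle N_{\mathrm{coll}}(b)\rangle$ is the mean number of binary nucleon-nucleon collisions at impact parameter $b$:

$$R^{f}_{AA}(p_T,\phi;\sqrt{s},b) = \frac{dN^{f}_{AA}/dp_T\,d\phi}{\langle N_{\mathrm{coll}}(b)\rangle\; dN^{f}_{pp}/dp_T\,d\phi}, \qquad v_2(p_T;\sqrt{s},b) \approx \frac{\int_0^{2\pi} d\phi\,\cos(2\phi)\,R_{AA}(p_T,\phi)}{\int_0^{2\pi} d\phi\,R_{AA}(p_T,\phi)}.$$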




Read also

R. Vogt (2018)
Background: It has been proposed that the azimuthal distributions of heavy flavor quark-antiquark pairs may be modified in the medium of a heavy-ion collision. Purpose: This work tests this proposition through next-to-leading order (NLO) calculations of the azimuthal distribution, $d\sigma/d\phi$, including transverse momentum broadening, employing $\langle k_T^2\rangle$ and fragmentation in exclusive $Q\bar{Q}$ pair production. While these studies were done for $p+p$, $p+\bar{p}$ and $p+$Pb collisions, understanding azimuthal angle correlations between heavy quarks in these smaller, colder systems is important for their interpretation in heavy-ion collisions. Methods: First, single inclusive $p_T$ distributions calculated with the exclusive HVQMNR code are compared to those calculated in the fixed-order next-to-leading logarithm approach. Next, the azimuthal distributions are calculated and sensitivities to $\langle k_T^2\rangle$, the $p_T$ cut, and rapidity are studied at $\sqrt{s} = 7$ TeV. Finally, calculations are compared to $Q\bar{Q}$ data in elementary $p+p$ and $p+\bar{p}$ collisions at $\sqrt{s} = 7$ TeV and 1.96 TeV as well as to the nuclear modification factor $R_{p\mathrm{Pb}}(p_T)$ in $p+$Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV measured by ALICE. Results: The low $p_T$ ($p_T < 10$ GeV) azimuthal distributions are very sensitive to the $k_T$ broadening and rather insensitive to the fragmentation function. The NLO contributions can result in an enhancement at $\phi \sim 0$ absent any other effects. Agreement with the data was found to be good. Conclusions: The NLO calculations, assuming collinear factorization and introducing $k_T$ broadening, result in significant modifications of the azimuthal distribution at low $p_T$ which must be taken into account in calculations of these distributions in heavy-ion collisions.
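The $k_T$ broadening referred to above is commonly implemented as a Gaussian smearing of the intrinsic transverse momentum of the initial partons; a typical normalized form (a standard parameterization, not quoted from the paper) is

$$g_p(k_T) = \frac{1}{\pi\langle k_T^2\rangle}\exp\!\left(-\frac{k_T^2}{\langle k_T^2\rangle}\right), \qquad \int d^2k_T\, g_p(k_T) = 1,$$

so that a larger $\langle k_T^2\rangle$ decorrelates the pair and fills in the azimuthal distribution away from the back-to-back peak at $\phi=\pi$.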
M. Monteno (2011)
The stochastic dynamics of c and b quarks in the fireball created in nucleus-nucleus collisions at RHIC and LHC is studied employing a relativistic Langevin equation, based on a picture of multiple uncorrelated random collisions with the medium. Heavy-quark transport coefficients are evaluated within a pQCD approach, with a proper HTL resummation of medium effects for soft scatterings. The Langevin equation is embedded in a multi-step setup developed to study heavy-flavor observables in pp and AA collisions, starting from an NLO pQCD calculation of initial heavy-quark yields, complemented in the nuclear case by shadowing corrections, $k_T$-broadening and nuclear geometry effects. Then, only for AA collisions, the Langevin equation is solved numerically in a background medium described by relativistic hydrodynamics. Finally, the propagated heavy quarks are made to hadronize and decay into electrons. Results for the nuclear modification factor $R_{AA}$ of heavy-flavor hadrons and electrons from their semi-leptonic decays are provided, both for RHIC and LHC beam energies.
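As an illustration of the Langevin setup described above, the following is a minimal sketch of an Euler-Maruyama update step for a heavy-quark momentum in a static thermal medium. The drag and momentum-diffusion coefficients (`eta_D`, `kappa`) and all numerical values are placeholders chosen only for illustration; they are not the pQCD+HTL transport coefficients of the paper, and no hydrodynamic background or hadronization step is included.

```python
import numpy as np

# Schematic relativistic Langevin update of the type
#   dp_i = -eta_D(p) p_i dt + xi_i dt,   with <xi_i xi_j> dt = kappa delta_ij,
# i.e. each time step adds a drag kick plus Gaussian noise of variance kappa*dt.

def eta_D(p_abs, temperature):
    """Placeholder drag coefficient in 1/fm (momentum dependence omitted)."""
    return 0.2 * temperature / 0.3  # illustrative value only

def kappa(p_abs, temperature):
    """Placeholder momentum-diffusion coefficient in GeV^2/fm (illustrative)."""
    return 0.05 * (temperature / 0.3) ** 3

def langevin_step(p, temperature, dt, rng):
    """Advance a heavy-quark momentum vector p (GeV) by one step dt (fm/c)."""
    p_abs = np.linalg.norm(p)
    drag = -eta_D(p_abs, temperature) * p * dt
    noise = rng.normal(0.0, np.sqrt(kappa(p_abs, temperature) * dt), size=3)
    return p + drag + noise

rng = np.random.default_rng(0)
p = np.array([5.0, 0.0, 0.0])   # initial charm momentum, GeV
for _ in range(100):            # 100 steps of 0.02 fm/c in a static T = 0.3 GeV medium
    p = langevin_step(p, temperature=0.3, dt=0.02, rng=rng)
print(p, np.linalg.norm(p))
```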
Global perturbative QCD analyses, based on large data sets from electron-proton and hadron collider experiments, provide tight constraints on the parton distribution functions (PDFs) in the proton. The extension of these analyses to nuclear parton distributions (nPDFs) has attracted much interest in recent years. nPDFs are needed as benchmarks for the characterization of hot QCD matter in nucleus-nucleus collisions, and attract further interest since they may show novel signatures of non-linear density-dependent QCD evolution. However, it is not known from first principles whether the factorization of long-range phenomena into process-independent parton distributions, which underlies global PDF extractions for the proton, extends to nuclear effects. As a consequence, assessing the reliability of nPDFs for benchmark calculations goes beyond testing the numerical accuracy of their extraction and requires phenomenological tests of the factorization assumption. Here we argue that a proton-nucleus collision program at the LHC would provide a set of measurements allowing for unprecedented tests of the factorization assumption underlying global nPDF fits.
Collisions between prolate uranium nuclei are used to study how particle production and azimuthal anisotropies depend on initial geometry in heavy-ion collisions. We report the two- and four-particle cumulants, $v_2\{2\}$ and $v_2\{4\}$, for charged hadrons from U+U collisions at $\sqrt{s_{NN}}$ = 193 GeV and Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV. Nearly fully overlapping collisions are selected based on the amount of energy deposited by spectators in the STAR Zero Degree Calorimeters (ZDCs). Within this sample, the observed dependence of $v_2\{2\}$ on multiplicity demonstrates that ZDC information combined with multiplicity can preferentially select different overlap configurations in U+U collisions. An initial-state model with gluon saturation describes the slope of $v_2\{2\}$ as a function of multiplicity in central collisions better than one based on Glauber with a two-component multiplicity model.
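The two- and four-particle cumulant observables quoted above follow the standard definitions (given here for context, not from the paper), in terms of the azimuthal angles $\varphi_i$ of charged hadrons:

$$c_2\{2\} = \big\langle\big\langle e^{i2(\varphi_1-\varphi_2)}\big\rangle\big\rangle, \qquad v_2\{2\} = \sqrt{c_2\{2\}},$$
$$c_2\{4\} = \big\langle\big\langle e^{i2(\varphi_1+\varphi_2-\varphi_3-\varphi_4)}\big\rangle\big\rangle - 2\,c_2\{2\}^2, \qquad v_2\{4\} = \big(-c_2\{4\}\big)^{1/4},$$

where the double angular brackets denote an average over particle multiplets within an event and then over events.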
Charmonium production at heavy-ion colliders is considered within the comovers interaction model. The formalism is extended by including possible secondary J/$\psi$ production through recombination, and an estimate of recombination effects is made with no free parameters involved. The comovers interaction model also includes a comprehensive treatment of initial-state nuclear effects, which are discussed in the context of such high energies. With these tools, the model properly describes the centrality and the rapidity dependence of experimental data at RHIC energy, $\sqrt{s}$ = 200 GeV, for both Au+Au and Cu+Cu collisions. Predictions for the LHC, $\sqrt{s}$ = 5.5 TeV, are presented and the assumptions and extrapolations involved are discussed.