The interpretation of the emergent collective behaviour of atomic nuclei in terms of deformed intrinsic shapes [1] is at the heart of our understanding of the rich phenomenology of their structure, ranging from nuclear energy to astrophysical applications across a vast spectrum of energy scales. A new window onto the deformation of nuclei has recently been opened with the realization that nuclear collision experiments performed at high-energy colliders, such as the CERN Large Hadron Collider (LHC), enable experimenters to identify the relative orientation of the colliding ions in a way that magnifies the manifestations of their intrinsic deformation [2]. Here we apply this technique to LHC data on collisions of $^{129}$Xe nuclei [3-5] to obtain the first evidence of non-axiality in the ground state of ions collided at high energy. We predict that the low-energy structure of $^{129}$Xe is triaxial (a spheroid with three unequal axes), and show that such deformation can be determined from high-energy data. This result demonstrates the unique capabilities of precision collider machines such as the LHC as new means to perform imaging of the collective structure of atomic nuclei.
We study the diffusion of charm and beauty quarks in the early stage of high-energy nuclear collisions at RHIC and LHC energies, considering the interaction of these heavy quarks with the evolving Glasma by means of the Wong equations. In comparison with previous works, we include the longitudinal expansion and estimate the effect of energy loss due to gluon radiation. We find that heavy quarks diffuse in the strong transverse color fields in the very early stage (0.2-0.3 fm/c), which leads to a suppression at low $p_T$ and an enhancement at intermediate $p_T$. The shape of the nuclear suppression factor obtained within our calculations is in qualitative agreement with the experimental results for the same quantity for $D$-mesons in proton-nucleus collisions. We also compute the nuclear suppression factor in nucleus-nucleus collisions, for both charm and beauty, and find a substantial impact of the evolving Glasma phase, suggesting that the initialization of heavy-quark spectra in the quark-gluon plasma phase should not neglect the early evolution in the strong gluon fields.
For the foreseeable future, the exploration of the high-energy frontier will be the domain of the Large Hadron Collider (LHC). Of particular significance will be its high-luminosity upgrade (HL-LHC), which will operate until the mid-2030s. In this endeavour, an improved understanding of the parton distribution functions (PDFs) of the proton is critical for the full exploitation of the HL-LHC physics potential. The HL-LHC program would be uniquely complemented by the proposed Large Hadron electron Collider (LHeC), a high-energy lepton-proton and lepton-nucleus collider based at CERN. In this work, we build on our recent PDF projections for the HL-LHC to assess the constraining power of the LHeC measurements of inclusive and heavy-quark structure functions. We find that the impact of the LHeC would be significant, reducing PDF uncertainties by up to an order of magnitude in comparison to state-of-the-art global fits. In comparison to the HL-LHC projections, the PDF constraints from the LHeC are in general more significant at small and intermediate values of the momentum fraction $x$. At higher values of $x$, the impact of the LHeC and HL-LHC data is expected to be of comparable size, with the HL-LHC constraints being more competitive in some cases and the LHeC ones in others. Our results illustrate the encouraging complementarity of the HL-LHC and the LHeC in charting the quark and gluon structure of the proton.
Collinear factorized perturbative QCD model predictions are compared for p+Pb collisions at 4.4A TeV to test nuclear shadowing of parton distributions at the Large Hadron Collider (LHC). The nuclear modification factor (NMF), $R_{pPb}(y=0, p_T<20~\mathrm{GeV}/c) = dn_{pPb}/(N_{coll}(b)\,dn_{pp})$, is computed with electron-nucleus (e+A) global fits using different nuclear shadowing distributions and compared to the fixed-$Q^2$ shadowing ansatz used in Monte Carlo Heavy Ion Jet Interaction Generator (HIJING) type models. Due to the rapid DGLAP reduction of shadowing with increasing $Q^2$ in the e+A global fits, our results confirm that no significant initial-state suppression is expected ($R_{pPb}(p_T) = 1 \pm 0.1$) in the $p_T$ range 5 to 20 GeV/$c$. In contrast, the fixed-$Q^2$ shadowing assumed in HIJING-type models predicts a sizable suppression in this $p_T$ range, $R_{pPb}(p_T) = 0.6$-$0.7$ at mid-pseudorapidity, similar to color glass condensate (CGC) model predictions. For central ($N_{coll} = 12$) p+Pb collisions and at forward pseudorapidity ($\eta = 6$), the HIJING-type models predict smaller nuclear modification factors $R_{pPb}(p_T)$ than in minimum-bias events at mid-pseudorapidity ($\eta = 0$). Observation of $R_{pPb}(p_T = 5$-$20~\mathrm{GeV}/c)$ below 0.6 in minimum-bias p+A collisions would pose a serious difficulty for separating initial- from final-state interactions in Pb+Pb collisions at LHC energies.
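The definition of the nuclear modification factor used above can be sketched numerically. This is an illustrative toy, not code from the paper: the yield values are made-up placeholders, and only the ratio structure $R_{pPb} = dn_{pPb}/(N_{coll}\,dn_{pp})$ and the $N_{coll}=12$ centrality quoted in the abstract are taken from the text.

```python
def nuclear_modification_factor(dn_pPb, dn_pp, n_coll):
    """R_pPb = per-event p+Pb yield / (N_coll * per-event p+p yield).

    R_pPb = 1 means p+Pb behaves like an incoherent superposition of
    N_coll independent p+p collisions; R_pPb < 1 signals suppression.
    """
    return dn_pPb / (n_coll * dn_pp)

# Hypothetical per-event yields at some p_T (arbitrary units);
# N_coll = 12 corresponds to the central p+Pb case quoted above.
r = nuclear_modification_factor(dn_pPb=8.4, dn_pp=1.0, n_coll=12)
print(round(r, 2))  # 0.7, in the suppressed range predicted by HIJING-type models
```

A measured $R_{pPb}$ near 1 in the 5-20 GeV/$c$ window would thus favor the DGLAP-evolved shadowing picture, while values near 0.6-0.7 would favor the fixed-$Q^2$ ansatz.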
We investigate new physics scenarios in which systems comprised of a single top quark accompanied by missing transverse energy, dubbed monotops, can be produced at the LHC. Following a simplified-model approach, we describe all possible monotop production modes via an effective theory and estimate the sensitivity of the LHC to the observation of a monotop state, assuming 20 fb$^{-1}$ of collisions at a center-of-mass energy of 8 TeV. Considering both leptonic and hadronic top quark decays, we show that large fractions of the parameter space are reachable and that new physics particles with masses ranging up to 1.5 TeV can leave hints within the 2012 LHC dataset, assuming moderate new physics coupling strengths.
The Large Hadron Collider (LHC), the particle accelerator operating at CERN, is probably the most complex and ambitious scientific project ever accomplished by humanity. The sheer size of the enterprise, in terms of financial and human resources, naturally raises the question of whether society should support such costly basic-research programs. I address this question here by first reviewing the process that led to the emergence of Big Science and the role of large projects in the development of science and technology. I then compare the methodologies of Small and Big Science, emphasizing their mutual linkage. Finally, after examining the cost of Big Science projects, I highlight several general aspects of their beneficial implications for society.