
Methods for a blind analysis of isobar data collected by the STAR collaboration

 Added by James Drachenberg
 Publication date 2019
Language: English





In 2018, the STAR collaboration collected data from $^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr collisions at $\sqrt{s_{\rm NN}}=200$ GeV to search for the presence of the chiral magnetic effect in collisions of nuclei. The isobar collision species were alternated frequently between $^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr. In order to conduct blind analyses of studies related to the chiral magnetic effect in these isobar data, STAR developed a three-step blind analysis procedure. Analysts are initially provided a reference sample of data, composed of a mix of events from the two species in an order that respects time-dependent changes in run conditions. After tuning analysis codes and performing time-dependent quality assurance on the reference sample, analysts are provided a species-blind sample suitable for calculating efficiencies and corrections for individual $\approx 30$-minute data-taking runs. For this sample, species-specific information is disguised, but each individual output file contains data from a single isobar species. Only run-by-run corrections, and code alterations consequent to those corrections, are allowed at this stage. Following these modifications, the frozen code is run over the fully unblinded data, completing the blind analysis. As a check of the feasibility of the blind analysis procedure, analysts completed a mock data challenge, analyzing data from Au+Au collisions at $\sqrt{s_{\rm NN}}=27$ GeV, collected in 2018. The Au+Au data were prepared in the same manner intended for the isobar blind data. The details of the blind analysis procedure and results from the mock data challenge are presented.
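The first two blinding steps described above can be sketched in code. The following is a minimal, illustrative sketch only; the record fields, alias scheme, and function names are assumptions for illustration and do not reflect STAR's actual analysis software.

```python
def make_reference_sample(ru_events, zr_events):
    """Step 1: build the reference sample by mixing events from both
    species while preserving time order, then stripping the species
    label so analysts cannot tell which events came from which system."""
    mixed = sorted(ru_events + zr_events, key=lambda e: e["time"])
    return [{k: v for k, v in e.items() if k != "species"} for e in mixed]

def make_species_blind_sample(runs):
    """Step 2: build the species-blind sample, where each output file
    (here, each run) contains a single species, but the species is
    disguised by an alias. In practice the alias would be secret;
    a fixed mapping is used here only to keep the sketch deterministic."""
    alias = {"Ru": "A", "Zr": "B"}
    blinded = {}
    for run_id, events in runs.items():
        species = events[0]["species"]
        blinded[(run_id, alias[species])] = [
            {k: v for k, v in e.items() if k != "species"} for e in events
        ]
    return blinded
```

Only run-by-run corrections would be tuned against the output of step 2; the frozen code is then applied to the unblinded data in step 3.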




The chiral magnetic effect (CME) is predicted to occur as a consequence of a local violation of $\mathcal{P}$ and $\mathcal{CP}$ symmetries of the strong interaction amidst a strong electromagnetic field generated in relativistic heavy-ion collisions. Experimental manifestation of the CME involves a separation of positively and negatively charged hadrons along the direction of the magnetic field. Previous measurements of the CME-sensitive charge-separation observables remain inconclusive because of large background contributions. In order to better control the influence of signal and backgrounds, the STAR Collaboration performed a blind analysis of a large data sample of approximately 3.8 billion isobar collisions of $^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr at $\sqrt{s_{\rm NN}}=200$ GeV. Prior to the blind analysis, the CME signatures are predefined as a significant excess of the CME-sensitive observables in Ru+Ru collisions over those in Zr+Zr collisions, owing to a larger magnetic field in the former. A precision down to 0.4% is achieved, as anticipated, in the relative magnitudes of the pertinent observables between the two isobar systems. Observed differences in the multiplicity and flow harmonics at the matching centrality indicate that the magnitude of the CME background is different between the two species. No CME signature that satisfies the predefined criteria has been observed in isobar collisions in this blind analysis.
This release includes data and information necessary to perform independent analyses of the COHERENT result presented in Akimov et al., arXiv:1708.01294 [nucl-ex]. Data is shared in a binned, text-based format, including both signal and background regions, so that counts and associated uncertainties can be quantitatively calculated for the purpose of separate analyses. This document describes the included information and its format, offering some guidance on use of the data. Accompanying code examples show basic interaction with the data using Python.
Release of COHERENT collaboration data from the first detection of coherent elastic neutrino-nucleus scattering (CEvNS) on argon. This release corresponds with the results of Analysis A published in Akimov et al., arXiv:2003.10630 [nucl-ex]. Data is shared in a binned, text-based format representing both signal and backgrounds along with associated uncertainties, such that the included data can be used to perform independent analyses. This document describes the contents of the data release as well as guidance on the use of the data. Included example code in C++ (ROOT) and Python shows one possible use of the included data.
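To illustrate what working with a binned, text-based release of this kind looks like, here is a minimal sketch. The column layout assumed below (bin edges, signal counts, background counts per line) is hypothetical; the actual COHERENT releases define their own formats, documented alongside the data.

```python
import math

def read_binned_release(text):
    """Parse a simple whitespace-separated binned data file with
    columns: bin_low bin_high signal background (assumed layout)."""
    bins = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        lo, hi, sig, bkg = map(float, line.split())
        bins.append({"lo": lo, "hi": hi, "signal": sig, "background": bkg})
    return bins

def total_counts(bins):
    """Sum signal + background over all bins; quote the simple
    Poisson counting uncertainty sqrt(N) on the total."""
    n = sum(b["signal"] + b["background"] for b in bins)
    return n, math.sqrt(n)
```

A real analysis would also propagate the systematic uncertainties included in the release, not just the statistical counting error.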
We introduce a framework for statistical estimation that leverages knowledge of how samples are collected but makes no distributional assumptions on the data values. Specifically, we consider a population of elements $[n]=\{1,\ldots,n\}$ with corresponding data values $x_1,\ldots,x_n$. We observe the values for a sample set $A \subset [n]$ and wish to estimate some statistic of the values for a target set $B \subset [n]$, where $B$ could be the entire set. Crucially, we assume that the sets $A$ and $B$ are drawn according to some known distribution $P$ over pairs of subsets of $[n]$. A given estimation algorithm is evaluated based on its worst-case, expected error, where the expectation is with respect to the distribution $P$ from which the sample $A$ and target sets $B$ are drawn, and the worst-case is with respect to the data values $x_1,\ldots,x_n$. Within this framework, we give an efficient algorithm for estimating the target mean that returns a weighted combination of the sample values--where the weights are functions of the distribution $P$ and the sample and target sets $A$, $B$--and show that the worst-case expected error achieved by this algorithm is at most a multiplicative $\pi/2$ factor worse than the optimal of such algorithms. The algorithm and proof leverage a surprising connection to the Grothendieck problem. This framework, which makes no distributional assumptions on the data values but rather relies on knowledge of the data collection process, is a significant departure from typical estimation and introduces a uniform algorithmic analysis for the many natural settings where membership in a sample may be correlated with data values, such as when sampling probabilities vary as in importance sampling, when individuals are recruited into a sample via a social network as in snowball sampling, or when samples have chronological structure as in selective prediction.
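A weighted combination of sample values is easiest to see in the special case of independent, known inclusion probabilities, where it reduces to the classical Horvitz-Thompson estimator. The sketch below shows only that simplified case; the paper's actual algorithm derives its weights from the full joint distribution $P$ over pairs $(A, B)$, which this example does not attempt.

```python
def ht_mean(values, sampled, p_inclusion):
    """Inverse-probability-weighted estimate of the population mean,
    assuming each element i was included in the sample independently
    with known probability p_inclusion[i]. This is a simplification
    of the framework described above, not the paper's estimator."""
    n = len(values)
    # Each observed value is up-weighted by 1/p_i so the estimator
    # is unbiased over the randomness of the sampling process.
    return sum(values[i] / p_inclusion[i] for i in sampled) / n
```

With all inclusion probabilities equal to 1 and every element sampled, the estimator recovers the ordinary mean; with unequal probabilities, the weights correct for over- or under-represented elements on average.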
The PHENIX collaboration presents a concept for a major upgrade to the PHENIX detector at the Relativistic Heavy Ion Collider (RHIC). This upgrade, referred to as sPHENIX, brings exciting new capability to the RHIC program by opening new and important channels for experimental investigation and utilizing fully the luminosity of the recently upgraded RHIC facility. sPHENIX enables a compelling jet physics program that will address fundamental questions about the nature of the strongly coupled quark-gluon plasma, discovered experimentally at RHIC to be a perfect fluid. The upgrade concept addresses specific questions whose answers are necessary to advance our understanding of the quark-gluon plasma: (1) How do we reconcile the observed strongly coupled quark-gluon plasma with the asymptotically free theory of quarks and gluons? (2) What are the dynamical changes to the quark-gluon plasma in terms of quasiparticles and excitations as a function of temperature? (3) How sharp is the transition of the quark-gluon plasma from the most strongly coupled regime near Tc to a weakly coupled system of partons known to emerge at asymptotically high temperatures? In three Appendices, we detail the additional physics capabilities gained through further upgrades: (A) two midrapidity detector additions, (B) a forward rapidity upgrade, and (C) an evolution to an ePHENIX detector suitable for a future Electron Ion Collider at RHIC.
