
The three-point correlation function (3PCF) provides an important view into the clustering of galaxies that is not available to its lower order cousin, the two-point correlation function (2PCF). Higher order statistics, such as the 3PCF, are necessary to probe the non-Gaussian structure and shape information expected in these distributions. We measure the clustering of spectroscopic galaxies in the Main Galaxy Sample of the Sloan Digital Sky Survey (SDSS), focusing on the shape or configuration dependence of the reduced 3PCF in both redshift and projected space. This work constitutes the largest number of galaxies ever used to investigate the reduced 3PCF, using over 220,000 galaxies in three volume-limited samples. We find significant configuration dependence of the reduced 3PCF at 3-27 Mpc/h, in agreement with LCDM predictions and in disagreement with the hierarchical ansatz. Below 6 Mpc/h, the redshift space reduced 3PCF shows a smaller amplitude and weak configuration dependence in comparison with projected measurements, suggesting that redshift distortions, and not galaxy bias, can make the reduced 3PCF appear consistent with the hierarchical ansatz. The reduced 3PCF shows a weaker dependence on luminosity than the 2PCF, with no significant dependence at scales above 9 Mpc/h. On scales less than 9 Mpc/h, the reduced 3PCF appears more affected by galaxy color than luminosity. We demonstrate the extreme sensitivity of the 3PCF to systematic effects such as sky completeness and binning scheme, along with the difficulty of resolving the errors. Some comparable analyses make assumptions that do not consistently account for these effects.
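For reference, the reduced 3PCF discussed in this abstract is conventionally defined by normalizing the connected three-point function ζ by sums of products of the 2PCF ξ over the three triangle sides; this is the standard definition in the clustering literature, not a formula quoted from the paper itself:

```latex
Q(r_{12}, r_{23}, r_{31}) =
  \frac{\zeta(r_{12}, r_{23}, r_{31})}
       {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})}
```

The hierarchical ansatz asserts that Q is a constant, independent of the triangle configuration; the configuration dependence reported above is a measured departure from that assumption.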
Virtual observatories will give astronomers easy access to an unprecedented amount of data. Extracting scientific knowledge from these data will increasingly demand both efficient algorithms and the power of parallel computers. Nearly all efficient analyses of large astronomical datasets use trees as their fundamental data structure. Writing efficient tree-based techniques, a task that is time-consuming even on single-processor computers, is exceedingly cumbersome on massively parallel platforms (MPPs). Most applications that run on MPPs are simulation codes, since the expense of developing them is offset by the fact that they will be used for many years by many researchers. In contrast, data analysis codes change far more rapidly, are often unique to individual researchers, and therefore allow little reuse. Consequently, the economics of the current high-performance computing development paradigm for MPPs does not favor data analysis applications. We have therefore built a library, called Ntropy, that provides a flexible, extensible, and easy-to-use way of developing tree-based data analysis algorithms for both serial and parallel platforms. Our experience has shown that not only does our library save development time, it can also deliver excellent serial performance and parallel scalability. Furthermore, Ntropy makes it easy for an astronomer with little or no parallel programming experience to quickly scale their application to a distributed multiprocessor environment. By minimizing development time for efficient and scalable data analysis, we enable wide-scale knowledge discovery on massive datasets.
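To illustrate why trees are the fundamental data structure for such analyses, here is a minimal serial sketch of kd-tree-based neighbor counting, the kind of primitive that underlies correlation-function estimators. This is an illustrative toy, not the Ntropy API: all class and function names are hypothetical.

```python
import random

class KDNode:
    """A node of a simple 3D kd-tree, cycling the split axis with depth."""
    def __init__(self, points, depth=0):
        axis = depth % 3
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        self.point = points[mid]
        self.axis = axis
        self.left = KDNode(points[:mid], depth + 1) if mid > 0 else None
        self.right = KDNode(points[mid + 1:], depth + 1) if mid + 1 < len(points) else None

def count_within(node, q, r):
    """Count tree points within distance r of query q, pruning any subtree
    whose splitting plane lies farther than r from the query."""
    if node is None:
        return 0
    d2 = sum((a - b) ** 2 for a, b in zip(node.point, q))
    total = 1 if d2 <= r * r else 0
    diff = q[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    total += count_within(near, q, r)
    if abs(diff) <= r:  # the far side can only contribute if the plane is within r
        total += count_within(far, q, r)
    return total

# Build a tree over random points and count neighbors of one query point.
random.seed(42)
pts = [tuple(random.random() for _ in range(3)) for _ in range(500)]
tree = KDNode(pts)
query, radius = (0.5, 0.5, 0.5), 0.2
tree_count = count_within(tree, query, radius)
brute_count = sum(1 for p in pts
                  if sum((a - b) ** 2 for a, b in zip(p, query)) <= radius ** 2)
```

The pruning test is what turns an O(N) brute-force scan per query into a logarithmic-depth traversal on clustered data; a parallel library like Ntropy distributes such traversals across processors so the analyst does not write the decomposition by hand.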
