HepForge: A lightweight development environment for HEP software

Posted by Andrew Buckley
Publication date: 2006
Research field:
Language: English
Author: A. Buckley

Setting up the infrastructure to manage a software project can become a task as significant as writing the software itself. A variety of useful open-source tools are available, such as Web-based viewers for version control systems, wikis for collaborative discussions, and bug-tracking systems, but their use in high-energy physics, outside large collaborations, is insubstantial. Understandably, physicists would rather do physics than configure project management tools. We introduce the CEDAR HepForge system, which provides a lightweight development environment for HEP software. Services available as part of HepForge include the above-mentioned tools as well as mailing lists, shell accounts, archiving of releases, and low-maintenance Web space. HepForge also exists to promote best-practice software development methods and to provide a central repository for re-usable HEP software and phenomenology codes.
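
As a hedged illustration of the release-archiving service mentioned above, the Python sketch below fetches and unpacks a release tarball from a HepForge-style download area. The project name "exampletool" and the URL layout are assumptions for illustration, not a documented HepForge interface.

import tarfile
import urllib.request

# Hypothetical project name and download layout; real HepForge projects
# publish their own download locations.
PROJECT = "exampletool"
VERSION = "1.0.0"
URL = f"https://{PROJECT}.hepforge.org/downloads/{PROJECT}-{VERSION}.tar.gz"

def fetch_release(url, dest):
    """Download a release tarball and unpack it into dest."""
    archive, _ = urllib.request.urlretrieve(url)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)

if __name__ == "__main__":
    fetch_release(URL, "./src")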

Read also

Long-term sustainability of the high energy physics (HEP) research software ecosystem is essential for the field. With upgrades and new facilities coming online throughout the 2020s, this will only become more pressing as the decade progresses. Meeting this sustainability challenge requires a workforce with a combination of HEP domain knowledge and advanced software skills. The required software skills fall into three broad groups. The first is fundamental and generic software engineering (e.g., Unix, version control, C++, continuous integration). The second is knowledge of domain-specific HEP packages and practices (e.g., the ROOT data format and analysis framework). The third is more advanced knowledge involving more specialized techniques, including parallel programming, machine learning and data science tools, and techniques to preserve software projects at all scales. This paper discusses the collective software training program in HEP and its activities led by the HEP Software Foundation (HSF) and the Institute for Research and Innovation in Software in HEP (IRIS-HEP). The program equips participants with an array of software skills that serve as ingredients from which solutions to the computing challenges of HEP can be formed. Beyond serving the community by ensuring that members are able to pursue research goals, this program serves individuals by providing intellectual capital and transferable skills that are becoming increasingly important to careers in the realm of software and computing, whether inside or outside HEP.
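
A hedged sketch of the first, generic skills layer: the kind of unit test such training teaches people to run automatically under continuous integration. The function invariant_mass and its test values are hypothetical teaching material, not part of the curriculum described in the paper.

import math
import unittest

def invariant_mass(e1, p1, e2, p2):
    """Invariant mass of two massless particles, given energies and
    3-momenta (px, py, pz) in consistent units."""
    e = e1 + e2
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e * e - (px * px + py * py + pz * pz), 0.0))

class TestInvariantMass(unittest.TestCase):
    def test_back_to_back(self):
        # Two 45 GeV massless particles back to back: m = 90 GeV.
        m = invariant_mass(45.0, (0.0, 0.0, 45.0), 45.0, (0.0, 0.0, -45.0))
        self.assertAlmostEqual(m, 90.0)

if __name__ == "__main__":
    unittest.main()
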
S. Ryzhikov, 2020
Meta-software for data acquisition (DAQ) is a new approach to designing the DAQ systems of experimental setups in high energy physics (HEP) experiments. It abstracts from experiment-specific data processing logic, but reflects it through configuration. It is also intended to replace highly integrated DAQ software with a swarm of single-function components orchestrated by universal meta-software.
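
To make the configuration-driven idea concrete, here is a minimal Python sketch, under the assumption that the single-function components are pluggable pipeline stages selected by name from a config; the component names and config layout are invented for illustration, not the paper's design.

def read_stub(events):
    """Source stub: yield raw 'events' (plain integers here)."""
    yield from events

def filter_even(stream):
    """Single-function stage: keep events passing a trivial selection."""
    return (e for e in stream if e % 2 == 0)

COMPONENTS = {"filter_even": filter_even}  # registry of pluggable stages

def run_pipeline(config, events):
    """Wire the processing chain from configuration, not hard-coded logic."""
    stream = read_stub(events)
    for stage in config["stages"]:        # experiment-specific part lives
        stream = COMPONENTS[stage](stream)  # entirely in the config
    return list(stream)                   # sink: collect surviving events

if __name__ == "__main__":
    print(run_pipeline({"stages": ["filter_even"]}, range(10)))  # [0, 2, 4, 6, 8]
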
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer volumes of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
The Scalable Systems Laboratory (SSL), part of the IRIS-HEP Software Institute, provides Institute participants and HEP software developers generally with a means to transition their R&D from conceptual toys to testbeds to production-scale prototypes. The SSL enables tooling, infrastructure, and services supporting the innovation of novel analysis and data architectures, development of software elements and tool-chains, reproducible functional and scalability testing of service components, and foundational systems R&D for accelerated services developed by the Institute. The SSL is constructed with a core team having expertise in scale testing and deployment of services across a wide range of cyberinfrastructure. The core team embeds and partners with other areas in the Institute, and with LHC and other HEP development and operations teams as appropriate, to define investigations and required service deployment patterns. We describe the approach and experiences with early application deployments, including analysis platforms and intelligent data delivery systems.
Petabytes of data must be processed and stored, requiring millions of CPU-years, in high energy physics (HEP) event simulation. This enormous demand is handled in worldwide distributed computing centers as part of the LHC Computing Grid. These significant resources require high-quality, efficient production and the early detection of potential errors. In this article we present novel monitoring techniques in a Grid environment to collect quality measures during job execution. This allows online assessment of data-quality information to avoid configuration errors or inappropriate settings of simulation parameters, and is therefore able to save time and resources.
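
The sketch below illustrates one plausible form of such online quality assessment, assuming the monitored quantity is a per-event scalar compared against a reference mean; the thresholds and the RunningMonitor class are illustrative, not the paper's implementation.

import random

class RunningMonitor:
    """Track a running mean of a per-event quality measure and flag
    drifts early, before a misconfigured job wastes its allocation."""

    def __init__(self, expected, tolerance, min_events=100):
        self.expected = expected      # nominal mean from a reference run
        self.tolerance = tolerance    # allowed deviation of the mean
        self.min_events = min_events  # avoid alarms on tiny samples
        self.n = 0
        self.total = 0.0

    def update(self, value):
        """Feed one measurement; return False once the mean drifts."""
        self.n += 1
        self.total += value
        if self.n < self.min_events:
            return True
        return abs(self.total / self.n - self.expected) <= self.tolerance

if __name__ == "__main__":
    mon = RunningMonitor(expected=1.0, tolerance=0.1)
    for _ in range(1000):
        if not mon.update(random.gauss(1.0, 0.2)):
            print("quality alarm: stopping job early")
            break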