
HEP Applications Evaluation of the EDG Testbed and Middleware

Added by Stephen Burke
Publication date: 2003
Language: English





Workpackage 8 of the European Datagrid project was formed in January 2001 with representatives from the four LHC experiments, and with experiment-independent people from five of the six main EDG partners. In September 2002 WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of Atlas and CMS in integrating the use of the EDG Testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid-2003.
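To make the job-submission workflow concrete, here is a minimal sketch, assuming the EDG User Interface command-line tools (edg-job-submit, edg-job-status) and a valid Grid proxy; the JDL attributes and the job-ID parsing are illustrative assumptions, not details taken from the paper.

```python
import subprocess
import tempfile

# A small, typical JDL description (illustrative values, not from the paper).
JDL = """
Executable    = "/bin/echo";
Arguments     = "hello from the EDG testbed";
StdOutput     = "stdout.log";
StdError      = "stderr.log";
OutputSandbox = {"stdout.log", "stderr.log"};
"""

def submit_job(jdl_text: str) -> str:
    """Write the JDL to a temporary file and hand it to edg-job-submit.

    Assumes the EDG User Interface tools are installed and a valid
    Grid proxy exists.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
        f.write(jdl_text)
        path = f.name
    out = subprocess.run(["edg-job-submit", path],
                         capture_output=True, text=True, check=True)
    # Assumption: the job ID is printed as an https:// URL in the output.
    for line in out.stdout.splitlines():
        if line.strip().startswith("https://"):
            return line.strip()
    raise RuntimeError("no job ID found in edg-job-submit output")

if __name__ == "__main__":
    job_id = submit_job(JDL)
    print("submitted:", job_id)
    subprocess.run(["edg-job-status", job_id])
```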



Related research

R. Sobie (2003)
A Grid testbed has been established using resources at 12 sites across Canada involving researchers from particle physics as well as other fields of science. We describe our use of the testbed with the BaBar Monte Carlo production and the ATLAS data challenge software. In each case the remote sites have no application-specific software stored locally and instead access the software and data via AFS and/or GridFTP from servers located in Victoria. In the case of BaBar, an Objectivity database server was used for data storage. We present the results of a series of initial tests of the Grid testbed using both BaBar and ATLAS applications. The initial results demonstrate the feasibility of using generic Grid resources for HEP applications.
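The access pattern described above (no application-specific software at remote sites, data staged from central servers) can be illustrated with a small sketch, assuming the Globus Toolkit's globus-url-copy client; the host name and paths are hypothetical placeholders, not the actual Victoria servers.

```python
import subprocess

def stage_in(src_url: str, dest_path: str) -> None:
    """Fetch a remote file over GridFTP using globus-url-copy.

    Assumes the Globus Toolkit client tools are installed and a valid
    grid proxy has been created (e.g. with grid-proxy-init).
    """
    subprocess.run(
        ["globus-url-copy", src_url, f"file://{dest_path}"],
        check=True,
    )

# Hypothetical example: pull an input data set from a central server
# before running the experiment application on a remote worker node.
stage_in("gsiftp://dataserver.example.org/data/run042.root",
         "/tmp/run042.root")
```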
WorldGrid is an intercontinental testbed spanning Europe and the US, integrating architecturally different Grid implementations based on the Globus toolkit. It was developed in the context of the DataTAG and iVDGL projects, and successfully demonstrated during the WorldGrid demos at IST2002 (Copenhagen) and SC2002 (Baltimore). Two HEP experiments, ATLAS and CMS, successfully exploited the WorldGrid testbed to execute jobs simulating the response of their detectors to physics events produced by the real collisions expected at the LHC accelerator starting in 2007. This data-intensive activity had been carried out for many years on dedicated local computing farms consisting of hundreds of nodes and Terabytes of disk and tape storage. Within the WorldGrid testbed, for the first time, HEP simulation jobs were submitted and run interchangeably on US and European resources, despite their different underlying Grid implementations. The data produced could be retrieved and further analysed on the submitting machine, or simply stored on the remote resources and registered in a Replica Catalogue, which made them available to the Grid for further processing. In this contribution we describe the job submission from Europe for both ATLAS and CMS applications, performed through the GENIUS portal operating on top of an EDG User Interface submitting to an EDG Resource Broker. We point out the interoperability solutions that made US and European resources equivalent from the applications' point of view, the data management in the WorldGrid environment, and the CMS-specific production tools that were interfaced to the GENIUS portal.
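The Replica Catalogue pattern mentioned above can be illustrated with a toy model: a logical file name maps to one or more physical replicas, so any Grid site can locate data registered by a job. The real EDG/WorldGrid catalogue was a distributed Grid service, not an in-memory dictionary, and all names below are invented for the example.

```python
from collections import defaultdict

class ReplicaCatalogue:
    """Toy model: logical file name (LFN) -> list of physical replicas (PFNs)."""

    def __init__(self) -> None:
        self._replicas: dict[str, list[str]] = defaultdict(list)

    def register(self, lfn: str, pfn: str) -> None:
        """Record that the logical file `lfn` has a physical copy at `pfn`."""
        self._replicas[lfn].append(pfn)

    def lookup(self, lfn: str) -> list[str]:
        """Return all known physical replicas of a logical file."""
        return list(self._replicas[lfn])

rc = ReplicaCatalogue()
# A simulation job at a US site registers its output under a logical name...
rc.register("lfn:cms/sim/run001.root",
            "gsiftp://se.ufl.example.edu/store/run001.root")
# ...and a European site can later add a second replica of the same file.
rc.register("lfn:cms/sim/run001.root",
            "gsiftp://se.cern.example.ch/store/run001.root")
print(rc.lookup("lfn:cms/sim/run001.root"))
```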
In recent years, telecom and computer networks have seen the emergence of new concepts and technologies through Network Function Virtualization (NFV) and Software-Defined Networking (SDN). SDN, which allows applications to control the network, and NFV, which allows network functions to be deployed in virtualized environments, are two paradigms increasingly used for the Internet of Things (IoT). The IoT, which promises to interconnect billions of devices in the next few years, raises several scientific challenges, in particular satisfying the quality of service (QoS) required by IoT applications. To address this problem, we have identified two bottlenecks with respect to QoS: the traversed networks and the intermediate entities that allow applications to interact with IoT devices. In this paper, we first present an innovative vision of network functions with respect to their deployment and runtime environment. Then, we describe our general approach, which consists in the dynamic, autonomous, and seamless deployment of QoS management mechanisms, and the requirements for implementing it. Finally, we present a redirection mechanism, implemented as a network function, allowing seamless control of the data path of a given middleware traffic flow. This mechanism is assessed through a use case related to vehicular transportation.
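To make the redirection idea concrete, here is a minimal sketch of a relay that transparently steers middleware traffic to an alternative endpoint (for instance one offering better QoS), assuming plain TCP traffic. The paper's mechanism is deployed as a network function in an NFV environment; the addresses and ports below are hypothetical.

```python
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 9000)      # where clients think the broker is
REDIRECT_ADDR = ("10.0.0.42", 1883)  # the endpoint we actually steer to

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the connection closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def serve() -> None:
    with socket.create_server(LISTEN_ADDR) as server:
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection(REDIRECT_ADDR)
            # Relay both directions concurrently so the redirection is
            # invisible to the client.
            threading.Thread(target=pipe, args=(client, upstream),
                             daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client),
                             daemon=True).start()

if __name__ == "__main__":
    serve()
```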
The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Grid-wide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent-based MonaLisa. Domain-specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two-month span in the fall of 2002, over 1 million official CMS GEANT-based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.
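The Condor-G front end mentioned above can be illustrated with a hedged sketch that writes a submit description in the old "globus" universe style and hands it to condor_submit; the gatekeeper host, executable, and file names are placeholders, not the actual IGT configuration.

```python
import subprocess

# Illustrative Condor-G submit description of that era (assumption:
# the "globus" universe with a globusscheduler gatekeeper string).
SUBMIT = """\
universe        = globus
globusscheduler = gatekeeper.tier2.example.edu/jobmanager-condor
executable      = run_cmsim.sh
transfer_executable = true
output          = mc_$(Cluster).out
error           = mc_$(Cluster).err
log             = mc.log
queue
"""

with open("mc_job.sub", "w") as f:
    f.write(SUBMIT)

# Requires a Condor-G installation and a valid grid proxy.
subprocess.run(["condor_submit", "mc_job.sub"], check=True)
```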
Contemporary high-performance service-oriented applications demand performance-efficient run-time monitoring. In this paper, we analyze a hierarchical publish-subscribe architecture for monitoring service-oriented applications. The analyzed architecture is based on a tree topology and a publish-subscribe communication model for aggregating distributed monitoring data. To satisfy the interoperability and platform independence of service orientation, monitoring reports are represented as XML documents. Since XML formatting introduces significant processing and network load, we analyze the performance of the monitoring architecture with respect to the number of monitored nodes, the load of the system machines, and the overall latency of the monitoring system.
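The tree-based aggregation of XML reports can be illustrated with a toy sketch: each node merges the reports published by its children into one document for its own parent. The element and attribute names below are invented for the example, not taken from the paper.

```python
import xml.etree.ElementTree as ET

def make_report(node_id: str, load: float) -> ET.Element:
    """A leaf node's monitoring report."""
    report = ET.Element("report", source=node_id)
    ET.SubElement(report, "metric", name="load").text = str(load)
    return report

def aggregate(node_id: str, child_reports: list[ET.Element]) -> ET.Element:
    """An inner node wraps its children's reports into one document."""
    merged = ET.Element("report", source=node_id)
    for child in child_reports:
        merged.append(child)
    return merged

leaves = [make_report(f"worker-{i}", load=0.1 * i) for i in range(3)]
root = aggregate("aggregator-0", leaves)
# The XML serialization is what travels up the tree, and is the main
# source of the processing/network overhead the paper measures.
print(ET.tostring(root, encoding="unicode"))
```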
