
Management of Grid Jobs and Information within SAMGrid

 Added by Igor Terekhov
Publication date: 2003
Language: English





We describe some of the key aspects of the SAMGrid system, used by the D0 and CDF experiments at Fermilab. Building on the sustained success of SAMGrid's data handling component, we have developed new services for job management and information management. Our job management is rooted in Condor-G and uses enhancements that are of general applicability for HEP grids. Our information system is based on a uniform framework for configuration management, built on XML data representation and processing.
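The abstract gives no implementation detail, but the XML-based configuration idea can be illustrated with a short sketch. The Python below parses a site configuration document and resolves per-station settings; all element and attribute names (station, jobmanager, sam-universe) are invented for illustration and are not SAMGrid's actual schema.

```python
# Hypothetical sketch of XML-based configuration management in the spirit
# of the abstract; element/attribute names are illustrative, not SAMGrid's.
import xml.etree.ElementTree as ET

CONFIG_XML = """
<grid-config>
  <station name="d0-fnal">
    <jobmanager type="condor-g" gatekeeper="gate.fnal.gov/jobmanager-condor"/>
    <sam-universe>prd</sam-universe>
  </station>
  <station name="cdf-fnal">
    <jobmanager type="condor-g" gatekeeper="cdfgate.fnal.gov/jobmanager-pbs"/>
    <sam-universe>dev</sam-universe>
  </station>
</grid-config>
"""

def station_settings(xml_text, station_name):
    """Return a flat dict of settings for one execution station."""
    root = ET.fromstring(xml_text)
    for station in root.iter("station"):
        if station.get("name") == station_name:
            jm = station.find("jobmanager")
            return {
                "jobmanager.type": jm.get("type"),
                "jobmanager.gatekeeper": jm.get("gatekeeper"),
                "sam.universe": station.findtext("sam-universe"),
            }
    raise KeyError(f"unknown station: {station_name}")

print(station_settings(CONFIG_XML, "d0-fnal"))
```

The appeal of such a scheme, as the abstract suggests, is uniformity: one data representation and one processing path can serve job submission, monitoring, and information advertising alike.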



Related research

Wojciech Wislicki, 2007
We outline the design and lines of development of autonomous tools for computing Grid management, monitoring, and optimization. The management is proposed to be based on the notion of utility. Grid optimization is considered to be application-oriented. A generic Grid simulator is proposed as an optimization tool for Grid structure and functionality.
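The abstract leaves the notion of utility open; as a minimal sketch under invented assumptions, utility-driven resource selection might score each site by free capacity against cost and pick the maximizer. The utility function and its weights below are illustrative only.

```python
# Minimal sketch of utility-based Grid resource selection.
# The utility function and its weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    throughput: float  # jobs/hour the site can absorb
    cost: float        # normalized cost of running there
    load: float        # current utilization in [0, 1]

def utility(r: Resource, w_throughput=1.0, w_cost=0.5) -> float:
    """Higher is better: reward free capacity, penalize cost."""
    free_capacity = r.throughput * (1.0 - r.load)
    return w_throughput * free_capacity - w_cost * r.cost

def select(resources):
    return max(resources, key=utility)

sites = [Resource("site-a", 120, 1.0, 0.7),
         Resource("site-b", 80, 0.4, 0.2)]
print(select(sites).name)  # site-b: more free capacity per unit cost
```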
Job submissions of parallel applications to production supercomputer systems must be carefully tuned in terms of the job submission parameters to obtain minimum response times. In this work, we have developed an end-to-end resource management framework that uses predictions of queue waiting and execution times to minimize the response times of user jobs submitted to supercomputer systems. Our method for predicting queue waiting times adaptively chooses a prediction method based on the cluster structure of similar jobs. Our strategy for execution time prediction dynamically learns the impact of load on execution times and uses this to predict a set of execution time ranges for the target job. We have developed two resource management techniques that employ these predictions: one selects the number of processors for execution, and the other also dynamically changes the job submission time. Using workload simulations of large supercomputer traces, we show large-scale improvements in predictions and reductions in response times over existing techniques and baseline strategies.
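As a sketch of the first technique, processor-count selection, suppose we have predictors for queue waiting time and execution time as functions of the requested processor count; minimizing the predicted response time over the allowed counts could look like the following. Both predictor functions are crude stand-ins for the paper's learned models.

```python
# Sketch of selecting a processor count that minimizes predicted response
# time = predicted queue wait + predicted execution time. Both predictors
# are stand-ins for the paper's adaptive, learned models.
def predicted_wait(procs: int) -> float:
    """Stand-in: larger requests tend to wait longer in the queue."""
    return 0.5 * procs

def predicted_exec(procs: int) -> float:
    """Stand-in: Amdahl-style speedup with a 5% serial fraction."""
    serial, parallel = 0.05, 0.95
    base_time = 100.0
    return base_time * (serial + parallel / procs)

def best_processor_count(candidates):
    return min(candidates, key=lambda p: predicted_wait(p) + predicted_exec(p))

procs = best_processor_count([1, 2, 4, 8, 16, 32, 64])
print(procs, predicted_wait(procs) + predicted_exec(procs))  # 16 wins here
```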
Selecting optimal resources for submitting jobs on a computational Grid, or for accessing data from a data grid, is one of the most important tasks of any Grid middleware. Most modern Grid software fulfills this responsibility on a best-effort basis. Almost all decisions regarding scheduling and data access are made by the software automatically, giving users little or no control over the process. To address this, a more interactive set of services and middleware is desired that provides users with more information about Grid weather and gives them more control over the decision-making process. This paper presents a set of services that have been developed to provide more interactive resource management capabilities within the Grid Analysis Environment (GAE), being developed collaboratively by Caltech, NUST, and several other institutes. These include a steering service, a job monitoring service, and an estimator service, designed and written using a common Grid-enabled Web Services framework named Clarens. The paper also presents a performance analysis of the developed services to show that they have indeed resulted in a more interactive and powerful system for user-centric Grid-enabled physics analysis.
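Clarens exposes services over standard web protocols, XML-RPC among them; a client-side sketch of querying an estimator service might look like the code below. The endpoint URL, service path, and method name are assumptions for illustration, not the actual GAE interface.

```python
# Hypothetical client for a GAE-style estimator service.
# Clarens supports XML-RPC; the URL and method name here are invented.
import xmlrpc.client

def estimate_job(server_url, job_spec):
    proxy = xmlrpc.client.ServerProxy(server_url)
    # Assumed remote method returning e.g. {"wait_s": ..., "run_s": ...}
    return proxy.estimator.estimate(job_spec)

if __name__ == "__main__":
    estimate = estimate_job(
        "https://clarens.example.org:8443/clarens",  # placeholder endpoint
        {"dataset": "zmumu", "events": 100000, "site": "caltech"},
    )
    print(estimate)
```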
The current Cloud infrastructure services (IaaS) market employs a resource-based selling model: customers rent nodes from the provider and pay per node per unit time. This selling model places the burden on customers to predict their jobs' resource requirements and durations. Inaccurate prediction by customers can result in over-provisioning of resources, or in under-provisioning and poor job performance. Thanks to improved resource virtualization and multi-tenant performance isolation, as well as common frameworks for batch jobs such as MapReduce, Cloud providers can predict job completion times more accurately. We offer a new definition of QoS levels in terms of job completion times, and we present a new QoS-based selling mechanism for batch jobs in a multi-tenant OpenStack cluster. Our experiments show that the QoS-based solution yields up to a 40% improvement in revenue over more standard selling mechanisms based on a fixed per-node price, across various demand and supply conditions in a 240-VCPU OpenStack cluster.
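The abstract does not spell out the selling mechanism; as a toy model, a QoS-based price schedule might charge a multiplier for tighter completion-time guarantees on top of a base per-node-hour rate. All levels and numbers below are invented for illustration.

```python
# Toy QoS-based pricing for batch jobs: tighter completion-time
# guarantees cost more. All levels and numbers are invented.
QOS_LEVELS = {
    # level: (guaranteed completion within N hours, price multiplier)
    "gold":   (1.0, 2.0),
    "silver": (4.0, 1.3),
    "bronze": (12.0, 1.0),
}

def qos_price(predicted_node_hours: float, level: str,
              base_rate: float = 0.10) -> float:
    """Price a batch job from the provider's completion-time prediction."""
    _deadline_h, multiplier = QOS_LEVELS[level]
    return predicted_node_hours * base_rate * multiplier

# A job predicted to need 240 node-hours, sold at three QoS levels:
for level in QOS_LEVELS:
    print(level, round(qos_price(240, level), 2))
```

The key shift this models is that the provider, not the customer, owns the completion-time prediction, which is what the improved isolation and batch frameworks make feasible.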
Spatial Data Infrastructure (SDI) is an important concept for sharing spatial data across the web. Combined with spatial cloud computing and fog computing techniques, SDI has greater potential and has emerged as a tool for the processing, analysis, and transmission of spatial data. Fog computing is a paradigm in which fog devices increase throughput and reduce latency at the network edge, close to the client, relative to a cloud-only environment. This paper proposes and develops a fog-computing-based SDI framework for mining analytics from spatial big data for mineral resources management in India. We built a prototype using the Raspberry Pi, an embedded microprocessor, and validated it with a case study of mineral resources management in India, performing preliminary analysis including overlay analysis. Results show that fog computing holds great promise for the analysis of spatial data. We used open-source GIS software, namely QGIS and the QIS plugin, to reduce transmission from the fog node to the cloud.
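The overlay analysis mentioned can be sketched with open-source Python GIS tooling. The file names below are hypothetical, and the paper itself used QGIS rather than this library; the sketch only illustrates the kind of intersection overlay a fog node might compute before shipping a small result to the cloud.

```python
# Sketch of an overlay-analysis step using GeoPandas (hypothetical file
# names; the paper performed comparable analysis in QGIS on a fog node).
import geopandas as gpd

def mineral_blocks_in_forest(mineral_path, forest_path, out_path):
    minerals = gpd.read_file(mineral_path)   # mineral-resource polygons
    forests = gpd.read_file(forest_path)     # forest-cover polygons
    # Intersection keeps only mineral blocks that fall inside forest cover.
    overlap = gpd.overlay(minerals, forests, how="intersection")
    overlap.to_file(out_path)                # small result sent to the cloud
    return overlap

if __name__ == "__main__":
    result = mineral_blocks_in_forest(
        "mineral_blocks.shp", "forest_cover.shp", "overlap.shp")
    print(len(result), "overlapping features")
```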
