
Methods Included: Standardizing Computational Reuse and Portability with the Common Workflow Language

Published by: Michael Crusoe
Publication date: 2021
Research field: Informatics engineering
Paper language: English





Computational workflows are widely used in data analysis, enabling innovation and decision-making. In many domains (e.g., bioinformatics, image analysis, and radio astronomy) the analysis components are numerous and written in multiple different computer languages by third parties. However, many competing workflow systems exist, and workflows are rarely portable between them; this hinders the transfer of workflows between systems, projects, and settings, leads to vendor lock-in, and limits their generic reusability. Here we present the Common Workflow Language (CWL) project, which produces free and open standards for describing command-line-tool-based workflows. The CWL standards provide a common but reduced set of abstractions that are both used in practice and implemented in many popular workflow systems. The CWL language is declarative, allowing the expression of computational workflows constructed from diverse software tools, each executed through its command-line interface. Being explicit about the runtime environment and any use of software containers enables portability and reuse. A workflow written according to the CWL standards is a reusable description of an analysis that is runnable on a diverse set of computing environments, and it contains enough information for advanced optimization without additional input from workflow authors. The CWL standards support polylingual workflows, enabling their portability and reuse and easing, for example, scholarly publication, fulfillment of regulatory requirements, and collaboration within and between academic research and industry, all while reducing implementation costs. CWL has been taken up by a wide variety of domains and industries, and support has been implemented in many major workflow systems.
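
As a concrete illustration (a minimal sketch of the idea, not an example reproduced from the paper), a CWL description of a single command-line tool that counts the lines of an input file could look as follows; the file names and the choice of container image are illustrative assumptions:

# count-lines.cwl -- a minimal CWL tool description (illustrative)
cwlVersion: v1.2
class: CommandLineTool
baseCommand: wc
arguments: ["-l"]
hints:
  DockerRequirement:                 # being explicit about the runtime environment
    dockerPull: debian:stable-slim   # a pinned container image aids portability
inputs:
  infile:
    type: File
    inputBinding:
      position: 1                    # passed as the first positional argument
stdout: line_count.txt
outputs:
  line_count:
    type: stdout                     # capture the tool's standard output as a File

Because the description is declarative, any CWL-conformant engine can decide how and where to run it; with the reference runner this would be, for example, cwltool count-lines.cwl --infile data.txt.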




Read also

The user-facing components of the Cyberinfrastructure (CI) ecosystem, science gateways and scientific workflow systems, share a common need of interfacing with physical resources (storage systems and execution environments) to manage data and execute codes (applications). However, there is no uniform, platform-independent way to describe either the resources or the applications. To address this, we propose uniform semantics for describing resources and applications that will be relevant to a diverse set of stakeholders. We sketch a solution to the problem of a common description and catalog of resources: we describe an approach to implementing a resource registry for use by the community and discuss potential approaches to some long-term challenges. We conclude by looking ahead to the application description language.
E. Calore, A. Gabbana, J. Kraus (2017)
An increasingly large number of HPC systems rely on heterogeneous architectures combining traditional multi-core CPUs with power-efficient accelerators. Designing efficient applications for these systems has been troublesome in the past, as accelerators could usually be programmed only using device-specific languages, threatening maintainability, portability, and correctness. Several new programming environments try to tackle this problem. Among them, OpenACC offers a high-level approach based on compiler directive clauses to mark regions of existing C, C++, or Fortran code to run on accelerators. This approach directly addresses code portability, leaving support for each different accelerator to the compilers, but one has to carefully assess the relative costs of portable approaches versus computing efficiency. In this paper we address precisely this issue, using as a test bench a massively parallel Lattice Boltzmann algorithm. We first describe our multi-node implementation and optimization of the algorithm, using OpenACC and MPI. We then benchmark the code on a variety of processors, including traditional CPUs and GPUs, and make accurate performance comparisons with other GPU implementations of the same algorithm using CUDA and OpenCL. We also assess the performance impact associated with portable programming, and the actual portability and performance portability of OpenACC-based applications across several state-of-the-art architectures.
We discuss the issue of what we call "incentive mismatch," a fundamental problem with public blockchains supported by economic incentives. This is an open problem, but one potential solution is to make applications portable. Portability is desirable for applications on private blockchains. Then, we present examples of middleware designs that enable application portability and, in particular, support migration between blockchains.
A determinacy race occurs if two or more logically parallel instructions access the same memory location and at least one of them tries to modify its content. Races often lead to nondeterministic and incorrect program behavior. A data race is a special case of a determinacy race which can be eliminated by associating a mutual-exclusion lock or allowing atomic accesses to the memory location. However, such solutions can reduce parallelism by serializing all accesses to that location. For associative and commutative updates, reducers allow parallel race-free updates at the expense of using some extra space. We ask the following question. Given a fixed budget of extra space to mitigate the cost of races in a parallel program, which memory locations should be assigned reducers and how should the space be distributed among the reducers in order to minimize the overall running time? We argue that the races can be captured by a directed acyclic graph (DAG), with nodes representing memory cells and arcs representing read-write dependencies between cells. We then formulate our optimization problem on DAGs. We concentrate on a variation of this problem where space reuse among reducers is allowed by routing extra space along a source-to-sink path of the DAG and using it in the construction of reducers along the path. We consider two reducers and the corresponding duration functions (i.e., reduction time as a function of space budget). We generalize our race-avoiding space-time tradeoff problem to a discrete resource-time tradeoff problem with general non-increasing duration functions and resource reuse over paths. For general DAGs, the offline problem is strongly NP-hard under all three duration functions, and we give approximation algorithms. We also prove hardness of approximation for the general resource-time tradeoff problem and give a pseudo-polynomial time algorithm for series-parallel DAGs.
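
One plausible formalization of the optimization problem sketched above (an illustrative reading of the abstract, not necessarily the paper's exact statement): given the DAG G=(V,E), a total extra-space budget B, and a non-increasing duration function d_v for every node v, choose a space allocation s supported by a routing f of the budget along source-to-sink paths so as to minimize the critical-path running time:

\begin{aligned}
\min_{s,\,f}\quad & \max_{P \in \mathcal{P}(G)} \sum_{v \in P} d_v(s_v) \\
\text{s.t.}\quad  & s_v \le f(v) \ \ \forall v \in V, \qquad \operatorname{val}(f) \le B,
\end{aligned}

where \mathcal{P}(G) is the set of source-to-sink paths of G, f is a flow of value \operatorname{val}(f) from source to sink, and f(v) is the amount of budget routed through v; routing along paths is what models the reuse of space by reducers lying on the same path.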
The role of scalable high-performance workflows and flexible workflow management systems that can support multiple simulations will continue to increase in importance. For example, with the end of Dennard scaling, there is a need to substitute a single long-running simulation with multiple repeats of shorter simulations, or concurrent replicas. Further, many scientific problems involve ensembles of simulations in order to solve a higher-level problem or produce statistically meaningful results. However, most supercomputing software development and performance enhancements have focused on optimizing single-simulation performance. On the other hand, there is a strong inconsistency in the definition and practice of workflows and workflow management systems. This inconsistency often centers around the difference between several different types of workflows, including modeling and simulation, grid, uncertainty quantification, and purely conceptual workflows. This work explores this phenomenon by examining the different types of workflows and workflow management systems, reviewing the perspective of a large supercomputing facility, examining the common features and problems of workflow management systems, and finally presenting a proposed solution based on the concept of common building blocks. The implications of the continuing proliferation of workflow management systems and the lack of interoperability between these systems are discussed from a practical perspective. In doing so, we have begun an investigation of the design and implementation of open workflow systems for supercomputers based upon common components.
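
Tying this back to the CWL paper above, one concrete form the proposed common building blocks can take is a declarative workflow description that composes independent command-line steps through their declared inputs and outputs. A sketch in CWL (the step names and the referenced tool files untar.cwl and compile.cwl are hypothetical and would be separate tool descriptions like the one shown earlier):

# make-class.cwl -- a two-step workflow sketch (illustrative)
cwlVersion: v1.2
class: Workflow
inputs:
  tarball: File
  name_of_file_to_extract: string
outputs:
  compiled_class:
    type: File
    outputSource: compile/classfile   # expose the second step's output
steps:
  untar:                              # step 1: extract one file from the tarball
    run: untar.cwl
    in:
      tarfile: tarball
      extractfile: name_of_file_to_extract
    out: [extracted_file]
  compile:                            # step 2: consume step 1's output
    run: compile.cwl
    in:
      src: untar/extracted_file       # dataflow link between the steps
    out: [classfile]

Because the steps are linked only by dataflow, an engine is free to schedule, distribute, or cache them however it likes, which is exactly the kind of system-level freedom interoperable building blocks are meant to preserve.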