
Continuous Performance Benchmarking Framework for ROOT

Published by: Oksana Shadura
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Foundational software libraries such as ROOT are under intense pressure to avoid software regressions, including performance regressions. Continuous performance benchmarking, as a part of continuous integration and other code-quality testing, is an industry best practice for understanding how the performance of a software product evolves over time. We present a framework, built from industry best practices and tools, that helps understand ROOT code performance and monitor the efficiency of the code across several processor architectures. It additionally provides historical performance measurements for the ROOT I/O, vectorization, and parallelization sub-systems.
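
As an illustration of the kind of micro-benchmark such a framework runs continuously, here is a minimal sketch using Google Benchmark, one of the industry tools in this space. The vector-summation workload is purely illustrative and stands in for a ROOT kernel; it is not taken from the paper.

    // bench_sketch.cpp -- minimal Google Benchmark micro-benchmark.
    // The summation workload is a hypothetical stand-in for a ROOT kernel.
    // Build (assuming the library is installed):
    //   g++ -O2 bench_sketch.cpp -lbenchmark -lpthread -o bench_sketch
    #include <benchmark/benchmark.h>
    #include <numeric>
    #include <vector>

    static void BM_VectorSum(benchmark::State& state) {
      std::vector<double> data(state.range(0), 1.0);  // input sized by Arg()
      for (auto _ : state) {                          // timed region
        double sum = std::accumulate(data.begin(), data.end(), 0.0);
        benchmark::DoNotOptimize(sum);                // keep the work alive
      }
      state.SetItemsProcessed(state.iterations() * state.range(0));
    }
    BENCHMARK(BM_VectorSum)->Arg(1 << 10)->Arg(1 << 20);

    BENCHMARK_MAIN();

Runs of such benchmarks, recorded per commit and per architecture, yield exactly the kind of historical time series the framework is built to track.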


Read also

The ROOT software framework is foundational for the HEP ecosystem, providing capabilities such as I/O, a C++ interpreter, a GUI, and math libraries. It uses object-oriented concepts and build-time components to define layers between them. We believe additional layering formalisms will benefit ROOT and its users. We present a modularization strategy for ROOT which aims to formalize the description of existing source components, make dependencies and other metadata available outside the build system, and allow post-install additions of functionality in the runtime environment. Components can then be grouped into packages, installable from external repositories, so that missing packages can be delivered as a post-install step. This provides a mechanism for the wider software ecosystem to interact with a minimalistic install. Reducing intra-component dependencies improves maintainability and code hygiene. We believe that maintaining the smallest possible base install will help embedding use cases. The modularization effort draws inspiration from the Java, Python, and Swift ecosystems. To keep aligned with modern C++, this strategy relies on forthcoming features such as C++ modules. We hope that formalizing the component layer will provide simpler ROOT installs, improve extensibility, and decrease the complexity of embedding ROOT in other ecosystems.
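
As a rough illustration of the C++ modules feature the strategy leans on (a generic sketch, not ROOT's actual module layout), a component can export an explicit interface that consumers import instead of including headers:

    // greeter.cppm -- hypothetical module interface unit (C++20).
    // Build steps vary by compiler (e.g. clang needs a precompile step).
    export module greeter;

    export const char* greet() { return "hello from a module"; }

    // main.cpp -- a consumer depends on the named module, not on header paths.
    #include <cstdio>
    import greeter;

    int main() { std::puts(greet()); }

Making dependencies explicit at this level is what would let a package manager reason about which components a minimal install actually needs.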
ROOT is a large code base with a complex set of build-time dependencies; there is a significant difference in compilation time between the core of ROOT and the full-fledged deployment. We present results on delayed builds for internal ROOT packages and external packages. This gives the ability to offer a lightweight core of ROOT, later extended by building additional modules that add functionality. As a part of this work, we have improved the separation of ROOT code into distinct modules and packages with minimal dependencies. This approach gives users better flexibility and the possibility to combine various build features without rebuilding from scratch. Dependency hell is a common problem in software, and particularly in the HEP software ecosystem. We would like to discuss an improved artifact-management (lazy-install) system as a solution to the dependency-hell problem. A HEP software stack usually consists of multiple sub-projects with dependencies; the development model is often distributed, independent, and non-coherent across sub-projects. We believe that software should be designed to take advantage of components that are already available, or have already been designed and implemented for use elsewhere, rather than reinventing the wheel. In our contribution, we present our approach to an artifact-management system for ROOT, together with a set of examples and use cases.
Yepang Liu, Lili Wei, Chang Xu (2016)
Resource leak bugs in Android apps are pervasive and can cause serious performance degradation and system crashes. In recent years, several resource leak detection techniques have been proposed to assist Android developers in correctly managing system resources. Yet, there exist no common bug benchmarks for effectively and reliably comparing such techniques and quantitatively evaluating their strengths and weaknesses. This paper describes our initial contribution towards constructing such a benchmark. To locate real resource leak bugs, we mined 124,215 code revisions of 34 large-scale open-source Android apps. We successfully found 298 fixed resource leaks, which cover a diverse set of resource classes, from 32 out of the 34 apps. To understand the characteristics of these bugs, we conducted an empirical study, which revealed the root causes of frequent resource leaks in Android apps and common patterns of faults made by developers. With our findings, we further implemented a static checker to detect a common pattern of resource leaks in Android apps. Experiments showed that the checker can effectively locate real resource leaks in popular Android apps, confirming the usefulness of our work.
The term randomized benchmarking refers to a collection of protocols that in the past decade have become the gold standard for characterizing quantum gates. These protocols aim at efficiently estimating the quality of a set of quantum gates in a way that is resistant to state preparation and measurement errors […]
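
For context (this background is standard in the randomized-benchmarking literature, not taken from the truncated abstract above): the resistance to state-preparation and measurement (SPAM) errors comes from fitting the average fidelity over random gate sequences of length $m$ to an exponential decay,

\[ F(m) = A\,p^{m} + B, \qquad r = \frac{(d-1)(1-p)}{d}, \]

where the constants $A$ and $B$ absorb SPAM errors, $p$ is the fitted depolarizing parameter, $d$ is the Hilbert-space dimension, and $r$ is the resulting average gate infidelity.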
Modern applications increasingly interact with web APIs -- reusable components, deployed and operated outside the application, and accessed over the network. Their existence, arguably, spurs application innovations, making it easy to integrate data or functionalities. While previous work has analyzed the ecosystem of web APIs and their design, little is known about web API quality at runtime. This gap is critical, as qualities including availability, latency, or provider security preferences can severely impact applications and user experience. In this paper, we revisit a 3-month, geo-distributed benchmark of popular web APIs, originally performed in 2015. We repeat this benchmark in 2018 and compare results from these two benchmarks regarding availability and latency. We furthermore introduce new results from assessing provider security preferences, collected both in 2015 and 2018, and results from our attempts to reach out to API providers with the results from our 2015 experiments. Our extensive experiments show that web API qualities vary (1) based on the geo-distribution of clients, (2) during our individual experiments, and (3) between the two experiments. Our findings provide evidence to foster the discussion around web API quality, and can act as a basis for the creation of tools and approaches to mitigate quality issues.
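
To make the measurement setup concrete, here is a minimal sketch of a single availability/latency probe in C++ using libcurl. The endpoint is hypothetical and the paper does not name its tooling, so this only illustrates the kind of measurement such a benchmark collects.

    // api_probe.cpp -- one availability/latency probe against a web API.
    // Build: g++ api_probe.cpp -lcurl -o api_probe
    #include <curl/curl.h>
    #include <cstdio>

    // Discard the response body; we only care about status and timing.
    static size_t discard(char*, size_t size, size_t nmemb, void*) {
      return size * nmemb;
    }

    int main() {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL* curl = curl_easy_init();
      if (!curl) return 1;

      // Hypothetical endpoint; substitute any web API under test.
      curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/status");
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
      curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L);

      CURLcode rc = curl_easy_perform(curl);  // availability: did the call succeed?
      if (rc == CURLE_OK) {
        long status = 0;
        double total = 0.0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total);  // latency in seconds
        std::printf("HTTP %ld in %.3f s\n", status, total);
      } else {
        std::printf("request failed: %s\n", curl_easy_strerror(rc));
      }

      curl_easy_cleanup(curl);
      curl_global_cleanup();
      return 0;
    }

Running such a probe periodically from clients in several regions gives exactly the geo-distributed availability and latency series the study compares across 2015 and 2018.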