
Automated Dynamic Concurrency Analysis for Go

Added by Saeed Taheri
Publication date: 2021
Language: English





The concurrency features of the Go language have proven versatile in the development of a number of concurrent systems. However, correctness methods that address the challenges of Go concurrency debugging have received little attention. In this work, we present an automatic dynamic tracing mechanism that efficiently captures and helps analyze the whole-program concurrency model. Using an enhancement to Go's built-in tracer package and a framework that collects dynamic traces from application executions, we enable thorough post-mortem analysis for concurrency debugging. We present preliminary results on the effectiveness and scalability (to more than 2K goroutines) of the proposed dynamic tracing for concurrency debugging, and we discuss future directions for exploiting dynamic tracing to accelerate the exposure of concurrency bugs.
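As a point of reference for the tracing workflow described above, the sketch below shows how an execution trace is captured with Go's standard runtime/trace package, which the paper's mechanism enhances; the enhancement itself is not shown, and the workload is a made-up example. The resulting trace.out file can be explored post-mortem with `go tool trace trace.out`.

```go
package main

import (
	"fmt"
	"os"
	"runtime/trace"
	"sync"
)

func main() {
	// Write the execution trace to a file for post-mortem analysis.
	f, err := os.Create("trace.out")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if err := trace.Start(f); err != nil {
		panic(err)
	}
	defer trace.Stop()

	// A small concurrent workload: the trace records goroutine
	// creation, blocking, and channel communication events.
	var wg sync.WaitGroup
	ch := make(chan int)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			ch <- id
		}(i)
	}
	go func() {
		wg.Wait()
		close(ch)
	}()
	for id := range ch {
		fmt.Println("received from goroutine", id)
	}
}
```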



Related research

Service Level Agreements (SLAs) are commonly used to specify the quality attributes agreed between cloud service providers and their customers. A violation of an SLA can result in high penalties. To allow SLA compliance to be analyzed before services are deployed, we describe in this paper an approach for SLA-aware deployment of services on the cloud and illustrate its workflow by means of a case study. The approach is based on formal models combined with static analysis tools and generated runtime monitors. As such, it fits well within a methodology that combines software development with information technology operations (DevOps).
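As a rough illustration of the kind of check a generated runtime monitor might perform, here is a minimal Go sketch that flags calls exceeding a latency bound; the 200 ms bound and the slaMonitor helper are hypothetical and not taken from the paper.

```go
package main

import (
	"fmt"
	"time"
)

// slaMonitor times a call and reports whether it met the agreed latency
// bound. The bound is a hypothetical SLA parameter, not one from the paper.
func slaMonitor(bound time.Duration, call func()) (time.Duration, bool) {
	start := time.Now()
	call()
	elapsed := time.Since(start)
	return elapsed, elapsed <= bound
}

func main() {
	elapsed, ok := slaMonitor(200*time.Millisecond, func() {
		time.Sleep(50 * time.Millisecond) // stand-in for a service request
	})
	if !ok {
		fmt.Printf("SLA violated: call took %v\n", elapsed)
	} else {
		fmt.Printf("within SLA: %v\n", elapsed)
	}
}
```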
A consistent theme in software experimentation at Microsoft has been solving problems of experimentation at scale for a diverse set of products. Running experiments at scale (i.e., many experiments on many users) has become state of the art across the industry. However, providing a single platform that allows software experimentation in a highly heterogeneous and constantly evolving ecosystem remains a challenge. In our case, heterogeneity spans multiple dimensions. First, we need to support experimentation for many types of products: websites, search engines, mobile apps, operating systems, cloud services, and others. Second, due to the diversity of the products and teams using our platform, it needs to be flexible enough to analyze data in multiple compute fabrics (e.g., Spark, Azure Data Explorer), with a way to easily add support for new fabrics if needed. Third, one of the main factors in growing an experimentation culture in an organization is democratizing the metric definition and analysis processes. To achieve that, our system needs to be simple enough to be used not only by data scientists but also by engineers, product managers, and sales teams. Finally, different personas may need to use the platform for different types of analyses, e.g., dashboards or experiment analysis, and the platform should be flexible enough to accommodate that. This paper presents our solution to the problems of heterogeneity listed above.
Graphs are widespread data structures used to model a wide variety of problems. The sheer amount of data to be processed has prompted the creation of a myriad of systems that help us cope with massive-scale graphs. The pressure to deliver fast responses to queries on the graph is higher than ever before, as demanded by many applications (e.g., online recommendations, auctions, terrorism protection). In addition, graphs change continuously, as do the real-world entities they typically represent. Systems must therefore be ready for both near-real-time queries and dynamic massive graphs. We survey systems using their scalability, real-time potential, and capability to support dynamic changes to the graph as driving guidelines. The main techniques and limitations are distilled and categorised. The algorithms that run on top of graph systems are not ready for prime-time dynamism either; therefore, a short overview of dynamic graph algorithms has also been included.
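To make the dynamic-graph workload concrete, below is a minimal Go sketch of an adjacency structure that tolerates concurrent edge insertions; it only illustrates the update pattern such systems must absorb and is not a structure from any surveyed system.

```go
package main

import (
	"fmt"
	"sync"
)

// dynGraph is a minimal adjacency-set graph guarded by a reader-writer
// lock so that edge insertions and queries can proceed concurrently.
type dynGraph struct {
	mu  sync.RWMutex
	adj map[int]map[int]struct{}
}

func newDynGraph() *dynGraph {
	return &dynGraph{adj: make(map[int]map[int]struct{})}
}

func (g *dynGraph) addEdge(u, v int) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.adj[u] == nil {
		g.adj[u] = make(map[int]struct{})
	}
	g.adj[u][v] = struct{}{}
}

func (g *dynGraph) degree(u int) int {
	g.mu.RLock()
	defer g.mu.RUnlock()
	return len(g.adj[u])
}

func main() {
	g := newDynGraph()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			g.addEdge(0, i) // edge updates arrive concurrently
		}(i)
	}
	wg.Wait()
	fmt.Println("degree of node 0:", g.degree(0))
}
```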
In this paper, we develop RCC, the first unified and comprehensive RDMA-enabled distributed transaction processing framework supporting six serializable concurrency control protocols: not only the classical protocols NOWAIT, WAITDIE, and OCC, but also the more advanced MVCC and SUNDIAL, and even CALVIN, the deterministic concurrency control protocol. Our goal is to compare the protocols without bias in a common execution environment, with the concurrency control protocol being the only changeable component. We focus on correct and efficient implementation using key techniques such as co-routines, outstanding requests, and doorbell batching, with both two-sided and one-sided communication primitives. Based on RCC, we gain deep insights that cannot be obtained with any existing system. Most importantly, we obtain execution-stage latency breakdowns with one-sided and two-sided primitives for each protocol, which we analyze to develop more efficient hybrid implementations. Our results show that three hybrid designs are indeed better than both the one-sided and two-sided implementations, by up to 17.8%. We believe that RCC is a significant advance over the state of the art; it can both provide performance insights and serve as common infrastructure for fast prototyping of new implementations.
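As an illustration of the simplest of the listed protocols, the Go sketch below mimics NOWAIT locking: a transaction that cannot acquire a record's lock aborts immediately instead of queueing, trading wasted work for freedom from lock-wait deadlocks. The record type and counters are hypothetical, and none of RCC's RDMA machinery is modeled.

```go
package main

import (
	"fmt"
	"sync"
)

// record guards a single data item with a mutex standing in for a lock
// table entry.
type record struct {
	mu    sync.Mutex
	value int
}

// tryWrite attempts a NOWAIT-style write: abort on contention rather
// than block. TryLock requires Go 1.18+.
func (r *record) tryWrite(v int) bool {
	if !r.mu.TryLock() {
		return false // conflict: abort; the caller may retry the transaction
	}
	defer r.mu.Unlock()
	r.value = v
	return true
}

func main() {
	r := &record{}
	var committed, aborted int
	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			ok := r.tryWrite(i)
			mu.Lock()
			if ok {
				committed++
			} else {
				aborted++
			}
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	fmt.Printf("committed=%d aborted=%d\n", committed, aborted)
}
```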
In the past decade, the high-performance compute capabilities of heterogeneous GPGPU platforms have led to the popularity of data-parallel programming languages such as CUDA and OpenCL. Such languages, however, involve a steep learning curve and require an extensive understanding of the underlying architecture of the compute devices in heterogeneous platforms. This has led to the emergence of several high-performance computing frameworks that provide high-level abstractions to ease the development of data-parallel applications on heterogeneous platforms. However, the scheduling decisions undertaken by such frameworks exploit only coarse-grained concurrency in data-parallel applications. In this paper, we propose PySchedCL, a framework which explores fine-grained, concurrency-aware scheduling decisions that harness the power of heterogeneous CPU/GPU architectures efficiently. We showcase the efficacy of such scheduling mechanisms over existing coarse-grained dynamic scheduling schemes through extensive experimental evaluations of a machine-learning-based inference application.
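To convey the intuition behind fine-grained scheduling (PySchedCL itself targets Python/OpenCL), here is a hypothetical Go sketch in which a kernel's index space is split into small chunks pulled from a shared queue by per-device workers, so faster devices naturally take on more work; the device names and chunk size are invented for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// chunk is a slice of a data-parallel kernel's index space. Fine-grained
// scheduling dispatches many small chunks to whichever device is free,
// rather than assigning one whole kernel per device.
type chunk struct{ lo, hi int }

func runOnDevice(name string, work <-chan chunk, wg *sync.WaitGroup, done chan<- string) {
	defer wg.Done()
	for c := range work {
		// Stand-in for enqueueing the chunk on a CPU or GPU command queue.
		done <- fmt.Sprintf("%s ran [%d,%d)", name, c.lo, c.hi)
	}
}

func main() {
	work := make(chan chunk)
	done := make(chan string, 16)
	var wg sync.WaitGroup
	for _, dev := range []string{"cpu0", "gpu0"} { // hypothetical devices
		wg.Add(1)
		go runOnDevice(dev, work, &wg, done)
	}
	// Split a 1024-element index space into fine-grained chunks; devices
	// pull chunks from the shared queue at their own pace.
	for lo := 0; lo < 1024; lo += 128 {
		work <- chunk{lo, lo + 128}
	}
	close(work)
	wg.Wait()
	close(done)
	for msg := range done {
		fmt.Println(msg)
	}
}
```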