D2.4 Report on the final prototype of programming abstractions for energy-efficient inter-process communication

Published by Vi Tran
Publication date: 2018
Research field: Informatics Engineering
Language: English





Work Package 2 (WP2) aims to develop libraries for energy-efficient inter-process communication and data sharing on the EXCESS platforms. Deliverable D2.4 reports on the final prototype of programming abstractions for energy-efficient inter-process communication. Section 1 gives an updated overview of the prototype of programming abstractions and the devised power/energy models. Sections 2-6 contain the latest results of the four studies: i) GreenBST, an energy-efficient concurrent search tree (cf. Section 2); ii) a customization methodology for implementing streaming aggregation in embedded systems (cf. Section 3); iii) an energy model on CPU for lock-free data structures in dynamic environments (cf. Section 4.10); iv) a general and validated energy complexity model for multithreaded algorithms (cf. Section 5).
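As a rough illustration of the kind of energy model such a deliverable develops, the sketch below combines a static-power term with per-operation and per-memory-access dynamic terms. The decomposition, the coefficient names, and the numeric values are assumptions made for illustration only, not the model validated in the deliverable.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative coefficients for an energy estimate of a multithreaded run.
// All names and values are assumptions, not measured EXCESS parameters.
struct EnergyCoefficients {
    double static_power_watts;     // platform static/idle power
    double joules_per_op;          // dynamic energy per arithmetic operation
    double joules_per_mem_access;  // dynamic energy per memory access
};

// Energy = static power * time + per-operation and per-access dynamic energy.
double estimate_energy(const EnergyCoefficients& c,
                       std::uint64_t work, std::uint64_t mem_accesses,
                       double seconds) {
    return c.static_power_watts * seconds
         + c.joules_per_op * static_cast<double>(work)
         + c.joules_per_mem_access * static_cast<double>(mem_accesses);
}

int main() {
    EnergyCoefficients c{30.0, 0.5e-9, 5e-9};  // hypothetical values
    double joules = estimate_energy(c, 1000000000ULL, 50000000ULL, 2.0);
    std::printf("estimated energy: %.3f J\n", joules);
}
```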




Read also

This deliverable reports the results of white-box methodologies and early results of the first prototype of libraries and programming abstractions available by project month 18 from Work Package 2 (WP2). It reports i) the latest results of Task 2.2 on white-box methodologies, programming abstractions and libraries for developing energy-efficient data structures and algorithms, and ii) the improved results of Task 2.1 on investigating and modeling the trade-off between energy and performance of concurrent data structures and algorithms. The work has been conducted on the two main EXCESS platforms: Intel platforms with recent Intel multicore CPUs and the Movidius Myriad1 platform. Regarding white-box methodologies, we have devised new relaxed cache-oblivious models and proposed a new power model for the Myriad1 platform and an energy model for lock-free queues on CPU platforms. For the Myriad1 platform, the improved model now considers both computation and data-movement costs as well as architecture and application properties. The model has been evaluated with a set of micro-benchmarks and application benchmarks. For Intel platforms, we have generalized the model for concurrent queues on CPU platforms to offer more flexibility according to the workers calling the data structure (the parallel section sizes of enqueuers and dequeuers are decoupled). Regarding programming abstractions and libraries, we have continued investigating the trade-offs between energy consumption and performance of data structures such as concurrent queues and concurrent search trees, based on the early results of Task 2.1. The preliminary results show that our concurrent trees are faster and more energy efficient than the state of the art on commodity HPC and embedded platforms.
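The decoupling of enqueuer and dequeuer parallel-section sizes can be pictured with a small back-of-the-envelope throughput estimate. The sketch below is a hypothetical illustration, not the model from the deliverable: it assumes queue operations serialize at a single shared access point and that each side is otherwise limited only by its own parallel section.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical parameters for one side of a lock-free queue (either the
// enqueuers or the dequeuers). All names and the formula are illustrative
// assumptions, not the energy/throughput model from the deliverable.
struct Side {
    int    threads;                  // number of enqueuers or dequeuers
    double parallel_section_cycles;  // local work between two queue operations
    double queue_op_cycles;          // cost of one enqueue or dequeue
};

double side_rate(const Side& s) {
    // Unsaturated: each thread completes one operation per (parallel + op) cycles.
    double unsaturated = s.threads / (s.parallel_section_cycles + s.queue_op_cycles);
    // Saturated: operations serialize on the shared head or tail of the queue.
    double saturated = 1.0 / s.queue_op_cycles;
    return std::min(unsaturated, saturated);
}

int main() {
    Side enqueuers{4, 400.0, 100.0};
    Side dequeuers{2, 200.0, 100.0};
    // Sustained enqueue/dequeue pair throughput is limited by the slower side.
    double pairs_per_cycle = std::min(side_rate(enqueuers), side_rate(dequeuers));
    std::printf("estimated pairs per cycle: %f\n", pairs_per_cycle);
}
```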
Despite the stringent requirements of a real-time system, the reliance of the Robot Operating System (ROS) on the loopback network interface imposes a considerable overhead on the transport of high-bandwidth data, while the nodelet package, which is an efficient mechanism for intra-process communication, does not address the problem of efficient local inter-process communication (IPC). To remedy this, we propose a novel integration into ROS of smart pointers and synchronisation primitives stored in shared memory. These obey the same semantics and, more importantly, exhibit the same performance as their C++ standard library counterparts, making them preferable to other local IPC mechanisms. We present a series of benchmarks for our mechanism - which we call LOT (Low Overhead Transport) - and use them to assess its performance on realistic data loads based on Five's Autonomous Vehicle (AV) system, and extend our analysis to the case where multiple ROS nodes are running in Docker containers. We find that our mechanism performs up to two orders of magnitude better than the standard IPC via local loopback. Finally, we apply industry-standard profiling techniques to explore the hotspots of code running in both user and kernel space, comparing our implementation against alternatives.
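The general idea of local IPC through shared-memory objects with standard-library-like semantics can be illustrated with Boost.Interprocess; this is not LOT's actual API, and the segment and object names below are made up for the example.

```cpp
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <cstring>

namespace bip = boost::interprocess;

// A fixed-size message guarded by a process-shared mutex. LOT exposes
// shared-memory smart pointers with std-like semantics; this is only a
// generic Boost.Interprocess illustration of the underlying idea.
struct SharedMsg {
    bip::interprocess_mutex mutex;
    char payload[256];
};

int main() {
    // Producer side: create (or open) a shared segment and construct the message in it.
    bip::managed_shared_memory segment(bip::open_or_create, "demo_segment", 65536);
    SharedMsg* msg = segment.find_or_construct<SharedMsg>("demo_msg")();

    {
        bip::scoped_lock<bip::interprocess_mutex> lock(msg->mutex);
        std::strcpy(msg->payload, "hello from another process");
    }
    // A consumer process would open the same segment and names, take the lock,
    // and read msg->payload directly, avoiding any copy through local loopback.
}
```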
Applications in many domains require processing moving object trajectories. In this work, we focus on a trajectory similarity search that finds all trajectories within a given distance of a query trajectory over a time interval, which we call the distance threshold similarity search. We develop three indexing strategies with spatial, temporal and spatiotemporal selectivity for the GPU that differ significantly from indexes suitable for the CPU, and show the conditions under which each index achieves good performance. Furthermore, we show that the GPU implementations outperform multithreaded CPU implementations in a range of experimental scenarios, making the GPU an attractive technology for processing moving object trajectories. We test our implementations on two synthetic and one real-world dataset of a galaxy merger.
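For reference, a distance-threshold similarity search can be expressed as a straightforward nested loop on the CPU. The sketch below is such a brute-force baseline with an assumed point-wise matching rule; it does not include any of the spatial, temporal, or spatiotemporal GPU indexes the paper develops.

```cpp
#include <cmath>
#include <vector>

// A trajectory point: position plus timestamp. Names are illustrative.
struct Point { double x, y, t; };
using Trajectory = std::vector<Point>;

// Brute-force sketch: report every trajectory in the dataset that comes within
// `dist` of some query point during the time interval [t0, t1]. The point-wise
// matching rule is an assumption made for illustration, not the paper's definition.
std::vector<std::size_t> threshold_search(const Trajectory& query,
                                          const std::vector<Trajectory>& dataset,
                                          double dist, double t0, double t1) {
    std::vector<std::size_t> result;
    for (std::size_t i = 0; i < dataset.size(); ++i) {
        bool match = false;
        for (const Point& q : query) {
            if (q.t < t0 || q.t > t1) continue;
            for (const Point& p : dataset[i]) {
                if (p.t < t0 || p.t > t1) continue;
                if (std::hypot(q.x - p.x, q.y - p.y) <= dist) { match = true; break; }
            }
            if (match) break;
        }
        if (match) result.push_back(i);
    }
    return result;
}
```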
Dominic Orchard, 2016
PLACES 2016 (full title: Programming Language Approaches to Concurrency- and Communication-Centric Software) is the ninth edition of the PLACES workshop series. After the first PLACES, which was affiliated to DisCoTec in 2008, the workshop has been part of ETAPS every year since 2009 and is now an established part of the ETAPS satellite events. PLACES 2016 was held on 8th April in Eindhoven, The Netherlands. The workshop series was started in order to promote the application of novel programming language ideas to the increasingly important problem of developing software for systems in which concurrency and communication are intrinsic aspects. This includes software for both multi-core systems and large-scale distributed and/or service-oriented systems. The scope of PLACES includes new programming language features, whole new programming language designs, new type systems, new semantic approaches, new program analysis techniques, and new implementation mechanisms. This volume consists of the papers accepted for presentation at the workshop.
Programming models for building large-scale distributed applications assist the developer in reasoning about consistency and distribution. However, many of the programming models for weak consistency, which promise the largest scalability gains, have little in the way of evaluation to demonstrate the promised scalability. We present an experience report on the implementation and large-scale evaluation of one of these models, Lasp, originally presented at PPDP '15, which provides a declarative, functional programming style for distributed applications. We demonstrate the scalability of Lasp's prototype runtime implementation up to 1024 nodes in the Amazon cloud computing environment. It achieves high scalability by uniquely combining hybrid gossip with a programming model based on convergent computation. We report on the engineering challenges of this implementation and its evaluation, specifically related to operating research prototypes in a production cloud environment.
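Lasp's convergent computation builds on CRDTs, whose defining property is an order-insensitive merge. The sketch below shows that property on a grow-only counter; it is written in C++ purely for illustration (Lasp itself is implemented in Erlang) and is not Lasp code.

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <string>

// Grow-only counter (G-Counter), one of the convergent replicated data types
// (CRDTs) that underpin programming models such as Lasp. Generic illustration
// only; names and structure are not taken from Lasp's implementation.
struct GCounter {
    std::map<std::string, std::uint64_t> per_node;  // node id -> local count

    void increment(const std::string& node) { per_node[node] += 1; }

    std::uint64_t value() const {
        std::uint64_t sum = 0;
        for (const auto& [node, n] : per_node) sum += n;
        return sum;
    }

    // Merging is a pointwise max, so replicas converge to the same value
    // regardless of the order in which gossip delivers updates.
    void merge(const GCounter& other) {
        for (const auto& [node, n] : other.per_node)
            per_node[node] = std::max(per_node[node], n);
    }
};
```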