A Tighter Real-Time Communication Analysis for Wormhole-Switched Priority-Preemptive NoCs

Added by: Borislav Nikolic
Publication date: 2016
Language: English





Simulations and runtime measurements are among the methods that can be used to evaluate whether a given NoC-based platform can accommodate an application workload and fulfil its timing requirements. Yet, these techniques are often time-consuming and can therefore evaluate only a limited set of scenarios. This makes them unsuitable for safety-critical and hard real-time systems, where one of the fundamental requirements is to provide strong guarantees that all timing requirements will be met even under worst-case conditions. For such systems, analysis-based approaches are the only viable option. In this paper, the focus is on the real-time communication analysis of wormhole-switched priority-preemptive NoCs. First, we elaborate on the existing analysis and identify one source of pessimism. Then, we propose an extension that efficiently overcomes this limitation and allows for a less pessimistic analysis. Finally, through a comprehensive experimental evaluation, we compare the newly proposed approach against the existing one and observe how the trends change with different traffic parameters.
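For context, a widely used baseline for this kind of analysis (in the style of the classical analysis by Shi and Burns) bounds the worst-case network latency of each traffic flow with a fixed-point recurrence over the higher-priority flows that share links with it. The Python sketch below illustrates that baseline recurrence only; it is not the extension proposed in the paper, and the function name, data layout (the 'C', 'T', 'J' fields) and the deadline-based stopping condition are assumptions made for this example.

```python
from math import ceil

def worst_case_latency(flow, direct_interference_set, deadline):
    """Fixed-point iteration bounding the worst-case network latency of `flow`.

    Each flow is a dict with illustrative fields (assumed for this sketch):
      'C': basic (contention-free) network latency of one packet
      'T': minimum inter-arrival time of the flow
      'J': interference jitter of an interfering flow, commonly taken as
           R_j - C_j so that indirect interference is accounted for
    `direct_interference_set` contains only the higher-priority flows whose
    routes share at least one link with `flow`.
    Returns the latency bound, or None if it grows beyond `deadline`.
    """
    r = flow['C']  # start from the contention-free latency
    while True:
        interference = sum(
            ceil((r + hp['J']) / hp['T']) * hp['C']
            for hp in direct_interference_set
        )
        r_next = flow['C'] + interference
        if r_next == r:        # fixed point reached: this is the bound
            return r
        if r_next > deadline:  # no fixed point below the deadline
            return None
        r = r_next
```

For instance, a flow with C=5 and T=100 that shares a link with a single higher-priority flow having C=3, T=20 and J=0 converges to a bound of 5 + 3 = 8 time units.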



Related Research

There are several approaches to analyse the worst-case response times of sporadic packets transmitted over priority-preemptive wormhole networks. In this paper, we provide an overview of the different approaches, discuss their strengths and weaknesses, and propose an approach that captures all effects considered by previous approaches while providing tight yet safe upper bounds for packet response times. We specifically address the problems created by buffering and backpressure in wormhole networks, which amplify the problem of indirect interference in a way that has not been considered by the early analysis approaches. Didactic examples and large-scale experiments with synthetically generated packet flow sets provide evidence of the strength of the proposed approach.
Priority-aware networks-on-chip (NoCs) are used in industry to achieve predictable latency under different workload conditions. These NoCs incorporate deflection routing to minimize queuing resources within routers and achieve low latency during low traffic load. However, deflected packets can exacerbate congestion during high traffic load since they consume the NoC bandwidth. State-of-the-art analytical models for priority-aware NoCs ignore deflected traffic despite its significant latency impact during congestion. This paper proposes a novel analytical approach to estimate end-to-end latency of priority-aware NoCs with deflection routing under bursty and heavy traffic scenarios. Experimental evaluations show that the proposed technique outperforms alternative approaches and estimates the average latency for real applications with less than 8% error compared to cycle-accurate simulations.
Performance tools for forthcoming heterogeneous exascale platforms must address two principal challenges when analyzing execution measurements. First, measurement of extreme-scale executions generates large volumes of performance data. Second, performance metrics for heterogeneous applications are significantly sparse across code regions. To address these challenges, we developed a novel streaming aggregation approach to post-mortem analysis that employs both shared and distributed memory parallelism to aggregate sparse performance measurements from every rank, thread and GPU stream of a large-scale application execution. Analysis results are stored in a pair of sparse formats designed for efficient access to related data elements, supporting responsive interactive presentation and scalable data analytics. Empirical analysis shows that our implementation of this approach in HPCToolkit effectively processes measurement data from thousands of threads using a fraction of the compute resources employed by the application itself. Our approach is able to perform analysis up to 9.4 times faster and store analysis results 23 times smaller than HPCToolkit, providing a key building block for scalable exascale performance tools.
Graphs are widespread data structures used to model a wide variety of problems. The sheer amount of data to be processed has prompted the creation of a myriad of systems that help us cope with massive-scale graphs. The pressure to deliver fast responses to queries on the graph is higher than ever before, as it is demanded by many applications (e.g. online recommendations, auctions, terrorism protection, etc.). In addition, graphs change continuously (as do the real-world entities they typically represent). Systems must be ready for both: near real-time and dynamic massive graphs. We survey systems, taking their scalability, real-time potential and capability to support dynamic changes to the graph as driving guidelines. The main techniques and limitations are distilled and categorised. The algorithms that run on top of graph systems are not ready for prime-time dynamism either. Therefore, a short overview of dynamic graph algorithms has also been included.