
Enumerating Hardware-Software Splits with Program Rewriting

Added by Gus Henry Smith
Publication date: 2020
Language: English





A core problem in hardware-software codesign is the sheer size of the design space. Without a set ISA to constrain the hardware-software interface, the design space explodes. This work presents a strategy for managing the massive hardware-software design space within the domain of machine learning inference workloads and accelerators. We first propose EngineIR, a new language for representing machine learning hardware and software in a single program. Then, using equality graphs -- a data structure from the compilers literature -- we suggest a method for efficiently enumerating the design space by performing rewrites over our representation.
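The abstract does not spell out EngineIR or its rewrite rules, but the core idea of enumerating hardware-software splits with equality graphs can be sketched with the egg e-graph library for Rust. In the sketch below, the toy language, the operator names (matmul, systolic-array), and the single rewrite are illustrative assumptions rather than the paper's actual definitions: applying the rewrite adds a hardware implementation of a software operator to the e-graph without discarding the original, so both sides of the split remain available for enumeration.

```rust
// A minimal sketch, assuming the `egg` crate is available as a dependency;
// the language and rewrite below are illustrative, not the paper's EngineIR.
use egg::{rewrite as rw, *};

define_language! {
    enum ToyIr {
        // Software-level matrix multiply: (matmul A B)
        "matmul" = MatMul([Id; 2]),
        // Hypothetical hardware unit that could implement it: (systolic-array A B)
        "systolic-array" = SystolicArray([Id; 2]),
        Symbol(Symbol),
    }
}

fn main() {
    // One rewrite that offloads a software matmul onto the hardware unit.
    let rules: Vec<Rewrite<ToyIr, ()>> = vec![rw!(
        "map-matmul-to-systolic-array";
        "(matmul ?a ?b)" => "(systolic-array ?a ?b)"
    )];

    let start: RecExpr<ToyIr> = "(matmul weights activations)".parse().unwrap();
    let runner = Runner::default().with_expr(&start).run(&rules);

    // The e-graph now contains both the pure-software and the offloaded variant,
    // i.e. two candidate hardware-software splits sharing one compact structure.
    println!("e-graph holds {} e-nodes", runner.egraph.total_size());
}
```

Because an e-graph keeps every equivalent form it has discovered, repeatedly applying such rewrites enumerates many candidate splits without materializing each program variant separately.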




Related research

In recent years the computing landscape has seen an increasing shift towards specialized accelerators. Field programmable gate arrays (FPGAs) are particularly promising as they offer significant performance and energy improvements compared to CPUs for a wide class of applications and are far more flexible than fixed-function ASICs. However, FPGAs are difficult to program. Traditional programming models for reconfigurable logic use low-level hardware description languages like Verilog and VHDL, which have none of the productivity features of modern software development languages but produce very efficient designs, and low-level software languages like C and OpenCL coupled with high-level synthesis (HLS) tools that typically produce designs that are far less efficient. Functional languages with parallel patterns are a better fit for hardware generation because they both provide high-level abstractions to programmers with little experience in hardware design and avoid many of the problems faced when generating hardware from imperative languages. In this paper, we identify two optimizations that are important when using parallel patterns to generate hardware: tiling and metapipelining. We present a general representation of tiled parallel patterns, and provide rules for automatically tiling patterns and generating metapipelines. We demonstrate experimentally that these optimizations result in speedups up to 40x on a set of benchmarks from the data analytics domain.
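As a rough illustration of what tiling a parallel pattern means (a minimal sketch in Rust, not the paper's tiled-pattern representation or its hardware generator), the reduction below is rewritten from a flat form into an outer reduction over per-tile partial results; on an FPGA, each per-tile reduction could then become a stage of a metapipeline.

```rust
// Minimal sketch of tiling a parallel "reduce" pattern.
// The flat reduction over all elements is rewritten as an outer reduction
// over per-tile partial results; the tile size is an illustrative parameter.

fn reduce_flat(data: &[f32]) -> f32 {
    data.iter().sum()
}

fn reduce_tiled(data: &[f32], tile_size: usize) -> f32 {
    data.chunks(tile_size)                      // split the iteration space into tiles
        .map(|tile| tile.iter().sum::<f32>())   // inner (per-tile) partial reduction
        .sum()                                  // outer reduction over the partial results
}

fn main() {
    let data: Vec<f32> = (0..1024).map(|i| i as f32).collect();
    assert_eq!(reduce_flat(&data), reduce_tiled(&data, 64));
    println!("tiled and flat reductions agree");
}
```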
151 - Horatiu Cirstea 2018
This volume contains the formal proceedings of the 4th International Workshop on Rewriting Techniques for Program Transformations and Evaluation (WPTE 2017), held on 8th September 2017 in Oxford, United Kingdom, and affiliated with the Second International Conference on Formal Structures for Computation and Deduction (FSCD 2017).
Memory disaggregation has attracted great attention recently because of its benefits in efficient memory utilization and ease of management. So far, memory disaggregation research has taken one of two approaches: building/emulating memory nodes with either regular servers or raw memory devices with no processing power. The former incurs higher monetary cost and faces tail latency and scalability limitations, while the latter introduces performance, security, and management problems. Server-based memory nodes and memory nodes with no processing power are two extreme approaches. We seek a sweet spot in the middle by proposing a hardware-based memory disaggregation solution that has the right amount of processing power at memory nodes. Furthermore, we take a clean-slate approach by starting from the requirements of memory disaggregation and designing a memory-disaggregation-native system. We propose a hardware-based disaggregated memory system, Clio, that virtualizes and manages disaggregated memory at the memory node. Clio includes a new hardware-based virtual memory system, a customized network system, and a framework for computation offloading. In building Clio, we not only co-design OS functionalities, hardware architecture, and the network system, but also co-design the compute node and memory node. We prototyped Clio's memory node with an FPGA and implemented its client-node functionalities in a user-space library. Clio achieves 100 Gbps throughput and an end-to-end latency of 2.5 µs at the median and 3.2 µs at the 99th percentile. Clio scales much better and has orders of magnitude lower tail latency than RDMA, achieves 1.1x to 3.4x energy savings compared to CPU-based and SmartNIC-based disaggregated memory systems, and is 2.7x faster than software-based SmartNIC solutions.
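Clio's user-space client library is not shown in the abstract; purely to illustrate what client-node functionality for disaggregated memory involves, the sketch below defines a hypothetical allocate/read/write interface and backs it with a local in-memory mock so the example runs without remote hardware. All type and function names here are invented for this sketch and are not Clio's API.

```rust
// Hypothetical disaggregated-memory client interface; names and signatures
// are invented for illustration and are not Clio's actual library.
use std::collections::HashMap;

#[derive(Clone, Copy)]
struct RemoteRegion {
    id: u64,    // opaque handle issued by the memory node
    len: usize, // size of the allocation in bytes
}

trait DisaggregatedMemory {
    fn alloc(&mut self, len: usize) -> RemoteRegion;
    fn write(&mut self, region: RemoteRegion, offset: usize, buf: &[u8]);
    fn read(&self, region: RemoteRegion, offset: usize, buf: &mut [u8]);
}

// Stand-in "memory node" that keeps allocations in local RAM so the
// example is self-contained; a real client would issue network requests.
struct MockMemoryNode {
    regions: HashMap<u64, Vec<u8>>,
    next_id: u64,
}

impl DisaggregatedMemory for MockMemoryNode {
    fn alloc(&mut self, len: usize) -> RemoteRegion {
        let id = self.next_id;
        self.next_id += 1;
        self.regions.insert(id, vec![0; len]);
        RemoteRegion { id, len }
    }
    fn write(&mut self, region: RemoteRegion, offset: usize, buf: &[u8]) {
        assert!(offset + buf.len() <= region.len);
        self.regions.get_mut(&region.id).unwrap()[offset..offset + buf.len()]
            .copy_from_slice(buf);
    }
    fn read(&self, region: RemoteRegion, offset: usize, buf: &mut [u8]) {
        assert!(offset + buf.len() <= region.len);
        buf.copy_from_slice(&self.regions[&region.id][offset..offset + buf.len()]);
    }
}

fn main() {
    let mut node = MockMemoryNode { regions: HashMap::new(), next_id: 0 };
    let region = node.alloc(64);
    node.write(region, 0, b"hello");
    let mut out = [0u8; 5];
    node.read(region, 0, &mut out);
    assert_eq!(&out, b"hello");
}
```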
99 - Dominic Orchard 2016
PLACES 2016 (full title: Programming Language Approaches to Concurrency- and Communication-Centric Software) is the ninth edition of the PLACES workshop series. After the first PLACES, which was affiliated with DisCoTec in 2008, the workshop has been part of ETAPS every year since 2009 and is now an established part of the ETAPS satellite events. PLACES 2016 was held on 8th April in Eindhoven, The Netherlands. The workshop series was started in order to promote the application of novel programming language ideas to the increasingly important problem of developing software for systems in which concurrency and communication are intrinsic aspects. This includes software for both multi-core systems and large-scale distributed and/or service-oriented systems. The scope of PLACES includes new programming language features, whole new programming language designs, new type systems, new semantic approaches, new program analysis techniques, and new implementation mechanisms. This volume consists of the papers accepted for presentation at the workshop.
Personalized PageRank (PPR) is a graph algorithm that evaluates the importance of the nodes surrounding a source node. Widely used in social-network-related applications such as recommender systems, PPR requires real-time responses (low latency) for a better user experience. Existing works either focus on algorithmic optimization for improving precision while neglecting hardware implementations, or focus on distributed global graph processing on large-scale systems for improving throughput rather than response time. Optimizing a low-latency local PPR algorithm under a tight memory budget on edge devices remains unexplored. In this work, we propose a memory-efficient, low-latency PPR solution, namely MeLoPPR, with a largely reduced memory requirement and a flexible trade-off between latency and precision. MeLoPPR is composed of stage decomposition and linear decomposition and exploits node score sparsity: through stage and linear decomposition, MeLoPPR breaks the computation on a large graph into a set of smaller sub-graphs, which significantly reduces the computation memory; through sparsity exploitation, MeLoPPR selectively chooses the sub-graphs that contribute the most to the precision to reduce the required computation. In addition, through software/hardware co-design, we propose a hardware implementation on a hybrid CPU and FPGA accelerating platform that further speeds up the sub-graph computation. We evaluate the proposed MeLoPPR on memory-constrained devices, including a personal laptop and a Xilinx Kintex-7 KC705 FPGA, using six real-world graphs. First, MeLoPPR demonstrates significant memory savings of 1.5x to 13.4x on the CPU and 73x to 8699x on the FPGA. Second, MeLoPPR allows flexible trade-offs between precision and execution time: when the precision is 80%, the speedup on the CPU is up to 15x and up to 707x on the FPGA; when the precision is around 90%, the speedup is up to 70x on the FPGA.
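For readers unfamiliar with PPR, the sketch below shows the baseline computation that MeLoPPR decomposes: plain power iteration from a single source node over an adjacency list. It is not the paper's stage or linear decomposition, and the parameter names and values are illustrative.

```rust
// Baseline personalized PageRank via power iteration (illustrative sketch;
// not MeLoPPR's stage/linear decomposition or its FPGA implementation).

/// `graph[u]` lists the out-neighbours of node `u`; `alpha` is the
/// teleport (restart) probability back to the source node.
fn personalized_pagerank(graph: &[Vec<usize>], source: usize, alpha: f64, iters: usize) -> Vec<f64> {
    let n = graph.len();
    let mut scores = vec![0.0; n];
    scores[source] = 1.0;
    for _ in 0..iters {
        let mut next = vec![0.0; n];
        next[source] += alpha; // restart mass always returns to the source
        for u in 0..n {
            if graph[u].is_empty() {
                next[source] += (1.0 - alpha) * scores[u]; // dangling node: send mass back
            } else {
                let share = (1.0 - alpha) * scores[u] / graph[u].len() as f64;
                for &v in &graph[u] {
                    next[v] += share;
                }
            }
        }
        scores = next;
    }
    scores
}

fn main() {
    // Tiny example graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
    let graph = vec![vec![1, 2], vec![2], vec![0]];
    let ppr = personalized_pagerank(&graph, 0, 0.15, 50);
    println!("PPR scores from node 0: {:?}", ppr);
}
```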
