
Partial Redundancy Elimination using Lazy Code Motion

Published by: Sandeep Dasgupta
Publication date: 2019
Research field: Informatics Engineering
Language: English





Partial Redundancy Elimination (PRE) is a compiler optimization that eliminates expressions that are redundant on some but not necessarily all paths through a program. In this project, we implemented a PRE optimization pass in LLVM and measured results on a variety of applications. We chose PRE because it is a powerful technique that subsumes Common Subexpression Elimination (CSE) and Loop Invariant Code Motion (LICM), and hence has the potential to greatly improve performance.
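To make the kind of redundancy PRE targets concrete, here is a hedged, source-level sketch (the function and variable names are illustrative, not taken from the paper's implementation or benchmarks). It shows an expression that is redundant on only one path, and the conceptual effect of hoisting it so that no path computes it more than once, which is what lazy code motion achieves at the IR level.

// Before: a * b is computed twice along the path where cond is true,
// but only once along the other path -- a *partial* redundancy.
int before_pre(int a, int b, bool cond) {
    int x;
    if (cond)
        x = a * b;          // first computation, on this path only
    else
        x = 42;
    int y = a * b;          // redundant whenever cond was true
    return x + y;
}

// After (conceptually): the expression is inserted on the path where it
// was missing, making the later occurrence fully redundant, and the
// temporary is reused. No path evaluates a * b more often than before.
int after_pre(int a, int b, bool cond) {
    int x, t;
    if (cond) {
        t = a * b;
        x = t;
    } else {
        t = a * b;          // inserted; this path computed a * b later anyway
        x = 42;
    }
    int y = t;              // reuse instead of recomputation
    return x + y;
}

CSE alone cannot remove the second a * b because the value is not available on the else path, and LICM does not apply since there is no loop; PRE handles exactly this in-between case, which is why it subsumes both techniques.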




Read also

Debugging lazy functional programs poses serious challenges. In support of the stop, examine, and resume debugging style of imperative languages, some debugging tools abandon lazy evaluation. Other debuggers preserve laziness but present it in a way that may confuse programmers because the focus of evaluation jumps around in a seemingly random manner. In this paper, we introduce a supplemental tool, the algebraic program stepper. An algebraic stepper shows computation as a mathematical calculation. Algebraic stepping could be particularly useful for novice programmers or programmers new to lazy programming. Mathematically speaking, an algebraic stepper renders computation as the standard rewriting sequence of a lazy lambda-calculus. Our novel lazy semantics introduces lazy evaluation as a form of parallel program rewriting. It represents a compromise between Launchbury's store-based semantics and a simple, axiomatic description of lazy computation as sharing-via-parameters. Finally, we prove that the stepper's run-time machinery correctly reconstructs the standard rewriting sequence.
S. Etalle, J. Mountjoy (2000)
The possibility of translating logic programs into functional ones has long been a subject of investigation. Common to the many approaches is that the original logic program, in order to be translated, needs to be well-moded, and this has led to the common understanding that these programs can be considered to be the "functional part" of logic programs. As a consequence of this it has become widely accepted that "complex" logical variables, the possibility of a dynamic selection rule, and general properties of non-well-moded programs are exclusive features of logic programs. This is not quite true, as some of these features are naturally found in lazy functional languages. We readdress the old question of what features are exclusive to the logic programming paradigm by defining a simple translation applicable to a wider range of logic programs, and demonstrate that the current circumscription is unreasonably restrictive.
It is a strength of graph-based data formats, like RDF, that they are very flexible with representing data. To avoid run-time errors, program code that processes highly-flexible data representations exhibits the difficulty that it must always include the most general case, in which attributes might be set-valued or possibly not available. The Shapes Constraint Language (SHACL) has been devised to enforce constraints on otherwise random data structures. We present our approach, Type checking using SHACL (TyCuS), for type checking code that queries RDF data graphs validated by a SHACL shape graph. To this end, we derive SHACL shapes from queries and integrate data shapes and query shapes as types into a $\lambda$-calculus. We provide the formal underpinnings and a proof of type safety for TyCuS. A programmer can use our method in order to process RDF data with simplified, type checked code that will not encounter run-time errors (with usual exceptions as type checking cannot prevent accessing empty lists).
Programmers currently enjoy access to a very high number of code repositories and libraries of ever increasing size. The ensuing potential for reuse is however hampered by the fact that searching within all this code becomes an increasingly difficult task. Most code search engines are based on syntactic techniques such as signature matching or keyword extraction. However, these techniques are inaccurate (because they basically rely on documentation) and at the same time do not offer very expressive code query languages. We propose a novel approach that focuses on querying for semantic characteristics of code obtained automatically from the code itself. Program units are pre-processed using static analysis techniques, based on abstract interpretation, obtaining safe semantic approximations. A novel, assertion-based code query language is used to express desired semantic characteristics of the code as partial specifications. Relevant code is found by comparing such partial specifications with the inferred semantics for program elements. Our approach is fully automatic and does not rely on user annotations or documentation. It is more powerful and flexible than signature matching because it is parametric on the abstract domain and properties, and does not require type definitions. Also, it reasons with relations between properties, such as implication and abstraction, rather than just equality. It is also more resilient to syntactic code differences. We describe the approach and report on a prototype implementation within the Ciao system. Under consideration for acceptance in TPLP.
When creating a new domain-specific language (DSL) it is common to embed it as a part of a flexible host language, rather than creating it entirely from scratch. The semantics of an embedded DSL (EDSL) is either given directly as a set of functions (shallow embedding), or an AST is constructed that is later processed (deep embedding). Typically, the deep embedding is used when the EDSL specifies domain-specific optimizations (DSO) in the form of AST transformations. In this paper we show that deep embedding is not necessary to specify most optimizations. We define language semantics as action functions that are executed during parsing. These actions incrementally build a new, arbitrarily complex program function. The EDSL designer is able to specify many aspects of the semantics as runnable code, such as variable scoping rules, custom type checking, arbitrary control flow structures, or DSO. A sufficiently powerful staging mechanism helps assemble the code from different actions, as well as evaluate the semantics in arbitrarily many stages. In the end, we obtain code that is as efficient as code written by hand. We never create any object representation of the code. No external traversing algorithm is used to process the code. All program fragments are functions with their entire semantics embedded within the function bodies. This approach allows reusing code between the EDSL and the host language, as well as combining actions of many different EDSLs.
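A minimal C++ sketch of the shallow/deep distinction this abstract refers to, assuming a tiny invented arithmetic EDSL (the names lit, add, and eval are illustrative and not from the paper): in the shallow embedding each construct evaluates immediately as a host-language function, while in the deep embedding the same constructs build an AST that a separate pass consumes later.

#include <memory>

// Shallow embedding: the meaning of an expression is computed directly;
// no intermediate representation of the program is ever built.
namespace shallow {
    using Expr = double;
    inline Expr lit(double v)       { return v; }
    inline Expr add(Expr a, Expr b) { return a + b; }
}

// Deep embedding: the same constructors build an AST, which a later pass
// (an interpreter here, but it could be a domain-specific optimizer) walks.
namespace deep {
    struct Expr {
        enum Kind { Lit, Add } kind;
        double value;
        std::shared_ptr<Expr> lhs, rhs;
    };
    inline std::shared_ptr<Expr> lit(double v) {
        return std::make_shared<Expr>(Expr{Expr::Lit, v, nullptr, nullptr});
    }
    inline std::shared_ptr<Expr> add(std::shared_ptr<Expr> a, std::shared_ptr<Expr> b) {
        return std::make_shared<Expr>(Expr{Expr::Add, 0.0, std::move(a), std::move(b)});
    }
    inline double eval(const Expr& e) {
        return e.kind == Expr::Lit ? e.value : eval(*e.lhs) + eval(*e.rhs);
    }
}

The paper's claim is that the second representation is not required for domain-specific optimizations: its action functions assemble the final program function incrementally during parsing, so no AST such as deep::Expr is ever materialized.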