
Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond

Added by EPTCS
Publication date: 2013
Language: English





We present a graphical and dynamic framework for binding and execution of (business) process models. It is tailored to integrate 1) ad hoc processes modeled graphically, 2) third-party services discovered in the (Inter)net, and 3) (dynamically) synthesized process chains that solve situation-specific tasks, with the synthesis taking place not only at design time but also at runtime. Key to our approach is the introduction of type-safe stacked second-order execution contexts that allow for higher-order process modeling. Tamed by our underlying strict service-oriented notion of abstraction, this approach is also tailored to be used by application experts with little technical knowledge: users can select, modify, construct, and then pass (component) processes during process execution as if they were data. We illustrate the impact and essence of our framework along a concrete, realistic (business) process modeling scenario: the development of Springer's browser-based Online Conference Service (OCS). The most advanced feature of our new framework allows one to combine online synthesis with the integration of the synthesized process into the running application. This ability leads to a particularly flexible way of implementing self-adaptation, and to a particularly concise and powerful way of achieving variability not only at design time, but also at runtime.
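The core idea, passing processes around like data while another process is executing, can be illustrated with a small sketch. The Python fragment below is a minimal, hypothetical illustration of stacked second-order execution contexts, not the framework's actual API; all names (ExecutionContext, Process, "review-step") are ours.

    from typing import Callable

    # A "process" is a first-class value: a callable from an execution
    # context to a result. Passing a process around is therefore no
    # different from passing ordinary data. (All names here are
    # hypothetical; this is not the framework's actual API.)
    Process = Callable[["ExecutionContext"], str]

    class ExecutionContext:
        """A stack of bindings from names to processes -- a simplified
        stand-in for type-safe stacked second-order contexts."""

        def __init__(self) -> None:
            self.frames = [{}]

        def bind(self, name: str, proc: Process) -> None:
            self.frames[-1][name] = proc

        def lookup(self, name: str) -> Process:
            for frame in reversed(self.frames):
                if name in frame:
                    return frame[name]
            raise KeyError(name)

        def call(self, name: str) -> str:
            # Run a bound process inside a freshly stacked frame.
            self.frames.append({})
            try:
                return self.lookup(name)(self)
            finally:
                self.frames.pop()

    # An ad hoc workflow that delegates one step to whatever process is
    # currently bound under "review-step"; the binding can be replaced
    # at runtime, e.g. by a newly synthesized process chain.
    def submission_workflow(ctx: ExecutionContext) -> str:
        return "submitted -> " + ctx.call("review-step")

    ctx = ExecutionContext()
    ctx.bind("review-step", lambda c: "single-blind review")
    print(submission_workflow(ctx))   # submitted -> single-blind review

    # Runtime variability: rebind the second-order parameter and re-run
    # the unchanged workflow.
    ctx.bind("review-step", lambda c: "double-blind review")
    print(submission_workflow(ctx))   # submitted -> double-blind review

Rebinding "review-step" between calls is the sketch's analogue of runtime variability: the workflow itself never changes, only the process value bound in its execution context.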



Related research

Program representation learning is a fundamental task in software engineering applications. With the availability of big code and the development of deep learning techniques, various program representation learning models have been proposed to capture the semantic properties of programs and applied to different software engineering tasks. However, no previous study has comprehensively assessed the generalizability of these deep models across tasks, so their pros and cons remain unclear. In this experience paper, we bridge this gap by systematically evaluating the performance of eight program representation learning models on three common tasks; six of the models are based on abstract syntax trees and two on the plain text of source code. We explain the criteria for selecting the models and tasks, as well as the method for enabling end-to-end learning in each task. The evaluation shows that the models perform diversely on each task and that the performance of the AST-based models is generally unstable across tasks. To further explain the results, we apply a prediction attribution technique to find which input elements the models capture and which are responsible for the predictions in each task. Based on the findings, we discuss general principles for better capturing the information in source code, and hope to inspire researchers to improve program representation learning methods for software engineering tasks.
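The attribution step can be approximated with a simple, model-agnostic technique. Below is a minimal occlusion-style sketch (not necessarily the authors' exact method): mask each input token in turn and record how much the model's prediction drops. The toy bag-of-tokens model and all names are ours.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a program-representation model: a bag-of-tokens
    # linear scorer over a tiny vocabulary. Any real model exposing a
    # predict() function could be slotted in instead.
    vocab = ["if", "x", ">", "0", "return", "y"]
    weights = rng.normal(size=len(vocab))

    def predict(token_ids: list[int]) -> float:
        if not token_ids:
            return 0.0
        return float(np.mean(weights[token_ids]))

    def occlusion_attribution(token_ids: list[int]) -> list[float]:
        """Attribution of each token = prediction drop when it is removed."""
        base = predict(token_ids)
        scores = []
        for i in range(len(token_ids)):
            occluded = token_ids[:i] + token_ids[i + 1:]
            scores.append(base - predict(occluded))
        return scores

    tokens = [0, 1, 2, 3]                      # "if x > 0"
    for tok, score in zip(tokens, occlusion_attribution(tokens)):
        print(f"{vocab[tok]:>6}: {score:+.3f}")

Tokens whose removal changes the prediction most are the ones the model relies on, which is the kind of evidence the paper uses to explain per-task behavior.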
We consider the problem of flexible modeling of higher-order Markov chains when an upper bound on the order of the chain is known but the true order and nature of the serial dependence are unknown. We propose Bayesian nonparametric methodology based on conditional tensor factorizations, which can characterize any transition probability with a specified maximal order. The methodology selects the important lags and captures higher-order interactions among the lags, while also facilitating calculation of Bayes factors for a variety of hypotheses of interest. We design efficient Markov chain Monte Carlo algorithms for posterior computation, allowing for uncertainty in the set of important lags to be included and in the nature and order of the serial dependence. The methods are illustrated using simulation experiments and real-world applications.
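For intuition, a conditional tensor factorization of the kind described above can be written in the following general shape (the notation is our reconstruction from the abstract, not copied from the paper):

    P(y_t = c \mid y_{t-1}, \ldots, y_{t-q})
      = \sum_{h_1=1}^{k_1} \cdots \sum_{h_q=1}^{k_q}
        \lambda_{h_1, \ldots, h_q}(c) \, \prod_{j=1}^{q} \pi^{(j)}_{h_j}(y_{t-j})

Each lag y_{t-j} is soft-clustered into k_j latent states; choosing k_j = 1 makes lag j drop out of the factorization entirely, which is how the important lags and the effective order of the chain can be inferred under the stated upper bound q.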
We show that the techniques for resource control developed in the so-called light logics can also be fruitfully applied to process algebras. In particular, we present a restriction of the Higher-Order pi-calculus inspired by Soft Linear Logic. We prove that any soft process terminates in polynomial time, and we argue that the class of soft processes can be naturally enlarged so that interesting processes become expressible while the polynomial bound on executions is maintained.
Fabrizio Montesi, 2018
Classical Processes (CP) is a calculus where the proof theory of classical linear logic types communicating processes with mobile channels, à la pi-calculus. Its construction builds on a recent propositions-as-types correspondence between session types and propositions in linear logic. Desirable properties such as type preservation under reduction and progress come for free from the metatheory of linear logic. We contribute to this research line by extending CP with code mobility. We generalise classical linear logic to capture higher-order (linear) reasoning on proofs, which yields a logical reconstruction of (a variant of) the Higher-Order pi-calculus (HOpi). The resulting calculus is called Classical Higher-Order Processes (CHOP). We explore the metatheory of CHOP, proving that its semantics enjoys type preservation and progress (terms do not get stuck). We also illustrate the expressivity of CHOP through examples, derivable syntactic sugar, and an extension to multiparty sessions. Lastly, we define a translation from CHOP to CP, which encodes mobility of process code into reference passing.
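Code mobility of this kind, process code sent over a channel and executed by the receiver, can be sketched in a few lines. The sketch below uses plain Python queues as channels and functions as process code; it illustrates the idea only, not CHOP's typed semantics, and all names are ours.

    import queue
    import threading

    # A channel that carries process code (here: an ordinary function)
    # alongside data, so the receiver can execute code it did not define.
    channel: "queue.Queue" = queue.Queue()

    def greeter(name: str) -> str:
        return f"hello, {name}"

    def sender() -> None:
        # Send a process over the channel, then the argument it needs.
        # The fixed put/get order plays the role of a session protocol.
        channel.put(greeter)
        channel.put("CHOP")

    def receiver() -> None:
        proc = channel.get()      # receive mobile process code
        arg = channel.get()       # receive data on the same channel
        print(proc(arg))          # run the received process locally

    t1 = threading.Thread(target=sender)
    t2 = threading.Thread(target=receiver)
    t1.start(); t2.start()
    t1.join(); t2.join()

In CHOP the analogous exchange is typed, so the receiver is statically guaranteed to get a process of the right shape; the translation to CP replaces the moving code with a reference to it.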
Bi-linear feature learning models, like the gated autoencoder, were proposed as a way to model relationships between frames in a video. By minimizing reconstruction error of one frame, given the previous frame, these models learn mapping units that encode the transformations inherent in a sequence, and thereby learn to encode motion. In this work we extend bi-linear models by introducing higher-order mapping units that allow us to encode transformations between frames and transformations between transformations. We show that this makes it possible to encode temporal structure that is more complex and longer-range than the structure captured within standard bi-linear models. We also show that a natural way to train the model is by replacing the commonly used reconstruction objective with a prediction objective which forces the model to correctly predict the evolution of the input multiple steps into the future. Learning can be achieved by back-propagating the multi-step prediction through time. We test the model on various temporal prediction tasks, and show that higher-order mappings and predictive training both yield a significant improvement over bi-linear models in terms of prediction accuracy.
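A minimal numpy sketch of the bi-linear mapping idea, extended one order up: first-order mapping units encode the transformation between consecutive frames, and second-order units encode how that transformation itself changes. The weights below are random placeholders and the architecture is a generic gated-autoencoder-style reconstruction of ours; a real model would learn the weights with the multi-step prediction objective described above.

    import numpy as np

    rng = np.random.default_rng(1)
    D, F, G = 8, 16, 12   # frame dim, 1st-order units, 2nd-order units

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Random placeholder weights; a trained model would learn all of these.
    U = rng.normal(scale=0.1, size=(F, D))   # frame -> factors
    V = rng.normal(scale=0.1, size=(F, D))
    W = rng.normal(scale=0.1, size=(F, F))   # factors -> mapping units
    A = rng.normal(scale=0.1, size=(G, F))   # mapping -> 2nd-order factors
    B = rng.normal(scale=0.1, size=(G, F))
    C = rng.normal(scale=0.1, size=(G, G))   # -> 2nd-order mapping units

    def encode(x, y):
        """1st-order mapping units: the transformation taking frame x to y."""
        return sigmoid(W @ ((U @ x) * (V @ y)))

    def decode(x, m):
        """Apply mapping units m to frame x to predict the next frame."""
        return V.T @ ((U @ x) * (W.T @ m))

    def encode2(m1, m2):
        """2nd-order units: the transformation between two transformations."""
        return sigmoid(C @ ((A @ m1) * (B @ m2)))

    def decode2(m2, h):
        """Predict the next 1st-order mapping from the current one and h."""
        return B.T @ ((A @ m2) * (C.T @ h))

    # Multi-step prediction: roll the model forward from three seed frames.
    x0, x1, x2 = (rng.normal(size=D) for _ in range(3))
    m1, m2 = encode(x0, x1), encode(x1, x2)
    h = encode2(m1, m2)
    x, m = x2, m2
    for step in range(3):
        m = decode2(m, h)   # evolve the transformation
        x = decode(x, m)    # apply it to get the next frame
        print(f"step {step + 1}: predicted frame norm = {np.linalg.norm(x):.3f}")

The rollout loop is where the predictive training signal would come from: back-propagating the error of several predicted steps through time forces the second-order units to capture longer-range temporal structure than a single bi-linear mapping can.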