
Typing Copyless Message Passing

Added by: Luca Padovani
Publication date: 2012
Language: English
Authors: Viviana Bono





We present a calculus that models a form of process interaction based on copyless message passing, in the style of Singularity OS. The calculus is equipped with a type system ensuring that well-typed processes are free from memory faults, memory leaks, and communication errors. The type system is essentially linear, but we show that linearity alone is inadequate, because it leaves room for scenarios where well-typed processes leak significant amounts of memory. We address these problems by basing the type system on an original variant of session types.
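To make the failure mode concrete, here is a minimal Python sketch of copyless message passing and of one leak scenario that linearity alone does not exclude. The names (Endpoint, send, receive, new_channel) are illustrative, not the paper's calculus.

```python
# Minimal sketch with illustrative names (not the paper's calculus):
# copyless message passing transfers *ownership* of a message between
# peer endpoints; the message itself is never copied.

class Endpoint:
    def __init__(self, inbox, peer_inbox):
        self.inbox = inbox            # messages this endpoint receives
        self.peer_inbox = peer_inbox  # messages its peer receives

    def send(self, obj):
        # Ownership of obj moves to the receiver; a linear discipline
        # statically forbids the sender from touching obj again.
        self.peer_inbox.append(obj)

    def receive(self):
        return self.inbox.pop(0)

def new_channel():
    a, b = [], []  # the two endpoints' receive queues
    return Endpoint(a, b), Endpoint(b, a)

# A leak linearity alone does not exclude: sending an endpoint over its
# peer parks it in its own queue, creating a cycle of queues that no
# process can reach. Every endpoint was used exactly once, yet in the
# calculus's heap the memory is lost for good.
a, b = new_channel()
a.send(b)      # b now sits in its own receive queue
del a, b       # no process holds either endpoint any more
```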



Related Research

We present PolySing#, a calculus that models process interaction based on copyless message passing, in the style of Singularity OS. We equip the calculus with a type system that accommodates polymorphic endpoint types, which are a variant of polymorphic session types, and we show that well-typed processes are free from faults, leaks, and communication errors. The type system is essentially linear, although linearity alone may leave room for scenarios where well-typed processes leak memory. We identify a condition on endpoint types that prevents these leaks from occurring.
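As a rough illustration of what polymorphic endpoint types buy, the following hedged Python sketch (not PolySing# syntax; Ep and forward are our own toy names) shows a forwarder typed once and reused at every payload type T:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Ep(Generic[T]):
    """Toy stand-in for a typed endpoint (a plain buffer, for brevity)."""
    def __init__(self) -> None:
        self.inbox: list[T] = []

    def send(self, v: T) -> None:
        self.inbox.append(v)

    def receive(self) -> T:
        return self.inbox.pop(0)

def forward(src: Ep[T], dst: Ep[T]) -> None:
    # Endpoint types roughly ?T.end for src and !T.end for dst, with T
    # universally quantified: one definition is well-typed at every
    # payload type (ints, strings, even other endpoints).
    dst.send(src.receive())
```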
We view channels as the main form of resources in a message-passing programming paradigm. These channels need to be carefully managed in settings where resources are scarce. To study this problem, we extend the pi-calculus with primitives for channel allocation and deallocation and allow channels to be reused to communicate values of different types. Inevitably, the added expressiveness increases the possibilities for runtime errors. We define a substructural type system which combines uniqueness typing and affine typing to reject these ill-behaved programs.
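A small hedged model of the runtime errors the extended calculus makes possible and the substructural discipline rules out; Channel, send, and free are illustrative names, not the paper's syntax:

```python
# Hedged model: freeing a channel and then using it, or reusing it at a
# different type without unique ownership, are the runtime errors the
# combined uniqueness + affine type system rejects statically.

class Channel:
    def __init__(self):           # 'alloc': a fresh, live channel
        self.alive = True
        self.buf = []

    def send(self, v):
        if not self.alive:
            raise RuntimeError("use after free")  # excluded by affinity
        self.buf.append(v)

    def free(self):
        # Deallocation is safe only under *unique* ownership: no other
        # process may still hold a reference. The same uniqueness also
        # licenses reusing the channel at a different payload type,
        # since no stale typed reference survives.
        self.alive = False
        self.buf.clear()

c = Channel()
c.send(42)       # channel first carries values of type int
c.free()
# c.send("hi")   # use after free: rejected by the substructural types
```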
Comparison of concurrent programming languages and correctness of program transformations in concurrency are the focus of this research. As our criterion we use a contextual semantics adapted to concurrency, in which both may- and should-convergence are observed. We investigate the relation between the synchronous pi-calculus and a core language of Concurrent Haskell (CH). The contextual semantics is on the one hand forgiving with respect to the details of the operational semantics, and on the other hand imposes strong requirements on the interplay between the processes after translation. Our result is that CH embraces the synchronous pi-calculus. Our main task is to find, and prove correct, encodings of pi-calculus channels in terms of CH's concurrency primitives, the MVars, which behave like blocking one-place buffers modelling shared memory. The first translation we develop uses an extra private MVar for every communication. We also automatically generate and check potentially correct translations that reuse MVars, in which one MVar holds the message and two additional MVars provide the synchronization needed to model the synchronous communication of a single pi-calculus channel. Our automated experimental results lead to the conjecture that one additional MVar is insufficient.
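Since MVars carry the whole encoding, here is a hedged Python model of an MVar as a blocking one-place buffer, together with one plausible reading of the first translation (a fresh private MVar per communication). The MVar class mimics Haskell's semantics; pi_send and pi_receive are our illustrative names, not the paper's definitions.

```python
import threading

class MVar:
    """Blocking one-place buffer, modelled on Haskell's MVar."""
    def __init__(self):
        self._cond = threading.Condition()
        self._full = False
        self._val = None

    def put(self, v):              # blocks while the buffer is full
        with self._cond:
            while self._full:
                self._cond.wait()
            self._val, self._full = v, True
            self._cond.notify_all()

    def take(self):                # blocks while the buffer is empty
        with self._cond:
            while not self._full:
                self._cond.wait()
            v, self._full = self._val, False
            self._cond.notify_all()
            return v

# One plausible shape of the first translation: synchronous rendezvous
# on pi-calculus channel c via a fresh private MVar per communication.
def pi_send(c, msg):
    ack = MVar()               # the extra private MVar
    c.put((msg, ack))          # deposit message plus acknowledgement slot
    ack.take()                 # block until the receiver has the message

def pi_receive(c):
    msg, ack = c.take()
    ack.put(())                # release the sender: both sides proceed
    return msg
```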
Graph neural networks (GNNs) are a powerful inductive bias for modelling algorithmic reasoning procedures and data structures. Their prowess has mainly been demonstrated on tasks featuring Markovian dynamics, where querying any associated data structure depends only on its latest state. For many tasks of interest, however, it may be highly beneficial to support efficient data-structure queries that depend on previous states. This requires tracking the data structure's evolution through time, placing significant pressure on the GNN's latent representations. We introduce Persistent Message Passing (PMP), a mechanism that endows GNNs with the capability of querying past states by explicitly persisting them: rather than overwriting node representations, it creates new nodes whenever required. PMP generalises out-of-distribution to test inputs more than 2x larger on dynamic temporal range queries, significantly outperforming GNNs that overwrite states.
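A hedged numpy sketch of the persistence idea as we read it (pmp_step and all shapes below are our assumptions, not the paper's architecture): after a message-passing step, updated states are appended as new nodes rather than written over the old ones, so every earlier state stays addressable.

```python
import numpy as np

def pmp_step(persisted, active, adj, W):
    """One PMP-style step (hedged sketch): message passing over the
    currently active nodes, then *appending* updated copies instead of
    overwriting, so all past states remain queryable."""
    H = persisted[active]                       # states of active nodes
    M = adj @ H                                 # sum-aggregated messages
    H_new = np.tanh(M @ W)                      # updated representations
    persisted = np.vstack([persisted, H_new])   # persist, never overwrite
    new_active = np.arange(len(persisted) - len(H_new), len(persisted))
    return persisted, new_active

# Toy usage: 3 nodes with hidden size 4, fully connected active subgraph.
rng = np.random.default_rng(0)
persisted = rng.normal(size=(3, 4))
active = np.arange(3)
adj = np.ones((3, 3)) - np.eye(3)
W = rng.normal(size=(4, 4))
persisted, active = pmp_step(persisted, active, adj, W)
# persisted now has 6 rows: 3 queryable past states plus 3 new nodes.
```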
Approximate message passing (AMP) is a low-cost iterative parameter-estimation technique for certain high-dimensional linear systems with non-Gaussian distributions. However, AMP applies only to independent and identically distributed (IID) transform matrices and may become unreliable for other matrix ensembles, especially ill-conditioned ones. To handle this difficulty, orthogonal/vector AMP (OAMP/VAMP) was proposed for general right-unitarily-invariant matrices. However, Bayes-optimal OAMP/VAMP requires a high-complexity linear minimum mean square error (LMMSE) estimator. To overcome the disadvantages of AMP and OAMP/VAMP, this paper proposes memory AMP (MAMP), in which a long-memory matched filter is used for interference suppression. The complexity of MAMP is comparable to that of AMP. The asymptotic Gaussianity of the estimation errors in MAMP is guaranteed by the orthogonality principle. A state evolution is derived to asymptotically characterize the performance of MAMP, and based on it the relaxation parameters and damping vector of MAMP are optimized. For all right-unitarily-invariant matrices, the optimized MAMP converges to OAMP/VAMP and is thus Bayes-optimal if it has a unique fixed point. Finally, simulations are provided to verify the validity and accuracy of the theoretical results.
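For orientation, a hedged numpy sketch of the baseline AMP iteration for y = Ax + w with an IID Gaussian A and a soft-threshold denoiser. MAMP replaces the memoryless matched filter A.T @ r below with a long-memory matched filter, which this sketch does not implement; amp, soft, and theta are our illustrative names.

```python
import numpy as np

def soft(u, t):
    """Soft-threshold denoiser with threshold t."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, iters=30, theta=1.0):
    """Baseline AMP (hedged sketch, not MAMP): matched filter, denoise,
    Onsager-corrected residual."""
    m, n = A.shape
    x, r = np.zeros(n), y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(r) / np.sqrt(m)       # noise-level estimate
        x_new = soft(x + A.T @ r, theta * tau)     # matched filter + denoise
        b = np.count_nonzero(x_new) / m            # Onsager coefficient
        r = y - A @ x_new + b * r                  # corrected residual
        x = x_new
    return x

# Toy usage: sparse x recovered from 100 noisy measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 200)) / np.sqrt(100)     # IID Gaussian transform
x0 = np.zeros(200)
x0[:10] = rng.normal(size=10)
y = A @ x0 + 0.01 * rng.normal(size=100)
x_hat = amp(y, A)
```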