We offer a very simple model of how collective memory may form. Agents repeatedly signal within their neighbourhoods, and a signal wins in a neighbourhood when enough agents support it. As agents interact across different neighbourhoods, influence spreads and, at times, a collective signal emerges. We propose a logic for reasoning about such emergence of memory and present preliminary technical results on the logic.
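A minimal sketch of the kind of neighbourhood signalling dynamics described above; the threshold rule, neighbourhood structure, and update schedule here are illustrative assumptions, not the paper's definitions.

# Illustrative neighbourhood structure: neighbourhood -> member agents.
NEIGHBOURHOODS = {
    "N1": ["a1", "a2", "a3"],
    "N2": ["a3", "a4", "a5"],   # a3 belongs to both, carrying influence across them
}

def winning_signal(signals, members, threshold=0.5):
    """Return the signal supported by more than `threshold` of the members, if any."""
    counts = {}
    for agent in members:
        counts[signals[agent]] = counts.get(signals[agent], 0) + 1
    for signal, count in counts.items():
        if count / len(members) > threshold:
            return signal
    return None

def step(signals):
    """One round: every agent adopts the winning signal of each neighbourhood it belongs to."""
    new_signals = dict(signals)
    for members in NEIGHBOURHOODS.values():
        winner = winning_signal(signals, members)
        if winner is not None:
            for agent in members:
                new_signals[agent] = winner
    return new_signals

signals = {"a1": "x", "a2": "x", "a3": "x", "a4": "x", "a5": "y"}
for _ in range(3):
    signals = step(signals)
print(signals)  # "x" wins in both neighbourhoods and becomes the collective signal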
Microsoft's internal big data analytics platform comprises hundreds of thousands of machines, serving over half a million jobs daily from thousands of users. The majority of these jobs are recurring and are crucial for the company's operation. Although administrators spend significant effort tuning system performance, some jobs inevitably experience slowdowns, i.e., their execution time degrades relative to previous runs. Currently, the investigation of such slowdowns is a labor-intensive and error-prone process, which costs Microsoft significant human and machine resources and negatively impacts several lines of business. In this work, we present Griffin, a system we built and deployed to production last year to automatically discover the root causes of job slowdowns. Existing solutions either rely on labeled data (i.e., resolved incidents with labeled reasons for job slowdowns), which is in most cases non-existent or non-trivial to acquire, or on time-series analysis of individual metrics that do not target specific jobs holistically. In contrast, Griffin casts the problem as a regression task that predicts the runtime of a job, and we show how the relative contributions of the features used to train our interpretable model can be exploited to rank the potential causes of job slowdowns. Evaluated over historical incidents, we show that Griffin discovers slowdown causes consistent with those validated by domain-expert engineers, in a fraction of the time they require.
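A minimal sketch of the feature-contribution idea, assuming a simple linear runtime predictor and contributions computed as coefficient times deviation from a baseline run; the feature names and model choice are illustrative, not Griffin's actual pipeline.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-run features for a recurring job (not Griffin's real feature set).
FEATURES = ["input_size_gb", "container_wait_s", "failed_vertices", "cluster_load"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))
runtime = 100 + 5 * X[:, 0] + 20 * X[:, 1] + 3 * X[:, 2] + rng.normal(scale=1.0, size=200)

model = LinearRegression().fit(X, runtime)             # interpretable runtime predictor

baseline = X.mean(axis=0)                              # "normal" run profile
slow_run = baseline + np.array([0.1, 2.5, 0.0, 0.2])   # slowdown dominated by container waits

# Per-feature contribution to the predicted runtime increase of the slow run.
contrib = model.coef_ * (slow_run - baseline)
ranking = sorted(zip(FEATURES, contrib), key=lambda kv: -kv[1])
for name, c in ranking:
    print(f"{name:>18}: {c:+.1f}s")                    # container_wait_s should rank first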
Norms with sanctions have been widely employed as a mechanism for controlling and coordinating the behavior of agents without limiting their autonomy. The norms enforced in a multi-agent system can be revised in order to increase the likelihood that desirable system properties are fulfilled or that system performance is sufficiently high. In this paper, we provide a preliminary analysis of some types of norm revision: relaxation and strengthening. Furthermore, with the help of some illustrative scenarios, we show the usefulness of norm revision for better satisfying the overall system objectives.
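A purely illustrative encoding of a sanctioned norm and of the two revision operations, under the assumption that relaxation makes fewer behaviours count as violations while strengthening makes more count; the paper's formal notion of norms and revision may differ.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Norm:
    description: str
    speed_limit: int      # prohibition: driving faster than this is a violation
    sanction: int         # fine imposed when the norm is violated

def relax(norm: Norm, extra_speed: int) -> Norm:
    """Relaxation: fewer behaviours count as violations."""
    return replace(norm, speed_limit=norm.speed_limit + extra_speed)

def strengthen(norm: Norm, reduced_speed: int) -> Norm:
    """Strengthening: more behaviours count as violations."""
    return replace(norm, speed_limit=norm.speed_limit - reduced_speed)

def violated(norm: Norm, observed_speed: int) -> bool:
    return observed_speed > norm.speed_limit

n = Norm("urban speed norm", speed_limit=50, sanction=100)
print(violated(n, 55))                  # True under the original norm
print(violated(relax(n, 10), 55))       # False once the limit is relaxed to 60
print(violated(strengthen(n, 10), 45))  # True once the limit is strengthened to 40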
The ability to store continuous variables in the state of a biological system (e.g. a neural network) is critical for many behaviours. Most models for implementing such a memory manifold require hand-crafted symmetries in the interactions or precise fine-tuning of parameters. We present a general principle that we refer to as frozen stabilisation, which allows a family of neural networks to self-organise to a critical state exhibiting memory manifolds without parameter fine-tuning or symmetries. These memory manifolds exhibit a true continuum of memory states and can be used as general-purpose integrators for inputs aligned with the manifold. Moreover, frozen stabilisation allows robust memory manifolds in small networks, which is relevant to debates about implementing continuous attractors with a small number of neurons in light of recent experimental discoveries.
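For contrast, a minimal hand-tuned line-attractor integrator of the kind the abstract argues against (recurrent weights fine-tuned so that exactly one mode has eigenvalue 1); this is an illustrative baseline, not the frozen-stabilisation mechanism itself.

import numpy as np

# Classic line attractor: a linear rate network whose weights are fine-tuned so that
# one eigenvalue is exactly 1, giving a 1-D memory manifold along direction u.
N = 50
rng = np.random.default_rng(1)
u = rng.normal(size=N); u /= np.linalg.norm(u)   # direction of the memory manifold
W = 1.0 * np.outer(u, u)                         # rank-1, eigenvalue exactly 1 along u; other modes decay

x = np.zeros(N)
inputs = [0.3, 0.0, 0.0, -0.1, 0.0, 0.0]         # pulses aligned with the manifold
for inp in inputs:
    x = W @ x + inp * u                          # discrete-time linear dynamics
    print(round(float(u @ x), 3))                # readout integrates the pulses: 0.3, 0.3, 0.3, 0.2, ...
# Changing the 1.0 prefactor to, say, 0.99 makes the stored value leak away,
# which is exactly the fine-tuning problem the abstract refers to.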
Most modern (classical) programming languages support recursion. Recursion has also been successfully applied to the design of several quantum algorithms and introduced in a couple of quantum programming languages, so it can be expected that recursion will become one of the fundamental paradigms of quantum programming. Several program logics have been developed for the verification of quantum while-programs. However, there are as yet no general methods for reasoning about (mutually) recursive procedures and ancilla quantum data structures in quantum computing (with measurement). We fill this gap by proposing a parameterized quantum assertion logic and, based on it, designing a quantum Hoare logic for verifying parameterized recursive quantum programs with ancilla data and probabilistic control. The quantum Hoare logic can be used to prove partial, total, and even probabilistic correctness (by reduction to total correctness) of such quantum programs. In particular, we construct two counterexamples illustrating the incompleteness of non-parameterized assertions in verifying recursive procedures, and one counterexample showing the failure of reasoning with exact probabilities based on partial correctness. The effectiveness of our logic is shown by three main examples: recursive quantum Markov chains (with probabilistic control), fixed-point Grover's search, and recursive quantum Fourier sampling.
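For orientation, the standard (non-parameterized) correctness notions of quantum Hoare logic that such work builds on, where $P, Q$ are quantum predicates (positive operators bounded by the identity) and $\llbracket S\rrbracket$ is the program's denotational semantics; the paper's parameterized assertions refine these baseline definitions.

\[
\models_{\mathrm{tot}} \{P\}\, S\, \{Q\}
\;\;\text{iff}\;\;
\operatorname{tr}(P\rho) \le \operatorname{tr}\bigl(Q\,\llbracket S\rrbracket(\rho)\bigr)
\quad\text{for all states } \rho,
\]
\[
\models_{\mathrm{par}} \{P\}\, S\, \{Q\}
\;\;\text{iff}\;\;
\operatorname{tr}(P\rho) \le \operatorname{tr}\bigl(Q\,\llbracket S\rrbracket(\rho)\bigr)
+ \operatorname{tr}(\rho) - \operatorname{tr}\bigl(\llbracket S\rrbracket(\rho)\bigr)
\quad\text{for all states } \rho,
\]
where the extra term $\operatorname{tr}(\rho) - \operatorname{tr}(\llbracket S\rrbracket(\rho))$ accounts for the probability of non-termination, which partial correctness does not charge against the postcondition.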
In this work, we consider translating tock-CSP into Timed Automata (TA) for UPPAAL, to facilitate using UPPAAL in reasoning about temporal specifications of tock-CSP models. The process algebra tock-CSP provides textual notations for modelling discrete-time behaviours, with tool support for automatic verification. Similarly, automatic verification of TA, which have a graphical notation, is supported by the real-time verification toolbox UPPAAL. The two formalisms differ both in their modelling styles and in their verification approaches (temporal logic for TA and refinement for tock-CSP), as well as in the facilities they provide for automatic verification. For instance, liveness requirements are difficult to specify with the constructs of tock-CSP, but easy to specify and verify in UPPAAL. To take advantage of temporal logic, we translate tock-CSP into TA for UPPAAL; we have developed a translation technique and its supporting tool. We provide rules for translating tock-CSP into a network of small TAs, which captures the compositional structure of tock-CSP that is not available in TA. For validation, we start with an experimental approach based on finite approximations to trace sets. We then explore mathematical proof to establish the correctness of the rules for covering infinite traces.
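A minimal sketch of the finite-approximation validation idea: enumerate all traces up to a bounded depth from two labelled transition systems (one standing in for the tock-CSP semantics, one for the translated TA network) and compare the sets. The transition-system encoding and the example process are illustrative assumptions, not the tool's actual representation.

# Transition systems as dicts: state -> list of (event, next_state).
# "tock" marks the passage of one time unit, as in tock-CSP.
CSP_MODEL = {"P": [("a", "P1")], "P1": [("tock", "P")]}       # P = a -> tock -> P
TA_MODEL  = {"q0": [("a", "q1")], "q1": [("tock", "q0")]}     # candidate translation

def traces_up_to(model, state, depth):
    """All event sequences of length <= depth from `state`."""
    result = {()}
    if depth == 0:
        return result
    for event, nxt in model.get(state, []):
        for t in traces_up_to(model, nxt, depth - 1):
            result.add((event,) + t)
    return result

for n in range(1, 6):
    same = traces_up_to(CSP_MODEL, "P", n) == traces_up_to(TA_MODEL, "q0", n)
    print(f"trace sets agree up to depth {n}: {same}")       # True for every bound checked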