
CARMA: Collective Adaptive Resource-sharing Markovian Agents

Added by EPTCS
Publication date: 2015
Language: English





In this paper we present CARMA, a language recently defined to support specification and analysis of collective adaptive systems. CARMA is a stochastic process algebra equipped with linguistic constructs specifically developed for modelling and programming systems that can operate in open-ended and unpredictable environments. This class of systems is typically composed of a huge number of interacting agents that dynamically adjust and combine their behaviour to achieve specific goals. A CARMA model, termed a collective, consists of a set of components, each of which exhibits a set of attributes. To model dynamic aggregations, which are sometimes referred to as ensembles, CARMA provides communication primitives that are based on predicates over the exhibited attributes. These predicates are used to select the participants in a communication. Two communication mechanisms are provided in the CARMA language: multicast-based and unicast-based. In this paper, we first introduce the basic principles of CARMA and then we show how our language can be used to support specification with a simple but illustrative example of a socio-technical collective adaptive system.
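To make the attribute-based communication mechanism concrete, here is a minimal sketch in Python rather than CARMA syntax: components carry attribute stores, and a sender selects receivers through a predicate over those attributes, with one function modelling multicast and one modelling unicast. All names and attribute values below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A component exposing a store of attributes and an inbox of received messages."""
    attributes: dict
    inbox: list = field(default_factory=list)

def multicast(sender, components, predicate, message):
    """Deliver `message` to every other component whose attributes satisfy `predicate`."""
    receivers = [c for c in components if c is not sender and predicate(c.attributes)]
    for c in receivers:
        c.inbox.append(message)
    return receivers

def unicast(sender, components, predicate, message):
    """Deliver `message` to a single matching component (here simply the first match)."""
    for c in components:
        if c is not sender and predicate(c.attributes):
            c.inbox.append(message)
            return c
    return None

# Hypothetical collective: components exhibiting 'role' and 'zone' attributes.
collective = [
    Component({"role": "taxi", "zone": 1}),
    Component({"role": "taxi", "zone": 2}),
    Component({"role": "user", "zone": 1}),
]
caller = collective[2]
# Multicast a request to all taxis currently in the caller's zone.
multicast(caller, collective, lambda a: a["role"] == "taxi" and a["zone"] == 1, "pickup request")
```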



Related research

Space and movement through space play an important role in many collective adaptive systems (CAS). CAS consist of multiple components interacting to achieve some goal in a system or environment that can change over time. When these components operate in space, their behaviour can be affected by where they are located in that space. Examples include the possibility of communication between two components located at different points, and the rate of movement of a component, which may depend on its location. The CARMA language and its associated software tools can be used to model such systems. In particular, a graphical editor for CARMA allows for the specification of spatial structure and the generation of templates that can be used in a CARMA model with space. We demonstrate the use of this tool to experiment with a model of pedestrian movement over a network of paths.
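As a rough picture of the kind of spatial model described above, the sketch below simulates a single pedestrian moving over a small network of paths as a continuous-time random walk, with a movement rate attached to each edge. This is plain Python, not output of the CARMA graphical editor, and the path network data are invented.

```python
import random

# Hypothetical path network: node -> list of (neighbour, movement rate).
paths = {
    "gate":   [("lawn", 1.0), ("bridge", 0.5)],
    "lawn":   [("gate", 1.0), ("bridge", 2.0)],
    "bridge": [("gate", 0.5), ("lawn", 2.0)],
}

def simulate_pedestrian(start, t_end):
    """Continuous-time random walk: exponential sojourn times, rate-weighted moves."""
    t, node, trace = 0.0, start, [(0.0, start)]
    while True:
        neighbours = paths[node]
        total_rate = sum(rate for _, rate in neighbours)
        t += random.expovariate(total_rate)          # time until the next move
        if t > t_end:
            return trace
        # Choose the next location with probability proportional to the edge rate.
        node = random.choices([n for n, _ in neighbours],
                              weights=[r for _, r in neighbours])[0]
        trace.append((t, node))

print(simulate_pedestrian("gate", t_end=10.0))
```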
Sanjiva Prasad, 2016
Based on the two observations that diverse applications perform better on different multicore architectures, and that different phases of an application may have vastly different resource requirements, Pal et al. proposed a novel reconfigurable hardware approach for executing multithreaded programs. Instead of mapping a concurrent program to a fixed architecture, the architecture adaptively reconfigures itself to meet the application's concurrency and communication requirements, yielding significant improvements in performance. Based on our earlier abstract operational framework for multicore execution with hierarchical memory structures, we describe the execution of multithreaded programs on reconfigurable architectures that support a variety of clustered configurations. Such reconfiguration may not preserve the semantics of programs, due to the possible introduction of race conditions arising from concurrent accesses to shared memory by threads running on the different cores. We present an intuitive partial ordering notion on the cluster configurations, and show that the semantics of multithreaded programs is always preserved for reconfigurations upward in that ordering, whereas semantics preservation for arbitrary reconfigurations can be guaranteed for well-synchronised programs. We further show that a simple approximate notion of efficiency of execution on the different configurations can be obtained using the notion of amortised bisimulations, and we extend it to dynamic reconfiguration.
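One way to picture a partial ordering on cluster configurations is as refinement of partitions of the cores, where moving upward only merges clusters. The check below is a hedged illustration of such a refinement relation in Python; it is not the paper's formal definition, and the configurations are invented.

```python
def below(config_a, config_b):
    """config_a <= config_b iff every cluster of A is contained in some cluster of B."""
    clusters_b = [frozenset(c) for c in config_b]
    return all(any(frozenset(a) <= b for b in clusters_b) for a in config_a)

# Cores 0..3: a four-cluster configuration refines a two-cluster one,
# which in turn refines the single fully shared cluster.
fine   = [{0}, {1}, {2}, {3}]
paired = [{0, 1}, {2, 3}]
shared = [{0, 1, 2, 3}]

assert below(fine, paired) and below(paired, shared)
assert not below(paired, fine)   # merging clusters moves upward; splitting does not
```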
Python has become the de facto language for scientific computing. Programming in Python is highly productive, mainly due to its rich science-oriented software ecosystem built around the NumPy module. As a result, the demand for Python support in High Performance Computing (HPC) has skyrocketed. However, the Python language itself does not necessarily offer high performance. In this work, we present a workflow that retains Python's high productivity while achieving portable performance across different architectures. The workflow's key features are HPC-oriented language extensions and a set of automatic optimizations powered by a data-centric intermediate representation. We show performance results and scaling across CPU, GPU, FPGA, and the Piz Daint supercomputer (up to 23,328 cores), with 2.47x and 3.75x speedups over previous-best solutions, first-ever Xilinx and Intel FPGA results of annotated Python, and up to 93.16% scaling efficiency on 512 nodes.
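For a concrete sense of the kind of code such a workflow targets, the snippet below is an ordinary NumPy stencil kernel; a data-centric toolchain of the kind described would take a function like this (typically via an annotation) and generate optimized code for CPUs, GPUs, or FPGAs. The kernel itself is an invented example using only the standard NumPy API.

```python
import numpy as np

def jacobi_step(a):
    """One Jacobi relaxation sweep over a 2D grid, written in plain NumPy.
    A data-centric workflow would analyse the dataflow of such a function
    and lower it to architecture-specific code."""
    out = a.copy()
    out[1:-1, 1:-1] = 0.25 * (a[:-2, 1:-1] + a[2:, 1:-1] +
                              a[1:-1, :-2] + a[1:-1, 2:])
    return out

grid = np.random.rand(128, 128)
grid = jacobi_step(grid)
```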
Federated Learning is a new learning scheme for collaboratively training a shared prediction model while keeping data locally on participating devices. In this paper, we study a new model of multiple federated learning services at the multi-access edge computing server. Accordingly, the sharing of CPU resources among learning services at each mobile device for the local training process, and the allocation of communication resources among mobile devices for exchanging learning information, must both be considered. Furthermore, the convergence performance of different learning services depends on the hyper-learning rate parameter, which needs to be precisely decided. Towards this end, we propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL, regarding the energy consumption of mobile devices and overall learning time. We design a centralized algorithm based on the block coordinate descent method and a decentralized JP-miADMM algorithm for solving the MS-FEDL problem. Unlike the centralized approach, the decentralized approach requires more iterations to obtain a solution, but it allows each learning service to independently manage its local resources and learning process without revealing the learning service information. Our simulation results demonstrate the convergence of our proposed algorithms and their superior performance compared to the heuristic strategy.
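The centralized solver mentioned above follows the generic block coordinate descent pattern: fix one block of variables, minimize exactly over the other, and alternate until convergence. The toy problem below is invented purely to show that pattern and is not the MS-FEDL objective.

```python
# Toy two-block problem standing in for the centralized solver: block x (think:
# CPU shares) and block y (think: bandwidth shares) are coupled through the
# invented objective g(x, y) = (x - 2y)**2 + (y - 1)**2.

def argmin_x(y):
    # With y fixed, (x - 2y)**2 is minimised at x = 2y.
    return 2 * y

def argmin_y(x):
    # With x fixed, setting d/dy [(x - 2y)**2 + (y - 1)**2] = 0 gives y = (2x + 1) / 5.
    return (2 * x + 1) / 5

x, y = 0.0, 0.0
for _ in range(20):          # block coordinate descent iterations
    x = argmin_x(y)
    y = argmin_y(x)
print(x, y)                  # converges towards the minimiser (2, 1)
```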
Collective adaptive systems are emerging computational systems consisting of a large number of interacting components and featuring complex behaviour. These systems are usually distributed, heterogeneous, decentralised and interdependent, and they operate in dynamic and possibly unpredictable environments. Finding ways to understand and design these systems and, most of all, to model the interactions of their components is a difficult but important endeavour. In this article we propose a language-based approach for programming the interactions of collective adaptive systems by relying on attribute-based communication, a paradigm that permits a group of partners to communicate by considering their run-time properties and capabilities. We introduce AbC, a foundational calculus for attribute-based communication, and show how its linguistic primitives can be used to program a complex and sophisticated variant of the well-known problem of Stable Allocation in Content Delivery Networks. Other interesting case studies from the realm of collective adaptive systems are also considered. We further illustrate the expressive power of attribute-based communication by showing the natural encoding of other existing communication paradigms into AbC.
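The stable-allocation case study builds on classic deferred-acceptance matching. The sketch below is the textbook Gale-Shapley procedure in Python, with clients proposing to CDN servers; it is not the AbC encoding, and the preference lists are invented.

```python
def gale_shapley(client_prefs, server_prefs):
    """Deferred acceptance: clients propose in preference order, servers keep
    their best proposer so far. Returns a stable client -> server matching."""
    server_rank = {s: {c: i for i, c in enumerate(prefs)}
                   for s, prefs in server_prefs.items()}
    next_choice = {c: 0 for c in client_prefs}      # index of next server to try
    engaged = {}                                    # server -> currently held client
    free = list(client_prefs)
    while free:
        c = free.pop()
        s = client_prefs[c][next_choice[c]]
        next_choice[c] += 1
        if s not in engaged:
            engaged[s] = c
        elif server_rank[s][c] < server_rank[s][engaged[s]]:
            free.append(engaged[s])                 # displaced client tries again
            engaged[s] = c
        else:
            free.append(c)                          # rejected, will try its next server
    return {c: s for s, c in engaged.items()}

# Invented example: three clients and three CDN servers with full preference lists.
clients = {"c1": ["s1", "s2", "s3"], "c2": ["s1", "s3", "s2"], "c3": ["s2", "s1", "s3"]}
servers = {"s1": ["c2", "c1", "c3"], "s2": ["c1", "c3", "c2"], "s3": ["c3", "c2", "c1"]}
print(gale_shapley(clients, servers))
```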