
No more, no less - A formal model for serverless computing

Added by Saverio Giallorenzo
Publication date: 2019
Language: English





Serverless computing, also known as Functions-as-a-Service, is a recent paradigm aimed at simplifying the programming of cloud applications. The idea is that developers design applications in terms of functions, which are then deployed on a cloud infrastructure. The infrastructure takes care of executing the functions whenever requested by remote clients, dealing automatically with distribution and scaling with respect to inbound traffic. While vendors already support a variety of programming languages for serverless computing (e.g. Go, Java, JavaScript, Python), as far as we know there is no reference model yet to formally reason about this paradigm. In this paper, we propose the first formal programming model for serverless computing, which combines ideas from both the $\lambda$-calculus (for functions) and the $\pi$-calculus (for communication). To illustrate our proposal, we model a real-world serverless system. Thanks to our model, we are also able to capture and pinpoint the limitations of current vendor technologies, proposing possible amendments.
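To make the paradigm concrete, here is a minimal sketch of a FaaS function in Python, following the common AWS Lambda-style handler convention (an event payload plus a context object); the function name and payload fields are illustrative and not taken from the paper.

```python
import json

def greet(event, context):
    # `event` carries the client's request payload, already deserialized
    # by the platform before the function is invoked.
    name = event.get("name", "world")
    # The return value is serialized and sent back to the remote client;
    # distribution and scaling are entirely the platform's concern.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Local invocation for testing; in production the platform calls `greet`.
    print(greet({"name": "serverless"}, None))
```

The developer writes only the function body; everything about provisioning, request routing, and scaling is delegated to the provider, which is precisely the behaviour the paper's formal model sets out to capture.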



Related research

We present new 144-MHz LOFAR observations of the prototypical 'X-shaped' radio galaxy NGC 326, which show that the formerly known wings of the radio lobes extend smoothly into a large-scale, complex radio structure. We argue that this structure is most likely the result of hydrodynamical effects in an ongoing group or cluster merger, for which pre-existing X-ray and optical data provide independent evidence. The large-scale radio structure is hard to explain purely in terms of jet reorientation due to the merger of binary black holes, a previously proposed explanation for the inner structure of NGC 326. For this reason, we suggest that the simplest model is one in which the merger-related hydrodynamical processes account for all the source structure, though we do not rule out the possibility that a black hole merger has occurred. Inference of the black hole-black hole merger rate from observations of X-shaped sources should be carried out with caution in the absence of deep, sensitive low-frequency observations. Some X-shaped sources may be signposts of cluster merger activity, and it would be useful to investigate the environments of these objects more generally.
We report on the remarkable evolution in the light curve of a variable star discovered by Hubble (1926) in M33 and classified by him as a Cepheid. Early in the 20th century, the variable, designated as V19, exhibited a 54.7 day period, an intensity-weighted mean B magnitude of 19.59 +/- 0.23 mag, and a B amplitude of 1.1 mag. Its position in the P-L plane was consistent with the relation derived by Hubble from a total of 35 variables. Modern observations by the DIRECT project show a dramatic change in the properties of V19: its mean B magnitude has risen to 19.08 +/- 0.05 mag and its B amplitude has decreased to less than 0.1 mag. V19 does not appear to be a classical (Population I) Cepheid variable at present, and its nature remains a mystery. It is not clear how frequent such objects are nor how often they could be mistaken for classical Cepheids.
El Niño-Southern Oscillation (ENSO) exhibits diverse characteristics in spatial pattern, peak intensity, and temporal evolution. Here we develop a three-region multiscale stochastic model to show that the observed ENSO complexity can be explained by combining intraseasonal, interannual, and decadal processes. The model starts with a deterministic three-region system for the interannual variabilities. Then two stochastic processes of the intraseasonal and decadal variation are incorporated. The model can reproduce not only the general properties of the observed ENSO events, but also the complexity in patterns (e.g., Central Pacific vs. Eastern Pacific events), intensity (e.g., 10-20 year recurrence of extreme El Niños), and temporal evolution (e.g., more multi-year La Niñas than multi-year El Niños). While conventional conceptual models were typically used to understand the dynamics behind the common properties of ENSO, this model offers a powerful tool to understand and predict ENSO complexity that challenges our understanding of the 21st-century ENSO.
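As an illustration of the multiscale structure described above (a deterministic interannual core combined with fast and slow stochastic processes), the following toy Python sketch integrates a generic recharge-oscillator-like system forced by intraseasonal red noise and modulated by a slowly varying stochastic coupling parameter. The equations and coefficients are assumptions for demonstration only and are not the paper's three-region model.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0 / 360.0            # time step: roughly one day, in years
n_steps = 360 * 50          # 50 years of daily steps

T, h = 0.1, 0.0             # SST anomaly and thermocline-depth anomaly (interannual core)
xi = 0.0                    # fast intraseasonal forcing (strongly damped red noise)
mu = 1.0                    # slowly varying (decadal) coupling strength

T_series = np.empty(n_steps)
for k in range(n_steps):
    # Deterministic interannual core: a damped oscillator in (T, h),
    # standing in for the paper's three-region system.
    dT = (mu * 1.5 * h - 0.8 * T) * dt
    dh = (-1.2 * T - 0.2 * h) * dt
    # Fast stochastic process (intraseasonal): quickly decorrelating noise.
    xi += -12.0 * xi * dt + 2.0 * np.sqrt(dt) * rng.standard_normal()
    # Slow stochastic process (decadal): weak mean reversion around 1.
    mu += -0.1 * (mu - 1.0) * dt + 0.05 * np.sqrt(dt) * rng.standard_normal()
    T += dT + xi * dt
    h += dh
    T_series[k] = T

print("toy ENSO index: std =", round(float(T_series.std()), 2))
```

The point of the sketch is only the layering: the interannual oscillation is deterministic, while event-to-event diversity comes from the two stochastic components acting on very different time scales.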
Serverless computing has grown in popularity in recent years, with an increasing number of applications being built on Functions-as-a-Service (FaaS) platforms. By default, FaaS platforms support retry-based fault tolerance, but this is insufficient for programs that modify shared state, as they can unwittingly persist partial sets of updates in case of failures. To address this challenge, we would like atomic visibility of the updates made by a FaaS application. In this paper, we present AFT, an atomic fault tolerance shim for serverless applications. AFT interposes between a commodity FaaS platform and a storage engine and ensures atomic visibility of updates by enforcing the read atomic isolation guarantee. AFT supports new protocols to guarantee read atomic isolation in the serverless setting. We demonstrate that AFT introduces minimal overhead relative to existing storage engines and scales smoothly to thousands of requests per second, while preventing a significant number of consistency anomalies.
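The following is a minimal Python sketch of the atomicity idea behind a shim like AFT, not the authors' protocol: writes made by one function invocation are buffered and installed as a whole at commit time, so a failure mid-invocation never exposes a partial set of updates. The class and method names are assumptions for illustration.

```python
class AtomicShim:
    """Buffers an invocation's writes and installs them all-or-nothing."""

    def __init__(self):
        self.store = {}  # only whole, committed write sets ever land here

    def begin(self):
        # One transaction per function invocation.
        return {"writes": {}}

    def write(self, txn, key, value):
        # Buffered locally; invisible to readers until commit.
        txn["writes"][key] = value

    def commit(self, txn):
        # Install the full write set in one step; if the function fails
        # before this call, the store is untouched and no partial update
        # ever becomes visible.
        self.store.update(txn["writes"])

    def read(self, key):
        # Readers only ever see committed values.
        return self.store.get(key)

# Usage: a multi-key update becomes visible all-or-nothing.
shim = AtomicShim()
txn = shim.begin()
shim.write(txn, "balance:alice", 40)
shim.write(txn, "balance:bob", 60)
shim.commit(txn)
print(shim.read("balance:alice"), shim.read("balance:bob"))
```

A real shim such as AFT sits in front of a remote storage engine and uses multi-versioning and dedicated protocols to provide read atomic isolation at scale; the sketch only conveys why buffering writes until commit prevents partially persisted state.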
M. Grossi, L. Crippa, A. Aita (2021)
Starting from the idea of quantum computing, a concept that dates back to the 1980s, we come to the present day, where we can perform calculations on real quantum computers. This sudden development of technology opens up new scenarios that quickly lead to the desire, and the real possibility, of integrating this technology into current software architectures. The usage of frameworks that allow computation to be performed directly on quantum hardware poses a series of challenges. This document describes an architectural framework that addresses the problems of integrating an API-exposed quantum provider into an existing enterprise architecture, and it provides a minimum viable product (MVP) solution that merges classical and quantum computing in a basic scenario, with reusable code in a GitHub repository. The solution leverages a web-based frontend where users can build and select applications/use cases and execute them without any further complication. Every triggered run leverages multiple backend options, including a scheduler that manages the queuing mechanism to correctly schedule jobs and retrieve final results. The proposed solution uses up-to-date cloud-native technologies (e.g. Cloud Functions, Containers, Microservices) and serves as a general framework to develop multiple applications on the same infrastructure.
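As a hedged illustration of the scheduling layer described above (not the authors' code), the sketch below shows a frontend-facing submit function that enqueues a job, and a backend worker that dequeues it, forwards it to a placeholder quantum-provider call, and stores the result for later retrieval. All names, including run_on_quantum_backend, are hypothetical.

```python
import queue
import uuid

jobs = queue.Queue()     # queuing mechanism between frontend and backend
results = {}             # job id -> result, polled by the frontend

def submit_job(circuit_spec):
    """Frontend-facing entry point: enqueue the job and return its id."""
    job_id = uuid.uuid4().hex
    jobs.put((job_id, circuit_spec))
    return job_id

def run_on_quantum_backend(circuit_spec):
    # Placeholder for the call to the quantum provider's API; a real
    # system would submit the circuit through the vendor's SDK here.
    return {"counts": {"00": 512, "11": 512}, "spec": circuit_spec}

def worker_step():
    """Backend worker: process one queued job and store its result."""
    job_id, spec = jobs.get()
    results[job_id] = run_on_quantum_backend(spec)

# Usage: submit from the frontend, let the scheduler run one step, retrieve.
jid = submit_job({"qubits": 2, "circuit": "bell"})
worker_step()
print(results[jid])
```

In the architecture sketched by the abstract, the submit path would live behind a cloud function or microservice and the worker behind a containerized scheduler, but the queue-and-poll flow is the same.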
