Multitier programming languages reduce the complexity of developing distributed systems by allowing the whole distributed system to be written in a single coherent code base. The compiler or the runtime then separates the code for the components of the distributed system, enabling abstraction over low-level implementation details such as data representation, serialization, and network protocols. Our ScalaLoci language allows developers to declare the different components and their architectural relation at the type level, enabling static reasoning about distribution and remote communication and guaranteeing static type safety across components. The compiler splits the multitier program into the component-specific code and automatically generates the communication boilerplate. Communication between components can be modeled by declaratively specifying data flows between components using reactive programming. In this paper, we report on the implementation of our design and our experience with embedding our language features into Scala as a host language. We show how a combination of Scala's advanced type-level programming and its macro system can be used to enrich the language with new abstractions. We comment on the challenges we encountered and the solutions we developed for our current implementation, and we outline suggestions for an improved macro system to support such use cases of embedding domain-specific abstractions.
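To make the idea of type-level placement concrete, the following self-contained Scala sketch (our own simplified illustration, not the ScalaLoci API; the peer names, the On wrapper, and the Here evidence type are all invented here) shows how a value can be tagged with the component that hosts it, so that access to it requires evidence about the peer on which the surrounding code runs:

object PlacementSketch {
  // The components (peers) of the distributed system, as phantom types.
  sealed trait Peer
  sealed trait Server extends Peer
  sealed trait Client extends Peer

  // Evidence that the enclosing code executes on peer P.
  final case class Here[P <: Peer]()

  // A value of type A placed on peer P.
  final case class On[P <: Peer, A](private val value: A) {
    // Local access is only possible on the hosting peer itself.
    def local(implicit here: Here[P]): A = value
    // Access from some other peer Q models a (here elided) network transfer.
    def asLocal[Q <: Peer](implicit here: Here[Q]): A = value
  }

  // A server-side value and a client-side computation that uses it remotely.
  val counter: On[Server, Int] = On(42)

  def clientView(implicit here: Here[Client]): String =
    s"server counter = ${counter.asLocal[Client]}"

  def main(args: Array[String]): Unit = {
    implicit val onClient: Here[Client] = Here[Client]()
    println(clientView) // prints: server counter = 42
  }
}

In ScalaLoci itself, placement is expressed with dedicated placement types, and the communication code that this sketch elides is generated by the compiler rather than written by hand.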
Generating multimedia streams, such as in a net radio, is a task that is complex and difficult to adapt to every user's needs. We introduce a novel approach to this task, based on a dedicated high-level functional programming language, called Liquidsoap, for generating, manipulating, and broadcasting multimedia streams. Unlike traditional approaches, which are based on configuration files or static graphical interfaces, it allows the user to build complex and highly customized systems. The language is based on a model for streams and provides operators and constructs that make it well suited to stream generation. The interpreter of the language also ensures many properties concerning the correct execution of the stream generation.
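As a schematic analogy of this combinator style (written in Scala rather than Liquidsoap; every name below is invented for illustration), sources produce stream data on demand, operators combine sources, and an output repeatedly pulls from the composed result:

object StreamSketch {
  type Frame = String // stand-in for a block of audio data

  // A source may or may not be able to produce the next frame.
  trait Source { def next(): Option[Frame] }

  // A finite playlist of tracks.
  def playlist(tracks: List[Frame]): Source = new Source {
    private var remaining = tracks
    def next(): Option[Frame] = remaining match {
      case head :: tail => remaining = tail; Some(head)
      case Nil          => None
    }
  }

  // A source that is always available, e.g. a station jingle.
  def single(frame: Frame): Source =
    new Source { def next(): Option[Frame] = Some(frame) }

  // Operator: play the first listed source that can currently produce a frame.
  def fallback(sources: Source*): Source = new Source {
    def next(): Option[Frame] = sources.iterator.flatMap(_.next()).nextOption()
  }

  // Output: pull a fixed number of frames and "broadcast" them.
  def output(source: Source, frames: Int): Unit =
    for (_ <- 1 to frames) println(source.next().getOrElse("<silence>"))

  def main(args: Array[String]): Unit =
    // Plays the two playlist tracks, then falls back to the jingle.
    output(fallback(playlist(List("track1", "track2")), single("jingle")), 4)
}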
While modern software development heavily uses versioned packages, programming languages rarely support this concept.
Algorithmic and data refinement are well-studied topics that provide a mathematically rigorous approach to gradually introducing details in the implementation of software. Program refinements are performed in the context of some programming language, but mainstream languages lack features for recording the sequence of refinement steps in the program text. To experiment with the combination of refinement, automated verification, and language design, refinement features have been added to the verification-aware programming language Dafny. This paper describes those features and reflects on some initial usage thereof.
I present a new approach to teaching a graduate-level programming languages course focused on using systems programming ideas and languages like WebAssembly and Rust to motivate PL theory. Drawing on students' prior experience with low-level languages, the course shows how type systems and PL theory are used to avoid tricky real-world errors that students encounter in practice. I reflect on the curricular design and lessons learned from two years of teaching at Stanford, showing that integrating systems ideas can provide students a more grounded and enjoyable education in programming languages. The curriculum, course notes, and assignments are freely available: http://cs242.stanford.edu/f18/
Though statistical analyses are centered on research questions and hypotheses, current statistical analysis tools are not. Users must first translate their hypotheses into specific statistical tests and then perform API calls with functions and parameters. To do so accurately requires that users have statistical expertise. To lower this barrier to valid, replicable statistical analysis, we introduce Tea, a high-level declarative language and runtime system. In Tea, users express their study design, any parametric assumptions, and their hypotheses. Tea compiles these high-level specifications into a constraint satisfaction problem that determines the set of valid statistical tests, and then executes them to test the hypothesis. We evaluate Tea using a suite of statistical analyses drawn from popular tutorials. We show that Tea generally matches the choices of experts while automatically switching to non-parametric tests when parametric assumptions are not met. We simulate the effect of mistakes made by non-expert users and show that Tea automatically avoids both false negatives and false positives that could be produced by the application of incorrect statistical tests.
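The following miniature Scala sketch (a simplified illustration, not Tea's implementation, API, or actual rule set) conveys the underlying idea: each test declares the properties it requires, and determining the valid tests amounts to checking, here simply by filtering, those declared constraints against the user's study design and assumptions:

object TestSelectionSketch {
  sealed trait DataType
  case object Continuous extends DataType
  case object Categorical extends DataType

  // Properties the user declares or that assumption checks establish.
  final case class Properties(
      outcome: DataType,
      groups: Int,
      normallyDistributed: Boolean,
      equalVariance: Boolean
  )

  // A test is a name plus the constraints under which it is valid.
  final case class Test(name: String, validFor: Properties => Boolean)

  val catalogue: List[Test] = List(
    Test("Student's t-test",
      p => p.outcome == Continuous && p.groups == 2 && p.normallyDistributed && p.equalVariance),
    Test("Welch's t-test",
      p => p.outcome == Continuous && p.groups == 2 && p.normallyDistributed),
    Test("Mann-Whitney U test",
      p => p.outcome == Continuous && p.groups == 2), // no normality requirement
    Test("Chi-squared test of independence",
      p => p.outcome == Categorical && p.groups >= 2)
  )

  // "Solving" the constraints is reduced here to filtering the catalogue.
  def validTests(p: Properties): List[String] =
    catalogue.filter(_.validFor(p)).map(_.name)

  def main(args: Array[String]): Unit = {
    // Parametric assumptions hold: parametric and non-parametric tests apply.
    println(validTests(Properties(Continuous, 2, normallyDistributed = true, equalVariance = true)))
    // Normality violated: only the non-parametric option remains.
    println(validTests(Properties(Continuous, 2, normallyDistributed = false, equalVariance = false)))
  }
}

In this toy version, declaring that normality holds leaves both parametric and non-parametric candidates, while violating it leaves only the non-parametric option, mirroring the automatic switching described in the abstract.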