This is an attempt to illustrate the glorious history of logical foundations and to discuss their uncertain future.
Inference systems are a widespread framework used to define possibly recursive predicates by means of inference rules. They support both inductive and coinductive interpretations, which are fairly well studied. In this paper, we consider a middle-way interpretation, called regular, which combines the advantages of both approaches: it allows non-well-founded reasoning while keeping derivations finitely representable. We show that the natural proof-theoretic definition of the regular interpretation, based on regular trees, coincides with a rational fixed point. We then provide an equivalent inductive characterization, which leads to an algorithm that searches for a regular derivation of a judgment. Relying on these results, we define proof techniques for regular reasoning: the regular coinduction principle, to prove completeness, and an inductive technique, based on the inductive characterization of the regular interpretation, to prove soundness. Finally, we show that the regular approach can be smoothly extended to inference systems with corules, a recently introduced generalization of the framework that allows one to refine the coinductive interpretation, proving that this more flexible regular interpretation also admits an equivalent inductive characterization.
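To make the contrast between the interpretations concrete, here is a minimal sketch (an illustration under my own assumptions, not an example taken from the paper) of a one-rule inference system over possibly infinite lists of numbers, with a hypothetical predicate allPos:

\[
\frac{x > 0 \qquad \mathit{allPos}(\mathit{xs})}{\mathit{allPos}(x : \mathit{xs})}
\]

Read inductively, allPos holds only of finite lists of positive numbers, since derivations must be well founded; read coinductively, it holds of all finite and infinite lists of positives, at the price of infinite derivations. The regular interpretation sits in between: a judgment is derivable when it has a possibly infinite derivation with only finitely many distinct subtrees. For instance, allPos(1:2:1:2:...) is regularly derivable, since its derivation needs only two distinct subderivations, one for each distinct suffix of the list, whereas a stream with infinitely many distinct suffixes would in general fall outside the regular interpretation.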
I am an industrial mathematician. When asked to identify my profession or academic field of study, this is the most concise answer I can provide. However, this seemingly straightforward statement is commonly greeted by a blank stare or an uncomfortable silence, regardless of whether I am speaking to a fellow mathematician or a non-mathematician. I usually follow up with the clarification: I am an applied mathematician who derives much of my inspiration from the study of industrial problems that I encounter through collaborations with companies. This dispels some confusion, but unfortunately still leaves a great deal open to interpretation, owing to the vagueness of the words mathematics, industry, and company, each of which covers an extremely broad range of scientific or socio-economic activity. To those academics who actually work in the field of industrial mathematics (and whose perspective, referred to in the title, is the focus of this article), this ambiguity is familiar and untroubling. However, for anyone less acquainted with the work of industrial mathematicians, some clarification is desirable, especially for anyone who might be considering entering the field. This essay therefore aims to shed light upon the nature of research being done at the interface between mathematics and industry, paying particular attention to the following questions: What is industrial mathematics? Where is industrial mathematics? How does one do industrial mathematics? Why (or, more precisely, what value is there in doing) industrial mathematics? I will attempt to answer these questions by means of several case studies drawn from my own experience in tackling mathematical problems from industry.
On the occasion of the official retirement of Henny Lamers, a meeting was held to celebrate Henny's contributions to the study of mass loss from stars and stellar clusters. Stellar mass loss is crucial for understanding the life and death of massive stars, as well as their environments. Henny has made important contributions to many aspects of our understanding of hot-star winds. Here, the dominant themes of the stellar part of the meeting are highlighted: (i) O-star wind clumping, (ii) mass loss near the Eddington limit, and (iii) the driving of Wolf-Rayet winds.
Bisimulation metrics provide a robust and accurate approach to studying the behavior of nondeterministic probabilistic processes. In this paper, we propose a logical characterization of bisimulation metrics based on a simple probabilistic variant of the Hennessy-Milner logic. Our approach rests on two novel notions: mimicking formulae and a distance between formulae. The former are a weak version of the well-known characteristic formulae and allow us to characterize (ready) probabilistic simulation and probabilistic bisimilarity as well. The latter is a 1-bounded pseudometric on formulae that mirrors the Hausdorff and Kantorovich liftings defining the bisimilarity pseudometric. We show that the distance between two processes equals the distance between their mimicking formulae.
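For orientation, these are the standard liftings alluded to above, stated for a 1-bounded pseudometric d on states, finitely supported distributions \mu, \nu, and sets X, Y; this is a textbook formulation, and the paper's own notation may differ:

\[
\mathbf{K}(d)(\mu,\nu) = \min_{w \in \Omega(\mu,\nu)} \sum_{s,t} w(s,t)\, d(s,t)
\qquad
\mathbf{H}(\hat{d})(X,Y) = \max\Big\{ \sup_{x \in X} \inf_{y \in Y} \hat{d}(x,y),\ \sup_{y \in Y} \inf_{x \in X} \hat{d}(x,y) \Big\}
\]

Here \Omega(\mu,\nu) is the set of couplings of \mu and \nu, i.e. joint distributions whose marginals are \mu and \nu. The bisimilarity pseudometric is typically obtained as a fixed point of a functional that uses the Kantorovich lifting to compare the probabilistic component of transitions and the Hausdorff lifting to compare the nondeterministic one; the distance on formulae mirrors this construction at the syntactic level.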
In this paper we continue our line of research on logical characterizations of behavioral metrics, obtained from the definition of a metric over the set of logical properties of interest. This time we provide a characterization of both the strong and the weak trace metric on nondeterministic probabilistic processes, based on a minimal Boolean logic L, which we prove to be powerful enough to characterize strong and weak probabilistic trace equivalence. Moreover, we prove that our characterization approach can be restated in terms of a more classical probabilistic L-model-checking problem.
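As a point of reference, one common formulation (assumed here, not quoted from the paper) of the strong trace distance between fully probabilistic processes s and t compares the probabilities of executing each finite trace \alpha over the action set \mathcal{A}:

\[
d_{\mathrm{Tr}}(s,t) = \sup_{\alpha \in \mathcal{A}^{*}} \big|\, \Pr(s,\alpha) - \Pr(t,\alpha) \,\big|
\]

For nondeterministic probabilistic processes, the nondeterminism is first resolved by schedulers and the resulting sets of fully probabilistic resolutions are then compared, for instance via a Hausdorff lifting of the distance above; the weak variant abstracts from internal actions when computing \Pr(\cdot,\alpha).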