There is an ongoing debate in computer science about how algorithms should best be studied. Some scholars have argued that experimental evaluations should be conducted; others emphasize the benefits of formal analysis. We believe that this debate is less a question of either-or, because both views can be integrated into an overarching framework. It is the ambition of this paper to develop such a framework of algorithm engineering with a theoretical foundation in the philosophy of science. We take the empirical nature of algorithm engineering as a starting point. Our theoretical framework builds on three areas discussed in the philosophy of science: ontology, epistemology, and methodology. In essence, ontology describes algorithm engineering as being concerned with algorithmic problems, algorithmic tasks, algorithm designs, and algorithm implementations. Epistemology describes the body of knowledge of algorithm engineering as a collection of prescriptive and descriptive knowledge, residing in World 3 of Popper's Three Worlds model. Methodology refers to the steps by which we can systematically enhance our knowledge of specific algorithms. In this context, we identify seven validity concerns and discuss how researchers can respond to falsification. Our framework has important implications for researching algorithms in various areas of computer science.
Given two locations $s$ and $t$ in a road network, a distance query returns the minimum network distance from $s$ to $t$, while a shortest path query computes the actual route that achieves the minimum distance. These two types of queries find important applications in practice, and a plethora of solutions have been proposed in the past few decades. The existing solutions, however, are optimized for either practical or asymptotic performance, but not both. In particular, the techniques with enhanced practical efficiency are mostly heuristic-based, and they offer unattractive worst-case guarantees in terms of space and time. On the other hand, the methods that are worst-case efficient often entail prohibitive preprocessing or space overheads, which render them inapplicable to the large road networks (with millions of nodes) commonly used in modern map applications. This paper presents \emph{Arterial Hierarchy (AH)}, an index structure that narrows the gap between theory and practice in answering shortest path and distance queries on road networks. On the theoretical side, we show that, under a realistic assumption, AH answers any distance query in $\tilde{O}(\log r)$ time, where $r = d_{\max}/d_{\min}$, and $d_{\max}$ (resp. $d_{\min}$) is the largest (resp. smallest) $L_\infty$ distance between any two nodes in the road network. In addition, any shortest path query can be answered in $\tilde{O}(k + \log r)$ time, where $k$ is the number of nodes on the shortest path. On the practical side, we experimentally evaluate AH on a large set of real road networks with up to twenty million nodes, and we demonstrate that (i) AH outperforms the state of the art in terms of query time, and (ii) its space and pre-computation overheads are moderate.
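To make the two query types concrete, here is a minimal Python sketch that answers both with a plain Dijkstra search. This is only a baseline illustration of what a distance query and a shortest path query return; it is not the AH index, and the graph and node names are invented for the example.

```python
# Baseline distance/shortest-path queries via Dijkstra's algorithm.
# Illustrates the query semantics only; NOT the AH index from the paper.
import heapq

def shortest_path(graph, s, t):
    """Return (distance, path) from s to t.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    dist = {s: 0.0}
    parent = {s: None}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    if t not in dist:
        return float("inf"), []
    # Reconstruct the route by walking parent pointers back from t.
    path, node = [], t
    while node is not None:
        path.append(node)
        node = parent[node]
    return dist[t], path[::-1]

# Tiny example network (weighted adjacency lists, names illustrative).
g = {"s": [("a", 2.0), ("b", 5.0)],
     "a": [("b", 1.0), ("t", 6.0)],
     "b": [("t", 2.0)]}
print(shortest_path(g, "s", "t"))  # (5.0, ['s', 'a', 'b', 't'])
```

A distance query corresponds to returning only the first component; the index structures discussed in the abstract aim to answer it far faster than this linear-time search.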
Fully dynamic graph algorithms that achieve polylogarithmic or better time per operation use either a hierarchical graph decomposition or a random-rank based approach. So far, there are two graph problems for which efficient algorithms of both types exist, namely fully dynamic $(\Delta + 1)$ coloring and fully dynamic maximal matching. In this paper, we present an extensive experimental study of these two types of algorithms for these two problems, together with very simple baseline algorithms, to determine which of these algorithms are the fastest. Our results indicate that the data structures used by the different algorithms dominate their performance.
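As an illustration of what a very simple baseline for one of these problems might look like, the sketch below maintains a $(\Delta + 1)$ coloring under edge updates by greedily recoloring a conflicting endpoint. This is an assumed, illustrative baseline, not necessarily the one evaluated in the paper.

```python
# A simple baseline for fully dynamic (Delta + 1) coloring: on each edge
# insertion, if the endpoints share a color, recolor one endpoint with
# the smallest color unused among its neighbors. Illustrative sketch only.
from collections import defaultdict

class DynamicColoring:
    def __init__(self):
        self.adj = defaultdict(set)    # node -> set of neighbors
        self.color = defaultdict(int)  # node -> current color (default 0)

    def _recolor(self, u):
        used = {self.color[v] for v in self.adj[u]}
        c = 0
        while c in used:  # at most deg(u) <= Delta colors are blocked,
            c += 1        # so some c in {0, ..., Delta} is always free
        self.color[u] = c

    def insert_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        if self.color[u] == self.color[v]:
            self._recolor(u)  # fix the conflict locally at u

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        # Deletions never create conflicts, so no recoloring is needed.
```

Note that the recoloring step avoids the colors of all of u's neighbors, so it can never introduce a new conflict; each update therefore costs at most $O(\deg(u))$ time, which is the kind of simple bound the more sophisticated hierarchical and random-rank approaches improve upon.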
The design of software systems inevitably enacts normative boundaries around the site of intervention. These boundaries are, in part, a reflection of the values, ethics, power, and politics of the situation and of the process of design itself. This paper argues that Requirements Engineering (RE) requires more robust frameworks and techniques to navigate the values implicit in systems design work. To this end, we present findings from a case of action research in which we employed Critical Systems Heuristics (CSH), a framework from Critical Systems Thinking (CST), during requirements gathering for Homesound, a system to safeguard elderly people living alone while protecting their autonomy. We use categories from CSH to inform expert interviews and reflection, showing how CSH can be readily combined with RE techniques (such as the Volere template) to explore and reveal the value judgements underlying requirements.
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, along with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear: we offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with extensive lists of free or open-source software implementations and publicly available databases.
Linear system solving is one of the main workhorses in applied mathematics. Recently, theoretical computer scientists have contributed sophisticated algorithms for solving linear systems with symmetric diagonally dominant matrices (a class to which Laplacian matrices belong) in provably nearly-linear time. While these algorithms are highly interesting from a theoretical perspective, there are no published results on how they perform in practice. With this paper we address this gap. We provide the first implementation of the combinatorial solver by [Kelner et al., STOC 2013], which is particularly appealing for implementation due to its conceptual simplicity. The algorithm exploits the fact that a Laplacian matrix corresponds to a graph; solving a Laplacian linear system amounts to finding an electrical flow in this graph with the help of the cycles induced by a spanning tree with the low-stretch property. The results of our comprehensive experimental study are ambivalent: they confirm a nearly-linear running time, but for reasonable inputs the constant factors make the solver much slower than methods with higher asymptotic complexity. One other aspect predicted by theory is confirmed by our findings, though: spanning trees with lower stretch indeed reduce the solver's running time. Yet, simple spanning tree algorithms perform better in practice than those with a guaranteed low stretch.
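The correspondence between a weighted graph and its Laplacian, and the electrical-flow view of solving $Lx = b$, can be sketched as follows. This minimal example uses a dense pseudoinverse purely to illustrate the problem being solved; it is not the combinatorial solver of Kelner et al., which instead repairs the flow along cycles induced by a low-stretch spanning tree.

```python
# Graph <-> Laplacian correspondence behind Laplacian solvers.
# Build L = D - A for a small weighted graph and recover electrical
# potentials for a unit current injected at node 0, extracted at node 2.
# Illustrative sketch only; NOT the Kelner et al. combinatorial solver.
import numpy as np

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)]  # (u, v, weight)
n = 3
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w
    L[v, v] += w
    L[u, v] -= w
    L[v, u] -= w

b = np.zeros(n)
b[0], b[2] = 1.0, -1.0  # current sources: inject at 0, extract at 2

# L is singular (constant vectors span its null space), so we use the
# pseudoinverse; any solution differs only by an additive constant.
x = np.linalg.pinv(L) @ b

# The electrical flow on edge (u, v) is w * (x[u] - x[v]); the flows
# out of node 0 sum to the injected unit current.
for u, v, w in edges:
    print(f"flow on ({u},{v}) = {w * (x[u] - x[v]):.3f}")
```

In the combinatorial solver, the flow is initialized on a spanning tree and iteratively corrected along off-tree cycles; the stretch of the tree governs how quickly these corrections converge, which matches the experimental finding that lower-stretch trees reduce the running time.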