Art and culture, at their best, lie in the act of discovery and exploration. This paper describes Resurrect3D, an open visualization platform that lets both casual users and domain experts explore cultural artifacts. To that end, Resurrect3D takes two steps. First, it provides an interactive cultural heritage toolbox that offers not only tools commonly used in cultural heritage, such as relighting and material editing, but also the ability for users to create an interactive story: a saved session with annotations and visualizations that others can later replay. Second, Resurrect3D exposes a set of programming interfaces for extending the toolbox, so that domain experts can develop custom tools for artifact-specific visualization and analysis.
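The abstract does not specify the extension interface, so the following is a minimal sketch of what a plugin-style extension point could look like; the names ToolRegistry, register, and curvature_heatmap are hypothetical and are not Resurrect3D's actual API.

    # Hypothetical sketch of a plugin-style extension point; none of these
    # names come from Resurrect3D itself.
    from typing import Callable, Dict

    class ToolRegistry:
        """Maps tool names to callables that operate on an artifact."""
        def __init__(self) -> None:
            self._tools: Dict[str, Callable] = {}

        def register(self, name: str, tool: Callable) -> None:
            self._tools[name] = tool

        def run(self, name: str, artifact) -> None:
            self._tools[name](artifact)

    registry = ToolRegistry()

    # A domain expert contributes an artifact-specific analysis tool.
    def curvature_heatmap(artifact):
        # ... compute and display per-vertex curvature ...
        pass

    registry.register("curvature-heatmap", curvature_heatmap)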
Much human and computational effort has aimed to improve how deep reinforcement learning algorithms perform on benchmarks such as the Arcade Learning Environment. Comparatively less effort has focused on understanding what such methods have learned, and on investigating and comparing the representations learned by different families of reinforcement learning (RL) algorithms. Sources of friction include the onerous computational requirements and the general logistical and architectural complications of running deep RL algorithms at scale. We lessen this friction by (1) training several algorithms at scale and releasing the trained models, (2) integrating with a previous deep RL model release, and (3) releasing code that makes it easy for anyone to load, visualize, and analyze such models. This paper introduces the Atari Zoo framework, which contains models trained across benchmark Atari games in an easy-to-use format, as well as code that implements common modes of analysis and connects such models to a popular neural network visualization library. Further, to demonstrate the potential of this dataset and software package, we show initial quantitative and qualitative comparisons between the performance and representations of several deep RL algorithms, highlighting interesting and previously unknown distinctions between them.
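As a usage sketch, the released atari_zoo package (github.com/uber-research/atari-model-zoo) exposes an entry point along the following lines; the MakeAtariModel name and the accessor methods below follow its documented usage but should be treated as assumptions to verify against the repository's README.

    # Sketch of loading a released model with the atari_zoo package.
    # MakeAtariModel and the accessor names are assumptions based on the
    # package's documented usage; verify against the repository README.
    from atari_zoo import MakeAtariModel

    algo, env, run_id, tag = "a2c", "ZaxxonNoFrameskip-v4", 1, "final"
    m = MakeAtariModel(algo, env, run_id, tag)()

    # Cached rollout data for qualitative analysis of what the agent sees.
    observations = m.get_observations()
    frames = m.get_frames()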
Thanks to recent advancements in technology, eXtended Reality (XR) applications are gaining considerable momentum and will likely become increasingly popular in the next decade. These new applications, however, also require a step forward in models for simulating and analyzing this type of traffic source in modern communication networks, in order to guarantee state-of-the-art performance and Quality of Experience (QoE) to users. Recognizing this need, in this work we present a novel open-source traffic model that researchers can use as a starting point, both for improving the model itself and for designing optimized algorithms for the transmission of these distinctive data flows. Along with the mathematical model and the code, we also share with the community the traces gathered for our study, collected from freely available applications such as Minecraft VR, Google Earth VR, and Virus Popper. Finally, we propose a roadmap for the construction of an end-to-end framework that fills this gap in the current state of the art.
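To make the idea of such a traffic source concrete, here is a minimal sketch of a frame-level XR traffic generator: periodic video frames whose sizes are drawn from a parametric distribution. The 60 fps rate and the lognormal parameters are placeholders, not values fitted to the paper's traces.

    # Minimal sketch of a frame-level XR traffic source. Frame rate and
    # lognormal size parameters are illustrative placeholders, not values
    # fitted to the paper's traces.
    import numpy as np

    rng = np.random.default_rng(0)
    fps = 60                      # placeholder frame rate
    duration_s = 10
    n_frames = fps * duration_s

    # Frame sizes in bytes, drawn from a lognormal distribution.
    sizes = rng.lognormal(mean=10.5, sigma=0.4, size=n_frames)
    timestamps = np.arange(n_frames) / fps  # periodic frame departures

    print(f"mean bitrate: {sizes.sum() * 8 / duration_s / 1e6:.1f} Mbit/s")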
Individual performance metrics are commonly used to compare players from different eras. However, such cross-era comparison is often biased by significant changes in the success factors underlying player achievement rates (e.g., performance-enhancing drugs and modern training regimens). Such historical comparison is more than fodder for casual discussion among sports fans: it is also an issue of critical importance to the multi-billion-dollar professional sport industry and to the institutions (e.g., the Hall of Fame) charged with preserving sports history and the legacy of outstanding players and achievements. To address this cultural heritage management issue, we report an objective statistical method for renormalizing career achievement metrics, one particularly tailored to common seasonal performance metrics, which are often aggregated into summary career metrics even though many player careers span different eras. Remarkably, we find that the method, applied to comprehensive Major League Baseball and National Basketball Association player data, preserves the overall functional form of the distribution of career achievement at both the season and career level. As such, subsequent re-ranking of the top-50 all-time records in MLB and the NBA using renormalized metrics indicates reordering at the local rank level, as opposed to bulk reordering by era. This local order refinement signals time-independent mechanisms underlying annual and career achievement in professional sports, meaning that appropriately renormalized achievement metrics can be used to compare players from eras with different season lengths, team strategies, rules, and possibly even different sports.
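As a schematic of era detrending in the spirit of this method (the paper's exact deflator and baseline choice differ), one can rescale each player-season total by the ratio of a fixed baseline league average to that season's league average, so that output is measured relative to the prowess of its own era. The toy numbers below are illustrative only.

    # Schematic era detrending: rescale each player-season total by the
    # ratio of a baseline league average to that season's league average.
    # In practice the league average comes from all players in a season;
    # this toy sample is for illustration only.
    import pandas as pd

    seasons = pd.DataFrame({
        "player": ["A", "B", "C", "D"],
        "year":   [1968, 1968, 2001, 2001],
        "hr":     [30, 20, 50, 40],   # e.g., home runs per season
    })

    league_avg = seasons.groupby("year")["hr"].transform("mean")
    baseline = seasons["hr"].mean()   # all-era baseline average

    seasons["hr_renorm"] = seasons["hr"] * baseline / league_avg
    print(seasons.sort_values("hr_renorm", ascending=False))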
Multiverse analysis is an approach to data analysis in which all reasonable analytic decisions are evaluated in parallel and interpreted collectively, in order to foster robustness and transparency. However, specifying a multiverse is demanding because analysts must manage myriad variants from a cross-product of analytic decisions, and the results require nuanced interpretation. We contribute Boba: an integrated domain-specific language (DSL) and visual analysis system for authoring and reviewing multiverse analyses. With the Boba DSL, analysts write the shared portion of analysis code only once, alongside local variations defining alternative decisions, from which the compiler generates a multiplex of scripts representing all possible analysis paths. The Boba Visualizer provides linked views of model results and the multiverse decision space to enable rapid, systematic assessment of consequential decisions and robustness, including sampling uncertainty and model fit. We demonstrate Boba's utility through two data analysis case studies, and reflect on challenges and design opportunities for multiverse analysis software.
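The core compilation idea, independent of Boba's actual DSL syntax, is a cross-product expansion of decision options into concrete scripts. The sketch below illustrates that idea with a plain string template; it is not Boba's syntax, and the decisions and fit call are made-up placeholders.

    # Sketch of multiverse expansion: cross the options of every analytic
    # decision and instantiate one script per path. This illustrates the
    # compilation idea only; it is not Boba's actual DSL syntax.
    from itertools import product

    template = (
        "df = df[df.rt < {cutoff}]\n"
        "model = fit('{model}', df)\n"
    )
    decisions = {
        "cutoff": [2.0, 2.5, 3.0],      # outlier-removal threshold
        "model": ["ols", "logistic"],   # model specification
    }

    names = list(decisions)
    for combo in product(*decisions.values()):
        path = dict(zip(names, combo))
        script = template.format(**path)
        print(f"# universe {path}\n{script}")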
Recent advances in the area of legal information systems have led to a variety of applications that promise support in processing and accessing legal documents. Unfortunately, these applications have various limitations, e.g., regarding scope or extensibility. Furthermore, we do not observe a trend towards open access in digital libraries in the legal domain as we observe in other domains, e.g., economics or computer science. To improve open access in the legal domain, we present our approach for an open-source platform to transparently process and access Legal Open Data. This enables the sustainable development of legal applications by offering a single technology stack. Moreover, the approach facilitates the development and deployment of new technologies. As proof of concept, we implemented six technologies and generated metadata for more than 250,000 German laws and court decisions. Thus, we can provide users of our platform not only access to legal documents, but also to the information they contain.
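As an illustration of the kind of metadata generation such a pipeline performs, here is a hypothetical sketch that pulls dates and statute citations out of a German court decision's text; the regular expressions and field names are placeholders, not the platform's actual extractors.

    # Hypothetical sketch of metadata extraction from a German court
    # decision. Patterns and field names are illustrative placeholders,
    # not the platform's actual extractors.
    import re

    text = "Urteil vom 12.03.2019 ... gem. § 823 Abs. 1 BGB ..."

    metadata = {
        # Dates in DD.MM.YYYY form.
        "dates": re.findall(r"\b\d{2}\.\d{2}\.\d{4}\b", text),
        # Statute citations such as "§ 823 Abs. 1 BGB".
        "citations": re.findall(r"§\s*\d+[a-z]?(?:\s+Abs\.\s*\d+)?\s+\w+", text),
    }
    print(metadata)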