Writing high-performance image processing code is challenging and labor-intensive. The Halide programming language simplifies this task by decoupling high-level algorithms from the schedules that optimize their implementation. However, even with this abstraction, it is still difficult for Halide programmers to understand complicated scheduling strategies and productively write valid, optimized schedules. To address this, we propose a programming support method called guided optimization. Guided optimization gives programmers a set of valid optimization options and interactive feedback about their current choices, enabling them to comprehend and efficiently optimize image processing code without the time-consuming trial-and-error cycle of traditional text editors. We implemented a proof-of-concept system, Roly-poly, which integrates guided optimization, program visualization, and schedule cost estimation to support the comprehension and development of efficient Halide image processing code. A user study with novice Halide programmers found that Roly-poly and its guided optimization were informative, increased productivity, and led to higher-performing schedules in less time.
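To make the algorithm/schedule split concrete, the sketch below rewrites the classic separable blur from the Halide tutorials using Halide's Python bindings; the specific schedule shown (tiling, vectorization, parallelism) is only one of many valid choices of the kind a tool like Roly-poly could surface, and the binding syntax should be treated as approximate rather than taken from this paper.

```python
import halide as hl

# Algorithm: a separable 3x3 box blur, written once, independent of how it runs.
x, y, xi, yi = hl.Var("x"), hl.Var("y"), hl.Var("xi"), hl.Var("yi")
input_image = hl.ImageParam(hl.Float(32), 2)
blur_x, blur_y = hl.Func("blur_x"), hl.Func("blur_y")
blur_x[x, y] = (input_image[x - 1, y] + input_image[x, y] + input_image[x + 1, y]) / 3.0
blur_y[x, y] = (blur_x[x, y - 1] + blur_x[x, y] + blur_x[x, y + 1]) / 3.0

# Schedule: one possible optimization strategy; changing these calls changes
# performance but never the result computed by the algorithm above.
blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y)
blur_x.compute_at(blur_y, x).vectorize(x, 8)
```

Deciding which of the many legal scheduling calls to apply, and in what order, is exactly the trial-and-error process that guided optimization aims to replace with enumerated options and interactive feedback.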
The Large Synoptic Survey Telescope (LSST) is an ambitious astronomical survey with a similarly ambitious Data Management component. Data Management for LSST includes processing on both nightly and yearly cadences to generate transient alerts, deep catalogs of the static sky, and forced-photometry light curves for billions of objects at hundreds of epochs, spanning at least a decade. The algorithms running in these pipelines are individually sophisticated and interact in subtle ways. This paper provides an overview of those pipelines, focusing more on those interactions than on the details of any individual algorithm.
The performance of objective image quality assessment (IQA) models has been evaluated primarily by comparing model predictions to human quality judgments. Perceptual datasets gathered for this purpose have provided useful benchmarks for improving IQA methods, but their heavy use creates a risk of overfitting. Here, we perform a large-scale comparison of IQA models in terms of their use as objectives for the optimization of image processing algorithms. Specifically, we use eleven full-reference IQA models to train deep neural networks for four low-level vision tasks: denoising, deblurring, super-resolution, and compression. Subjective testing on the optimized images allows us to rank the competing models in terms of their perceptual performance, elucidate their relative advantages and disadvantages in these tasks, and propose a set of desirable properties for incorporation into future IQA models.
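As a rough illustration of this experimental setup, the sketch below trains a toy denoiser in PyTorch with a stand-in loss marking where a differentiable full-reference IQA model (for example, an SSIM or LPIPS implementation) would be plugged in; the network, names, and hyperparameters are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy restoration network for the denoising task."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy):
        # Predict the residual noise and subtract it from the input.
        return noisy - self.body(noisy)

def iqa_objective(restored, reference):
    # Placeholder for a differentiable full-reference IQA model D(restored, reference);
    # plain MSE is used here only so the sketch runs. Lower is assumed to mean
    # perceptually closer to the reference.
    return torch.mean((restored - reference) ** 2)

net = TinyDenoiser()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

# Dummy batch: clean images and their noisy observations.
clean = torch.rand(4, 3, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)

for step in range(10):
    opt.zero_grad()
    loss = iqa_objective(net(noisy), clean)
    loss.backward()
    opt.step()
```

In the paper's setup, each of the eleven full-reference IQA models plays the role of this objective in turn, and the optimized outputs are then compared through subjective testing.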
In recent years, a wide variety of automated machine learning (AutoML) methods have been proposed to search and generate end-to-end learning pipelines. While these techniques facilitate the creation of models for real-world applications, given their black-box nature, the complexity of the underlying algorithms, and the large number of pipelines they derive, it is difficult for their developers to debug these systems. It is also challenging for machine learning experts to select an AutoML system that is well suited for a given problem or class of problems. In this paper, we present the PipelineProfiler, an interactive visualization tool that allows the exploration and comparison of the solution space of machine learning (ML) pipelines produced by AutoML systems. PipelineProfiler is integrated with Jupyter Notebook and can be used together with common data science tools to enable a rich set of analyses of the ML pipelines and provide insights about the algorithms that generated them. We demonstrate the utility of our tool through several use cases where PipelineProfiler is used to better understand and improve a real-world AutoML system. Furthermore, we validate our approach by presenting a detailed analysis of a think-aloud experiment with six data scientists who develop and evaluate AutoML tools.
Rust is a low-level programming language known for its unique approach to memory-safe systems programming and for its steep learning curve. To understand what makes Rust difficult to adopt, we surveyed the top Reddit and Hacker News posts and comments about Rust; from these online discussions, we identified three hypotheses about Rust's barriers to adoption. We found that certain key features, idioms, and integration patterns were not easily accessible to new users.
Existing deep models for code tend to be trained on syntactic program representations. We present an alternative, called Neural Attribute Grammars, that exposes the semantics of the target language to the training procedure using an attribute grammar. During training, our model learns to replicate the relationship between the syntactic rules used to construct a program and the semantic attributes (for example, symbol tables) constructed from the context in which the rules are fired. We implement the approach as a system for conditional generation of Java programs modulo eleven natural requirements. Our experiments show that the system generates constraint-abiding programs with significantly higher frequency than a baseline model trained on syntactic program representations, and that it also outperforms the baseline in terms of generation accuracy.
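To illustrate the general idea of a semantic attribute guiding generation (a generic sketch, not the paper's neural model), the snippet below threads a symbol table through a derivation and keeps only the variable uses that the surrounding context actually permits.

```python
# Generic illustration: during top-down generation, a semantic attribute --
# here a symbol table -- is carried alongside the syntactic rule being fired
# and used to filter out expansions that violate the context.

def expand_use_variable(symbol_table, candidate_names):
    """Return the candidate variable uses that are valid under the attribute.

    symbol_table: names declared so far on this derivation path.
    candidate_names: identifiers a syntax-only generator might propose.
    """
    return [name for name in candidate_names if name in symbol_table]

symbols = {"count", "total"}
print(expand_use_variable(symbols, ["count", "tmp", "total"]))  # ['count', 'total']
```

A purely syntactic generator carries no such attribute and can freely emit the undeclared `tmp`, which is the kind of constraint violation the trained model learns to avoid.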