JavaScript Object Notation (JSON) is a ubiquitous data-exchange format on the Web. Ingesting JSON documents can become a performance bottleneck due to the sheer volume of data. We are thus motivated to make JSON parsing as fast as possible. Despite the maturity of the JSON-parsing problem, we show that substantial speedups are possible. We present the first standard-compliant JSON parser to process gigabytes of data per second on a single core of a commodity processor, using a quarter or fewer of the instructions of a state-of-the-art reference parser such as RapidJSON. Unlike other validating parsers, our software (simdjson) makes extensive use of Single Instruction, Multiple Data (SIMD) instructions. To ensure reproducibility, simdjson is freely available as open-source software under a liberal license.
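As an illustration of the underlying technique (a minimal sketch, not simdjson's actual implementation), the C++ fragment below uses SSE2 intrinsics to locate every double-quote character in a 16-byte block with a handful of branch-free instructions; simdjson applies this style of vectorized classification to structural characters, whitespace, and escaped quotes over much wider registers.

    #include <cstdint>
    #include <cstdio>
    #include <emmintrin.h>  // SSE2 intrinsics

    // Returns a 16-bit mask whose bit i is set when block[i] == '"'.
    static uint32_t quote_mask(const char* block) {
        __m128i input  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(block));
        __m128i quotes = _mm_set1_epi8('"');
        __m128i eq     = _mm_cmpeq_epi8(input, quotes);  // 0xFF where bytes match
        return static_cast<uint32_t>(_mm_movemask_epi8(eq));
    }

    int main() {
        const char block[17] = "{\"key\": \"value\"}";  // exactly 16 bytes + NUL
        std::printf("quote mask: 0x%04x\n", static_cast<unsigned>(quote_mask(block)));
    }

Scanning 16 (or, with wider SIMD, 32 or 64) bytes per step and operating on bitmasks rather than on individual characters is what lets a validating parser approach the speed of a memory copy.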
With disks and networks delivering gigabytes per second, parsing decimal numbers from strings can become a bottleneck. We consider the problem of parsing a decimal number to the nearest binary floating-point value. The general problem requires variable-precision arithmetic; however, at most 17 significant digits are needed to represent a 64-bit standard floating-point number (IEEE 754), so the decimal significand fits in a single 64-bit word. By combining this significand with precomputed tables, we can compute the nearest floating-point number using as few as one or two 64-bit multiplications. Our implementation can be several times faster than the conventional functions in standard C libraries on modern 64-bit systems (Intel, AMD, ARM and POWER9). Our work is available as open-source software and is used by major systems such as Apache Arrow and Yandex ClickHouse. The Go standard library has adopted a version of our approach.
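A minimal sketch of the central arithmetic step (assuming a compiler such as GCC or Clang that provides unsigned __int128; the table entry shown is an illustrative placeholder, not a value from the paper's actual tables): the parsed decimal significand is multiplied by a precomputed 64-bit truncation of a power of five, and the high 64 bits of the 128-bit product supply the bits of the binary significand.

    #include <cstdint>

    // High 64 bits of a 64x64 -> 128-bit product. The full algorithm
    // also normalizes the significand, tracks the binary exponent,
    // resolves rounding, and falls back to a slower exact path when
    // the truncated product leaves the result ambiguous.
    static uint64_t high64(uint64_t a, uint64_t b) {
        unsigned __int128 product = static_cast<unsigned __int128>(a) * b;
        return static_cast<uint64_t>(product >> 64);
    }

    int main() {
        uint64_t w  = 17976931348623157ULL;  // decimal significand (<= 17 digits)
        uint64_t p5 = 0xB000000000000000ULL; // hypothetical truncated power-of-five entry
        uint64_t hi = high64(w, p5);         // approximate binary significand bits
        (void)hi;
    }

One such multiplication (occasionally a second, to refine the truncation) suffices in the common case, which is how the approach outruns the general-purpose strtod of C standard libraries.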
Knowledge Graphs (KGs) have emerged as the de facto standard for modeling and querying datasets with a graph-like structure in the Semantic Web domain. Our focus is on the performance challenges of querying KGs. We developed three informationally equivalent JSON-based representations for KGs, namely Subject-based Name/Value (JSON-SNV), Documents of Triples (JSON-DT), and Chain-based Name/Value (JSON-CNV). We analyzed the effects of these representations on query performance by storing them in two prominent document-based Data Management Systems (DMSs), MongoDB and Couchbase, and executing a set of benchmark queries over them. We also compared the execution times with those of row-store Virtuoso, column-store Virtuoso, and Blazegraph, three major DMSs with different architectures (also known as RDF stores). Our results indicate that the representation type has a significant impact on query performance. For instance, JSON-SNV outperforms the others by nearly one order of magnitude on subject-subject join queries. This and the other results presented in this paper can assist in more accurate benchmarking of emerging DMSs.
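The abstract does not spell out the exact document layouts, but as a rough, hypothetical illustration of the naming: a JSON-SNV document might group all predicate/object pairs under a single subject,

    {"subject": "ex:Alice", "ex:knows": "ex:Bob", "ex:age": 30}

while a JSON-DT document might store one triple per document,

    {"s": "ex:Alice", "p": "ex:knows", "o": "ex:Bob"}

Which fields are co-located within one document determines which joins the DMS can answer without cross-document lookups, which would be consistent with a subject-grouped representation excelling at subject-subject joins.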
Frequent Subgraph Mining (FSM) is a key task in many graph mining and machine learning applications, and numerous systems have been proposed for FSM in the past decade. Although these systems perform well for small patterns (no more than four vertices), we found that they struggle to mine larger patterns. In this work, we propose a novel two-vertex exploration strategy to accelerate the mining process. Compared with the single-vertex exploration adopted by previous systems, two-vertex exploration avoids excessive memory consumption and significantly reduces memory-access overhead. We further improve performance with an index-based quick-pattern technique that reduces the overhead of isomorphism checks, and a subgraph-sampling technique that mitigates subgraph explosion. Experimental results show that our system achieves significant speedups over state-of-the-art graph pattern mining systems and supports larger pattern-mining tasks that none of the existing systems can handle.
Energy costs are rising quickly in large-scale data centers and are projected to soon overtake the cost of hardware. As a result, data center operators have recently started turning to more energy-friendly hardware. Despite the growing body of research on power management techniques, there has been little work to date on energy efficiency from a data management software perspective. In this paper, we argue that hardware-only approaches are only part of the solution, and that data management software will be key to optimizing for energy efficiency. We discuss the problems arising from growing energy use in data centers and the trends that point to an increasing set of opportunities for software-level optimizations. Using two simple experiments, we illustrate the potential of such optimizations and, motivated by these examples, discuss general approaches for reducing energy waste. Lastly, we point out existing places within database systems that are promising for energy-efficiency optimizations and urge the data management systems community to shift focus from performance-oriented research to energy-efficient computing.
Most unsupervised dependency parsers are based on first-order probabilistic generative models that consider only local parent-child information. Inspired by second-order supervised dependency parsing, we propose a second-order extension of unsupervised neural dependency models that incorporates grandparent-child or sibling information. We also propose a novel design for the neural parameterization and optimization of these models. In second-order models, the number of grammar rules grows cubically with the vocabulary size, making it difficult to train lexicalized models that may contain thousands of words. To circumvent this problem while still benefiting from both second-order parsing and lexicalization, we use the agreement-based learning framework to jointly train a second-order unlexicalized model and a first-order lexicalized model. Experiments on multiple datasets show the effectiveness of our second-order models compared with recent state-of-the-art methods. Our joint model achieves a 10% improvement over the previous state-of-the-art parser on the full WSJ test set.
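For context, agreement-based learning (in the style of Liang et al.) jointly trains two latent-variable models by rewarding them for placing probability mass on the same latent structure; a sketch of the kind of objective involved (the paper's exact formulation may differ), with z ranging over dependency trees of sentence x:

    \max_{\theta_1, \theta_2} \; \sum_{x} \log \sum_{z} p_1(x, z; \theta_1) \, p_2(x, z; \theta_2)

Here p_1 would be the second-order unlexicalized model and p_2 the first-order lexicalized model, so the cubic blow-up in grammar rules is confined to the unlexicalized grammar while lexical information still shapes the shared posterior over trees.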