
Robustness against Read Committed for Transaction Templates

Added by Brecht Vandevoort
Publication date: 2021
Language: English





The isolation level Multiversion Read Committed (RC), offered by many database systems, is known to trade consistency for increased transaction throughput. Sometimes, transaction workloads can be safely executed under RC obtaining the perfect isolation of serializability at the lower cost of RC. To identify such cases, we introduce an expressive model of transaction programs to better reason about the serializability of transactional workloads. We develop tractable algorithms to decide whether any possible schedule of a workload executed under RC is serializable (referred to as the robustness problem). Our approach yields robust subsets that are larger than those identified by previous methods. We provide experimental evidence that workloads that are robust against RC can be evaluated faster under RC compared to stronger isolation levels. We discuss techniques for making workloads robust against RC by promoting selective read operations to updates. Depending on the scenario, the performance improvements can be considerable. Robustness testing and safely executing transactions under the lower isolation level RC can therefore provide a direct way to increase transaction throughput without changing DBMS internals.
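The following Python sketch illustrates the promotion idea mentioned above: rewriting selected reads in a transaction template into locking reads (SELECT ... FOR UPDATE) so the template behaves as an update and the workload can become robust against RC. The template format, the promote_reads helper, and the transfer example are our own illustrative assumptions, not the paper's actual tooling.

```python
# Illustrative sketch (not the paper's tooling): promoting selected reads in a
# transaction template to updates. A "template" here is just an ordered list
# of parameterized SQL statements.

def promote_reads(template, statements_to_promote):
    """Return a copy of the template in which the chosen SELECTs become
    locking reads (SELECT ... FOR UPDATE), i.e. are promoted to updates."""
    promoted = []
    for i, stmt in enumerate(template):
        if i in statements_to_promote and stmt.strip().upper().startswith("SELECT"):
            promoted.append(stmt.rstrip().rstrip(";") + " FOR UPDATE;")
        else:
            promoted.append(stmt)
    return promoted

# Hypothetical transfer template: read two balances, then update them.
transfer = [
    "SELECT balance FROM accounts WHERE id = :src;",
    "SELECT balance FROM accounts WHERE id = :dst;",
    "UPDATE accounts SET balance = balance - :amt WHERE id = :src;",
    "UPDATE accounts SET balance = balance + :amt WHERE id = :dst;",
]

# Promote the first read so the source row is write-locked from the start.
print(promote_reads(transfer, {0}))
```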



Related research

A blockchain is an append-only linked-list of blocks, which is maintained at each participating node. Each block records a set of transactions and their associated metadata. Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto as a peer-to-peer digital-commodity (also known as crypto-currency) exchange system. Blockchains received traction due to their inherent property of immutability: once a block is accepted, it cannot be reverted.
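As a minimal illustration of the append-only linked-list structure described above, the Python sketch below chains blocks by embedding each predecessor's hash in its successor. The field names and the make_block helper are illustrative assumptions, not taken from the cited work.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Create a block recording a set of transactions plus linking metadata."""
    header = {"prev_hash": prev_hash, "transactions": transactions}
    block_hash = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {**header, "hash": block_hash}

# Append-only chain: each block points to the hash of the previous block, so
# altering an accepted block would invalidate every block that follows it.
genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 10"], prev_hash=genesis["hash"])
chain = [genesis, block1]
```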
Xiufeng Liu, 2014
In data warehousing, Extract-Transform-Load (ETL) regularly extracts data from source systems into a central data warehouse to support business decision-making. Data from transaction processing systems are characterized by frequent insertions, updates, and deletions. It is challenging for ETL to propagate these changes to the data warehouse and maintain the change history. Moreover, ETL jobs typically run in a sequential order when processing data with dependencies, which is not optimal, e.g., when processing early-arriving data. In this paper, we propose a two-level data staging ETL for handling transaction data. The proposed method detects the changes in the data from transactional processing systems, identifies the corresponding operation codes for the changes, and uses two staging databases to facilitate the data processing in an ETL process. The proposed ETL provides a one-stop method for processing fast-changing, slowly-changing, and early-arriving data.
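A minimal sketch of the two-level staging idea, under our own assumptions (table names, the I/U/D operation codes, and the consolidation rule are illustrative, not the paper's implementation): detected changes are tagged with an operation code in a first staging table, then consolidated into a second staging table before loading.

```python
import sqlite3

# Toy two-level staging: staging1 holds raw change records tagged with an
# operation code, staging2 holds the consolidated state to load downstream.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging1(key INTEGER, value TEXT, op CHAR(1));   -- raw changes
    CREATE TABLE staging2(key INTEGER PRIMARY KEY, value TEXT);   -- consolidated
""")

changes = [(1, "a", "I"), (2, "b", "I"), (1, "a2", "U"), (2, None, "D")]
conn.executemany("INSERT INTO staging1 VALUES (?, ?, ?)", changes)

# Consolidate: apply inserts and updates, drop deleted keys.
for key, value, op in conn.execute("SELECT key, value, op FROM staging1").fetchall():
    if op in ("I", "U"):
        conn.execute("INSERT OR REPLACE INTO staging2 VALUES (?, ?)", (key, value))
    elif op == "D":
        conn.execute("DELETE FROM staging2 WHERE key = ?", (key,))

print(conn.execute("SELECT * FROM staging2").fetchall())  # [(1, 'a2')]
```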
H. Zhou, J. W. Guo, H. Q. Hu, 2019
Transaction logging is an essential component for guaranteeing atomicity and durability in online transaction processing (OLTP) systems. It always has a considerable impact on performance, especially in an in-memory database system. Conventional implementations of logging rely heavily on a centralized design, which guarantees the correctness of recovery by enforcing a total order over all operations such as log sequence number (LSN) allocation, log persistence, transaction commit, and recovery. This strict sequential constraint seriously limits the scalability and parallelism of transaction logging and recovery, especially on multi-core hardware. In this paper, we define recoverability for transaction logging and demonstrate its correctness for crash recovery. Based on recoverability, we propose a recoverable logging scheme named Poplar, which enables scalable and parallel log processing by relaxing these restrictions. Its main advantages are that (1) Poplar enables parallel log persistence on multiple storage devices; (2) it replaces centralized LSN allocation by computing a partially ordered sequence number in a distributed manner, which allows log records to track only RAW and WAW dependencies among transactions; (3) it only requires transactions with RAW dependencies to commit in serial order; and (4) Poplar can concurrently restore a consistent database state from the partially constrained logs after a crash. Experimental results show that Poplar scales well with the number of IO devices and outperforms other logging approaches on both SSDs and emulated non-volatile memory.
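The partially ordered sequence-number idea can be sketched as follows (our own toy, not Poplar's code): a record's number only has to dominate the numbers of the records it conflicts with through RAW or WAW dependencies, so independent records can be numbered and persisted in parallel. The assign_psn helper and the T1/T2/T3 example are hypothetical.

```python
# Toy sketch of partially ordered sequence numbers (PSNs): a record's PSN only
# needs to exceed the PSNs of the records it depends on (RAW/WAW conflicts),
# so records without mutual dependencies can be numbered independently.

def assign_psn(records):
    """records: dict mapping a record name to the names it depends on."""
    psn = {}

    def number(name):
        if name not in psn:
            deps = records[name]
            psn[name] = (max(number(d) for d in deps) + 1) if deps else 1
        return psn[name]

    for name in records:
        number(name)
    return psn

# T1 and T2 are independent; T3 read or overwrote data written by both.
print(assign_psn({"T1": [], "T2": [], "T3": ["T1", "T2"]}))
# -> {'T1': 1, 'T2': 1, 'T3': 2}: T1 and T2 may persist in parallel, in any order.
```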
In recent years, emerging hardware storage technologies have focused on divergent goals: better performance or lower cost-per-bit of storage. Correspondingly, data systems that employ these new technologies are optimized either to be fast (but expensive) or cheap (but slow). We take a different approach: by combining multiple tiers of fast and low-cost storage technologies within the same system, we can achieve a Pareto-efficient balance between performance and cost-per-bit. This paper presents the design and implementation of PrismDB, a novel log-structured merge tree based key-value store that exploits a full spectrum of heterogeneous storage technologies (from 3D XPoint to QLC NAND). We introduce the notion of read-awareness to log-structured merge trees, which allows hot objects to be pinned to faster storage, achieving better tiering and hot-cold separation of objects. Compared to the standard use of RocksDB on flash in datacenters today, PrismDB's average throughput on heterogeneous storage is 2.3× faster and its tail latency is more than an order of magnitude better, using hardware that is half the cost.
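A toy sketch of the read-awareness idea, under our own assumptions (the TieredStore class, its capacity parameter, and the rebalancing rule are illustrative, not PrismDB's API): track per-key read counts and pin the hottest keys to the fast tier while colder keys stay on the cheap tier.

```python
from collections import Counter

# Toy read-aware tiering: frequently read ("hot") keys are pinned to the fast
# tier (e.g. 3D XPoint); everything else stays on the cheap tier (QLC NAND).
class TieredStore:
    def __init__(self, hot_capacity):
        self.hot_capacity = hot_capacity
        self.reads = Counter()
        self.fast_tier, self.slow_tier = {}, {}

    def put(self, key, value):
        (self.fast_tier if key in self.fast_tier else self.slow_tier)[key] = value

    def get(self, key):
        self.reads[key] += 1
        self._rebalance()
        return self.fast_tier.get(key, self.slow_tier.get(key))

    def _rebalance(self):
        hot = {k for k, _ in self.reads.most_common(self.hot_capacity)}
        for key in list(self.slow_tier):
            if key in hot:                       # promote a key that became hot
                self.fast_tier[key] = self.slow_tier.pop(key)
        for key in list(self.fast_tier):
            if key not in hot:                   # demote a key that cooled down
                self.slow_tier[key] = self.fast_tier.pop(key)

store = TieredStore(hot_capacity=2)
store.put("a", 1); store.put("b", 2); store.put("c", 3)
for _ in range(5):
    store.get("a")                               # "a" becomes hot and is promoted
```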