The isolation level Multiversion Read Committed (RC), offered by many database systems, is known to trade consistency for increased transaction throughput. Some transaction workloads, however, can be safely executed under RC, obtaining the perfect isolation of serializability at the lower cost of RC. To identify such cases, we introduce an expressive model of transaction programs to better reason about the serializability of transactional workloads. We develop tractable algorithms to decide whether every possible schedule of a workload executed under RC is serializable (referred to as the robustness problem). Our approach yields robust subsets that are larger than those identified by previous methods. We provide experimental evidence that workloads that are robust against RC can be evaluated faster under RC than under stronger isolation levels. We discuss techniques for making workloads robust against RC by promoting selective read operations to updates. Depending on the scenario, the performance improvements can be considerable. Robustness testing and safely executing transactions under the lower isolation level RC can therefore provide a direct way to increase transaction throughput without changing DBMS internals.
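To give a feel for the promotion technique mentioned above, the following Python sketch (not taken from the paper) shows a read promoted to an update via `SELECT ... FOR UPDATE` in a transaction run under Read Committed. The `accounts` table, column names, and connection string are hypothetical; the point is only that locking the rows at read time rules out the lost-update anomaly that plain RC would otherwise allow for this program.

```python
# Minimal sketch, assuming PostgreSQL via psycopg2 and a hypothetical
# accounts(id, balance) table. Not the paper's algorithm, just an
# illustration of promoting a read to an update under Read Committed.
import psycopg2


def transfer(conn, src, dst, amount):
    """Move `amount` from account `src` to `dst` inside one RC transaction."""
    with conn.cursor() as cur:
        # Promoted read: FOR UPDATE locks both rows, so no concurrent
        # transaction can change them between this read and our writes.
        cur.execute(
            "SELECT id, balance FROM accounts WHERE id IN (%s, %s) FOR UPDATE",
            (src, dst),
        )
        balances = dict(cur.fetchall())
        if balances[src] < amount:
            raise ValueError("insufficient funds")
        cur.execute(
            "UPDATE accounts SET balance = balance - %s WHERE id = %s",
            (amount, src),
        )
        cur.execute(
            "UPDATE accounts SET balance = balance + %s WHERE id = %s",
            (amount, dst),
        )
    conn.commit()


if __name__ == "__main__":
    conn = psycopg2.connect("dbname=bank")  # hypothetical DSN
    conn.set_session(isolation_level="READ COMMITTED")
    transfer(conn, 1, 2, 50)
    conn.close()
```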
In an $r$-uniform hypergraph on $n$ vertices, a tight Hamilton cycle consists of $n$ edges such that there exists a cyclic ordering of the vertices where the edges correspond to consecutive segments of $r$ vertices. We provide a first deterministic polynomial time algorithm, which finds a.a.s. tight Hamilton cycles in random $r$-uniform hypergraphs with edge probability at least $C\log^3 n/n$. Our result partially answers a question of Dudek and Frieze [Random Structures & Algorithms 42 (2013), 374-385], who proved that tight Hamilton cycles exist already for $p=\omega(1/n)$ for $r=3$ and $p=(e + o(1))/n$ for $r\ge 4$ using a second moment argument. Moreover, our algorithm is superior to previous results of Allen, Böttcher, Kohayakawa and Person [Random Structures & Algorithms 46 (2015), 446-465] and Nenadov and Škorić [arXiv:1601.04034] in various ways: the algorithm of Allen et al. is a randomised polynomial time algorithm working for edge probabilities $p\ge n^{-1+\varepsilon}$, while the algorithm of Nenadov and Škorić is a randomised quasipolynomial time algorithm working for edge probabilities $p\ge C\log^8 n/n$.
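For concreteness, the definition in the first sentence can be checked mechanically: a cyclic ordering of all $n$ vertices is a tight Hamilton cycle exactly when every window of $r$ consecutive vertices (wrapping around) is an edge. The short Python sketch below verifies this property for a given ordering; it is only an illustration of the definition, not the paper's search algorithm, and all names are illustrative.

```python
# Sketch: check whether a cyclic ordering of the vertices forms a tight
# Hamilton cycle in an r-uniform hypergraph given as a set of frozenset edges.
from itertools import combinations


def is_tight_hamilton_cycle(ordering, edges, r):
    """ordering: list of all n vertices in cyclic order;
    edges: set of frozensets, each of size r."""
    n = len(ordering)
    if n < r:
        return False
    for i in range(n):
        # Window of r consecutive vertices, wrapping around the cycle.
        window = frozenset(ordering[(i + j) % n] for j in range(r))
        if len(window) != r or window not in edges:
            return False
    return True


# Example: the complete 3-uniform hypergraph on 5 vertices contains a tight
# Hamilton cycle under any cyclic ordering of its vertices.
vertices = list(range(5))
complete_3_uniform = {frozenset(e) for e in combinations(vertices, 3)}
assert is_tight_hamilton_cycle(vertices, complete_3_uniform, 3)
```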