
Execution of NVRAM Programs with Persistent Stack

Published by: Ilya Kokorin
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

Non-Volatile Random Access Memory (NVRAM) is a novel type of hardware that combines the benefits of traditional persistent memory (data that survives hardware failures) and DRAM (fast random access). In this work, we describe an algorithm that can be used to execute NVRAM programs and recover the system after a hardware failure while taking the architecture of real-world NVRAM systems into account. Moreover, the algorithm can be used to execute NVRAM-destined programs on commodity persistent hardware, such as hard drives. This allows NVRAM algorithms to be tested using only cheap hardware, without access to actual NVRAM. We report on using our algorithm to implement and test an NVRAM CAS algorithm.
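The emulation idea in the abstract, running NVRAM-destined code against ordinary persistent storage, can be hinted at with a minimal sketch (not the paper's algorithm): a stack that logs every operation to an append-only file and rebuilds its in-memory state from that log after a restart. The class name, file layout, and the use of flush() in place of real persistence fences are simplifying assumptions made only for illustration.

```cpp
// Minimal sketch (not the paper's algorithm): a stack whose contents are
// logged to an append-only file so that a restarted process can rebuild its
// state. The file layout and the use of flush() instead of real persistence
// fences (or fsync) are simplifying assumptions.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

class FileBackedStack {
public:
    explicit FileBackedStack(const std::string& path) : path_(path) {
        recover();  // rebuild in-memory state from the persistent log
    }

    void push(int64_t v) {
        append_record('P', v);   // persist first, then apply in memory
        data_.push_back(v);
    }

    bool pop(int64_t* out) {
        if (data_.empty()) return false;
        *out = data_.back();
        append_record('O', *out);
        data_.pop_back();
        return true;
    }

private:
    void append_record(char op, int64_t v) {
        std::ofstream f(path_, std::ios::binary | std::ios::app);
        f.write(&op, 1);
        f.write(reinterpret_cast<const char*>(&v), sizeof v);
        f.flush();  // stand-in for the flushes/fences a real NVRAM system needs
    }

    void recover() {
        std::ifstream f(path_, std::ios::binary);
        char op;
        int64_t v;
        while (f.read(&op, 1) &&
               f.read(reinterpret_cast<char*>(&v), sizeof v)) {
            if (op == 'P') data_.push_back(v);
            else if (op == 'O' && !data_.empty()) data_.pop_back();
        }
    }

    std::string path_;
    std::vector<int64_t> data_;
};

int main() {
    FileBackedStack s("stack.log");  // state survives process restarts
    s.push(42);
    int64_t v;
    if (s.pop(&v)) std::cout << "popped " << v << "\n";
}
```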



Read also

Flat combining (FC) is a synchronization paradigm in which a single thread, holding a global lock, collects requests by multiple threads for accessing a concurrent data structure and applies their combined requests to it. Although FC is sequential, it significantly reduces synchronization overheads and cache invalidations and thus often provides better performance than that of lock-free implementations. The recent emergence of non-volatile memory (NVM) technologies increases the interest in the development of persistent (a.k.a. durable or recoverable) objects. These are objects that are able to recover from system failures and ensure consistency by retaining their state in NVM and fixing it, if required, upon recovery. Of particular interest are detectable objects that, in addition to ensuring consistency, allow recovery code to infer if a failed operation took effect before the crash and, if it did, obtain its response. In this work, we present the first FC-based persistent object. Specifically, we introduce a detectable FC-based implementation of a concurrent LIFO stack object. Our empirical evaluation establishes that thanks to the usage of flat combining, the novel stack algorithm requires a much smaller number of costly persistence instructions than competing algorithms and is therefore able to significantly outperform them.
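As a rough illustration of the flat-combining paradigm described above (volatile memory only; the detectability and persistence instructions the paper adds are omitted), the sketch below lets each thread publish a request in a per-thread slot, and whichever thread acquires the global lock applies all pending requests to a sequential stack. All names and layout choices are illustrative assumptions.

```cpp
// Rough flat-combining sketch for a stack (volatile memory only; the paper's
// detectability and persistence instructions are omitted). Each thread
// publishes a request in its slot; whichever thread acquires the global lock
// applies every pending request to the sequential stack.
#include <atomic>
#include <mutex>
#include <optional>
#include <vector>

struct Request {
    enum Op { NONE, PUSH, POP };
    Op op = NONE;
    int arg = 0;
    std::optional<int> result;
    std::atomic<bool> pending{false};
};

class FlatCombiningStack {
public:
    explicit FlatCombiningStack(int max_threads) : slots_(max_threads) {}

    void push(int tid, int v) { submit(tid, Request::PUSH, v); }
    std::optional<int> pop(int tid) { return submit(tid, Request::POP, 0); }

private:
    std::optional<int> submit(int tid, Request::Op op, int arg) {
        Request& r = slots_[tid];
        r.op = op;
        r.arg = arg;
        r.result.reset();
        r.pending.store(true, std::memory_order_release);  // publish request

        // Spin until the request is served, becoming the combiner if possible.
        while (r.pending.load(std::memory_order_acquire)) {
            if (lock_.try_lock()) {
                combine();
                lock_.unlock();
            }
        }
        return r.result;
    }

    // The holder of the global lock serves all published requests.
    void combine() {
        for (Request& r : slots_) {
            if (!r.pending.load(std::memory_order_acquire)) continue;
            if (r.op == Request::PUSH) {
                stack_.push_back(r.arg);
            } else if (r.op == Request::POP && !stack_.empty()) {
                r.result = stack_.back();
                stack_.pop_back();
            }
            r.pending.store(false, std::memory_order_release);
        }
    }

    std::vector<Request> slots_;  // one publication slot per thread
    std::vector<int> stack_;      // the sequential stack being protected
    std::mutex lock_;
};
```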
Coping with the intermittency of renewables is a fundamental challenge, with load shifting and grid-scale storage as key responses. We propose Information Batteries (IB), in which energy is stored in the form of information -- specifically, the results of completed computational tasks. Information Batteries thus provide storage through speculative load shifting, anticipating computation that will be performed in the future. We take a distributed systems perspective, and evaluate the extent to which an IB storage system can be made practical through augmentation of compiler toolchains, key-value stores, and other important elements in modern hyper-scale compute. In particular, we implement one specific IB prototype by augmenting the Rust compiler to enable transparent function-level precomputation and caching. We evaluate the overheads this imposes, along with macro-level job prediction and power prediction. We also evaluate the space of operation for an IB system, to identify the best case efficiency of any IB system for a given power and compute regime.
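The "store completed computation" idea can be sketched by hand: a pure function whose results are keyed by a hash of its inputs and kept in a stand-in key-value store, so work done earlier (when energy was cheap) becomes a lookup later. This is only a hand-written analogue of what the paper's Rust compiler pass does transparently; all names below are invented for illustration.

```cpp
// Illustrative analogue (not the paper's compiler pass) of function-level
// caching: results of a pure function are keyed by a hash of its inputs and
// kept in a stand-in key-value store, so work done when energy was cheap can
// be answered by a lookup later.
#include <cstddef>
#include <functional>
#include <iostream>
#include <unordered_map>

static std::unordered_map<std::size_t, long long> result_store;  // stand-in KV store

// An expensive pure function that we would like to precompute speculatively.
long long expensive(long long n) {
    long long acc = 0;
    for (long long i = 0; i < n; ++i) acc += i * i;
    return acc;
}

// The kind of wrapper a compiler pass could generate around the call site.
long long cached_expensive(long long n) {
    std::size_t key = std::hash<long long>{}(n);
    auto it = result_store.find(key);
    if (it != result_store.end()) return it->second;  // hit: no compute spent now
    long long r = expensive(n);
    result_store[key] = r;                            // "charge the battery"
    return r;
}

int main() {
    cached_expensive(1000000);                        // speculative precomputation
    std::cout << cached_expensive(1000000) << "\n";   // later call is just a lookup
}
```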
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise $\min_k \sum_v c_{kv} x_v$ subject to $\sum_v a_{iv} x_v \le 1$ for each $i$ and $x_v \ge 0$ for each $v$. Here $c_{kv} \ge 0$, $a_{iv} \ge 0$, and the support sets $V_i = \{v : a_{iv} > 0\}$, $V_k = \{v : c_{kv} > 0\}$, $I_v = \{i : a_{iv} > 0\}$ and $K_v = \{k : c_{kv} > 0\}$ have bounded size. In the distributed setting, each agent $v$ is responsible for choosing the value of $x_v$, and the communication network is a hypergraph $\mathcal{H}$ where the sets $V_k$ and $V_i$ constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if $|V_i|$ and $|V_k|$ are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in $\mathcal{H}$.
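For readability, the max-min LP described above can be written out in display form (this is only the same program restated, not new material from the paper):

```latex
% The max-min LP from the abstract above, written out in display form.
\begin{align*}
\text{maximise}   \quad & \min_{k} \sum_{v} c_{kv} x_v \\
\text{subject to} \quad & \sum_{v} a_{iv} x_v \le 1 && \text{for each } i,\\
                        & x_v \ge 0                 && \text{for each } v.
\end{align*}
```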
We present a novel bottom-up method for the synthesis of functional recursive programs. While bottom-up synthesis techniques can work better than top-down methods in certain settings, there is no prior technique for synthesizing recursive programs from logical specifications in a purely bottom-up fashion. The main challenge is that effective bottom-up methods need to execute sub-expressions of the code being synthesized, but it is impossible to execute a recursive sub-expression of a program that has not been fully constructed yet. In this paper, we address this challenge using the concept of angelic semantics. Specifically, our method finds a program that satisfies the specification under angelic semantics (we refer to this as angelic synthesis), analyzes the assumptions made during its angelic execution, uses this analysis to strengthen the specification, and finally reattempts synthesis with the strengthened specification. Our proposed angelic synthesis algorithm is based on version space learning and therefore deals effectively with many incremental synthesis calls made during the overall algorithm. We have implemented this approach in a prototype called Burst and evaluate it on synthesis problems from prior work. Our experiments show that Burst is able to synthesize a solution to 95% of the benchmarks in our benchmark suite, outperforming prior work.
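A toy sketch (not Burst itself) of the angelic-evaluation idea: when checking a recursive candidate such as f(n) = n * f(n - 1) against input-output examples, the recursive sub-call is never executed; an "angelic" oracle answers it with the value recorded in the specification, and the values it supplies are exactly the assumptions the synthesizer would later have to discharge. The spec, the candidate program, and every name below are invented for this illustration.

```cpp
// Toy illustration (not Burst itself) of angelic evaluation: the recursive
// sub-call is not executed; an "angelic" oracle answers it from the
// input-output examples in the specification.
#include <iostream>
#include <map>
#include <optional>
#include <string>

// Specification for the target function f, given as input-output examples.
static const std::map<int, int> spec = {{0, 1}, {1, 1}, {2, 2}, {3, 6}, {4, 24}};

// Angelic oracle: answer a recursive call from the spec instead of recursing.
std::optional<int> angelic_call(int arg) {
    auto it = spec.find(arg);
    if (it == spec.end()) return std::nullopt;  // no assumption can be made
    return it->second;
}

// Candidate program "f(n) = n * f(n - 1)" evaluated under angelic semantics.
std::optional<int> eval_candidate(int n) {
    if (n == 0) return 1;                       // candidate's base case
    auto rec = angelic_call(n - 1);             // the angel supplies f(n - 1)
    if (!rec) return std::nullopt;
    return n * *rec;
}

int main() {
    // Check the candidate on every example; the values the angel supplied are
    // the assumptions that would later be used to strengthen the specification.
    for (const auto& [in, out] : spec) {
        std::optional<int> got = eval_candidate(in);
        std::cout << "f(" << in << ") = "
                  << (got ? std::to_string(*got) : std::string("?"))
                  << ((got && *got == out) ? "  matches spec" : "  mismatch")
                  << "\n";
    }
}
```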
Xavier Grehant, J.M. Dana (2008)
Platform virtualization helps solve major grid computing challenges: sharing resources through flexible, user-controlled, custom execution environments while isolating failures and malicious code. Grid resource management tools will evolve to embrace support for virtual resources. We present two open source projects that transparently supply virtual execution environments. Tycoon, developed at HP Labs, optimises resource usage by creating an economy where users bid for access to virtual machines and compete for CPU cycles. SmartDomains provides a peer-to-peer layer that automates virtual machine deployment using a description language and deployment engine from HP Labs. These projects demonstrate both client-server and peer-to-peer approaches to virtual resource management. The first makes extensive use of virtual machine features for dynamic resource allocation. The second translates virtual machine capabilities into a sophisticated language where resource management components can be plugged into configurations and architectures defined at deployment time. We propose to share our experience at CERN openlab developing SmartDomains and deploying Tycoon to give an illustrative introduction to emerging research in virtual resource management.