Data integrity is important for non-volatile memory (NVM) systems, which retain data even without power. Data integrity in NVM can be compromised by integrity attacks, which are typically defended against by integrity verification using integrity trees. After NVM system failures and reboots, the integrity tree root is responsible for re-establishing a trusted execution environment. However, the root often becomes a performance bottleneck, since updating it incurs high latency on the write critical path to propagate modifications from the leaf nodes up to the root. Moreover, the root and leaf nodes must remain crash consistent with each other to avoid update failures that could cause attacks to be misreported after system reboots. In this paper, we propose an efficient and low-latency scheme, called SCUE, that directly updates the root of the SGX integrity tree (SIT) while skipping updates to the intermediate tree nodes. SCUE exploits the observation that, because the cached intermediate tree nodes are lost on a crash, only the persistent leaf nodes and the root are useful for ensuring integrity after system failures and reboots. To achieve crash consistency between the root and the leaf nodes, we accurately predict the updates to the root and pre-update the root before the leaf nodes are modified. Moreover, the SIT root is difficult to reconstruct from the leaf nodes, since updating one tree node requires its parent node as input; we therefore use a counter-summing approach to reconstruct the SIT from the leaf nodes. Our evaluation results show that, compared with state-of-the-art integrity tree update schemes, SCUE delivers high performance while ensuring system integrity.
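To make the counter-summing and root pre-update ideas concrete, the following C++ sketch models a drastically simplified integrity tree in which each node holds a single counter and the root counter is assumed to equal the sum of the leaf counters; the struct and function names are illustrative assumptions, not SCUE's actual interface.

```cpp
// Minimal sketch, assuming a simplified SIT where each node keeps one counter
// and the root counter can be derived from the sum of all leaf counters.
// Names (LeafNode, Root, pre_update_root, verify_after_reboot) are illustrative.
#include <cstdint>
#include <vector>

struct LeafNode { uint64_t counter; };   // persisted in NVM
struct Root     { uint64_t counter; };   // persisted root (e.g., in NVM or a register)

// Pre-update the root before the leaf itself is modified: the new root
// counter is predicted from the increment the leaf write will apply.
void pre_update_root(Root& root, uint64_t leaf_increment) {
    root.counter += leaf_increment;      // persist the root first (ordering elided)
}

// After a crash, the cached intermediate nodes are lost; rebuild the expected
// root value by summing the persisted leaf counters and compare it with the
// persisted root to detect tampering or an interrupted update.
bool verify_after_reboot(const Root& root, const std::vector<LeafNode>& leaves) {
    uint64_t sum = 0;
    for (const auto& l : leaves) sum += l.counter;
    return sum == root.counter;
}
```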
In this paper, we propose MgX, a near-zero-overhead memory protection scheme for hardware accelerators. MgX minimizes the performance overhead of off-chip memory encryption and integrity verification by exploiting the application-specific nature of accelerators. Accelerators tend to explicitly manage data movement between on-chip and off-chip memory, typically at an object granularity that is much larger than a cache line. Exploiting these accelerator-specific characteristics, MgX generates the version numbers used in memory encryption and integrity verification from on-chip state alone, without storing them in memory, and customizes the protection granularity to match the granularity used by the accelerator. To demonstrate the applicability of MgX, we present an in-depth study of MgX for deep neural networks (DNNs) and also describe implementations for H.264 video decoding and genome alignment. Experimental results show that applying MgX incurs less than 1% performance overhead for both DNN inference and training on state-of-the-art DNN architectures.
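The sketch below illustrates, under simplifying assumptions, the core idea of deriving version numbers from on-chip state at object granularity; the per-object epoch table, the function names, and the toy keystream are illustrative only, not MgX's actual design, which would use a real block cipher and MAC.

```cpp
// Minimal sketch: the version used as an encryption/integrity nonce is derived
// purely from on-chip state (here, a per-object write-epoch counter) instead of
// being stored in off-chip memory, and protection is applied per object.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct OnChipState {
    std::unordered_map<uint32_t, uint64_t> epoch;  // object id -> write epoch
};

// Derive the current version for an object from on-chip state only
// (a decrypt/verify path would call this instead of fetching a stored counter).
uint64_t version(const OnChipState& s, uint32_t obj_id) {
    auto it = s.epoch.find(obj_id);
    return it == s.epoch.end() ? 0 : it->second;
}

// Toy counter-mode-style transform; a real design would use AES-CTR with
// (obj_id, version, block offset) as the nonce plus a MAC for integrity.
void protect_object(OnChipState& s, uint32_t obj_id, std::vector<uint8_t>& data) {
    uint64_t v = ++s.epoch[obj_id];                 // bump the epoch on each write
    for (size_t i = 0; i < data.size(); ++i)
        data[i] ^= static_cast<uint8_t>(((v + i) * 0x9E3779B97F4A7C15ULL) >> 56);
}
```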
Energy harvesting is an attractive way to power future IoT devices, since it can eliminate the need for batteries or power cables. However, harvested energy is intrinsically unstable. While FPGAs have been widely adopted in various embedded systems, they struggle to survive unstable power because all of their memory components are based on volatile SRAM. Emerging non-volatile-memory-based FPGAs offer the promise of keeping configuration data on chip during power outages, yet few works have considered efficient runtime checkpointing of intermediate data on non-volatile FPGAs. To enable accumulative computation under intermittent power on FPGAs, this paper proposes a low-cost design framework, Data-Flow-Tracking FPGA (DFT-FPGA), which uses binary counters to track intermediate data flow. Instead of keeping all on-chip intermediate data, DFT-FPGA targets only the necessary data, which is labeled by off-line analysis and identified by an online tracking system. The evaluation shows that, compared with state-of-the-art techniques, DFT-FPGA realizes accumulative computing with less off-line workload and significantly reduces online roll-back time and resource utilization.
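A minimal sketch of the data-flow-tracking idea follows, assuming one binary counter per buffer that off-line analysis has marked as necessary; the structure and function names are illustrative rather than DFT-FPGA's actual hardware.

```cpp
// Minimal sketch: off-line analysis marks which intermediate buffers must
// survive a power loss, and a small counter per marked buffer records whether
// it changed since the last checkpoint, so only "dirty" buffers are written
// to non-volatile memory when power is about to fail.
#include <cstdint>
#include <vector>

struct TrackedBuffer {
    bool     marked;       // labeled as "necessary" by off-line analysis
    uint32_t write_count;  // counter incremented by the online tracking logic
    uint32_t saved_count;  // counter value recorded at the last checkpoint
};

// On an imminent power outage, checkpoint only the marked buffers that changed.
std::vector<size_t> buffers_to_checkpoint(const std::vector<TrackedBuffer>& bufs) {
    std::vector<size_t> out;
    for (size_t i = 0; i < bufs.size(); ++i)
        if (bufs[i].marked && bufs[i].write_count != bufs[i].saved_count)
            out.push_back(i);
    return out;
}
```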
DNA sequencing is the physical/biochemical process of identifying the locations of the four bases (Adenine, Guanine, Cytosine, Thymine) in a DNA strand. Just as semiconductor technology revolutionized computing, modern DNA sequencing technology (termed Next-Generation Sequencing, NGS) revolutionized genomic research. As a result, modern NGS platforms can sequence hundreds of millions of short DNA fragments in parallel. The sequenced DNA fragments, representing the output of NGS platforms, are termed reads. Besides genomic variations, NGS imperfections induce noise in reads. Mapping each read to (the most similar portion of) a reference genome of the same species, i.e., read mapping, is a common and critical first step in a diverse set of emerging bioinformatics applications. Mapping is a search-heavy, memory-intensive similarity-matching problem and can therefore greatly benefit from near-memory processing. Intuition suggests using the fast associative search that Ternary Content Addressable Memory (TCAM) provides by construction. However, excessive energy consumption and the lack of support for similarity matching (under NGS- and genomic-variation-induced noise) render the direct application of TCAM infeasible, irrespective of volatility, even though only non-volatile TCAM can accommodate the large memory footprint in an area-efficient way. This paper introduces GeNVoM, a scalable, energy-efficient, and high-throughput solution. Instead of optimizing an algorithm developed for general-purpose computers or GPUs, GeNVoM rethinks the algorithm and the non-volatile TCAM-based accelerator design together from the ground up. GeNVoM thereby improves throughput by up to 113.5 times (3.6 times) and reduces energy consumption by up to 210.9 times (1.36 times) compared with a GPU (accelerator) baseline, which represents one of the highest-throughput implementations known.
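The software model below illustrates only the basic TCAM-style associative search with don't-care bits that this discussion assumes; GeNVoM's actual handling of NGS- and variation-induced noise, and its non-volatile TCAM implementation, are more involved, and the names here are illustrative.

```cpp
// Minimal software model of a ternary (TCAM-style) lookup: each stored
// reference k-mer is a (value, care-mask) pair over 2-bit-encoded bases, and a
// query matches if it agrees on every "cared" bit. A real TCAM performs this
// comparison across all entries in parallel; the scan below only models it.
#include <cstdint>
#include <vector>

struct TcamEntry {
    uint64_t value;  // 2 bits per base (A=00, C=01, G=10, T=11)
    uint64_t care;   // 1 = bit must match, 0 = don't care
};

// Return the indices of all entries that match the query.
std::vector<size_t> tcam_search(const std::vector<TcamEntry>& table, uint64_t query) {
    std::vector<size_t> hits;
    for (size_t i = 0; i < table.size(); ++i)
        if (((query ^ table[i].value) & table[i].care) == 0)
            hits.push_back(i);
    return hits;
}
```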
Attacks targeting software on embedded systems are becoming increasingly prevalent. Remote attestation is a mechanism for establishing trust in embedded devices. However, existing attestation schemes are either static and cannot detect control-flow attacks, or require software instrumentation that incurs high performance overheads. To overcome these limitations, we present LO-FAT, the first practical hardware-based approach to control-flow attestation. By leveraging existing processor hardware features and commonly used IP blocks, our approach enables efficient control-flow attestation without requiring software instrumentation. We show that our proof-of-concept implementation based on a RISC-V SoC incurs no processor stalls and requires reasonable area overhead.
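As a rough illustration of control-flow attestation, the sketch below accumulates a running digest over executed control-flow transfers that a verifier could later compare against digests of legitimate paths; the toy FNV-1a hash and the function names are assumptions, since LO-FAT performs this in dedicated hardware alongside the processor pipeline.

```cpp
// Minimal sketch: fold each executed control-flow transfer (taken branch,
// call, return) into a running digest that is later reported to the verifier.
#include <cstdint>

struct CfDigest { uint64_t h = 0xcbf29ce484222325ULL; };   // FNV-1a offset basis

// Mix one (source PC, destination PC) pair into the digest, byte by byte.
void record_transfer(CfDigest& d, uint64_t src_pc, uint64_t dst_pc) {
    auto mix = [&](uint64_t v) {
        for (int i = 0; i < 8; ++i) {
            d.h ^= (v >> (8 * i)) & 0xFF;
            d.h *= 0x100000001b3ULL;                       // FNV-1a prime
        }
    };
    mix(src_pc);
    mix(dst_pc);
}
```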
Fast, byte-addressable non-volatile memory (NVM) combines near-DRAM latency with disk-like persistence, which has generated considerable interest in revolutionizing the system software stack and programming models. However, it is less well understood how NVM can be combined with managed runtimes such as the Java virtual machine (JVM) to ease persistence management. This paper proposes Espresso, a holistic extension to Java and its runtime that enables Java programmers to exploit NVM for persistence management with high performance. Espresso first provides a general persistent heap design, the Persistent Java Heap (PJH), to manage persistent data as normal Java objects. The heap is then strengthened with a recovery mechanism that provides crash consistency for heap metadata. Espresso further provides a new abstraction, the Persistent Java Object (PJO), which offers an easy-to-use yet safe persistent programming model for persisting application data. The evaluation confirms that Espresso significantly outperforms state-of-the-art NVM support for Java (i.e., JPA and PCJ) while remaining compatible with existing data structures in Java programs.
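To illustrate the kind of crash-consistent metadata update mentioned above, the sketch below shows a generic undo-log-plus-persist-barrier pattern for NVM; it is written in C++ for consistency with the other sketches, and the persist() placeholder and structure names are assumptions, since Espresso implements its recovery mechanism inside the JVM for the Persistent Java Heap.

```cpp
// Minimal sketch of a crash-consistent metadata update on NVM: write an undo
// record, make it durable, then apply and persist the update, so a crash at
// any point leaves either the old or the new state recoverable.
#include <atomic>
#include <cstdint>
#include <cstring>

struct HeapMeta { uint64_t top; };       // e.g., allocation frontier of a persistent heap

inline void persist(const void* p, size_t n) {
    (void)p; (void)n;                    // placeholder for CLWB/CLFLUSHOPT + SFENCE
    std::atomic_thread_fence(std::memory_order_seq_cst);
}

void update_top(HeapMeta& meta, HeapMeta& undo_log, uint64_t new_top) {
    std::memcpy(&undo_log, &meta, sizeof meta);  // 1. record the old value
    persist(&undo_log, sizeof undo_log);         // 2. make the undo record durable
    meta.top = new_top;                          // 3. apply the update
    persist(&meta, sizeof meta);                 // 4. make the update durable
}
```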