
Deterministic, Stash-Free Write-Only ORAM

 Added by Daniel Roche
Publication date: 2017
Language: English





Write-Only Oblivious RAM (WoORAM) protocols provide privacy by encrypting the contents of data and also hiding the pattern of write operations over that data. WoORAMs provide better privacy than plain encryption and better performance than more general ORAM schemes (which hide both write and read access patterns), and the write-oblivious setting has been applied to important applications such as cloud storage synchronization and encrypted hidden volumes. In this paper, we introduce an entirely new technique for Write-Only ORAM, called DetWoORAM. Unlike previous solutions, DetWoORAM uses a deterministic, sequential writing pattern and never needs to stash blocks in local state when writes fail. Our protocol, while conceptually simple, provides substantial improvement over prior solutions, both asymptotically and experimentally. In particular, under typical settings DetWoORAM writes only 2 blocks (sequentially) to backend memory for each block written to the device, which is optimal. We have implemented our solution using the BUSE (block device in user-space) module and tested DetWoORAM against both an encryption-only baseline (dm-crypt) and prior, randomized WoORAM solutions, measuring only a 3x-14x slowdown compared to the encryption-only baseline and roughly a 6x-19x speedup compared to prior work.
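As a rough illustration of the write pattern the abstract describes, the sketch below keeps a main area and a sequentially written holding area and performs exactly two physical block writes per logical write. The class name, the position map, and the main/holding split are illustrative assumptions; encryption, position-map storage, and the paper's actual layout are omitted.

```python
# Toy sketch of a deterministic, stash-free write-only store (illustrative,
# not the paper's exact construction): every logical write causes exactly two
# physical block writes whose locations depend only on a counter.

class ToyDetWoStore:
    def __init__(self, num_blocks, hold_size):
        self.main = [None] * num_blocks        # main area: one home slot per logical block
        self.hold = [None] * hold_size         # holding area, written sequentially
        self.hold_owner = [None] * hold_size   # logical index stored in each holding slot
        self.pos = [('main', i) for i in range(num_blocks)]  # where the freshest copy lives
        self.ptr = 0                           # next sequential holding slot to use

    def write(self, logical_index, data):
        # Physical write 1: evict whatever fresh block occupies the holding slot
        # we are about to reuse back to its home slot in the main area.
        old_owner = self.hold_owner[self.ptr]
        if old_owner is not None and self.pos[old_owner] == ('hold', self.ptr):
            self.main[old_owner] = self.hold[self.ptr]
            self.pos[old_owner] = ('main', old_owner)
        else:
            # Dummy refresh (in a real system, a re-encryption of the same block)
            # so the physical write pattern never depends on the data written.
            slot = self.ptr % len(self.main)
            self.main[slot] = self.main[slot]

        # Physical write 2: the new data goes to the next sequential holding slot.
        self.hold[self.ptr] = data
        self.hold_owner[self.ptr] = logical_index
        self.pos[logical_index] = ('hold', self.ptr)
        self.ptr = (self.ptr + 1) % len(self.hold)

    def read(self, logical_index):
        # Reads are not hidden by a write-only ORAM, so a direct lookup suffices.
        area, slot = self.pos[logical_index]
        return self.hold[slot] if area == 'hold' else self.main[slot]
```

Note that both physical write locations in this sketch are determined solely by the counter `ptr`, never by which logical block was written, which is the deterministic, data-independent property the abstract highlights.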


Read More

In the cloud computing era, data privacy is a critical concern. Memory access patterns can leak private information. This leakage is particularly challenging for deep learning recommendation models, where data associated with a user is used to train the model. Recommendation models use embedding tables to map categorical data (embedding table indices) to a large vector space, which is easier for recommendation systems to learn from. Oblivious RAM (ORAM) and its enhancements have been proposed to prevent memory access patterns from leaking information. ORAM solutions hide access patterns by fetching multiple data blocks for each demand fetch and then shuffling the locations of blocks after each access. In this paper, we propose a new PathORAM architecture, Look Ahead ORAM, designed to protect user input privacy when training recommendation models. Look Ahead ORAM exploits the fact that during training, the embedding table indices that will be accessed in a future batch are known beforehand. Look Ahead ORAM preprocesses future training samples to identify indices that will co-occur and groups these accesses into a large superblock. Look Ahead ORAM performs same-path assignment by grouping multiple data blocks into superblocks. Accessing a superblock requires fewer fetched data blocks than accessing the same data blocks individually. Effectively, Look Ahead ORAM reduces the number of reads/writes per access. Look Ahead ORAM also introduces a fat-tree structure for PathORAM, i.e., a tree with variable bucket size. Look Ahead ORAM achieves a 2x speedup compared to PathORAM and reduces the bandwidth requirement by 3.15x while providing the same security as PathORAM.
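To make the superblock idea concrete, here is a hedged sketch of one plausible preprocessing step: counting which embedding-table indices co-occur in upcoming batches and greedily grouping them. The function name, the greedy heuristic, and the superblock_size parameter are illustrative assumptions, not the paper's exact grouping procedure.

```python
# Illustrative look-ahead preprocessing: group embedding-table indices that
# co-occur in future training batches into "superblocks" so that one path
# fetch can serve several demand accesses.
from collections import defaultdict

def build_superblocks(future_batches, superblock_size=4):
    """future_batches: list of lists of embedding-table indices, one per batch."""
    # Count how often each pair of indices appears in the same batch.
    co_occurrence = defaultdict(int)
    for batch in future_batches:
        unique = sorted(set(batch))
        for i, a in enumerate(unique):
            for b in unique[i + 1:]:
                co_occurrence[(a, b)] += 1

    # Greedily merge the most frequently co-occurring indices into superblocks.
    superblock_of = {}
    superblocks = []
    for (a, b), _count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
        for idx in (a, b):
            if idx not in superblock_of:
                partner = b if idx == a else a
                group = superblock_of.get(partner)
                if group is not None and len(superblocks[group]) < superblock_size:
                    superblocks[group].append(idx)   # join the partner's superblock
                    superblock_of[idx] = group
                else:
                    superblock_of[idx] = len(superblocks)
                    superblocks.append([idx])        # start a new superblock
    return superblocks  # each inner list would be assigned to the same ORAM path

# Example: indices 3 and 7 co-occur in both upcoming batches, so they are grouped.
print(build_superblocks([[3, 7, 1], [3, 7, 9], [2, 5]]))
```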
Transparent authentication (TA) schemes are those in which a user is authenticated by a verifier without requiring explicit user interaction, promising high usability and security simultaneously. The majority of TA implementations rely on received signal strength as an indicator of the proximity of a user device (prover). However, such implicit proximity verification is not secure against an adversary who can relay messages over a larger distance. In this paper, we propose a novel approach for thwarting relay attacks in TA schemes: the prover permits access to authentication credentials only if it can confirm that it is near the verifier. We present STASH, a system for relay-resilient transparent authentication in which the prover performs proximity verification by comparing its approach trajectory towards the intended verifier with known authorized reference trajectories. Trajectories are measured using low-cost sensors commonly available on personal devices. We demonstrate the security of STASH against a class of adversaries and its ease of use by analyzing empirical data collected with a STASH prototype. STASH is efficient and can be easily integrated to complement existing TA schemes.
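The following is a minimal sketch of the trajectory comparison described above, assuming a dynamic-time-warping distance and a fixed acceptance threshold; both are illustrative choices, not necessarily the matching pipeline STASH uses.

```python
# Toy trajectory-based proximity check: release credentials only if the
# observed approach trajectory is close to some authorized reference.

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two sequences of 2-D points."""
    inf = float('inf')
    n, m = len(traj_a), len(traj_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = ((traj_a[i - 1][0] - traj_b[j - 1][0]) ** 2 +
                 (traj_a[i - 1][1] - traj_b[j - 1][1]) ** 2) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def permit_authentication(observed_trajectory, reference_trajectories, threshold=5.0):
    """Permit credential release only if the approach matches a known reference."""
    return any(dtw_distance(observed_trajectory, ref) <= threshold
               for ref in reference_trajectories)

# Hypothetical sensor-derived approach path compared to one authorized reference.
reference = [[(0, 0), (1, 0), (2, 1), (3, 2)]]
print(permit_authentication([(0, 0), (1, 1), (2, 1), (3, 2)], reference))
```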
Access to network traffic records is an integral part of recognizing and addressing network security breaches. Even with the increasing sophistication of network attacks, basic network events such as connections between two IP addresses play an important role in any network defense. Given the duration of current attacks, long-term data archival is critical but typically very little of the data is ever accessed. Previous work has provided tools and identified the need to trace connections. However, traditional databases raise performance concerns as they are optimized for querying rather than ingestion. The study of write-optimized data structures (WODS) is a new and growing field that provides a novel approach to traditional storage structures (e.g., B-trees). WODS trade minor degradations in query performance for significant gains in the ability to quickly insert more data elements, typically on the order of 10 to 100 times more inserts per second. These efficient, out-of-memory data structures can play a critical role in enabling robust, long-term tracking of network events. In this paper, we present TWIAD, the Write-optimized IP Address Database. TWIAD uses a write-optimized B-tree known as a Bε-tree to track all IP address connections in a network traffic stream. Our initial implementation focuses on utilizing lower cost hardware, demonstrating that basic long-term tracking can be done without advanced equipment. We tested TWIAD on a modest desktop system and showed a sustained ingestion rate of about 20,000 inserts per second.
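To illustrate why a Bε-tree favors ingestion, the toy sketch below buffers insert messages at a node and flushes them to its children in batches, so many inserts share one expensive child update. The fanout, buffer size, and flat two-level shape are simplifications for illustration, not TWIAD's implementation.

```python
# Toy write-optimized node: inserts are cheap appends to a buffer; the
# expensive push-down to children is amortized over many inserts.
import bisect

class BufferedNode:
    def __init__(self, pivots):
        self.pivots = pivots                       # split keys between children
        self.buffer = []                           # pending (key, value) messages
        self.children = [dict() for _ in range(len(pivots) + 1)]  # leaf stores

    def insert(self, key, value, buffer_capacity=1024):
        # Cheap path: just append the message to this node's buffer.
        self.buffer.append((key, value))
        if len(self.buffer) >= buffer_capacity:
            self.flush()

    def flush(self):
        # Amortized expensive path: route each buffered message to the child
        # whose key range contains it, then clear the buffer.
        for key, value in self.buffer:
            child = bisect.bisect_right(self.pivots, key)
            self.children[child][key] = value
        self.buffer.clear()

    def query(self, key):
        # A point query must also consult the buffer (freshest message wins).
        for k, v in reversed(self.buffer):
            if k == key:
                return v
        child = bisect.bisect_right(self.pivots, key)
        return self.children[child].get(key)

# Example: track connection records keyed by "source->destination" strings.
node = BufferedNode(pivots=["128.0.0.0"])
node.insert("10.0.0.1->10.0.0.2", 1)
node.insert("192.168.1.5->8.8.8.8", 1)
print(node.query("10.0.0.1->10.0.0.2"))
```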
Mahesh Arumugam, 2008
Several self-stabilizing time division multiple access (TDMA) algorithms have been proposed for sensor networks. In addition to providing a collision-free communication service, such algorithms enable the transformation of programs written in abstract models considered in the distributed computing literature into a model consistent with sensor networks, i.e., the write all with collision (WAC) model. Existing TDMA slot assignment algorithms have one or more of the following properties: (i) compute slots using a randomized algorithm, (ii) assume that the topology is known upfront, and/or (iii) assign slots sequentially. If these algorithms are used to transform abstract programs into programs in the WAC model, then the transformed programs are probabilistically correct, do not allow the addition of new nodes, and/or converge in a sequential fashion. In this paper, we propose a self-stabilizing deterministic TDMA algorithm where a sensor is aware of only its neighbors. We show that slots are assigned to the sensors in a concurrent fashion and that, starting from arbitrary initial states, the algorithm converges to states where collision-free communication among the sensors is restored. Moreover, this algorithm facilitates the transformation of abstract programs into programs in the WAC model that are deterministically correct.
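As a rough, hypothetical illustration of the target property (not the paper's algorithm), the sketch below deterministically repairs an arbitrary initial slot assignment until no node shares a slot with any node within distance two, assuming each node can learn, via its neighbors, which slots are in use two hops away.

```python
# Toy self-stabilizing slot repair: from an arbitrary initial assignment,
# deterministically converge to a collision-free (distance-2 unique) TDMA schedule.

def stabilize_slots(neighbors, slots, max_rounds=100):
    """neighbors: {node: set of neighbor nodes}; slots: arbitrary initial {node: slot}."""
    nodes = sorted(neighbors)
    for _ in range(max_rounds):
        changed = False
        for u in nodes:
            # Slots in use within distance two of u (its neighbors and their neighbors).
            zone = set(neighbors[u]) | {w for v in neighbors[u] for w in neighbors[v]}
            zone.discard(u)
            used = {slots[v] for v in zone}
            if slots[u] in used:
                # Deterministic repair rule: take the smallest free slot.
                slots[u] = next(s for s in range(len(nodes) * 2) if s not in used)
                changed = True
        if not changed:          # fixpoint reached: schedule is collision-free
            break
    return slots

# Example: a small line network starting from an all-colliding (all-zero) state.
topology = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(stabilize_slots(topology, {n: 0 for n in topology}))
```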
Online users generate tremendous amounts of textual information by participating in different activities, such as writing reviews and sharing tweets. This textual data provides opportunities for researchers and business partners to study and understand individuals. However, this user-generated textual data not only can reveal the identity of the user but also may contain an individual's private information (e.g., age, location, gender). Hence, as the saying goes, you are what you write. Publishing the textual data thus compromises the privacy of the individuals who provided it. The need arises for data publishers to protect people's privacy by anonymizing the data before publishing it. It is challenging to design effective anonymization techniques for textual information that minimize the chances of re-identification and do not contain users' sensitive information (high privacy) while retaining the semantic meaning of the data for given tasks (high utility). In this paper, we study this problem and propose a novel double privacy preserving text representation learning framework, DPText, which learns a textual representation that (1) is differentially private, (2) does not contain private information, and (3) retains high utility for the given task. Evaluating on two natural language processing tasks, i.e., sentiment analysis and part-of-speech tagging, we show the effectiveness of this approach in terms of preserving both privacy and utility.
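The sketch below illustrates only the differential-privacy ingredient in point (1), assuming a bounded real-valued text embedding and the Laplace mechanism; the clipping bound and epsilon are illustrative, and DPText's full framework (removing private attributes while preserving task utility) is not captured here.

```python
# Toy illustration: make a text embedding epsilon-differentially private by
# clipping its coordinates and adding Laplace noise calibrated to sensitivity.
import random

def laplace_noise(scale):
    # The difference of two Exp(1) samples follows a Laplace(0, 1) distribution.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def privatize_embedding(embedding, epsilon=1.0, clip_bound=1.0):
    """Return an epsilon-DP copy of a real-valued embedding vector."""
    d = len(embedding)
    clipped = [max(-clip_bound, min(clip_bound, x)) for x in embedding]
    sensitivity = 2.0 * clip_bound * d        # worst-case L1 change between two inputs
    scale = sensitivity / epsilon             # Laplace mechanism calibration
    return [x + laplace_noise(scale) for x in clipped]

# Example: a hypothetical 4-dimensional sentence embedding.
print(privatize_embedding([0.2, -0.7, 1.3, 0.05], epsilon=2.0))
```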