There is explosive growth in the size of the input and/or intermediate data used and generated by modern and emerging applications. Unfortunately, modern computing systems are not capable of handling large amounts of data efficiently. Major concepts and components (e.g., the virtual memory system) and predominant execution models (e.g., the processor-centric execution model) used in almost all computing systems were designed without modern applications' overwhelming data demand in mind. As a result, accessing, moving, and processing large amounts of data faces important challenges in today's systems, making data a first-class concern and a prime performance and energy bottleneck. This thesis studies the root causes of inefficiency in modern computing systems when handling modern applications' data demand, and aims to fundamentally address such inefficiencies, with a focus on two directions. First, we design SIMDRAM, an end-to-end processing-using-DRAM framework that aids the widespread adoption of processing-using-DRAM, a data-centric computation paradigm that improves the overall performance and efficiency of the system when operating on large amounts of data by minimizing the cost of data movement and enabling computation where the data resides. Second, we introduce the Virtual Block Interface (VBI), a novel virtual memory framework that 1) eliminates the inefficiencies of conventional virtual memory frameworks when handling the high memory demand of modern applications, and 2) is built from the ground up to understand, convey, and exploit data properties, creating opportunities for performance and efficiency improvements.
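To make the processing-using-DRAM idea concrete, the following is a minimal Python sketch of the majority-based bulk bitwise operations that SIMDRAM builds on (as in triple-row-activation schemes): activating three DRAM rows simultaneously produces the bitwise majority of their contents, so MAJ(A, B, 0) yields A AND B and MAJ(A, B, 1) yields A OR B. The row contents and sizes here are illustrative, not taken from the thesis.

```python
import numpy as np

def majority3(a, b, c):
    """Bitwise majority of three equally sized bit-vectors.

    Models the result of simultaneously activating three DRAM rows:
    each bit position settles to the value held by the majority of
    the three cells sharing that bitline.
    """
    return (a & b) | (b & c) | (a & c)

# Illustrative 8-bit "rows" (real DRAM rows are multiple kilobytes wide).
row_a = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
row_b = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=np.uint8)
zeros = np.zeros_like(row_a)   # a reserved row holding all 0s
ones  = np.ones_like(row_a)    # a reserved row holding all 1s

# MAJ(A, B, 0) == A AND B;  MAJ(A, B, 1) == A OR B.
bulk_and = majority3(row_a, row_b, zeros)
bulk_or  = majority3(row_a, row_b, ones)

assert np.array_equal(bulk_and, row_a & row_b)
assert np.array_equal(bulk_or,  row_a | row_b)
```

The data never leaves the memory array in this model, which is where the data-movement savings come from.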
We aim to implement a Big Data/Extreme Computing (BDEC) capable system infrastructure, termed SAGE (Percipient StorAGe for Exascale Data Centric Computing), as we head towards the era of Exascale computing. The SAGE system will be capable of storing and processing immense volumes of data in the Exascale regime, and will provide the capability for Exascale-class applications to use such a storage infrastructure. SAGE addresses the increasing overlap between Big Data analysis and HPC in an era of next-generation data-centric computing, an overlap driven by the proliferation of massive data sources, such as large, dispersed scientific instruments and sensors, whose data needs to be processed, analyzed, and integrated into simulations to derive scientific and innovative insights. The SAGE platform also addresses Exascale I/O, a problem that has not been sufficiently dealt with for simulation codes. The objective of this paper is to discuss the software architecture of the SAGE system and look at early results we have obtained employing some of its key methodologies, as the system continues to evolve.
Next Generation Sequencing (NGS) technology has resulted in massive amounts of proteomics and genomics data. This data is of no use if it is not properly analyzed. ETL (Extraction, Transformation, Loading) is an important step in designing data analytics applications, and it requires a proper understanding of the features of the data. The data format plays a key role in how data is understood and represented, the space required to store it, the I/O performed during processing, the handling of intermediate results, in-memory analysis, and the overall time required to process the data. Different data mining and machine learning algorithms require input data in specific types and formats. This paper explores the data formats used by different tools and algorithms and also presents modern data formats that are used on Big Data platforms. It will help researchers and developers choose an appropriate data format for a particular tool or algorithm.
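As a small illustration of why format choice matters for ETL, the sketch below writes the same toy sequencing-style records once as row-oriented text (CSV) and once as a columnar Big Data format (Apache Parquet) via pandas/pyarrow; the field names, values, and file names are made up for the example and are not taken from the paper.

```python
import pandas as pd  # Parquet I/O requires pyarrow (or fastparquet)

# Toy sequencing-style records; real NGS data would hold millions of reads.
records = pd.DataFrame({
    "read_id":  ["r1", "r2", "r3"],
    "sequence": ["ACGTACGT", "TTGACCAG", "GGCATTAC"],
    "quality":  [38, 35, 40],
})

# Row-oriented text format: simple, but every query re-parses every field.
records.to_csv("reads.csv", index=False)

# Columnar format: the schema travels with the data, columns are compressed
# and can be read selectively, which reduces I/O for analytics workloads.
records.to_parquet("reads.parquet", index=False)

# Reading back only the column a downstream algorithm actually needs.
sequences_only = pd.read_parquet("reads.parquet", columns=["sequence"])
print(sequences_only)
```

The selective column read in the last step is the kind of I/O saving that makes columnar formats attractive for large-scale genomics pipelines.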
An emerging class of data-intensive applications involves the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit.
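The sketch below is not the GridFTP protocol itself, but a minimal Python illustration of the two extensions highlighted above: partial file access (reading an arbitrary byte range of a dataset) and striping (splitting a transfer into byte ranges that could be moved over parallel streams or servers). The file name, chunk size, and helper names are illustrative assumptions.

```python
import os

def read_partial(path, offset, length):
    """Read `length` bytes starting at `offset` -- the idea behind
    partial file access: fetch only the slice of a dataset you need."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def stripe_ranges(path, num_stripes):
    """Split a file into contiguous byte ranges, one per stripe.
    In a striped transfer, each range could be carried by a different
    server or data stream in parallel."""
    size = os.path.getsize(path)
    stripe = -(-size // num_stripes)  # ceiling division
    return [(i * stripe, min(stripe, size - i * stripe))
            for i in range(num_stripes) if i * stripe < size]

# Example: create a small dummy "dataset", then access it piecewise.
with open("dataset.bin", "wb") as f:
    f.write(bytes(range(256)) * 4)           # 1 KiB of sample data

print(stripe_ranges("dataset.bin", 4))       # four (offset, length) ranges
print(read_partial("dataset.bin", 128, 16))  # one 16-byte partial read
```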
Reliability is a fundamental requirement in any microprocessor to guarantee correct execution over its lifetime. The design rules related to reliability depend on the process technology being used and the expected operating conditions of the device. To meet reliability requirements, advanced process technologies (28 nm and below) impose highly challenging design rules. Such design-for-reliability rules have become a major burden on the VLSI implementation flow because of the severe physical constraints they impose. This paper focuses on electromigration (EM), one of the major critical factors affecting semiconductor reliability. EM is the aging process of on-die wires and vias; it is induced by excessive current flow that can damage wires and may also significantly impact the integrated-circuit clock frequency. EM exerts a comprehensive global effect on devices because it impacts wires that may reside inside standard or custom logic cells, between logic cells, inside memory elements, and within the wires that interconnect functional blocks. The design-implementation flow (synthesis and place-and-route) currently detects violations of EM-reliability rules and attempts to solve them. In contrast, this paper proposes a new approach that enhances these flows with an EM-aware architecture. Our results show that the proposed solution can relax EM design efforts in microprocessors and more than double microprocessor lifetime. This work demonstrates the proposed approach for modern microprocessors, although the principles and ideas can be adapted to other cases as well.
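One way to see how EM relief translates into lifetime is Black's equation, the standard first-order model for EM-limited mean time to failure, MTTF = A * J^(-n) * exp(Ea / (kT)). The Python sketch below evaluates it with illustrative constants (current-density exponent, activation energy, temperature) that are not taken from the paper; it only shows the general shape of the trade-off, not the paper's results.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def em_mttf(current_density, temperature_k,
            a_const=1.0, n_exp=2.0, activation_ev=0.9):
    """Black's equation for EM-limited mean time to failure:
        MTTF = A * J^(-n) * exp(Ea / (k * T))
    All constants here are illustrative, not calibrated to any process.
    """
    return (a_const * current_density ** (-n_exp)
            * math.exp(activation_ev / (BOLTZMANN_EV * temperature_k)))

# Relative lifetime gain from halving the current density in a wire,
# e.g. by steering activity away from EM-critical interconnect.
baseline = em_mttf(current_density=1.0, temperature_k=358.0)
relaxed  = em_mttf(current_density=0.5, temperature_k=358.0)
print(f"lifetime ratio: {relaxed / baseline:.1f}x")  # 4.0x with n = 2
```

The strong dependence on current density is why architecture-level techniques that reduce current stress on critical wires can buy back substantial lifetime margin.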
The Payload Data Handling System (PDHS) of Gaia is a technological challenge, since it will have to process a huge amount of data with limited resources. Its main tasks include the optimal codification of science data, its packetisation and its compression, before being stored on-board ready to be transmitted. Here we describe a set of proposals for its design, as well as some simulators developed to optimise and test these proposals.
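As a hedged sketch of the kind of pipeline described above (codification of science data, packetisation, and compression before on-board storage), the Python below encodes toy samples as fixed-width words, compresses them losslessly, and splits the result into packets with a minimal header. The packet layout, field widths, sample values, and choice of compressor are illustrative assumptions, not Gaia's actual design.

```python
import struct
import zlib

def codify(samples):
    """Encode integer science samples as fixed-width big-endian 16-bit words."""
    return b"".join(struct.pack(">H", s) for s in samples)

def packetise(payload, apid, max_payload=64):
    """Split a byte stream into packets with a minimal (apid, seq, len) header."""
    packets = []
    for seq, off in enumerate(range(0, len(payload), max_payload)):
        chunk = payload[off:off + max_payload]
        packets.append(struct.pack(">HHH", apid, seq, len(chunk)) + chunk)
    return packets

# Toy "science data": a slowly varying signal codifies and compresses compactly.
samples = [1000 + (i % 16) for i in range(512)]
coded = codify(samples)                  # 1024 bytes of raw telemetry
compressed = zlib.compress(coded, 9)     # lossless compression before storage
packets = packetise(compressed, apid=42)

print(f"{len(coded)} -> {len(compressed)} bytes, {len(packets)} packet(s)")
```

The point of the sketch is simply that each stage (encoding, compression, packet framing) trades on-board CPU and memory for downlink bandwidth, which is the core resource constraint the PDHS design has to balance.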