
uFLIP: Understanding Flash IO Patterns

Published by: Luc Bouganim
Publication date: 2009
Research field: Informatics Engineering
Paper language: English





Does the advent of flash devices constitute a radical change for secondary storage? How should database systems adapt to this new form of secondary storage? Before we can answer these questions, we need to fully understand the performance characteristics of flash devices. More specifically, we want to establish what kinds of IOs should be favored (or avoided) when designing algorithms and architectures for flash-based systems. In this paper, we focus on flash IO patterns, which capture relevant distributions of IOs in time and space, and our goal is to quantify their performance. We define uFLIP, a benchmark for measuring the response time of flash IO patterns. We also present a benchmarking methodology that takes into account the particular characteristics of flash devices. Finally, we present the results obtained by measuring eleven flash devices, and derive a set of design hints that should drive the development of flash-based systems on current devices.
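The core idea of the abstract above — timing IO patterns that differ in their spatial distribution — can be illustrated with a minimal sketch. This is not the official uFLIP benchmark; the file size, page size, and IO counts below are illustrative assumptions, and a real measurement would bypass the OS page cache (e.g., with direct IO) and target the raw flash device.

```python
# Minimal sketch (not the official uFLIP benchmark): measure the mean
# response time of two IO patterns -- sequential vs. random reads --
# against a scratch file. All sizes and counts are assumptions.
import os
import random
import tempfile
import time

IO_SIZE = 4096      # one 4 KiB "page", a common flash access unit
NUM_IOS = 256       # IOs issued per pattern
FILE_PAGES = 1024   # scratch file size, in pages

def time_pattern(fd, offsets):
    """Issue one pread per offset; return mean response time in seconds."""
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, IO_SIZE, off)
    return (time.perf_counter() - start) / len(offsets)

def run_benchmark():
    # Create a scratch file filled with random data.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(IO_SIZE * FILE_PAGES))
        path = f.name
    fd = os.open(path, os.O_RDONLY)
    try:
        # Sequential pattern: consecutive page-aligned offsets.
        sequential = [i * IO_SIZE for i in range(NUM_IOS)]
        # Random pattern: uniformly drawn page-aligned offsets.
        rng = random.Random(42)
        random_offs = [rng.randrange(FILE_PAGES) * IO_SIZE
                       for _ in range(NUM_IOS)]
        return {
            "sequential_read": time_pattern(fd, sequential),
            "random_read": time_pattern(fd, random_offs),
        }
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    for pattern, mean in run_benchmark().items():
        print(f"{pattern}: {mean * 1e6:.1f} us per IO")
```

On a cached file the two patterns will look similar; the interesting gaps appear on real flash devices, where write patterns in particular (not shown here) diverge sharply, which is what the uFLIP design hints are about.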




Read also

Deep learning (DL) workloads are moving towards accelerators for faster processing and lower cost. Modern DL accelerators are good at handling the large-scale multiply-accumulate operations that dominate DL workloads; however, it is challenging to make full use of the compute power of an accelerator since the data must be properly staged in a software-managed scratchpad memory. Failing to do so can result in significant performance loss. This paper proposes a systematic approach which leverages the polyhedral model to analyze all operators of a DL model together to minimize the number of memory accesses. Experiments show that our approach can substantially reduce the impact of memory accesses required by common neural-network models on a homegrown AWS machine-learning inference chip named Inferentia, which is available through Amazon EC2 Inf1 instances.
Serverless computing is increasingly popular because of the promise of lower cost and the convenience it provides to users who do not need to focus on server management. This has resulted in the availability of a number of proprietary and open-source serverless solutions. We seek to understand how the performance of serverless computing depends on a number of design issues using several popular open-source serverless platforms. We identify the idiosyncrasies affecting performance (throughput and latency) for different open-source serverless platforms. Further, we observe that just having either resource-based (CPU and memory) or workload-based (requests per second (RPS) or concurrent requests) auto-scaling is inadequate to address the needs of the serverless platforms.
Hardware platforms in high performance computing are constantly getting more complex to handle, even when considering multicore CPUs alone. Numerous features and configuration options in the hardware and the software environment that are relevant for performance are not even known to most application users or developers. Microbenchmarks, i.e., simple codes that fathom a particular aspect of the hardware, can help to shed light on such issues, but only if they are well understood and if the results can be reconciled with known facts or performance models. The insight gained from microbenchmarks may then be applied to real applications for performance analysis or optimization. In this paper we investigate two modern Intel x86 server CPU architectures in depth: Broadwell EP and Cascade Lake SP. We highlight relevant hardware configuration settings that can have a decisive impact on code performance and show how to properly measure on-chip and off-chip data transfer bandwidths. The new victim L3 cache of Cascade Lake and its advanced replacement policy receive due attention. Finally, we use DGEMM, sparse matrix-vector multiplication, and the HPCG benchmark to make a connection to relevant application scenarios.
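The microbenchmarking idea described in the abstract above — a tiny code that fathoms one hardware aspect — can be sketched at the Python level as a crude copy-bandwidth probe. This is only a toy illustration under stated assumptions (buffer size, repeat count); the measurements in the paper use compiled, pinned code, which Python cannot reproduce.

```python
# Toy microbenchmark sketch: estimate effective memory copy bandwidth
# by timing bulk buffer copies. Size and repeat count are assumptions;
# interpreter overhead makes this a rough lower bound only.
import time

def copy_bandwidth(size_mib=64, repeats=5):
    """Time repeated full-buffer copies; return best estimate in GiB/s."""
    src = bytearray(size_mib * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        dst = bytes(src)  # one full read of src plus one full write of dst
        best = min(best, time.perf_counter() - start)
        del dst
    # Factor 2: each copy traverses the data twice (read + write).
    return 2 * size_mib / 1024 / best

if __name__ == "__main__":
    print(f"~{copy_bandwidth():.1f} GiB/s effective copy bandwidth")
```

Taking the best of several repetitions, as done here, is a common microbenchmarking convention: it filters out interference from other processes and cold-cache effects, at the cost of reporting an optimistic figure.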
M.C. Gonzalez, C.A. Hidalgo, 2008
Despite their importance for urban planning, traffic forecasting, and the spread of biological and mobile viruses, our understanding of the basic laws governing human motion remains limited due to the lack of tools to monitor the time-resolved location of individuals. Here we study the trajectory of 100,000 anonymized mobile phone users whose position is tracked for a six-month period. We find that in contrast with the random trajectories predicted by the prevailing Levy flight and random walk models, human trajectories show a high degree of temporal and spatial regularity, each individual being characterized by a time-independent characteristic length scale and a significant probability to return to a few highly frequented locations. After correcting for differences in travel distances and the inherent anisotropy of each trajectory, the individual travel patterns collapse into a single spatial probability distribution, indicating that despite the diversity of their travel history, humans follow simple reproducible patterns. This inherent similarity in travel patterns could impact all phenomena driven by human mobility, from epidemic prevention to emergency response, urban planning and agent-based modeling.
Understanding quantitative relationships between urban elements is crucial for a wide range of applications. The observation at the macroscopic level demonstrates that the aggregated urban quantities (e.g., gross domestic product) scale systematically with population sizes across cities, also known as urban scaling laws. However, at the mesoscopic level, we lack an understanding of whether the simple scaling relationship holds within cities, which is a fundamental question regarding the spatial origin of scaling in urban systems. Here, by analyzing four extensive datasets covering millions of mobile phone users and urban facilities, we investigate the scaling phenomena within cities. We find that the mesoscopic infrastructure volume and socioeconomic activity scale sub- and super-linearly with the active population, respectively. For the same scaling phenomenon, however, the exponents vary in cities of similar population sizes. To explain these empirical observations, we propose a conceptual framework by considering the heterogeneous distributions of population and facilities, and the spatial interactions between them. Analytical and numerical results suggest that, despite the large number of complexities that influence urban activities, the simple interaction rules can effectively explain the observed regularity and heterogeneity in scaling behaviors within cities.