
A Hybrid Cache Architecture for Meeting Per-Tenant Performance Goals in a Private Cloud

Published by: Jinoh Kim
Publication date: 2019
Research field: Informatics Engineering
Language: English

The in-memory cache system is an important component of a cloud for data access performance. As tenants may have different performance goals for data access depending on the nature of their tasks, effectively managing the memory cache is a crucial concern in such a shared computing environment. Two extreme methods for managing the memory cache are unlimited sharing and complete isolation, both of which are inefficient, requiring an expensive amount of cache storage to meet per-tenant performance requirements. In this paper, we present a new cache model that incorporates global caching (based on unlimited sharing) and static caching (offering complete isolation) for a private cloud, in which it is critical to offer guaranteed performance while minimizing the operating cost. This paper also presents a cache insertion algorithm tailored to the proposed cache model. The results from an extensive set of experiments conducted in simulation and emulation settings confirm the validity of the presented cache architecture and insertion algorithm, showing optimized use of the cache space while meeting per-tenant performance requirements.
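The abstract does not give implementation details, so the following is only a minimal Python sketch of the hybrid idea under stated assumptions: each tenant owns a small static (isolated) LRU partition, all tenants share a global LRU pool, and the insertion algorithm places an item in the tenant's static partition only when that tenant's measured hit rate falls below its goal. All class and parameter names here are hypothetical, not from the paper.

from collections import OrderedDict

class HybridCache:
    """Toy hybrid cache: per-tenant static partitions plus a shared
    global LRU pool. A sketch of the idea, not the authors' code."""

    def __init__(self, global_size, static_sizes):
        self.global_size = global_size
        self.global_pool = OrderedDict()        # shared LRU: key -> value
        self.static = {t: OrderedDict() for t in static_sizes}  # isolated LRUs
        self.static_sizes = static_sizes
        self.stats = {t: {"hits": 0, "refs": 0} for t in static_sizes}

    def hit_rate(self, tenant):
        s = self.stats[tenant]
        # Optimistic default so tenants start out in the shared pool.
        return s["hits"] / s["refs"] if s["refs"] else 1.0

    def get(self, tenant, key):
        self.stats[tenant]["refs"] += 1
        for pool in (self.static[tenant], self.global_pool):
            if key in pool:
                pool.move_to_end(key)           # LRU touch
                self.stats[tenant]["hits"] += 1
                return pool[key]
        return None

    def put(self, tenant, key, value, goal):
        # Insertion policy (hypothetical): protect a tenant missing its
        # goal by caching its data in the isolated partition; otherwise
        # let the item compete in the shared global pool.
        if self.hit_rate(tenant) < goal:
            pool, cap = self.static[tenant], self.static_sizes[tenant]
        else:
            pool, cap = self.global_pool, self.global_size
        pool[key] = value
        pool.move_to_end(key)
        while len(pool) > cap:
            pool.popitem(last=False)            # evict the LRU entry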




Read also

Container technologies have been evolving rapidly in the cloud-native era. Kubernetes, as a production-grade container orchestration platform, has been proven successful at managing containerized applications in on-premises datacenters. However, Kubernetes lacks sufficient multi-tenant support by design, meaning that in cloud environments, dedicated clusters are required to serve multiple users, i.e., tenants. This limitation significantly diminishes the benefits of cloud computing and makes it difficult to build multi-tenant software-as-a-service (SaaS) products on Kubernetes. In this paper, we propose VirtualCluster, a new multi-tenant framework that extends Kubernetes with adequate multi-tenant support. VirtualCluster provides both control-plane and data-plane isolation while sharing the underlying compute resources among tenants. The new framework preserves API compatibility by avoiding modification of the Kubernetes core components, so it can be easily integrated with existing Kubernetes use cases. Our experimental results show that the overheads introduced by VirtualCluster, in terms of latency and throughput, are moderate.
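As an illustration only, not the VirtualCluster code, the sketch below models the core syncing idea under stated assumptions: each tenant has a private object store standing in for its own control plane, and a syncer copies tenant pods into the shared super cluster under tenant-prefixed namespaces, so compute is shared while API objects stay isolated. All names are invented for the example.

class TenantControlPlane:
    """Stands in for a tenant's dedicated apiserver/state (toy model)."""
    def __init__(self, name):
        self.name = name
        self.pods = {}                    # (namespace, pod_name) -> spec

    def create_pod(self, namespace, pod_name, spec):
        self.pods[(namespace, pod_name)] = spec

class SuperCluster:
    """Stands in for the shared cluster that actually runs workloads."""
    def __init__(self):
        self.pods = {}

def sync(tenants, super_cluster):
    # Downward sync: translate tenant objects into super-cluster objects,
    # prefixing namespaces so tenants never collide (toy version of the
    # syncer role described in the paper).
    for tenant in tenants:
        for (ns, name), spec in tenant.pods.items():
            super_cluster.pods[(f"{tenant.name}-{ns}", name)] = spec

a, b = TenantControlPlane("tenant-a"), TenantControlPlane("tenant-b")
a.create_pod("default", "web", {"image": "nginx"})
b.create_pod("default", "web", {"image": "nginx"})  # same name, no conflict
sc = SuperCluster()
sync([a, b], sc)
print(sorted(sc.pods))  # [('tenant-a-default', 'web'), ('tenant-b-default', 'web')]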
Systems for processing big data, e.g., Hadoop, Spark, and massively parallel databases, need to run workloads on behalf of multiple tenants simultaneously. The abundant disk-based storage in these systems is usually complemented by a smaller, but much faster, cache. Cache is a precious resource: tenants who get to use the cache can see two orders of magnitude performance improvement. Cache is also a limited, and hence shared, resource: unlike a resource such as a CPU core, which can be used by only one tenant at a time, a cached data item can be accessed by multiple tenants at the same time. Cache, therefore, has to be shared by a multi-tenancy-aware policy across tenants, each having a unique set of priorities and workload characteristics. In this paper, we develop cache allocation strategies that speed up the overall workload while being fair to each tenant. We build a novel fairness model targeted at the shared-resource setting that incorporates not only the standard concepts of Pareto efficiency and sharing incentive, but also defines envy freeness via the notion of the core from cooperative game theory. Our cache management platform, ROBUS, uses randomization over small time batches, and we develop a proportionally fair allocation mechanism that satisfies the core property in expectation. We show that this algorithm and related fair algorithms can be approximated to arbitrary precision in polynomial time. We evaluate these algorithms on a ROBUS prototype implemented on Spark, with the RDD store used as cache. Our evaluation on a synthetically generated industry-standard workload shows that our algorithms provide a speedup close to performance-optimal algorithms while guaranteeing fairness across tenants.
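The fairness machinery in ROBUS is more involved than an abstract can convey; as a minimal sketch under simplifying assumptions (per-tenant utility linear in cache share and fixed priority weights, with all names invented here), proportional fairness reduces to weight-proportional shares, applied batch by batch with randomized selection:

import random

def proportional_shares(capacity, weights):
    # With utility linear in cache share, maximizing sum_i w_i * log(x_i)
    # subject to sum_i x_i = capacity gives x_i = capacity * w_i / sum(w).
    total = sum(weights.values())
    return {t: capacity * w / total for t, w in weights.items()}

def allocate_batch(capacity, weights, requests):
    """One small time batch: give each tenant its proportionally fair
    share and fill it from that tenant's requested items in random
    order, spreading rounding slack fairly in expectation."""
    shares = proportional_shares(capacity, weights)
    cached = {}
    for tenant, items in requests.items():
        picks = items[:]
        random.shuffle(picks)                  # randomization per batch
        cached[tenant] = picks[:int(shares[tenant])]
    return cached

weights = {"A": 2, "B": 1}                     # tenant priorities (made up)
requests = {"A": list(range(10)), "B": list(range(10))}
print({t: len(v) for t, v in allocate_batch(9, weights, requests).items()})
# -> {'A': 6, 'B': 3}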
Jinhwan Choi, Yu Gu, Jinoh Kim (2019)
Caches are an important component of modern computing systems, given their significant impact on performance. In particular, caches play a key role in the cloud due to the nature of large-scale, data-intensive processing. One of the key challenges for cloud providers is how to share the caching capacity among tenants, each of whom often requires a different degree of quality of service (QoS) with respect to data access performance. The invariant is that individual tenants' QoS requirements should be satisfied while cache usage is optimized in a system-wide manner. In this paper, we introduce a learning-based approach for dynamic cache management in a cloud, based on estimating a tenant's data access pattern and predicting cache performance for the access pattern in question. We consider a variety of probability distributions to estimate the data access pattern, and we examine a set of learning-based regression techniques to predict the cache hit rate for that pattern. The predicted cache hit rate is then used to decide whether reallocating cache space is needed to meet the tenant's QoS requirement. Our experimental results with an extensive set of synthetic traces and the YCSB benchmark show that the proposed method consistently optimizes the cache space while satisfying the QoS requirement.
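As a minimal sketch of the pipeline this abstract describes, under assumptions of my own (Zipf access patterns, an idealized cache that holds the most popular items, and plain polynomial-feature least squares standing in for whatever regressors the paper evaluates; all names are hypothetical):

import numpy as np

N = 10_000                                   # catalog size (assumed)

def ideal_hit_rate(skew, cache_items):
    # Under a Zipf(skew) access pattern, an idealized cache holding the
    # top-k most popular items hits with probability H_k / H_N.
    p = 1.0 / np.arange(1, N + 1) ** skew
    p /= p.sum()
    return p[:cache_items].sum()

# Build a training set over (skew, cache fraction) -> hit rate.
rng = np.random.default_rng(0)
skews = rng.uniform(0.5, 1.5, 200)
fracs = rng.uniform(0.01, 0.5, 200)
y = np.array([ideal_hit_rate(s, int(f * N)) for s, f in zip(skews, fracs)])

# Quadratic-feature least squares stands in for the paper's regressors.
X = np.column_stack([np.ones_like(skews), skews, fracs,
                     skews * fracs, skews**2, fracs**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_hit_rate(skew, frac):
    x = np.array([1, skew, frac, skew * frac, skew**2, frac**2])
    return float(x @ coef)

# Reallocation decision: grow the tenant's share until the predicted
# hit rate meets its QoS target (toy version of the decision loop).
target, skew, frac = 0.6, 1.1, 0.05
while predict_hit_rate(skew, frac) < target and frac < 0.5:
    frac += 0.01
print(f"allocate ~{frac:.2f} of cache to meet hit-rate target {target}")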
With much large-scale scientific equipment being constructed and put into use, astronomy has stepped into the big data era. New methods and infrastructure for big data processing have become a requirement for many astronomers. Cloud computing, Map/Reduce, Hadoop, Spark, and many other new technologies have sprung up in recent years. Compared to high-performance computing (HPC), data is at the center of these new technologies, so a new computing architecture and infrastructure is necessary, one that can be shared by both HPC and big data processing. Based on the Astronomy Cloud project of the Chinese Virtual Observatory (China-VO), we have made much effort to optimize the design of the hybrid computing platform, including the hardware architecture, cluster management, and job and resource scheduling.
A hybrid cloud is an integration of resources between private and public clouds. It enables users to horizontally scale their on-premises infrastructure up to public clouds in order to improve performance and cut up-front investment costs. This model of application deployment is called cloud bursting, and it allows data-intensive applications, especially distributed database systems, to benefit from both private and public clouds. In this work, we present an automated implementation of a hybrid cloud using (i) a robust, zero-cost, Linux-based VPN to make a secure connection between private and public clouds, and (ii) Terraform as a software tool to deploy infrastructure resources based on the requirements of the hybrid cloud. We also evaluate the performance of cloud bursting for six modern distributed database systems on a hybrid cloud spanning a local OpenStack deployment and Microsoft Azure. Our results reveal that MongoDB and MySQL Cluster perform efficiently, in terms of throughput and operation latency, when they burst into a public cloud to supply their resources. In contrast, the performance of Cassandra, Riak, Redis, and CouchDB degrades if they significantly leverage their required resources via cloud bursting.
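The paper's actual automation artifacts are not shown here; as a small illustrative sketch (the directory layout and variable name are assumptions, not the authors'), the standard Terraform CLI can be driven from Python to bring up both sides of such a hybrid deployment, assuming terraform is installed and each cloud has its own Terraform root module:

import subprocess

def terraform(workdir, *args):
    # Thin wrapper over the standard Terraform CLI (init/apply).
    subprocess.run(["terraform", *args], cwd=workdir, check=True)

# Hypothetical layout: one Terraform root per cloud, with the VPN
# peer network passed in as a variable so the two sides can connect.
for workdir in ("infra/openstack-private", "infra/azure-public"):
    terraform(workdir, "init")
    terraform(workdir, "apply", "-auto-approve",
              "-var", "vpn_peer_cidr=10.0.0.0/16")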