
Cache policies for cloud-based systems: To keep or not to keep

Posted by Christoph Neumann
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





In this paper, we study cache policies for cloud-based caching. Cloud-based caching uses cloud storage services such as Amazon S3 as a cache for data items that would have been recomputed otherwise. Cloud-based caching departs from classical caching: cloud resources are potentially infinite and only paid when used, while classical caching relies on a fixed storage capacity and its main monetary cost comes from the initial investment. To deal with this new context, we design and evaluate a new caching policy that minimizes the overall cost of a cloud-based system. The policy takes into account the frequency of consumption of an item and the cloud cost model. We show that this policy is easier to operate, that it scales with the demand and that it outperforms classical policies managing a fixed capacity.
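To illustrate the kind of cost-based keep-or-drop decision the abstract describes, the sketch below keeps an item cached in cloud storage only while the expected storage cost until its next access stays below the cost of recomputing it. This is a minimal sketch of the general idea, not the authors' actual policy: the class names, the S3-style prices, and the assumption that the expected gap between accesses is roughly the inverse of the observed access frequency are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact policy): decide whether to keep an
# item in a cloud cache (e.g. Amazon S3) by comparing the expected storage cost
# until its next access against the cost of recomputing it on a miss.
# Prices, field names, and the 1/frequency inter-access estimate are assumptions.

from dataclasses import dataclass


@dataclass
class CloudCostModel:
    storage_cost_per_gb_month: float   # storage price, e.g. an S3-like rate
    compute_cost_per_recompute: float  # cost of regenerating the item on a miss


@dataclass
class CachedItem:
    size_gb: float
    accesses_per_month: float  # observed consumption frequency


def keep_item(item: CachedItem, costs: CloudCostModel) -> bool:
    """Keep the item if storing it until its next expected access is cheaper
    than recomputing it. With an average of f accesses per month, the expected
    time to the next access is roughly 1/f months."""
    if item.accesses_per_month <= 0:
        return False  # never re-read: keeping it only accrues storage cost
    expected_gap_months = 1.0 / item.accesses_per_month
    storage_cost_until_next_access = (
        item.size_gb * costs.storage_cost_per_gb_month * expected_gap_months
    )
    return storage_cost_until_next_access < costs.compute_cost_per_recompute


if __name__ == "__main__":
    costs = CloudCostModel(storage_cost_per_gb_month=0.023,
                           compute_cost_per_recompute=0.05)
    hot = CachedItem(size_gb=2.0, accesses_per_month=30.0)   # read about daily
    cold = CachedItem(size_gb=2.0, accesses_per_month=0.2)   # read twice a year
    print(keep_item(hot, costs))   # True: storage until the next read is cheap
    print(keep_item(cold, costs))  # False: cheaper to recompute on demand
```

Under such a rule, frequently consumed items stay cached because their per-access storage cost is negligible, while rarely consumed items are dropped and recomputed on demand, which mirrors the pay-per-use reasoning in the abstract.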




Read also

For models of concurrent and distributed systems, it is important and also challenging to establish correctness in terms of safety and/or liveness properties. Theories of distributed systems consider equivalences fundamental, since they (1) preserve desirable correctness characteristics and (2) often allow for component substitution, making compositional reasoning feasible. Modeling distributed systems often requires abstraction by means of nondeterminism, which induces unintended behaviors: infinite executions in which one nondeterministic choice is recurrently resolved, each time neglecting a single alternative. Such executions are considered unrealistic or highly improbable. Fairness assumptions are commonly used to filter system behaviors, thereby distinguishing between realistic and unrealistic executions. This allows for key arguments in correctness proofs of distributed systems that would not be possible otherwise. Our contribution is an equivalence spectrum in which fairness assumptions are preserved. The identified equivalences allow for (compositional) reasoning about correctness incorporating fairness assumptions.
The presence of objects that are confusingly similar to the tracked target poses a fundamental challenge in appearance-based visual tracking. Such distractor objects are easily misclassified as the target itself, leading to eventual tracking failure. While most methods strive to suppress distractors through more powerful appearance models, we take an alternative approach. We propose to keep track of distractor objects in order to continue tracking the target. To this end, we introduce a learned association network, allowing us to propagate the identities of all target candidates from frame-to-frame. To tackle the problem of lacking ground-truth correspondences between distractor objects in visual tracking, we propose a training strategy that combines partial annotations with self-supervision. We conduct comprehensive experimental validation and analysis of our approach on several challenging datasets. Our tracker sets a new state-of-the-art on six benchmarks, achieving an AUC score of 67.1% on LaSOT and a +5.8% absolute gain on the OxUvA long-term dataset.
Performance of neural models for named entity recognition degrades over time, becoming stale. This degradation is due to temporal drift, the change in our target variable's statistical properties over time. This issue is especially problematic for social media data, where topics change rapidly. In order to mitigate the problem, data annotation and retraining of models is common. Despite its usefulness, this process is expensive and time-consuming, which motivates new research on efficient model updating. In this paper, we propose an intuitive approach to measure the potential trendiness of tweets and use this metric to select the most informative instances to use for training. We conduct experiments on three state-of-the-art models on the Temporal Twitter Dataset. Our approach shows larger increases in prediction accuracy with less training data than the alternatives, making it an attractive, practical solution.
The Internet of Things (IoT) is an Internet-based environment of connected devices and applications. IoT creates an environment where physical devices and sensors are seamlessly combined into information nodes to deliver innovative and smart services that make people's lives easier and more efficient. The main objective of the IoT device network is to generate data, which the data analysis process converts into useful information; it also provides useful resources to end users. IoT resource management is a key challenge in ensuring the quality of the end-user experience. Many IoT smart devices and technologies, such as sensors, actuators, RFID, UMTS, 3G, and GSM, are used to build IoT networks. Cloud computing plays an important role in deploying these networks by providing physical resources as virtualized resources, consisting of memory, computation power, network bandwidth, virtualized systems, and device drivers, on a secure, pay-per-use basis. One of the major concerns of a cloud-based IoT environment is resource management, which ensures efficient resource utilization, load balancing, fewer SLA violations, and better system performance through reduced operational cost and energy consumption. Many researchers have proposed IoT-based resource management techniques. The focus of this paper is to investigate these proposed resource allocation techniques and find which parameters must be considered to improve resource allocation for IoT networks. The paper also uncovers the challenges and issues of cloud-based resource allocation for the IoT environment.
Training sparse networks to converge to the same performance as dense neural architectures has proven to be elusive. Recent work suggests that initialization is the key. However, while this direction of research has had some success, focusing on initialization alone appears to be inadequate. In this paper, we take a broader view of training sparse networks and consider the role of regularization, optimization, and architecture choices on sparse models. We propose a simple experimental framework, Same Capacity Sparse vs Dense Comparison (SC-SDC), that allows for a fair comparison of sparse and dense networks. Furthermore, we propose a new measure of gradient flow, Effective Gradient Flow (EGF), that better correlates to performance in sparse networks. Using top-line metrics, SC-SDC and EGF, we show that default choices of optimizers, activation functions and regularizers used for dense networks can disadvantage sparse networks. Based upon these findings, we show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime. Our work suggests that initialization is only one piece of the puzzle and taking a wider view of tailoring optimization to sparse networks yields promising results.