
IoTDataBench: Extending TPCx-IoT for Compression and Scalability

Posted by Yuqing Zhu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We present a record-breaking result and lessons learned in practicing TPCx-IoT benchmarking for a real-world use case. We find that more system characteristics need to be benchmarked for the benchmark to apply to real-world use cases. We introduce an extension to the TPCx-IoT benchmark covering fundamental requirements of time-series data management for IoT infrastructure, which we characterize as data compression and system scalability. To evaluate these two important features of IoT databases, we propose IoTDataBench and update four aspects of TPCx-IoT: data generation, workloads, metrics, and test procedures. Preliminary evaluation results show that systems which fail to compress data effectively or scale flexibly are penalized by the redesigned metrics, while systems with high compression ratios and linear scalability are rewarded in the final metrics. Such systems can scale up computing resources on demand and thus save dollar costs.
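The abstract does not spell out the redesigned metric formulas, so the following is only an illustrative sketch of the idea: a TPCx-IoT-style throughput figure adjusted by the measured compression ratio, plus a factor expressing how close to linear the system scales. All function names and numbers below are invented for illustration, not taken from IoTDataBench.

```python
# Illustrative sketch only: the exact IoTDataBench metric definitions are not
# reproduced in the abstract. This assumes a TPCx-IoT-style performance figure
# (ingested records per second) adjusted by the measured compression ratio,
# plus a factor that measures how closely throughput scales with added nodes.

def compression_adjusted_throughput(records_per_sec: float,
                                    raw_bytes: float,
                                    stored_bytes: float) -> float:
    """Reward systems that store the same data in fewer bytes."""
    compression_ratio = raw_bytes / stored_bytes   # > 1 means effective compression
    return records_per_sec * compression_ratio

def scalability_factor(throughput_by_nodes: dict[int, float]) -> float:
    """Ratio of observed speedup to ideal linear speedup (1.0 = perfectly linear)."""
    nodes = sorted(throughput_by_nodes)
    base_n, max_n = nodes[0], nodes[-1]
    ideal = throughput_by_nodes[base_n] * (max_n / base_n)
    return throughput_by_nodes[max_n] / ideal

if __name__ == "__main__":
    perf = compression_adjusted_throughput(500_000, raw_bytes=1e12, stored_bytes=2e11)
    scale = scalability_factor({4: 500_000, 8: 950_000, 16: 1_800_000})
    print(f"adjusted throughput: {perf:,.0f} rec/s, scalability factor: {scale:.2f}")
```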



Read also

We show how to quantify scalability with the Universal Scalability Law (USL) by applying it to performance measurements of memcached, J2EE, and Weblogic on multi-core platforms. Since commercial multicores are essentially black-boxes, the accessible performance gains are primarily available at the application level. We also demonstrate how our methodology can identify the most significant performance tuning opportunities to optimize application scalability, as well as providing an easy means for exploring other aspects of the multi-core system design space.
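The USL referenced above is Gunther's standard model, C(N) = N / (1 + σ(N−1) + κN(N−1)), where σ captures contention and κ captures coherency delay. Below is a minimal fitting sketch; the measurement values are invented and the paper's actual datasets are not reproduced here.

```python
# Sketch of quantifying scalability with the Universal Scalability Law:
#   C(N) = N / (1 + sigma*(N - 1) + kappa*N*(N - 1))
# where sigma models contention and kappa models coherency delay.

import numpy as np
from scipy.optimize import curve_fit

def usl(n, sigma, kappa):
    return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1))

# Hypothetical throughput measurements, normalized to single-core throughput.
cores   = np.array([1, 2, 4, 8, 16, 32])
speedup = np.array([1.0, 1.9, 3.6, 6.2, 9.5, 11.3])

(sigma, kappa), _ = curve_fit(usl, cores, speedup, p0=(0.05, 0.001), bounds=(0, 1))
print(f"sigma={sigma:.4f} kappa={kappa:.5f}")

# The peak of C(N) suggests where adding cores stops paying off.
n_star = np.sqrt((1 - sigma) / kappa)
print(f"estimated optimal concurrency: ~{n_star:.0f} cores")
```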
Eliciting scalability requirements during agile software development is complicated and poorly described in previous research. This article presents a lightweight artifact for eliciting scalability requirements during agile software development: the ScrumScale model. The ScrumScale model is a simple spreadsheet. The scalability concepts underlying the ScrumScale model are clarified in this design science research, which also utilizes coordination theory. This paper describes the open banking case study, where a legacy banking system becomes open, which challenges the scalability of the legacy system. The first step in understanding this challenge is to elicit the new scalability requirements. In the open banking case study, key stakeholders from TietoEVRY spent 55 hours eliciting TietoEVRY's open banking projects' scalability requirements. According to TietoEVRY, the ScrumScale model provided a systematic way of producing scalability requirements. For TietoEVRY, the scalability concepts behind the ScrumScale model also offered significant advantages in dialogues with other stakeholders.
Serverless computing has grown rapidly following the launch of Amazon's Lambda platform. Function-as-a-Service (FaaS), a key enabler of serverless computing, allows an application to be decomposed into simple, standalone functions that are executed on a FaaS platform. The FaaS platform is responsible for deploying and provisioning resources to the functions. Several of today's cloud applications spread over heterogeneous connected computing resources and are highly dynamic in their structure and resource requirements. However, FaaS platforms are limited to homogeneous clusters and homogeneous functions and do not account for the data-access behavior of functions before scheduling them. We introduce an extension of FaaS to heterogeneous clusters and heterogeneous functions through a network of distributed heterogeneous target platforms called the Function Delivery Network (FDN). A target platform is a combination of a cluster of homogeneous nodes and a FaaS platform on top of it. The FDN provides Function-Delivery-as-a-Service (FDaaS), delivering each function to the right target platform. We showcase the opportunities the FDN offers in fulfilling two objectives, Service Level Objective (SLO) requirements and energy efficiency, such as varied target-platform characteristics, collaborative execution between multiple target platforms, and data localization, by evaluating five distributed target platforms using FDNInspector, a tool we developed for benchmarking distributed target platforms. In our evaluation, scheduling functions on an edge target platform reduced the overall energy consumption by 17x without violating the SLO requirements, compared to scheduling on a high-end target platform.
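A hedged sketch of the scheduling idea described above, not the FDN implementation: pick a target platform for a function so that its SLO is met, preferring platforms that already hold the input data and consume less energy. All class and field names are invented for illustration.

```python
# Illustrative sketch of SLO- and energy-aware function placement across
# heterogeneous target platforms (cluster + FaaS platform). Names are invented.

from dataclasses import dataclass

@dataclass
class TargetPlatform:
    name: str
    est_latency_ms: float           # predicted execution latency for this function
    energy_per_invocation_j: float  # estimated energy per invocation
    holds_input_data: bool          # data locality, as emphasized by the FDN

def pick_platform(platforms: list[TargetPlatform], slo_ms: float) -> TargetPlatform:
    feasible = [p for p in platforms if p.est_latency_ms <= slo_ms]
    if not feasible:
        raise RuntimeError("no platform satisfies the SLO")
    # Prefer platforms that already hold the input data, then lowest energy.
    return min(feasible, key=lambda p: (not p.holds_input_data,
                                        p.energy_per_invocation_j))

platforms = [
    TargetPlatform("edge-cluster", est_latency_ms=80, energy_per_invocation_j=0.4, holds_input_data=True),
    TargetPlatform("high-end-dc",  est_latency_ms=20, energy_per_invocation_j=6.8, holds_input_data=False),
]
print(pick_platform(platforms, slo_ms=100).name)   # -> edge-cluster
```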
Clock configuration within constrained general-purpose microcontrollers plays a key role in tuning the performance, power consumption, and timing accuracy of applications in the Internet of Things (IoT). Subsystems governing the underlying clock tree must nonetheless cope with a huge parameter space, complex dependencies, and dynamic constraints. Manufacturers expose the underlying functions in very diverse ways, which leads to specialized implementations of low portability. In this paper, we propose FlexClock, an approach for generic online clock reconfiguration on constrained IoT devices. We argue that the (costly) generic clock configuration of general-purpose computers and powerful mobile devices needs to slim down for the lower end of the device spectrum. In search of a generalized solution, we identify recurring patterns and building blocks, which we use to decompose clock trees into independent, reusable components. With this segmentation we derive an abstract representation of vendor-specific clock trees, which can then be dynamically reconfigured at runtime. We evaluate our implementation on common hardware. Our measurements demonstrate how FlexClock significantly improves peak power consumption and energy efficiency by enabling dynamic voltage and frequency scaling (DVFS) in a platform-agnostic way.
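A hedged sketch of the decomposition idea, not FlexClock's API: model a clock path as a few reusable components (source, PLL multiplier, divider) and search their parameter space for the output frequency closest to a request. Component names and parameter ranges here are invented.

```python
# Illustrative sketch of treating a vendor-specific clock tree as reusable
# building blocks so a generic routine can pick a configuration at runtime.

from dataclasses import dataclass
from itertools import product

@dataclass
class ClockPath:
    source_hz: int
    pll_mult: int
    divider: int

    @property
    def out_hz(self) -> float:
        return self.source_hz * self.pll_mult / self.divider

def configure(source_hz: int, target_hz: int,
              mults=range(2, 17), divs=(1, 2, 4, 8)) -> ClockPath:
    """Pick the multiplier/divider pair whose output is closest to target_hz."""
    candidates = (ClockPath(source_hz, m, d) for m, d in product(mults, divs))
    return min(candidates, key=lambda p: abs(p.out_hz - target_hz))

best = configure(source_hz=8_000_000, target_hz=48_000_000)
print(f"PLL x{best.pll_mult} / {best.divider} -> {best.out_hz / 1e6:.1f} MHz")
```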
We present a joint source-channel multiple description (JSC-MD) framework for resource-constrained network communications (e.g., sensor networks), in which one or many deprived encoders communicate a Markov source against bit errors and erasure errors to many heterogeneous decoders, some powerful and some deprived. To keep the encoder complexity at a minimum, the source is coded into K descriptions by a simple multiple description quantizer (MDQ) with neither entropy nor channel coding. The code diversity of the MDQ and the path diversity of the network are exploited by decoders to correct transmission errors and improve coding efficiency. A key design objective is resource scalability: powerful nodes in the network can perform JSC-MD distributed estimation/decoding under the criteria of maximum a posteriori probability (MAP) or minimum mean-square error (MMSE), while primitive nodes resort to simpler MD decoding, all working with the same MDQ code. The application of JSC-MD to distributed estimation of hidden Markov models in a sensor network is demonstrated. The proposed JSC-MD MAP estimator is a longest-path algorithm on a weighted directed acyclic graph, while the JSC-MD MMSE decoder is an extension of the well-known forward-backward algorithm to multiple descriptions. Both algorithms simultaneously exploit the source memory, the redundancy of the fixed-rate MDQ, and the inter-description correlations. They outperform existing hard-decision MDQ decoders by large margins (up to 8 dB). For Gaussian Markov sources, the complexity of JSC-MD distributed MAP sequence estimation can be made as low as that of typical single-description Viterbi-type algorithms.
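The abstract states that the MAP estimator reduces to a longest path in a weighted DAG. The sketch below shows only that generic dynamic-programming step over a topological order; the construction of the trellis from the MDQ descriptions and the Markov source model is not shown, and the toy weights are invented.

```python
# Generic longest path in a weighted DAG, the algorithmic core the abstract
# attributes to the JSC-MD MAP estimator. Edge weights stand in for per-stage
# log-posterior scores; the actual trellis construction is not reproduced here.

import math
from collections import defaultdict
from graphlib import TopologicalSorter

def longest_path_weight(edges: dict[tuple[str, str], float],
                        source: str, sink: str) -> float:
    """edges maps (u, v) -> weight; returns the maximum total weight from source to sink."""
    preds = defaultdict(set)
    nodes = {source, sink}
    for u, v in edges:
        preds[v].add(u)
        nodes.update((u, v))
    best = {n: -math.inf for n in nodes}
    best[source] = 0.0
    # Process nodes in topological order so every predecessor is finalized first.
    for node in TopologicalSorter({n: preds[n] for n in nodes}).static_order():
        for p in preds[node]:
            if best[p] > -math.inf:
                best[node] = max(best[node], best[p] + edges[(p, node)])
    return best[sink]

# Toy trellis with two competing paths from "s" to "t".
edges = {("s", "a"): 1.2, ("s", "b"): 0.4,
         ("a", "t"): 0.7, ("b", "t"): 2.0}
print(longest_path_weight(edges, "s", "t"))   # -> 2.4
```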