
An analytical framework for data stream mining techniques based on challenges and requirements

Posted by: Reza Keyvan
Publication date: 2011
Research field: Informatics Engineering
Paper language: English





A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Real-time surveillance systems, telecommunication systems, sensor networks and other dynamic environments are such examples. The pressing need to turn such data into useful information and knowledge drives the development of systems, algorithms and frameworks that address streaming challenges. The storage, querying and mining of such data sets are highly computationally challenging tasks. Mining data streams is concerned with extracting knowledge structures, represented as models and patterns, from non-stopping streams of information. Generally, the two main challenges are designing fast mining methods for data streams and promptly detecting changing concepts and data distributions, owing to the highly dynamic nature of data streams. The goal of this article is to analyze and classify the application of diverse data mining techniques to the different challenges of data stream mining. In this paper, we present the theoretical foundations of data stream analysis and propose an analytical framework for data stream mining techniques.
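To make the two challenges concrete, the following is a minimal illustrative sketch, not the framework proposed in the paper: a single-pass, bounded-memory detector that compares the mean of the most recent window of values against a reference window and signals concept drift when the gap exceeds a threshold. The class name, window size and threshold are assumptions introduced for the example.

```python
from collections import deque

class WindowDriftDetector:
    """Minimal sliding-window drift check (illustrative, not the paper's method).

    Keeps a reference window and a current window of recent values and flags
    drift when their means differ by more than `threshold`.
    """

    def __init__(self, window_size=100, threshold=0.5):
        self.window_size = window_size
        self.threshold = threshold
        self.reference = deque(maxlen=window_size)
        self.current = deque(maxlen=window_size)

    def update(self, value):
        # Single pass over the stream: each element is seen once and only
        # O(window_size) memory is retained, as stream mining requires.
        if len(self.reference) < self.window_size:
            self.reference.append(value)
            return False
        self.current.append(value)
        if len(self.current) < self.window_size:
            return False
        ref_mean = sum(self.reference) / len(self.reference)
        cur_mean = sum(self.current) / len(self.current)
        if abs(cur_mean - ref_mean) > self.threshold:
            # Concept drift: adopt the recent window as the new reference.
            self.reference = deque(self.current, maxlen=self.window_size)
            self.current.clear()
            return True
        return False

# Example: the data distribution shifts halfway through the stream.
detector = WindowDriftDetector(window_size=50, threshold=0.5)
stream = [0.0] * 200 + [1.0] * 200
drifts = [i for i, x in enumerate(stream) if detector.update(x)]
print(drifts)  # drift reported shortly after position 200
```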




Read also

In time-domain astronomy, we need to use relational databases to manage star catalog data. With the development of sky survey technology, star catalog data are growing larger and being generated faster. So, in this paper, we make a systematic and comprehensive introduction to processing data in time-domain astronomy, and detail valuable research questions. Then, we list candidate systems usually used in astronomy and point out their advantages and disadvantages. In addition, we present the key techniques needed to deal with astronomical data. Finally, we summarize the challenges faced by the design of our database prototype.
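As a hedged illustration of the workload described above (not code from that paper), the sketch below stores a tiny star catalog in an SQLite relational table with a positional index and runs a simple box query on right ascension and declination; the schema, column names and values are invented for the example.

```python
import sqlite3

# Illustrative star catalog schema (assumed names, not from the paper).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE star_catalog (
        source_id INTEGER PRIMARY KEY,
        ra        REAL NOT NULL,   -- right ascension, degrees
        dec       REAL NOT NULL,   -- declination, degrees
        mag       REAL,            -- apparent magnitude
        epoch     REAL             -- observation epoch (MJD)
    )
""")
# An index on (ra, dec) supports the positional range scans that
# time-domain cross-matching relies on.
conn.execute("CREATE INDEX idx_radec ON star_catalog (ra, dec)")

rows = [
    (1, 10.684, 41.269, 3.4, 59000.1),
    (2, 10.690, 41.270, 5.1, 59000.1),
    (3, 83.822, -5.391, 0.4, 59000.2),
]
conn.executemany("INSERT INTO star_catalog VALUES (?, ?, ?, ?, ?)", rows)

# Box query around (ra, dec) = (10.687, 41.269), +/- 0.01 degrees.
cur = conn.execute(
    "SELECT source_id, mag FROM star_catalog "
    "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
    (10.677, 10.697, 41.259, 41.279),
)
print(cur.fetchall())  # -> [(1, 3.4), (2, 5.1)]
```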
In this paper, we propose a plugin-based framework for RDF stream processing named PRSP. Within this framework, we can employ SPARQL query engines to process C-SPARQL queries while maintaining the high performance of those engines in a simple way. Taking advantage of PRSP, we can process large-scale RDF streams in a distributed context via distributed SPARQL engines. Besides, we can evaluate the performance and correctness of existing SPARQL query engines in handling RDF streams in a unified way, which extends their evaluation from static RDF (i.e., RDF graphs) to dynamic RDF (i.e., RDF streams). Finally, within PRSP, we experimentally evaluate correctness and performance on YABench. The experiments show that PRSP can still maintain the high performance of those engines in RDF stream processing, although there are some slight differences among them.
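The windowed-query idea behind such frameworks can be sketched as follows; this is an assumed illustration, not PRSP itself and not C-SPARQL syntax: triples whose timestamps fall inside a window are loaded into a snapshot graph with rdflib, and an ordinary SPARQL query is evaluated over that snapshot. The stream contents and namespace are invented.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

# A toy RDF stream: (subject, predicate, object, timestamp) tuples.
stream = [
    (EX.sensor1, EX.reading, Literal(21.5), 1),
    (EX.sensor2, EX.reading, Literal(19.0), 2),
    (EX.sensor1, EX.reading, Literal(22.1), 5),
    (EX.sensor1, EX.reading, Literal(23.4), 9),
]

def query_window(stream, start, end):
    """Load triples whose timestamp lies in [start, end) into a snapshot
    graph and evaluate an ordinary SPARQL query over it."""
    g = Graph()
    for s, p, o, ts in stream:
        if start <= ts < end:
            g.add((s, p, o))
    q = """
        PREFIX ex: <http://example.org/>
        SELECT ?sensor (COUNT(?value) AS ?n)
        WHERE { ?sensor ex:reading ?value }
        GROUP BY ?sensor
    """
    return {str(row.sensor): int(row.n) for row in g.query(q)}

# Tumbling windows of width 5 time units.
print(query_window(stream, 0, 5))   # one reading each for sensor1 and sensor2
print(query_window(stream, 5, 10))  # two readings for sensor1
```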
We propose hMDAP, a hybrid framework for large-scale data analytical processing on Spark, to support multi-paradigm processing (including OLAP, machine learning, and graph analysis) in distributed environments. The framework features a three-layer data process module and a business process module that controls the former. We will demonstrate the strength of hMDAP using real-world traffic scenarios.
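As a rough, assumed illustration of mixing analysis paradigms on Spark (this is not hMDAP's API), the PySpark sketch below runs an OLAP-style aggregation and a small MLlib k-means clustering job over the same toy traffic DataFrame; the data and column names are invented.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("multi-paradigm-sketch").getOrCreate()

# Toy traffic records: (road segment, hour of day, average speed in km/h).
traffic = spark.createDataFrame(
    [("A1", 8, 35.0), ("A1", 9, 32.0), ("B2", 8, 78.0), ("B2", 9, 81.0)],
    ["segment", "hour", "speed"],
)

# OLAP-style paradigm: aggregate average speed per road segment.
traffic.groupBy("segment").avg("speed").show()

# Machine-learning paradigm: cluster records by (hour, speed) with k-means.
assembler = VectorAssembler(inputCols=["hour", "speed"], outputCol="features")
vectors = assembler.transform(traffic)
model = KMeans(k=2, seed=1).fit(vectors)
model.transform(vectors).select("segment", "prediction").show()

spark.stop()
```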
Hadj Mahboubi (2008)
With the multiplication of XML data sources, many XML data warehouse models have been proposed to handle data heterogeneity and complexity in a way relational data warehouses fail to achieve. However, XML-native database systems currently suffer from limited performance, both in terms of manageable data volume and response time. Fragmentation helps address both of these issues. Derived horizontal fragmentation is typically used in relational data warehouses and can definitely be adapted to the XML context. However, the number of fragments produced by classical algorithms is difficult to control. In this paper, we propose a k-means-based fragmentation approach that allows the number of fragments to be controlled through its $k$ parameter. We experimentally compare its efficiency to classical derived horizontal fragmentation algorithms adapted to XML data warehouses and show its superiority.
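A hedged sketch of the general idea (not the authors' algorithm or cost model): encode each workload predicate as a small feature vector, cluster the predicates into exactly $k$ groups with k-means, and let each cluster define one derived horizontal fragment, so the number of fragments is fixed by $k$. The feature encoding and values below are invented for the illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed workload description: each row encodes one query predicate as a
# small feature vector (which dimension attributes it touches and its
# selectivity). Values are made up for the illustration.
predicates = np.array([
    # [uses_time, uses_product, uses_region, selectivity]
    [1, 0, 0, 0.10],
    [1, 0, 0, 0.12],
    [0, 1, 0, 0.40],
    [0, 1, 1, 0.35],
    [0, 0, 1, 0.80],
    [0, 0, 1, 0.75],
])

k = 3  # the number of fragments is controlled directly by k
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(predicates)

# Each cluster of predicates defines one derived horizontal fragment.
for fragment_id in range(k):
    members = np.where(labels == fragment_id)[0].tolist()
    print(f"fragment {fragment_id}: predicates {members}")
```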
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefits of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
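The coupling of views and indexes under one shared storage budget can be illustrated with a small greedy benefit-per-size selection; the candidates, benefit figures, and sizes below are invented for the sketch, and the paper's own data-mining candidate generation and cost models are not reproduced.

```python
# Candidate structures: (name, kind, estimated benefit, storage size in MB).
# Numbers are invented for the illustration.
candidates = [
    ("v_sales_by_month", "view", 120.0, 300),
    ("v_sales_by_region", "view", 90.0, 250),
    ("idx_customer_id", "index", 60.0, 80),
    ("idx_product_id", "index", 45.0, 60),
    ("v_returns_by_store", "view", 30.0, 200),
]

def select(candidates, storage_budget_mb):
    """Greedy selection by benefit/size ratio over a shared storage budget,
    so views and indexes compete for the same space."""
    chosen, used = [], 0
    ranked = sorted(candidates, key=lambda c: c[2] / c[3], reverse=True)
    for name, kind, benefit, size in ranked:
        if used + size <= storage_budget_mb:
            chosen.append((name, kind))
            used += size
    return chosen, used

selection, used = select(candidates, storage_budget_mb=500)
print(selection)  # both indexes win on ratio; one view still fits
print(f"storage used: {used} MB")
```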