
A Parallel Random Forest Algorithm for Big Data in a Spark Cloud Computing Environment

Added by Jianguo Chen
Publication date: 2018
Research language: English





With the emergence of the big data age, the issue of how to obtain valuable knowledge from a dataset efficiently and accurately has attracted increasing attention from both academia and industry. This paper presents a Parallel Random Forest (PRF) algorithm for big data on the Apache Spark platform. The PRF algorithm is optimized through a hybrid approach that combines data-parallel and task-parallel optimization. From the perspective of data-parallel optimization, a vertical data-partitioning method is performed to reduce the data communication cost effectively, and a data-multiplexing method is performed to allow the training dataset to be reused and to diminish the volume of data. From the perspective of task-parallel optimization, a dual parallel approach is carried out in the training process of RF, and a task Directed Acyclic Graph (DAG) is created according to the parallel training process of PRF and the dependences among the Resilient Distributed Dataset (RDD) objects. Then, different task schedulers are invoked for the tasks in the DAG. Moreover, to improve the algorithm's accuracy for large, high-dimensional, and noisy data, we perform a dimension-reduction approach in the training process and a weighted voting approach in the prediction process prior to parallelization. Extensive experimental results indicate the superiority and notable advantages of the PRF algorithm over the relevant algorithms implemented by Spark MLlib and in other studies in terms of classification accuracy, performance, and scalability.
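The abstract only outlines the optimizations at a high level. As a rough Scala/Spark sketch of two of the ideas it names, vertical data partitioning and weighted voting, and not the authors' actual PRF implementation, the following assumes a simple Sample record and hypothetical helper names (verticalPartition, weightedVote):

    import org.apache.spark.rdd.RDD

    // Hypothetical row type; the paper's concrete data structures are not given here.
    case class Sample(label: Double, features: Array[Double])

    // Vertical partitioning idea: reshape row-oriented samples into one record per
    // feature index, so split-gain statistics for a feature can be computed where
    // that feature's values live, instead of shuffling whole samples repeatedly.
    def verticalPartition(samples: RDD[Sample]): RDD[(Int, Array[(Double, Double)])] =
      samples
        .flatMap(s => s.features.zipWithIndex.map { case (v, j) => (j, (v, s.label)) })
        .groupByKey()            // gather all (value, label) pairs of one feature
        .mapValues(_.toArray)
        .cache()                 // cached so every tree reuses the same feature data

    // Weighted voting idea at prediction time: each tree contributes (prediction, weight),
    // e.g. weighted by its estimated accuracy; the class with the largest total weight wins.
    def weightedVote(votes: Seq[(Double, Double)]): Double =
      votes.groupBy { case (prediction, _) => prediction }
           .map { case (prediction, vs) => (prediction, vs.map(_._2).sum) }
           .maxBy { case (_, totalWeight) => totalWeight }
           ._1

This is only a sketch of the data layout and the voting rule under those assumptions; the paper additionally schedules the resulting training tasks through a DAG-aware task scheduler, which is not shown here.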



Related research

With the explosive increase of big data in industry and academic fields, it is necessary to apply large-scale data processing systems to analyze big data. Arguably, Spark is the state of the art among large-scale data computing systems today, owing to its desirable properties, including generality, fault tolerance, high performance of in-memory data processing, and scalability. Spark adopts a flexible Resilient Distributed Dataset (RDD) programming model with a set of provided transformation and action operators whose operating functions can be customized by users according to their applications. It was originally positioned as a fast and general data processing system, and a large body of research effort has since been devoted to making it more efficient (faster) and more general under various circumstances. In this survey, we aim to give a thorough review of the various kinds of optimization techniques for improving the generality and performance of Spark. We introduce the Spark programming model and computing system, discuss the pros and cons of Spark, and investigate and classify the various solution techniques in the literature. Moreover, we also introduce the various data management and processing systems, machine learning algorithms, and applications supported by Spark. Finally, we discuss the open issues and challenges for large-scale in-memory data processing with Spark.
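As a quick illustration of the transformation/action model the survey refers to, here is a minimal, self-contained Scala sketch; the application name, master setting, and object name are arbitrary choices for a local run, not anything prescribed by the survey:

    import org.apache.spark.sql.SparkSession

    object RddExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("rdd-example").master("local[*]").getOrCreate()
        val sc = spark.sparkContext

        val nums = sc.parallelize(1 to 1000, numSlices = 8)                // distributed dataset
        val evensSquared = nums.filter(_ % 2 == 0).map(n => n.toLong * n)  // transformations: lazy, build lineage
        val total = evensSquared.reduce(_ + _)                             // action: triggers job execution

        println(s"sum of squares of evens up to 1000 = $total")
        spark.stop()
      }
    }

Transformations such as filter and map only record lineage; work is scheduled and executed when an action such as reduce is called, which is the behavior the surveyed optimization techniques build on.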
The development of computing systems has long focused on raising overall performance to meet growing demand from the client and enterprise domains. However, the ever-increasing energy consumption of computing systems has begun to limit further performance gains because of heavy electricity bills and carbon dioxide emissions. Server power consumption continues to grow, and many researchers have argued that if this trend continues, the energy cost of a server over its lifespan will exceed its hardware price. Power consumption is an even greater concern for clusters, grids, and clouds, which comprise many thousands of heterogeneous servers. Continuous efforts have been made to reduce the electricity consumption of these massive-scale infrastructures. To identify the challenges and the future enhancements required for energy-efficient Cloud Computing, it is necessary to synthesize and categorize the research and development done so far. In this paper, the authors discuss the causes of and problems associated with the huge energy consumption of Cloud data centres, and prepare a taxonomy of these energy-consumption problems and their related solutions. The authors cover all aspects of energy consumption by Cloud data centres and analyze many research papers to identify better solutions for efficient energy consumption. This work gives an overall picture of the energy-consumption problems of Cloud data centres and of energy-efficient solutions to them. The paper concludes with a discussion of future enhancements and developments in energy-efficient methods in Cloud Computing.
With the arrival of the big data era, an explosive amount of information is now available. This enormous increase of big data in both academia and industry requires large-scale data processing systems. A large body of research has gone into optimizing Spark's performance, making it a state-of-the-art, fast, and general data processing system. Many science and engineering fields, such as biology, finance, and transportation, have advanced with big data analytics. Intelligent transportation systems (ITS) are gaining popularity and benefit directly from this richness of information. The objective is to improve the safety and management of transportation networks by reducing congestion and incidents. The first step toward this goal is to understand, model, and detect congestion across a network efficiently and effectively. In this study, we introduce an efficient congestion detection model. The underlying network consists of 3017 segments of the I-35, I-80, I-29, and I-380 freeways, with an overall length of 1570 miles and an average of 0.4-0.6 miles per segment. The results show that the proposed method is 90% accurate while reducing computation time by 99.88%.
Qi Zhang, Ling Liu, Calton Pu (2018)
Container technology is gaining increasing attention in recent years and has become an alternative to traditional virtual machines. Some of the primary motivations for enterprises to adopt containers include the convenience of encapsulating and deploying applications, lightweight operation, and efficiency and flexibility in resource sharing. However, there is still no in-depth and systematic comparison of how big data applications, such as Spark jobs, perform in a container environment versus a virtual machine environment. In this paper, by running various Spark applications with different configurations, we evaluate the two environments from several interesting aspects: how conveniently the execution environment can be set up, the makespans of different workloads in each setup, how efficiently hardware resources such as CPU and memory are utilized, and how well each environment scales. The results show that, compared with virtual machines, containers provide an easier-to-deploy and more scalable environment for big data workloads. The work in this paper can help practitioners and researchers make more informed decisions when tuning their cloud environments and configuring big data applications, so as to achieve better performance and higher resource utilization.