
Quantum Computing: A Taxonomy, Systematic Review and Future Directions

Added by Rajkumar Buyya
Publication date: 2020
Language: English





Quantum computing is an emerging paradigm with the potential to offer a significant computational advantage over conventional classical computing by exploiting quantum-mechanical principles such as entanglement and superposition. It is anticipated that this computational advantage will help to solve many complex and computationally intractable problems in areas such as drug design, data science, clean energy, finance, industrial chemical development, secure communications, and quantum chemistry. In recent years, tremendous progress in both quantum hardware development and quantum software/algorithms has brought quantum computing much closer to reality. Indeed, the demonstration of quantum supremacy marks a significant milestone in the Noisy Intermediate Scale Quantum (NISQ) era - the next logical step being quantum advantage, whereby quantum computers solve a real-world problem much more efficiently than classical computing. As quantum devices are expected to steadily scale up over the next few years, quantum decoherence and qubit interconnectivity are two of the major challenges to achieving quantum advantage in the NISQ era. Quantum computing is a highly topical and fast-moving field of research with significant ongoing progress in all facets. This article presents a comprehensive review of the quantum computing literature and a taxonomy of quantum computing. Further, the proposed taxonomy is used to map various related studies and identify research gaps. A detailed overview of quantum software tools and technologies, post-quantum cryptography, and quantum computer hardware development is presented to document the current state of the art in the respective areas. We conclude the article by highlighting various open challenges and promising future directions for research.
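To make the superposition and entanglement principles mentioned above concrete, here is a minimal NumPy sketch (an illustration, not taken from the paper): applying a Hadamard gate followed by a CNOT to two qubits produces a Bell state whose measurement outcomes on the two qubits are perfectly correlated.

import numpy as np

# Single-qubit basis states |0> and |1>
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Hadamard gate: creates an equal superposition of |0> and |1>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT gate on two qubits: flips the target when the control is |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, put the first qubit into superposition, then entangle
state = np.kron(zero, zero)
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state

# The resulting Bell state (|00> + |11>)/sqrt(2): measuring either qubit
# forces the other to the same value
probabilities = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], probabilities.round(3))))

Running this prints probabilities of 0.5 for the outcomes 00 and 11 and 0 for 01 and 10, which is exactly the correlation that classical bit pairs cannot reproduce.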



Related Research

Containerization is a lightweight application virtualization technology, providing high environmental consistency, operating system distribution portability, and resource isolation. Existing mainstream cloud service providers have widely adopted container technologies in their distributed system infrastructures for automated application management. To handle the automated deployment, maintenance, autoscaling, and networking of containerized applications, container orchestration has been proposed as an essential research problem. However, the highly dynamic and diverse nature of cloud workloads and environments considerably raises the complexity of orchestration mechanisms. Machine learning algorithms are accordingly employed by container orchestration systems for behavior modelling and prediction of multi-dimensional performance metrics. Such insights can further improve the quality of resource provisioning decisions in response to changing workloads under complex environments. In this paper, we present a comprehensive literature review of existing machine learning-based container orchestration approaches. Detailed taxonomies are proposed to classify the current studies by their common features. Moreover, the evolution of machine learning-based container orchestration technologies from 2016 to 2021 is charted based on objectives and metrics. A comparative analysis of the reviewed techniques is conducted according to the proposed taxonomies, with emphasis on their key characteristics. Finally, various open research challenges and potential future directions are highlighted.
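As a concrete illustration of the behavior modelling and prediction described above, the following hedged sketch (a hypothetical example, not a mechanism from any reviewed system; all names and thresholds are assumptions) fits a linear trend over a sliding window of recent CPU-load samples and converts the one-step-ahead prediction into a replica count for a horizontal autoscaler.

import numpy as np
from collections import deque

class ReplicaPredictor:
    """Predict near-future CPU load from recent samples and size replicas."""

    def __init__(self, window=12, target_util_per_replica=0.6, max_replicas=20):
        self.samples = deque(maxlen=window)     # recent total CPU load (in cores)
        self.target = target_util_per_replica   # desired load per replica
        self.max_replicas = max_replicas

    def observe(self, cpu_load):
        self.samples.append(cpu_load)

    def predict_next(self):
        # Fit load = a*t + b over the window and extrapolate one step ahead
        y = np.array(self.samples)
        t = np.arange(len(y))
        a, b = np.polyfit(t, y, 1) if len(y) > 1 else (0.0, y[-1])
        return max(0.0, a * len(y) + b)

    def desired_replicas(self):
        predicted = self.predict_next()
        return int(min(self.max_replicas, max(1, np.ceil(predicted / self.target))))

# Example: a steadily rising workload triggers scale-out before saturation
predictor = ReplicaPredictor()
for load in [0.5, 0.7, 0.9, 1.2, 1.6, 2.1]:
    predictor.observe(load)
print(predictor.desired_replicas())  # predicts ~2.3 cores of load -> 4 replicas

Production orchestrators would of course use richer, multi-dimensional models, but the same observe / predict / decide loop underlies the approaches surveyed in the paper.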
Recently, fog computing has been introduced as a modern distributed paradigm and a complement to cloud computing for providing services. Fog systems extend storage and computing to the edge of the network, which can markedly ease the service-computation problem of delay-sensitive applications while also enabling location awareness and mobility support. Load balancing is an important aspect of fog networks that avoids situations with under-loaded or overloaded fog nodes. Quality of Service (QoS) parameters such as resource utilization, throughput, cost, response time, performance, and energy consumption can be improved with load balancing. In recent years, some research on load-balancing techniques in fog networks has been carried out, but there is no systematic review to consolidate these studies. This article systematically reviews load-balancing mechanisms in fog computing under four classifications: approximate, exact, fundamental, and hybrid methods (published between 2013 and August 2020). It also investigates load-balancing metrics along with the advantages and disadvantages of the chosen load-balancing mechanisms in fog networks. The evaluation techniques and tools applied in each reviewed study are explored as well. Additionally, the essential open challenges and future trends of these mechanisms are discussed.
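To make the load-balancing idea concrete, here is a minimal sketch (an illustrative greedy heuristic, not one of the surveyed mechanisms) that assigns each incoming task to the fog node with the lowest relative utilization and falls back to the cloud tier when no node has spare capacity.

import heapq

def balance_tasks(node_capacities, task_loads):
    """Greedy least-loaded assignment of tasks to fog nodes.

    node_capacities: dict of node name -> capacity (e.g. in MIPS)
    task_loads: list of (task name, load) tuples
    Returns a dict task -> node, with 'cloud' used when no node has room.
    """
    # Min-heap keyed by current relative utilization of each node
    heap = [(0.0, name) for name in node_capacities]
    heapq.heapify(heap)
    used = {name: 0.0 for name in node_capacities}
    assignment = {}

    for task, load in task_loads:
        util, node = heapq.heappop(heap)
        if used[node] + load <= node_capacities[node]:
            used[node] += load
            assignment[task] = node
        else:
            assignment[task] = "cloud"  # offload to the cloud tier
        heapq.heappush(heap, (used[node] / node_capacities[node], node))

    return assignment

nodes = {"fog-1": 100.0, "fog-2": 80.0}
tasks = [("t1", 40), ("t2", 50), ("t3", 70), ("t4", 60)]
print(balance_tasks(nodes, tasks))

This toy heuristic only optimizes utilization; the surveyed mechanisms additionally weigh the QoS metrics listed above, such as response time and energy consumption.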
Serverless computing has emerged as an attractive deployment option for cloud applications in recent times. The unique features of this computing model include rapid auto-scaling, strong isolation, fine-grained billing options, and access to a massive service ecosystem that autonomously handles resource management decisions. Because of these characteristics, this model is increasingly being explored for deployments in geographically distributed edge and fog computing networks as well. Effective management of computing resources has always attracted considerable attention among researchers. The drive to automate the entire process of resource provisioning, allocation, scheduling, monitoring, and scaling has created the need for a specialized focus on resource management under the serverless model. In this article, we identify the major aspects covering the broader concept of resource management in serverless environments and propose a taxonomy of elements which influence these aspects, encompassing characteristics of system design, workload attributes, and stakeholder expectations. We take a holistic view of serverless environments deployed across edge, fog, and cloud computing networks. We also analyse existing works discussing aspects of serverless resource management using this taxonomy. This article further identifies gaps in the literature and highlights future research directions for improving the capabilities of this computing model.
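As a small, hypothetical sketch of the auto-scaling aspect discussed above (the function name and parameters are assumptions, not part of the article), the following code derives the number of warm function instances from the observed request rate, average execution time, and a per-instance concurrency limit, in the spirit of concurrency-based serverless scaling.

import math

def required_instances(requests_per_second, avg_exec_seconds,
                       concurrency_per_instance=1, min_instances=0,
                       max_instances=1000):
    """Estimate the warm instances needed to absorb the offered load.

    In-flight requests = arrival rate * average execution time (Little's law);
    each instance can hold 'concurrency_per_instance' requests at once.
    """
    in_flight = requests_per_second * avg_exec_seconds
    needed = math.ceil(in_flight / concurrency_per_instance)
    return max(min_instances, min(max_instances, needed))

# 200 req/s at 50 ms each with concurrency 10 per instance -> 1 instance
print(required_instances(200, 0.05, concurrency_per_instance=10))
# A slower 500 ms function at the same rate needs 10 instances
print(required_instances(200, 0.5, concurrency_per_instance=10))

Real platforms layer cold-start mitigation, burst limits, and billing considerations on top of such a rule, which is exactly the kind of design space the proposed taxonomy organizes.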
In light of the emergence of deep reinforcement learning (DRL) in recommender systems research and several fruitful results in recent years, this survey aims to provide a timely and comprehensive overview of the recent trends of deep reinforcement learning in recommender systems. We start with the motivation of applying DRL in recommender systems. Then, we provide a taxonomy of current DRL-based recommender systems and a summary of existing methods. We discuss emerging topics and open issues, and provide our perspective on advancing the domain. This survey serves as introductory material for readers from academia and industry into the topic and identifies notable opportunities for further research.
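To ground the idea of framing recommendation as sequential decision making, the sketch below is a deliberately simplified, stateless bandit-style toy with simulated feedback (not a method from the survey, and the item names and click probabilities are invented): the agent's action is the item it recommends, the reward is a click signal, and an epsilon-greedy value-update loop learns which item to favor.

import random

random.seed(0)

ITEMS = ["news", "sports", "music", "movies"]
TRUE_CLICK_PROB = {"news": 0.2, "sports": 0.5, "music": 0.7, "movies": 0.3}  # hidden user taste

q_values = {item: 0.0 for item in ITEMS}  # estimated value of recommending each item
alpha, epsilon = 0.1, 0.1                 # learning rate and exploration rate

for step in range(5000):
    # Epsilon-greedy: mostly exploit the best-looking item, sometimes explore
    if random.random() < epsilon:
        item = random.choice(ITEMS)
    else:
        item = max(q_values, key=q_values.get)

    # Simulated user feedback: 1 if clicked, 0 otherwise
    reward = 1.0 if random.random() < TRUE_CLICK_PROB[item] else 0.0

    # One-step value update toward the observed reward
    q_values[item] += alpha * (reward - q_values[item])

print({item: round(v, 2) for item, v in q_values.items()})
# With enough exploration, 'music' ends up with the highest estimated value

The DRL systems covered by the survey replace this single table with deep networks over user state and item features, but the interaction loop of act, observe reward, and update remains the same.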
Cloud computing has rapidly emerged as a model for delivering Internet-based utility computing services. In cloud computing, Infrastructure as a Service (IaaS) is one of the most important and rapidly growing fields. In this service model, cloud providers offer users resources such as virtual machines, raw (block) storage, firewalls, load balancers, and network devices. One of the most important aspects of cloud computing for IaaS is resource management. Scalability, quality of service, optimal utilization, reduced overheads, increased throughput, reduced latency, specialised environments, cost effectiveness, and a streamlined interface are some of the advantages of resource management for IaaS in cloud computing. Traditionally, resource management has been done through static policies, which impose certain limitations in various dynamic scenarios, prompting cloud service providers to adopt data-driven, machine-learning-based approaches. Machine learning is being used to handle a variety of resource management tasks, including workload estimation, task scheduling, VM consolidation, resource optimization, and energy optimization, among others. This paper provides a detailed review of the challenges in ML-based resource management in current research, the current approaches to resolving these challenges, and their advantages and limitations. Finally, we propose potential future research directions based on the challenges and limitations identified in current research.
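As an illustration of one of the resource-management tasks listed above, the hedged sketch below (an illustrative heuristic, not a method surveyed in the paper) performs a simple first-fit-decreasing VM consolidation: VMs are packed onto as few hosts as possible so that idle hosts can be powered down for energy savings.

def consolidate(vm_cpu_demands, host_cpu_capacity):
    """Pack VMs onto hosts using first-fit decreasing.

    vm_cpu_demands: dict of VM name -> CPU demand (cores)
    host_cpu_capacity: CPU capacity of each (identical) host
    Returns a list of hosts, each a list of VM names placed on it.
    """
    hosts = []  # each entry: [remaining capacity, [vm names]]
    # Place the largest VMs first to reduce fragmentation
    for vm, demand in sorted(vm_cpu_demands.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if demand <= host[0]:
                host[0] -= demand
                host[1].append(vm)
                break
        else:
            hosts.append([host_cpu_capacity - demand, [vm]])
    return [placed for _, placed in hosts]

vms = {"vm1": 2, "vm2": 4, "vm3": 1, "vm4": 3, "vm5": 2}
print(consolidate(vms, host_cpu_capacity=8))
# Two hosts suffice instead of five lightly loaded ones

ML-based approaches discussed in the paper go further by predicting future demand before deciding which VMs to migrate, rather than packing on current demand alone.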