
Distributed Data Verification Protocols in Cloud Computing

Added by Priodyuti Pradhan
Publication date: 2020
Language: English





Recently, storing huge volumes of data in the Cloud has become an effective trend in modern-day computing due to its dynamic nature. After storing, users delete their original copies of the data files and can therefore no longer directly control that data. This lack of control introduces security issues in Cloud data storage, and one of the most important is the integrity of the remotely stored data. Here, we propose a distributed algorithmic approach that addresses this problem with a publicly verifiable probabilistic scheme. Because of the heavy workload at the Third Party Auditor (TPA) side, we distribute the verification task among various sub-TPAs (SUBTPAs). We use Sobol random sequences to generate the random block numbers, which preserves the uniformity property, and an analytical approach to make each subtask uniform as well. Owing to this uniformity, our protocols verify the integrity of the data very efficiently and quickly. We also give special care to critical data by using Overlap Task Distribution Keys.
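A rough sketch of the core idea may help. The Python snippet below (requiring SciPy >= 1.7 for its quasi-Monte Carlo module) is a minimal illustration, not the authors' protocol: it draws challenge block indices from a Sobol low-discrepancy sequence, splits them round-robin so each sub-TPA receives a uniform subtask, and assigns a set of hypothetical "critical" blocks to every sub-TPA as a stand-in for the Overlap Task Distribution Keys. All function names and parameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' protocol): draw challenge block
# indices from a Sobol low-discrepancy sequence and split the
# verification task uniformly among sub-TPAs. Requires SciPy >= 1.7.
import numpy as np
from scipy.stats import qmc

def sobol_challenge_blocks(total_blocks: int, n_challenges: int, seed: int = 7):
    """Map Sobol points in [0, 1) to distinct block indices.

    Sobol points fill [0, 1) far more evenly than pseudo-random
    draws, so the sampled blocks cover the file uniformly.
    """
    sampler = qmc.Sobol(d=1, scramble=True, seed=seed)
    points = sampler.random(n_challenges).ravel()        # values in [0, 1)
    return np.unique((points * total_blocks).astype(int))

def distribute_to_subtpas(blocks, n_subtpas: int, critical=frozenset()):
    """Round-robin the challenge blocks over sub-TPAs so each subtask
    stays uniform; critical blocks go to *every* sub-TPA (a stand-in
    for the paper's Overlap Task Distribution Keys)."""
    tasks = [list(blocks[i::n_subtpas]) for i in range(n_subtpas)]
    for task in tasks:                                   # overlap critical blocks
        task.extend(b for b in critical if b not in task)
    return tasks

if __name__ == "__main__":
    blocks = sobol_challenge_blocks(total_blocks=10_000, n_challenges=512)
    tasks = distribute_to_subtpas(blocks, n_subtpas=4, critical={42, 4096})
    print([len(t) for t in tasks])   # roughly equal-sized, uniform subtasks
```

Because Sobol points fill the unit interval evenly, the sampled blocks cover the stored file far more uniformly than plain pseudo-random sampling would, which is what makes a probabilistic spot-check of integrity effective.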



Related research


Quantum computing holds great promise, and this work proposes to use new quantum data networks (QDNs) to connect multiple small quantum computers to form a cluster. Such a QDN differs from existing QKD networks in that the former must deliver data qubits reliably within itself. Two types of QDNs are studied, one using teleportation and the other using tell-and-go (TAG) to exchange quantum data. Two corresponding quantum transport protocols (QTPs), named Tele-QTP and TAG-QTP, are proposed to address many unique design challenges involved in reliable delivery of data qubits, including constraints imposed by the laws of quantum physics, such as the no-cloning theorem, and the limited availability of quantum memory. The proposed Tele-QTP and TAG-QTP are the first transport layer protocols for QDNs, complementing other works on the network protocol stack. Tele-QTP and TAG-QTP have novel mechanisms to support congestion-free and reliable delivery of streams of data qubits by managing the limited quantum memory at end hosts as well as intermediate nodes. Both analysis and extensive simulations show that the proposed QTPs can achieve high throughput and fairness. This study also offers new insights into potential tradeoffs involved in using the two methods, teleportation and TAG, in the two types of QDNs.
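The protocols themselves are not reproduced here, but the central constraint they manage can be sketched: a qubit cannot be copied for retransmission (no-cloning), so a sender should only emit a qubit once a memory slot is reserved at the receiver. The toy Python model below illustrates that credit-based idea; it is not Tele-QTP or TAG-QTP, and all class and function names are hypothetical.

```python
# Toy sketch of the shared constraint behind Tele-QTP/TAG-QTP (not the
# paper's actual protocols): a sender may only transmit a data qubit
# after reserving a free quantum-memory slot at the receiver, since the
# no-cloning theorem forbids keeping a backup copy for retransmission.
from collections import deque

class QuantumReceiver:
    def __init__(self, memory_slots: int):
        self.free_slots = memory_slots          # limited quantum memory
        self.stored = deque()

    def grant_credit(self) -> bool:
        """Reserve one memory slot; refuse if memory is full."""
        if self.free_slots == 0:
            return False
        self.free_slots -= 1
        return True

    def receive(self, qubit):
        self.stored.append(qubit)               # slot was pre-reserved

    def consume(self):
        """Application reads a qubit, releasing its memory slot."""
        self.free_slots += 1
        return self.stored.popleft()

def send_stream(qubits, receiver: QuantumReceiver) -> int:
    """Credit-based, congestion-free delivery: never emit a qubit
    the receiver cannot store."""
    pending, delivered = deque(qubits), 0
    while pending:
        if receiver.grant_credit():
            receiver.receive(pending.popleft())
            delivered += 1
        else:
            receiver.consume()                  # model the app draining memory
    return delivered

print(send_stream(range(10), QuantumReceiver(memory_slots=3)))  # -> 10
```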
The increasing popularity of cloud computing has resulted in a proliferation of data centers. Effective placement of data centers improves network performance and minimizes clients' perceived latency. The problem of determining the optimal placement of data centers in a large network is a classical uncapacitated $k$-median problem. Traditional works have focused on centralized algorithms, which require knowledge of the overall network topology and information about the customers' service demands. Moreover, centralized algorithms are computationally expensive and do not scale well with the size of the network. We propose a fully distributed algorithm with linear complexity to optimize the locations of data centers. The proposed algorithm utilizes an iterative two-step optimization approach. Specifically, in each iteration, it first partitions the whole network into $k$ regions through a distributed partitioning algorithm; then within each region, it determines the local approximate optimal location through a distributed message-passing algorithm. When the underlying network is a tree topology, we show that the overall cost is monotonically decreasing between successive iterations and the proposed algorithm converges in a finite number of iterations. Extensive simulations on both synthetic and real Internet topologies show that the proposed algorithm achieves performance comparable with that of centralized algorithms that require global information and have higher computational complexity.
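For intuition, the two-step iteration can be mimicked centrally in a few lines. The sketch below assumes networkx, a connected graph, and a plain hop-count metric; the real algorithm performs both steps via distributed message passing, and the topology and names used here are illustrative assumptions.

```python
# Centralized mimic of the iterative two-step idea (for intuition only):
# alternately partition nodes around the current k centers, then move
# each center to the 1-median of its own region.
import networkx as nx

def two_step_k_median(G: nx.Graph, k: int, iters: int = 20):
    dist = dict(nx.all_pairs_shortest_path_length(G))  # hop-count metric
    centers = list(G.nodes)[:k]                        # arbitrary start
    for _ in range(iters):
        # Step 1: partition the network into k regions around the centers.
        regions = {c: [] for c in centers}
        for v in G.nodes:
            regions[min(centers, key=lambda c: dist[c][v])].append(v)
        # Step 2: within each region, pick the local 1-median.
        new_centers = [
            min(nodes, key=lambda u: sum(dist[u][v] for v in nodes))
            for nodes in regions.values()
        ]
        if set(new_centers) == set(centers):           # converged
            break
        centers = new_centers
    return centers

G = nx.random_internet_as_graph(200, seed=1)  # synthetic Internet-like topology
print(two_step_k_median(G, k=3))
```

Each iteration can only lower the total assignment cost (reassignment never increases it, and the region median is optimal within its region), which matches the monotone-decrease argument the paper makes for tree topologies.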
Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.
Runtime verification is a computing analysis paradigm based on observing a system at runtime (to check its expected behaviour) by means of monitors generated from formal specifications. Distributed runtime verification is runtime verification in connection with distributed systems: it comprises both monitoring of distributed systems and using distributed systems for monitoring. Aggregate computing is a programming paradigm based on a reference computing machine that is the aggregate collection of devices that cooperatively carry out a computational process: the details of behaviour, position and number of devices are largely abstracted away, to be replaced with a space-filling computational environment. In this position paper we argue, by means of simple examples, that aggregate computing is particularly well suited for implementing distributed monitors. Our aim is to foster further research on how to generate aggregate computing monitors from suitable formal specifications.
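As a flavour of why aggregate computing suits distributed monitoring, the sketch below (illustrative only, not a monitor generated from a formal specification) has each device check a local predicate and repeatedly AND its view with its neighbours', so every device's output converges to whether the property holds everywhere in the network.

```python
# Minimal sketch of an aggregate-computing-style distributed monitor:
# each device checks a local predicate and repeatedly ANDs its view
# with its neighbours', so the field converges to whether the property
# holds everywhere in the network.
def monitor_everywhere(neighbors: dict, local_ok: dict, rounds: int = 10):
    """neighbors: device -> list of adjacent devices
    local_ok:  device -> bool, the locally observed predicate."""
    view = dict(local_ok)                      # each device's current belief
    for _ in range(rounds):                    # synchronous gossip rounds
        view = {
            d: local_ok[d] and all(view[n] for n in neighbors[d])
            for d in neighbors
        }
    return view                                # device -> "holds everywhere?"

# Tiny line topology: device 2 violates the predicate, and the
# violation propagates to every device's monitor output.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
local_ok = {0: True, 1: True, 2: False, 3: True}
print(monitor_everywhere(neighbors, local_ok))
# {0: False, 1: False, 2: False, 3: False}
```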
Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns of privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon, and Microsoft. Community Cloud Computing makes use of the principles of Digital Ecosystems to provide a paradigm for Clouds in the community, offering an alternative architecture for the use cases of Cloud Computing. It is more technically challenging to deal with issues of distributed computing, such as latency, differential resource management, and additional security requirements. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.