
Exploiting peer group concept for adaptive and highly available services

Posted by: Fahd Ali Zahid
Publication date: 2003
Research field: Informatics Engineering
Paper language: English





This paper presents a prototype of a redundant, highly available, and fault-tolerant peer-to-peer framework for data management. Peer-to-peer computing is gaining importance due to its flexible organization, lack of central authority, distribution of functionality to participating nodes, and ability to utilize otherwise unused computational resources. The emergence of grid computing has provided much-needed infrastructure and an administrative domain for peer-to-peer computing. The components of this framework exploit the peer group concept to scope service and information search, arrange services and information coherently, provide selective redundancy, and ensure availability in the face of failures and high load. A prototype system has been implemented using the JXTA peer-to-peer technology, with XML used for service descriptions and interfaces, allowing peers to communicate with services implemented on various platforms, including web services and JINI services. The framework uses code mobility to achieve role interchange among services and to support dynamic group membership. Security is ensured by using a Public Key Infrastructure (PKI) to implement group-level security policies for membership and service access.
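
As a rough, hedged illustration of the peer-group-scoped discovery described above, the following Java sketch uses the open-source JXSE (JXTA for Java) NetworkManager API to join the network peer group and query it for service advertisements. The peer name, cache directory, and advertisement name are placeholder assumptions, not values from the paper's prototype.

    import java.io.File;
    import net.jxta.discovery.DiscoveryService;
    import net.jxta.peergroup.PeerGroup;
    import net.jxta.platform.NetworkManager;

    public class PeerGroupDiscoveryDemo {
        public static void main(String[] args) throws Exception {
            // Start a JXTA edge peer; the peer name and cache directory are arbitrary.
            NetworkManager manager = new NetworkManager(
                    NetworkManager.ConfigMode.EDGE,
                    "ServicePeer",
                    new File(".jxta-cache").toURI());
            PeerGroup netGroup = manager.startNetwork();

            // Discovery is scoped to the peer group: queries only reach members
            // of this group, which is how a group can bound the search space.
            DiscoveryService discovery = netGroup.getDiscoveryService();
            discovery.getRemoteAdvertisements(
                    null,                   // any peer in the group may answer
                    DiscoveryService.ADV,   // generic (service) advertisements
                    "Name", "DataService*", // match advertisements by name (placeholder)
                    10,                     // at most 10 responses
                    null);                  // poll the local cache for results later

            Thread.sleep(5000);             // allow responses to arrive
            manager.stopNetwork();
        }
    }

Scoping the query to a group limits both the search space and the set of peers that may answer, mirroring the framework's use of peer groups to bound service and information search.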




Read also

547 - Lei Ni, Aaron Harwood (2007)
Volunteer Computing, sometimes called Public Resource Computing, is an emerging computational model that is very suitable for work-pooled parallel processing. As more complex grid applications make use of workflows in their design and deployment, it is reasonable to consider the impact of workflow deployment over a Volunteer Computing infrastructure. In this case, the inter-workflow I/O can lead to a significant increase in I/O demands at the work pool server. A possible solution is the use of a Peer-to-Peer based parallel computing architecture to off-load this I/O demand to the workers, where the workers can fulfill some aspects of workflow coordination and I/O checking. However, achieving robustness in such a large-scale system is a challenging hurdle towards the decentralized execution of workflows and general parallel processes. To increase robustness, we propose and show the merits of an adaptive checkpoint scheme that efficiently checkpoints the status of the parallel processes according to estimates of relevant network and peer parameters. Our scheme uses statistical data observed during runtime to make checkpoint decisions dynamically in a completely decentralized manner. Simulation results support our proposed approach in terms of reduced required runtime.
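
As a minimal sketch of the adaptive idea only (the decision rule, formulas, and names below are assumptions rather than the authors' actual scheme), a worker can compare the expected work lost to a failure against the measured cost of a checkpoint, using only locally observed statistics:

    /**
     * Illustrative adaptive checkpoint policy: checkpoint more often when observed
     * failures are frequent, less often when saving state is expensive.
     * The decision rule is a sketch, not the paper's scheme.
     */
    public class AdaptiveCheckpointPolicy {
        private double failuresPerHour = 0.0;      // estimated from runtime observations
        private double checkpointCostHours = 0.0;  // measured cost of taking one checkpoint
        private long lastCheckpointMillis = System.currentTimeMillis();

        /** Feed in statistics observed at runtime (peer churn, failures, etc.). */
        public void updateFailureRate(double observedFailuresPerHour) {
            this.failuresPerHour = observedFailuresPerHour;
        }

        public void updateCheckpointCost(double costHours) {
            this.checkpointCostHours = costHours;
        }

        /** Decide locally, with no coordinator, whether a checkpoint pays off now. */
        public boolean shouldCheckpoint() {
            double hoursSinceLast =
                    (System.currentTimeMillis() - lastCheckpointMillis) / 3_600_000.0;
            // Expected progress lost if a failure occurred now: failure probability over
            // the elapsed window times the average amount of work that would be redone.
            double expectedLossHours = failuresPerHour * hoursSinceLast * (hoursSinceLast / 2.0);
            return expectedLossHours > checkpointCostHours;
        }

        public void markCheckpointTaken() {
            lastCheckpointMillis = System.currentTimeMillis();
        }
    }
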
Highly-available datastores are widely deployed for online applications. However, many online applications are not content with the simple data access interface currently provided by highly-available datastores. Distributed transaction support is demanded by applications such as large-scale online payment used by Alipay or PayPal. Current solutions to distributed transactions can spend more than half of the whole transaction processing time in distributed commit, so an efficient atomic commit protocol is highly desirable. This paper presents the HACommit protocol, a logless one-phase commit protocol for highly-available systems. HACommit has transaction participants vote for a commit before the client decides to commit or abort the transaction; in comparison, the state-of-the-art practice for distributed commit is to have the client decide before participants vote. The change enables the removal of both the participant logging and the coordinator logging steps in the distributed commit process; it also makes it possible that, after the client initiates the transaction commit, the transaction data is visible to other transactions within one communication round-trip time (i.e., one phase). In an evaluation with extensive experiments, HACommit outperforms recent atomic commit solutions for highly-available datastores under different workloads. In the best case, HACommit can commit in one fifth of the time 2PC does.
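
To make the ordering change concrete, here is a hedged sketch using hypothetical interfaces (not HACommit's actual API): participants return their commit votes while the transaction is still executing, so once the client decides, a single round of decision messages completes the commit.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    /** Hypothetical participant interface; not HACommit's actual API. */
    interface Participant {
        // The last operation sent to a participant carries its commit vote back,
        // so votes are collected before the client has decided anything.
        CompletableFuture<Boolean> executeAndVote(long txId, String operation);

        // Single decision round: participants learn the outcome in one message.
        void applyDecision(long txId, boolean commit);
    }

    class OnePhaseCommitClient {
        private final List<CompletableFuture<Boolean>> votes = new ArrayList<>();

        /** Execution phase: replies to operations double as commit votes. */
        void execute(long txId, Participant participant, String operation) {
            votes.add(participant.executeAndVote(txId, operation));
        }

        /** Commit phase: the client decides after votes are already in, then one
            round of notifications makes the outcome visible (one phase). */
        boolean commit(long txId, List<Participant> participants) {
            boolean allYes = votes.stream().allMatch(CompletableFuture::join);
            participants.forEach(p -> p.applyDecision(txId, allYes));
            return allYes;
        }
    }
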
Scalability and efficient global search in unstructured peer-to-peer overlays have been extensively studied in the literature. The global search comes at the expense of local interactions between peers, and most unstructured peer-to-peer overlays do not provide any performance guarantee. In this work we propose a novel Quality of Service enabled lookup for unstructured peer-to-peer overlays that allows a user's query to traverse only those overlay links which satisfy the given constraints. It also improves scalability by judiciously using the overlay resources. Our approach selectively forwards queries using QoS metrics such as latency, bandwidth, and overlay link status, so as to ensure improved performance in a scenario where the rate of peer joins and leaves is high. The user is given only those results which can be downloaded under the given constraints. The protocol also aims to minimize the message overhead over the overlay network.
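
A minimal sketch of the selective-forwarding idea, assuming illustrative link metrics and record types that are not part of the protocol itself: a peer forwards a query only over overlay links whose measured latency, bandwidth, and status satisfy the query's constraints.

    import java.util.List;
    import java.util.stream.Collectors;

    /** Overlay link with locally measured QoS metrics (illustrative fields). */
    record OverlayLink(String neighborId, double latencyMs, double bandwidthMbps, boolean up) {}

    /** QoS constraints carried along with the lookup query. */
    record QueryConstraints(double maxLatencyMs, double minBandwidthMbps) {}

    class QosLookup {
        /** Forward the query only over overlay links that satisfy its constraints. */
        List<OverlayLink> selectLinks(List<OverlayLink> links, QueryConstraints q) {
            return links.stream()
                    .filter(OverlayLink::up)
                    .filter(l -> l.latencyMs() <= q.maxLatencyMs())
                    .filter(l -> l.bandwidthMbps() >= q.minBandwidthMbps())
                    .collect(Collectors.toList());
        }
    }
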
We study three capacity problems in the mobile telephone model, a network abstraction that models the peer-to-peer communication capabilities implemented in most commodity smartphone operating systems. The capacity of a network expresses how much sustained throughput can be maintained for a set of communication demands, and is therefore a fundamental bound on the usefulness of a network. Because of this importance, wireless network capacity has been an active area of research for the last two decades. The three capacity problems that we study differ in the structure of the communication demands. The first problem is pairwise capacity, where the demands are (source, destination) pairs. Pairwise capacity is one of the most classical definitions, as it was analyzed in the seminal paper of Gupta and Kumar on wireless network capacity. The second problem we study is broadcast capacity, in which a single source must deliver packets to all other nodes in the network. Finally, we turn our attention to all-to-all capacity, in which all nodes must deliver packets to all other nodes. In all three of these problems we characterize the optimal achievable throughput for any given network, and design algorithms which asymptotically match this performance. We also study these problems in networks generated randomly by a process introduced by Gupta and Kumar, and fully characterize their achievable throughput. Interestingly, the techniques that we develop for all-to-all capacity also allow us to design a one-shot gossip algorithm that runs within a polylogarithmic factor of optimal in every graph. This largely resolves an open question from previous work on the one-shot gossip problem in this model.
Scalability and security problems of centralized architecture models in cyber-physical systems have great potential to be solved by novel blockchain-based distributed models. A decentralized energy trading system takes advantage of various sources and effectively coordinates the energy to ensure optimal utilization of the available resources. It achieves that goal by managing physical, social, and business infrastructures using technologies such as the Internet of Things (IoT), cloud computing, and network systems. Addressing the importance of blockchain-enabled energy trading in the context of cyber-physical systems, this article provides a thorough overview of P2P energy trading and the utilization of blockchain to enhance efficiency and overall performance, including the degree of decentralization, scalability, and security of the systems. Three blockchain-based energy trading models are proposed to overcome the technical challenges and market barriers for better adoption of this disruptive technology.