
Efficient Batch Update of Unique Identifiers in a Distributed Hash Table for Resources in a Mobile Host

Added by: Yoo Chung
Publication date: 2010
Research language: English
Authors: Yoo Chung





Resources in a distributed system can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires updating all of the associated hash table entries whenever its network address changes. We propose an alternative approach in which the entry associated with a resource identifier stores a host identifier, and the actual network address of the host is stored in a separate host entry. This can drastically reduce the time required to update the distributed hash table when a mobile host changes its network address. We also investigate the circumstances under which our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
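To make the contrast concrete, below is a minimal sketch (in Python, against a toy in-memory put/get store rather than OpenDHT's actual API; all names are illustrative) of the direct scheme versus the proposed indirection: with a host entry, a change of network address touches a single entry instead of one entry per resource.

```python
import uuid

class DHT:
    """Toy in-memory stand-in for a distributed hash table (put/get interface)."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


# --- Direct scheme: each resource entry stores the network address itself. ---
def publish_direct(dht, resource_ids, address):
    for rid in resource_ids:
        dht.put(rid, address)               # one entry per resource

def update_address_direct(dht, resource_ids, new_address):
    for rid in resource_ids:
        dht.put(rid, new_address)           # every resource entry must be rewritten


# --- Indirect scheme: resource entries point to a host identifier; only the
#     separate host entry holds the network address. ---
def publish_indirect(dht, resource_ids, host_id, address):
    for rid in resource_ids:
        dht.put(rid, host_id)               # resource ID -> host ID (stable)
    dht.put(host_id, address)               # host ID -> network address

def update_address_indirect(dht, host_id, new_address):
    dht.put(host_id, new_address)           # a single update, regardless of resource count

def resolve(dht, resource_id):
    host_id = dht.get(resource_id)          # first lookup: resource ID -> host ID
    return dht.get(host_id)                 # second lookup: host ID -> address


if __name__ == "__main__":
    dht = DHT()
    host_id = "host-" + uuid.uuid4().hex
    resources = ["res-" + uuid.uuid4().hex for _ in range(1000)]

    publish_indirect(dht, resources, host_id, "10.0.0.1:5000")
    update_address_indirect(dht, host_id, "192.168.1.7:5000")   # mobile host moved
    print(resolve(dht, resources[0]))       # -> 192.168.1.7:5000
```

The evident trade-off is that resolution now takes two lookups (resource identifier to host identifier, then host identifier to address) instead of one, which is presumably part of the "should or should not be used" analysis the abstract refers to.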




Read More

This paper proposes a client selection method for federated learning (FL) when the computation and communication resources of clients cannot be estimated; the method trains a machine learning (ML) model using the rich data and computational resources of mobile clients without collecting their data in central systems. Conventional FL with client selection estimates the required time for an FL round from a given client's computation power and throughput and determines a client set that reduces the time consumed per FL round. However, it is difficult to obtain accurate resource information for all clients before the FL process is conducted, because the available computation and communication resources change easily due to background computation tasks, background traffic, bottleneck links, etc. Consequently, the FL operator must select clients through exploration and exploitation. This paper proposes a multi-armed bandit (MAB)-based client selection method to resolve this exploration-exploitation trade-off and reduce the time consumption of FL in mobile networks. The proposed method balances the selection of clients whose available resources are uncertain against those known to have a large amount of resources. The simulation evaluation demonstrated that the proposed scheme requires less learning time than the conventional method in the resource-fluctuating scenario.
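The abstract does not spell out the exact bandit formulation, so the following is only a hedged sketch: a standard UCB1-style selector (illustrative class and parameter names, not the paper's method) that treats each client as an arm and rewards fast rounds, which captures the stated balance between exploring uncertain clients and exploiting fast ones.

```python
import math
import random

class UCBClientSelector:
    """UCB1-style selector: balances trying clients with uncertain resources
    (exploration) against reusing clients known to finish rounds quickly
    (exploitation)."""

    def __init__(self, num_clients):
        self.counts = [0] * num_clients      # times each client was selected
        self.values = [0.0] * num_clients    # running mean reward per client

    def select(self, k):
        """Pick k clients for the next FL round."""
        total = sum(self.counts) + 1
        def ucb(i):
            if self.counts[i] == 0:
                return float("inf")          # force at least one try per client
            bonus = math.sqrt(2 * math.log(total) / self.counts[i])
            return self.values[i] + bonus
        return sorted(range(len(self.counts)), key=ucb, reverse=True)[:k]

    def update(self, client, round_time, max_time=60.0):
        """Reward is higher for faster rounds (normalised to [0, 1])."""
        reward = max(0.0, 1.0 - round_time / max_time)
        self.counts[client] += 1
        self.values[client] += (reward - self.values[client]) / self.counts[client]


if __name__ == "__main__":
    selector = UCBClientSelector(num_clients=20)
    for _ in range(100):                      # simulated FL rounds
        chosen = selector.select(k=5)
        for c in chosen:
            observed = random.uniform(5, 60)  # stand-in for a measured round time
            selector.update(c, observed)
```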
The fast growth of Internet-connected embedded devices demands new capabilities at the network edge: local processing, fast communication, and resource virtualization. This work addresses these capabilities by designing and deploying a new proposal that offers on-demand activation of offline IoT fog computing assets via a Software Defined Networking (SDN) based solution combined with containerization and sensor virtualization. We present and discuss performance and functional results from emulated tests of our proposal. Analysing the performance results, the system latency has two parts: the first is the delay induced by limitations of the networking resources; the second is due to the on-demand activation of the required processing resources, which are initially powered off for more sustainable system operation. Analysing the functional results, when a real IoT protocol is used, we demonstrate the viability of deploying our proposal, with the necessary orchestration, in distributed scenarios involving embedded devices, actuators, controllers, and brokers at the network edge.
Due to the explosive growth of online video content in mobile wireless networks, in-network caching is becoming increasingly important for improving the end-user experience and reducing the Internet access cost for mobile network operators. However, caching is a difficult problem due to the very large number of online videos and video requests, the limited capacity of caching nodes, and the limited bandwidth of in-network links. Existing solutions that rely on static configurations and average request arrival rates are insufficient to handle dynamic request patterns effectively. In this paper, we propose a dynamic collaborative video caching framework to be deployed in mobile networks. We decompose the caching problem into a content placement subproblem and a source selection subproblem. We then develop SRS (System capacity Reservation Strategy) to solve the content placement subproblem, and LinkShare, an adaptive traffic-aware algorithm, to solve the source selection subproblem. Our framework supports congestion avoidance and allows merging multiple requests for the same video into one request. We carry out extensive simulations to validate the proposed schemes. Simulation results show that our SRS algorithm achieves performance within 1-3% of the optimal values, and LinkShare significantly outperforms existing solutions.
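SRS and LinkShare are not described here in enough detail to reproduce, but the request-merging feature mentioned in the abstract can be illustrated with a small, hypothetical sketch: concurrent requests for the same video at a caching node share a single upstream fetch.

```python
class MergingCache:
    """Toy caching node that merges concurrent requests for the same video
    into one upstream fetch (illustrative only, not SRS or LinkShare)."""

    def __init__(self):
        self.cache = {}      # video_id -> content
        self.pending = {}    # video_id -> requesters waiting on one in-flight fetch

    def request(self, requester, video_id):
        """Returns (content, need_fetch). On a miss, only the first requester
        triggers an upstream fetch; later requesters are merged into it."""
        if video_id in self.cache:
            return self.cache[video_id], False          # local hit
        first_miss = video_id not in self.pending
        self.pending.setdefault(video_id, []).append(requester)
        return None, first_miss                          # fetch only on the first miss

    def on_fetch_complete(self, video_id, content):
        """Called once per merged fetch; serves every waiting requester."""
        self.cache[video_id] = content
        return [(r, content) for r in self.pending.pop(video_id, [])]


if __name__ == "__main__":
    cache = MergingCache()
    print(cache.request("userA", "video42"))   # (None, True)  -> start one upstream fetch
    print(cache.request("userB", "video42"))   # (None, False) -> merged, no new fetch
    print(cache.on_fetch_complete("video42", b"...bytes..."))
    print(cache.request("userC", "video42"))   # served from the local cache
```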
Network Function Virtualization (NFV) and Service Function Chaining (SFC) have been widely used to enable flexible and agile network management. To enhance reliability, some research has proposed to deploy backup function instances for prompt recovery when a primary instance fails. While most of the recent studies focus on speeding up recovery, less attention has been paid to the problem of minimizing the state update cost. In this work, we present PiggyBackup (Piggyback-based Backup), an efficient backup instance deployment and update protocol. Our key idea is to reuse the existing service chains traversing through servers in a network to help piggyback the update information. By doing this, we eliminate the header overhead and reduce the amount of update traffic significantly. To realize such a piggyback-based update more efficiently, we investigate the backup instance deployment and chain selection problems to enhance piggybacking opportunities and reduce the forwarding hop counts with explicit consideration of the distribution of service chains. Our simulation results show that PiggyBackup reduces the average overall update overhead by 47.65% and 39.56%, respectively, in a fat-tree topology as compared to random deployment and shortest path based deployment.
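As a rough, hypothetical illustration of the piggybacking idea (not the paper's actual protocol or packet format): pending state updates destined for a backup server are attached to packets of service chains that already traverse that server, so no separate update packets or headers are generated.

```python
from collections import defaultdict

class PiggybackUpdater:
    """Illustrative sketch of piggyback-style state updates: queued updates
    ride along packets of chains that already pass through the backup server."""

    def __init__(self):
        self.pending = defaultdict(list)   # backup_server -> queued state updates

    def queue_update(self, backup_server, update):
        self.pending[backup_server].append(update)

    def forward(self, packet, chain_path):
        """Called when a service-chain packet is forwarded along chain_path.
        Any server on the path with pending updates receives them for free."""
        carried = {}
        for server in chain_path:
            if self.pending[server]:
                carried[server] = self.pending.pop(server)
        packet["piggybacked_updates"] = carried   # no separate update packets
        return packet


if __name__ == "__main__":
    updater = PiggybackUpdater()
    updater.queue_update("server-7", {"flow": 42, "counter": 1337})
    pkt = updater.forward({"payload": "chain traffic"},
                          ["server-3", "server-7", "server-9"])
    print(pkt["piggybacked_updates"])      # the update rode along existing traffic
```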
This paper comprehensively studies a content-centric mobile network based on a preference learning framework, where each mobile user is equipped with a finite-size cache. We consider a practical scenario in which each user requests a content file according to its own preferences, motivated by the heterogeneity of file preferences among different users. Under our model, we consider a single-hop device-to-device (D2D) content delivery protocol and characterize the average hit ratio for two file preference cases: personalized file preferences and common file preferences. Assuming that model parameters such as user activity levels, user file preferences, and file popularity are unknown and thus need to be inferred, we present a collaborative filtering (CF)-based approach to learn these parameters. We then reformulate the hit ratio maximization problems as submodular function maximization and propose two computationally efficient algorithms, including a greedy approach, to solve the cache allocation problems. We analyze the computational complexity of each algorithm, as well as the approximation guarantee that our greedy algorithm achieves relative to the optimal solution. Using a real-world dataset, we demonstrate that the proposed framework employing personalized file preferences brings substantial gains over its counterpart for various system parameters.
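The paper's exact objective and algorithms are not reproduced here, but the greedy step for a submodular hit-ratio objective under a cache-size constraint typically looks like the following sketch (Python; `hit_ratio` is a hypothetical callback standing in for the learned preference/popularity model):

```python
def greedy_cache_allocation(files, cache_size, hit_ratio):
    """Greedy maximisation of a monotone submodular objective under a
    cardinality (cache size) constraint.

    files      -- set of candidate file identifiers
    cache_size -- number of files the cache can hold
    hit_ratio  -- callable mapping a set of cached files to an estimated
                  average hit ratio (assumed monotone submodular)
    """
    cached = set()
    for _ in range(cache_size):
        best_file, best_gain = None, 0.0
        for f in files - cached:
            gain = hit_ratio(cached | {f}) - hit_ratio(cached)   # marginal gain
            if gain > best_gain:
                best_file, best_gain = f, gain
        if best_file is None:        # no remaining file improves the objective
            break
        cached.add(best_file)
    return cached
```

For a monotone submodular objective under a cardinality constraint, this greedy rule is known to achieve a (1 - 1/e) approximation of the optimum; whether that matches the paper's stated guarantee depends on its exact formulation.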