
Can Realistic BitTorrent Experiments Be Performed on Clusters?

Added by: Ashwin Rao
Publication date: 2010
Language: English
Authors: Ashwin Rao





Network latency and packet loss are considered important requirements for the realistic evaluation of Peer-to-Peer protocols. Dedicated clusters, such as Grid5000, do not provide the variety of network latencies and packet loss rates found in the Internet. However, compared to experiments performed on testbeds such as PlanetLab, experiments performed on dedicated clusters are reproducible, as the computational resources are not shared. In this paper, we perform experiments to study the impact of network latency and packet loss on the time required to download a file using BitTorrent. In our experiments, we observe an increase of less than 15% in the time required to download a file when we increase the round-trip time between any two peers from 0 ms to 400 ms and the packet loss rate from 0% to 5%. Our main conclusion is that the underlying network latency and packet loss have a marginal impact on the time required to download a file using BitTorrent. Hence, dedicated clusters such as Grid5000 can be safely used to perform realistic and reproducible BitTorrent experiments.
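The abstract does not reproduce the experiment tooling, but latency and loss of the kind described are commonly emulated on Linux cluster nodes with tc/netem. The sketch below is only an illustration of that setup; the interface name, delay, and loss values are assumptions, not the authors' configuration.

```python
# Minimal sketch (not the paper's actual tooling): emulating per-peer RTT and
# packet loss on a cluster node with Linux tc/netem. Values are illustrative.
import subprocess

def emulate_wan(iface: str, one_way_delay_ms: int, loss_pct: float) -> None:
    """Attach a netem qdisc that adds delay and random packet loss on `iface`."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
         "delay", f"{one_way_delay_ms}ms", "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_emulation(iface: str) -> None:
    """Remove the netem qdisc, restoring the native cluster network."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

if __name__ == "__main__":
    # 200 ms one-way delay on every peer gives roughly 400 ms RTT between any
    # two peers; 5% loss matches the upper end of the range studied in the paper.
    emulate_wan("eth0", one_way_delay_ms=200, loss_pct=5.0)
```

Applied symmetrically on every node, such a qdisc lets the same torrent swarm be rerun under different latency and loss settings while the cluster hardware stays fixed, which is what makes the comparison reproducible.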



Related research

136 - Ashwin Rao 2010
In this paper, we study the impact of network latency on the time required to download a file distributed using BitTorrent. This study is essential to understand whether testbeds can be used for the experimental evaluation of BitTorrent. We observe that network latency has a marginal impact on the time required to download a file; hence, BitTorrent experiments can be performed on testbeds.
Some BitTorrent users are running BitTorrent on top of Tor to preserve their privacy. In this extended abstract, we discuss three different attacks to reveal the IP address of BitTorrent users on top of Tor. In addition, we exploit the multiplexing of streams from different applications into the same circuit to link non-BitTorrent applications to revealed IP addresses.
133 - B. Beckford, A. Chiba, D. Doi 2012
An experiment designed to investigate the strangeness photoproduction process using a tagged photon beam in the energy range of 0.90-1.08 GeV incident on a liquid deuterium target was successfully performed. The purpose of the experiment was to measure the production of neutral kaons and lambda particles on a deuteron. The generation of photoproduced particles was verified by the measurement of their charged decay particles in the Neutral Kaon Spectrometer 2. The reconstructed invariant mass distributions were obtained by selecting events in which two or more particle tracks were identified. Preliminary results are presented here.
146 - Stevens Le Blond 2010
This paper presents a set of exploits an adversary can use to continuously spy on most BitTorrent users of the Internet from a single machine and for a long period of time. Using these exploits for a period of 103 days, we collected 148 million IPs downloading 2 billion copies of contents. We identify the IP address of the content providers for 70% of the BitTorrent contents we spied on. We show that a few content providers inject most contents into BitTorrent and that those content providers are located in foreign data centers. We also show that an adversary can compromise the privacy of any peer in BitTorrent and identify the big downloaders, which we define as the peers who subscribe to a large number of contents. This infringement on users' privacy poses a significant impediment to the legal adoption of BitTorrent.
On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that, initially, breadth rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.
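That abstract frames cascade growth prediction as classification over temporal and structural features of the first reshares. The toy sketch below only illustrates that framing with synthetic data; the feature names, labels, and model choice are assumptions, not the authors' feature set or classifier.

```python
# Hedged sketch: binary classification of "will this cascade keep growing?"
# using made-up temporal and structural features. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy feature matrix: [time_to_first_k_reshares, mean_reshare_gap, breadth, depth]
X = rng.random((1000, 4))
# Synthetic label standing in for "cascade doubled after the first k reshares".
y = (X[:, 2] - X[:, 0] + 0.1 * rng.standard_normal(1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```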