
Available Bandwidth Estimation in Computer Networks Using Single Probing Train

تقدير عرض الحزمة المتاحة في الشبكات الحاسوبية باستخدام قطار سبر وحيد

Publication date: 2011
Language: Arabic
 Created by Shamra Editor





Available bandwidth has a significant impact on the performance of many applications that run over computer networks. Many researchers have therefore addressed this issue, studying how available bandwidth can be measured and publishing tools that estimate this metric. We present a method to estimate the available bandwidth of a path by building, sending, and receiving probe packets: we measure the time gaps between probe packets before sending and after receiving, then estimate the available bandwidth from the change between them. The method relies on a simple, fast algorithm, and applications can use it before they start exchanging data over the Internet.
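
To make the gap-comparison step concrete, here is a minimal sketch of the idea, not the paper's exact algorithm: it assumes one train of equal-size probe packets, per-packet timestamps taken at the sender and at the receiver, and the simple dispersion model in which gaps that widen in transit indicate the probing rate exceeded the available bandwidth. The function name and the dispersion model itself are illustrative assumptions.

def estimate_available_bandwidth(packet_size_bits, send_times, recv_times):
    # Gaps between consecutive probe packets, before sending
    # and after receiving (timestamps in seconds).
    send_gaps = [b - a for a, b in zip(send_times, send_times[1:])]
    recv_gaps = [b - a for a, b in zip(recv_times, recv_times[1:])]
    mean_send_gap = sum(send_gaps) / len(send_gaps)
    mean_recv_gap = sum(recv_gaps) / len(recv_gaps)
    if mean_recv_gap <= mean_send_gap:
        # Gaps did not widen: the path sustained the probing rate,
        # so that rate is a lower bound on the available bandwidth.
        return packet_size_bits / mean_send_gap
    # Gaps widened: the output dispersion rate approximates the
    # bandwidth the path could actually give the train.
    return packet_size_bits / mean_recv_gap

# Example: 1500-byte (12000-bit) probes sent 1 ms apart but received
# 1.5 ms apart on average suggest roughly 8 Mbit/s available.
print(estimate_available_bandwidth(12000,
                                   [0.000, 0.001, 0.002],
                                   [0.100, 0.1015, 0.103]))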

References used
Ravi Prasad, Constantinos Dovrolis, Margaret Murray, and Kimberly C. Claffy. Bandwidth estimation: Metrics, measurement techniques, and tools. IEEE Network, November 2003.
Strauss, J., Katabi, D., Kaashoek, F.: A measurement study of available bandwidth estimation tools. In: Proceedings of the 3rd ACM SIGCOMM Conference on Internet Measurement (IMC '03), 2003.
Behrouz A. Forouzan. TCP/IP Protocol Suite. McGraw-Hill Professional, 2002.
Related research

Software Defined Networking (SDN) is a qualitative shift in the field of networks because it separates the control elements from the forwarding elements: the forwarding devices are limited to executing the decisions sent to them by the controller through the OpenFlow (OF) protocol, the main protocol used in SDN. In this paper we explain the benefit of this new concept, which makes network management easier: instead of writing rules on each device, we program the application in the controller, and the infrastructure devices run the commands they receive from it. To achieve the best performance of this technology, Quality of Service (QoS) must be applied within it. QoS covers several criteria, the most important being the used bandwidth, delay, packet loss, and jitter. Bandwidth is the most important of these, because improving it also improves the remaining criteria. In this paper we therefore provide the necessary improvement to the RYU controller so that it uses the best bandwidth, which improves the quality of service in SDN.
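
As a hedged illustration of the controller-side measurement such an improvement builds on, the sketch below polls per-port byte counters in the spirit of RYU's standard traffic-monitor example; it is not the paper's modified controller, and the class name and 10-second polling interval are assumptions.

from operator import attrgetter
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, DEAD_DISPATCHER, set_ev_cls
from ryu.lib import hub
from ryu.ofproto import ofproto_v1_3

class PortStatsMonitor(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(PortStatsMonitor, self).__init__(*args, **kwargs)
        self.datapaths = {}
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change(self, ev):
        # Track switches as they connect and disconnect.
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[dp.id] = dp
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(dp.id, None)

    def _monitor(self):
        # Ask every known switch for its port statistics every 10 s.
        while True:
            for dp in self.datapaths.values():
                parser = dp.ofproto_parser
                req = parser.OFPPortStatsRequest(dp, 0, dp.ofproto.OFPP_ANY)
                dp.send_msg(req)
            hub.sleep(10)

    @set_ev_cls(ofp_event.EventOFPPortStatsReply, MAIN_DISPATCHER)
    def _port_stats_reply(self, ev):
        # Byte counters per port; deltas between polls give used bandwidth.
        for stat in sorted(ev.msg.body, key=attrgetter('port_no')):
            self.logger.info('dpid=%016x port=%d tx_bytes=%d rx_bytes=%d',
                             ev.msg.datapath.id, stat.port_no,
                             stat.tx_bytes, stat.rx_bytes)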
Providing good Quality of Service (QoS) for all users is a big challenge in cellular networks: as the number of users increases, demand for Internet service increases too, especially with today's technology. A user on the move needs Internet connectivity with good QoS and minimal call-dropping probability. Cellular IP offers a good mobility solution because it supports highly mobile users, but users' needs are growing larger and more varied (file downloads, video streaming, e-mail, and so on), so an efficient way to improve QoS is a necessity. Bandwidth is the most important factor in Cellular IP networks. To improve QoS in such networks, this paper presents a model for bandwidth management based on borrowing bandwidth reserved for non-real-time users, using Particle Swarm Optimization (PSO). The proposed model preserves a low bandwidth threshold for ongoing non-real-time calls; this threshold is the safety limit that keeps non-real-time calls from being dropped. The research models the handoff process and proposes a technique that yields the lowest percentage of dropped and blocked handoffs. Simulation results show the efficacy of the proposed model.
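
The paper's exact objective function is not reproduced here, but the borrowing idea can be sketched with a generic one-dimensional PSO loop: particles are candidate amounts of bandwidth to borrow from the non-real-time pool, and a placeholder cost penalizes any candidate that pushes that pool below the preserved threshold. All coefficients, the cost function, and the example numbers are illustrative assumptions.

import random

def pso_borrow(demand, reserved_nrt, threshold, n_particles=20, iters=50):
    def cost(x):
        # Hypothetical cost: meet as much of the demand as possible,
        # with a large penalty for breaking the non-real-time threshold.
        penalty = 1e9 if reserved_nrt - x < threshold else 0.0
        return (demand - x) ** 2 + penalty

    pos = [random.uniform(0.0, reserved_nrt) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], 0.0), reserved_nrt)
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=cost)
    return gbest

# Example: 6 Mbit/s of extra demand against a 10 Mbit/s non-real-time
# pool with a 5 Mbit/s threshold converges near 5 Mbit/s borrowed.
print(pso_borrow(6.0, 10.0, 5.0))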
Computer networks have evolved considerably in the past few years, both because of large increases in the amounts of data exchanged across networks and because of the growing number of interconnected devices exchanging that data; this has led to the emergence of what are known as congestion problems. Studies of some of these problems show that the largest contributing factor lies in how the transmission rules are implemented, which has made multiple types of protocols necessary in networks that must deal with different computing and communication systems and many applications. This often causes errors at the bit and packet levels: missing packets, duplicate packets, packets received out of order, and, most importantly, congestion in the network. This research aims to determine how to improve network performance and eliminate congestion by exploiting the algorithms used to avoid congestion in networks that rely on the TCP protocol. The goal of these algorithms is to reach stability in the network by enforcing the principle of packet conservation. Within this scope, some of the algorithms used to avoid congestion are studied and compared in general terms, without relying on a specific protocol or service category.
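
For context, the algorithms compared in such studies typically follow the textbook additive-increase/multiplicative-decrease (AIMD) pattern; the sketch below is a generic per-round-trip window update, not any specific algorithm from the research.

def next_cwnd(cwnd, ssthresh, loss_detected):
    # One round trip's congestion window update, in MSS units.
    # Returns the new (cwnd, ssthresh) pair.
    if loss_detected:
        # Multiplicative decrease: halve the window and remember
        # the new threshold.
        new_ssthresh = max(cwnd / 2.0, 1.0)
        return new_ssthresh, new_ssthresh
    if cwnd < ssthresh:
        # Slow start: exponential growth until the threshold.
        return cwnd * 2.0, ssthresh
    # Congestion avoidance: additive increase of one MSS per RTT.
    return cwnd + 1.0, ssthresh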
This work aims to analyze the performance of Orthogonal Frequency Division Multiplexing (OFDM) as applied in fourth-generation mobile networks and WiFi. A fuzzy logic technique is used to analyze the problem of OFDM, taking into consideration the modulation techniques applied in it. Three input parameters to the fuzzy logic system are considered: the signal-to-noise ratio, the modulation degree, and the number of sub-carriers. The output parameters are the bandwidth and the bit error rate. This requires an analytical study to determine the optimal values of the input parameters, that is, studying the membership functions of each input and output parameter using fuzzy logic.
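
As a hedged sketch of what such membership functions look like, the snippet below uses triangular memberships and one illustrative rule (IF SNR is low AND the modulation order is high THEN the bit error rate is high); the breakpoints and the rule are assumptions, not the study's actual fuzzy system.

def triangular(x, a, b, c):
    # Membership rising from a to a peak at b, then falling to c.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def ber_high_degree(snr_db, modulation_order):
    # Degree to which the rule fires, using min() for fuzzy AND.
    snr_low = triangular(snr_db, 0.0, 5.0, 15.0)
    mod_high = triangular(modulation_order, 16.0, 64.0, 256.0)
    return min(snr_low, mod_high)

# Example: 4 dB SNR with 64-QAM fires the "BER is high" rule at 0.8.
print(ber_high_degree(4.0, 64.0))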
Standard train-dev-test splits, used to benchmark multiple models against each other, are ubiquitous in Natural Language Processing (NLP). In this setup, the training data is used for training the model, the development set for evaluating different versions of the proposed model(s) during development, and the test set to confirm the answers to the main research question(s). However, the introduction of neural networks in NLP has led to a different use of these standard splits: the development set is now often used for model selection during the training procedure. Because of this, comparing multiple versions of the same model during development leads to overestimation on the development data. As a result, people have started to compare an increasing number of models on the test data, leading to faster overfitting and "expiration" of our test sets. We propose to use a tune set when developing neural network methods, which can be used for model picking, so that comparing the different versions of a new model can safely be done on the development data.
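
A minimal sketch of the proposed setup, with illustrative proportions: the tune set absorbs model picking during training so the development set stays clean for comparing finished model versions.

import random

def train_tune_dev_test_split(examples, seed=0):
    # Shuffle once, then cut into four fixed slices; the 70/10/10/10
    # proportions are an assumption for illustration.
    rng = random.Random(seed)
    data = list(examples)
    rng.shuffle(data)
    n = len(data)
    train = data[:int(0.7 * n)]
    tune = data[int(0.7 * n):int(0.8 * n)]   # model selection during training
    dev = data[int(0.8 * n):int(0.9 * n)]    # comparing finished versions
    test = data[int(0.9 * n):]               # final research questions only
    return train, tune, dev, test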
