
Available Bandwidth Estimation in Computer Networks Using Single Probing Train

(Original Arabic title: تقدير عرض الحزمة المتاحة في الشبكات الحاسوبية باستخدام قطار سبر وحيد)

Publication date: 2011
Research language: Arabic
 Created by Shamra Editor





Available bandwidth has a significant impact on the performance of many applications that run over computer networks. Therefore, many researchers have addressed this issue by studying the possibility of measuring the available bandwidth and by developing tools for measuring this metric. We present a method to estimate the available bandwidth of a path by building, sending, and receiving probe packets. We measure the time gaps between probing packets before sending and after receiving, and from these we estimate the available bandwidth. The method relies on a simple and fast algorithm, and applications can use it before they start exchanging data over the Internet.
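The probe-packet procedure described above is not accompanied by code on this page; the sketch below is only a minimal illustration of the gap-based idea, assuming a probe-gap (Spruce-style) estimator with a known bottleneck capacity C. The function name, packet size, and sample timestamps are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): estimate available bandwidth
# from one probe train. Any growth of the inter-packet gap between sender and
# receiver is attributed to cross traffic queued at a bottleneck of known
# capacity C (bits/s), as in the classic probe-gap model.

def estimate_available_bandwidth(send_times, recv_times, capacity_bps):
    """Estimate available bandwidth (bits/s) from one probe train.

    send_times / recv_times: timestamps (seconds) of the same packets at the
    sender and at the receiver; capacity_bps: assumed bottleneck capacity.
    """
    if len(send_times) != len(recv_times) or len(send_times) < 2:
        raise ValueError("need matching timestamps for at least two probes")

    estimates = []
    for i in range(1, len(send_times)):
        gap_in = send_times[i] - send_times[i - 1]    # spacing set by the sender
        gap_out = recv_times[i] - recv_times[i - 1]   # spacing seen by the receiver
        if gap_in <= 0:
            continue
        # Probe-gap relation: cross traffic ~ C * (gap_out - gap_in) / gap_in
        cross_traffic = capacity_bps * (gap_out - gap_in) / gap_in
        estimates.append(min(capacity_bps, max(0.0, capacity_bps - cross_traffic)))

    # Average the per-gap estimates over the single train.
    return sum(estimates) / len(estimates)


# Hypothetical usage: 100 Mbit/s bottleneck, probes sent 1 ms apart, gaps
# dilated to 1.2 ms by cross traffic -> roughly 80 Mbit/s available.
if __name__ == "__main__":
    send = [i * 0.001 for i in range(10)]
    recv = [0.020 + i * 0.0012 for i in range(10)]
    print(estimate_available_bandwidth(send, recv, 100e6))
```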


Artificial intelligence review:
Research summary
This study concerns measuring the available bandwidth in computer networks using the probe train technique. Available bandwidth is the amount of unused capacity on a given link at a given time; it is a variable quantity that depends on the cross traffic load. The study relies on sending and receiving a set of probe packets with specified time gaps and measuring the change in these gaps to estimate the available bandwidth. A measurement algorithm was designed that relies on sending a single probe train, which reduces the estimation time and increases the accuracy of the results. The method was tested against different tools such as Spruce and Pathload, and the results showed that the probe train gives accurate results, especially at low values of available bandwidth. The experiments were carried out in a controlled environment to ensure the accuracy of the results, and they showed that Pathload gives the best results but needs a longer time, whereas the probe train offers a good balance between accuracy and speed.
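For readers unfamiliar with the gap-based estimate mentioned in this summary, the relation below is the classic probe-gap model used by tools such as Spruce; the paper's single-train estimator may differ in detail. Here $\Delta_{\mathrm{in}}$ is the inter-packet gap at the sender, $\Delta_{\mathrm{out}}$ the gap at the receiver, $C$ the bottleneck capacity, $\lambda$ the cross-traffic rate, and $A$ the available bandwidth:

\[
\Delta_{\mathrm{out}} = \Delta_{\mathrm{in}} + \frac{\lambda\,\Delta_{\mathrm{in}}}{C}
\;\;\Longrightarrow\;\;
\lambda = C\,\frac{\Delta_{\mathrm{out}}-\Delta_{\mathrm{in}}}{\Delta_{\mathrm{in}}},
\qquad
A = C - \lambda = C\left(1-\frac{\Delta_{\mathrm{out}}-\Delta_{\mathrm{in}}}{\Delta_{\mathrm{in}}}\right).
\]

As the summary states, the single-train design applies this gap-based estimate over one train rather than many repeated probing rounds, which is where the reduction in estimation time comes from.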
Critical review
This study presents a novel method for measuring available bandwidth using a single probe train, which is an important step toward improving measurement accuracy and speed in computer networks. Nevertheless, several points could be improved. First, the method assumes that the available bandwidth remains constant during the estimation interval, an assumption that may not hold in all cases. Second, the experiments were conducted in a controlled environment, which may not reflect the real conditions of diverse computer networks. Finally, it would be useful to compare this method with more of the other available tools in order to obtain a more complete picture of its effectiveness.
Questions related to the research
  1. What is available bandwidth?

    Available bandwidth is the amount of unused capacity on a given link at a given time; it is a variable quantity that depends on the cross traffic load.

  2. How is the available bandwidth measured in this study?

    It is measured by sending and receiving a set of probe packets with specified time gaps and by measuring the change in these gaps to estimate the available bandwidth.

  3. Which tools were used in the experiments?

    Tools such as Spruce and Pathload were used in the experiments.

  4. What advantages does the probe train offer compared with other tools?

    The probe train offers a good balance between accuracy and speed, and it gives accurate results, especially at low values of available bandwidth.


References used
Ravi Prasad, Constantinos Dovrolis, Margaret Murray, and Kimberly C. Claffy. Bandwidth estimation: metrics, measurement techniques, and tools. IEEE Network, November 2003.
J. Strauss, D. Katabi, and F. Kaashoek. A measurement study of available bandwidth estimation tools. In Proceedings of the 3rd ACM SIGCOMM Conference on Internet Measurement (IMC '03), 2003.
Behrouz A. Forouzan. TCP/IP Protocol Suite. McGraw-Hill Professional, 2002.
Related research


Software Defined Networking (SDN) is a qualitative shift in the field of networks because it separates the control elements from the routing elements: the routing elements are limited to executing the decisions sent to them by the controller through the OpenFlow (OF) protocol, which is the protocol mainly used in SDN. In this paper we explain the benefit of the new concept presented by SDN, which makes network management easier: instead of writing rules on each device, we program the application in the controller, and the infrastructure devices run the commands received from the controller. To achieve the best performance of this technology, Quality of Service (QoS) must be applied within it. QoS includes several criteria, the most important being the used bandwidth, delay, packet loss, and jitter. The most important of these criteria is bandwidth, because by improving it we can improve the remaining criteria. Therefore, in this paper we provide the necessary improvement to the RYU controller so that it uses the best bandwidth, which improves the quality of service in SDN.
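The RYU modification itself is not reproduced on this page; as a rough illustration of what "using the best bandwidth" can mean in an SDN controller, the sketch below computes the path with the maximum bottleneck bandwidth (widest path) over a made-up topology. The graph, link capacities, and switch names are invented for illustration and this is not the paper's RYU code.

```python
import heapq

# Hypothetical link capacities (Mbit/s) between switches; not from the paper.
links = {
    "s1": {"s2": 100, "s3": 10},
    "s2": {"s1": 100, "s4": 50},
    "s3": {"s1": 10, "s4": 100},
    "s4": {"s2": 50, "s3": 100},
}

def widest_path(graph, src, dst):
    """Return (bottleneck_bandwidth, path) maximizing the minimum link bandwidth."""
    best = {src: float("inf")}
    prev = {}
    heap = [(-float("inf"), src)]             # max-heap on bottleneck bandwidth
    while heap:
        neg_bw, node = heapq.heappop(heap)
        bw = -neg_bw
        if node == dst:                       # reconstruct the chosen path
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return bw, path[::-1]
        for nbr, cap in graph[node].items():
            cand = min(bw, cap)               # bottleneck along the candidate path
            if cand > best.get(nbr, 0):
                best[nbr] = cand
                prev[nbr] = node
                heapq.heappush(heap, (-cand, nbr))
    return 0, []

print(widest_path(links, "s1", "s4"))         # -> (50, ['s1', 's2', 's4'])
```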
Providing good Quality of Service (QoS) for all users is a big challenge in cellular networks: as the number of users increases, the demand for Internet service increases too, especially with today's technology. A user on the move needs Internet connectivity with good quality of service and minimum call-dropping probability. Cellular IP offers a good solution for mobility because it supports highly mobile users, and since users' needs are becoming larger and more varied (file downloading, video streaming, sending e-mail, ...), an efficient way to improve QoS is a necessity. Bandwidth is the most important factor in Cellular IP networks. To improve QoS in Cellular IP networks, this paper presents a model for bandwidth management based on borrowing bandwidth reserved for non-real-time users using Particle Swarm Optimization (PSO). The proposed model preserves a low bandwidth threshold for the ongoing non-real-time calls; this threshold is the safety limit that keeps non-real-time calls from being dropped. The research models the handoff process and proposes a technique that gives the lowest percentage of dropped and blocked handoffs. Simulation results show the efficacy of the proposed model.
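The abstract above describes tuning how much non-real-time bandwidth may be borrowed, subject to a safety threshold, via PSO. The toy sketch below shows that optimization loop in miniature; the cost function, threshold value, and PSO parameters are all invented for illustration and are not taken from the paper.

```python
import random

# Toy PSO over one decision variable: the fraction of non-real-time bandwidth
# that may be borrowed for handoff calls. The cost is an invented stand-in
# for "dropped + blocked handoffs"; the paper's model is more detailed.

MAX_BORROW = 0.6          # safety threshold: never borrow beyond this fraction

def cost(borrow_fraction):
    blocked = (MAX_BORROW - borrow_fraction) ** 2            # too little: handoffs blocked
    degraded = 4.0 * max(0.0, borrow_fraction - 0.4) ** 2    # too much: ongoing calls suffer
    return blocked + degraded

def pso(n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(0, MAX_BORROW) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                          # personal best positions
    gbest = min(pos, key=cost)              # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(MAX_BORROW, max(0.0, pos[i] + vel[i]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
                if cost(pos[i]) < cost(gbest):
                    gbest = pos[i]
    return gbest

print("borrowed fraction ->", round(pso(), 3))
```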
Computer networks have evolved considerably in the past few years, on one hand because of the large increase in the amount of data exchanged across the network and on the other because of the increasing number of interconnected devices that can exchange data as part of the network, and this is what led to the emergence of what are known as congestion problems. Studies of some of these problems showed that the largest cause lies in the implementation of the transmission rules, and this led to the urgent need for multiple types of protocols in computer networks that must deal with different computing and communication systems and many other applications, which often causes errors at the bit level and the packet level: missing packets, duplicate packets, randomly received packets, and, most importantly, congestion appearing in the network. This research aims to determine how to improve network performance and eliminate congestion by exploiting the advantages of the algorithms used to avoid congestion in networks that rely on the TCP protocol. The goal of these algorithms is to reach stability in the network by working to achieve the principle of packet conservation. Within this scope, some of the algorithms used to avoid congestion were also studied and compared in general, without relying on a specific protocol or a specific service category.
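None of the specific algorithms compared in that paper are reproduced here; as a generic illustration of the packet-conservation idea behind TCP congestion avoidance, the sketch below implements the classic AIMD window update (slow start, additive increase, multiplicative decrease). The constants and the loss signal are simplified and are not tied to any particular TCP variant.

```python
def aimd_step(cwnd, ssthresh, loss_detected):
    """One round-trip update of the congestion window (in segments).

    Classic simplified behaviour: slow start grows the window exponentially
    below ssthresh, congestion avoidance adds one segment per RTT, and any
    loss halves the window (multiplicative decrease).
    """
    if loss_detected:
        ssthresh = max(cwnd / 2.0, 2.0)
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd *= 2.0            # slow start: exponential growth per RTT
    else:
        cwnd += 1.0            # congestion avoidance: additive increase
    return cwnd, ssthresh


# Toy trace: loss whenever the window exceeds a fictitious capacity of 32 segments.
cwnd, ssthresh = 1.0, 64.0
for rtt in range(20):
    cwnd, ssthresh = aimd_step(cwnd, ssthresh, loss_detected=cwnd > 32)
    print(rtt, round(cwnd, 1))
```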
This work aims to analyze the performance of Orthogonal Frequency Division Multiplexing (OFDM) as applied in fourth-generation mobile networks and WiFi. Fuzzy logic is used in this study to analyze the OFDM problem, taking into consideration the modulation techniques applied in OFDM. Three input parameters of the fuzzy logic system are mainly considered: the signal-to-noise ratio, the modulation degree, and the number of sub-carriers. The output parameters are chosen to be the bandwidth and the bit error rate. This requires an analytical study to determine the optimal values of the input parameters used in this study, which means studying the membership functions of each input and output parameter using fuzzy logic.
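The fuzzy system itself is not listed on this page; as a small illustration of the kind of membership functions such a study defines over its inputs (for example, the signal-to-noise ratio), the sketch below builds triangular memberships and fuzzifies one hypothetical SNR value. The set ranges are invented, not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Invented fuzzy sets over SNR in dB (low / medium / high); not the paper's ranges.
snr_sets = {
    "low":    (0.0, 5.0, 12.0),
    "medium": (8.0, 15.0, 22.0),
    "high":   (18.0, 25.0, 35.0),
}

snr = 14.0   # hypothetical measured SNR in dB
memberships = {name: triangular(snr, *abc) for name, abc in snr_sets.items()}
print(memberships)   # mostly 'medium' for this value
```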
Standard train-dev-test splits used to benchmark multiple models against each other are ubiquitous in Natural Language Processing (NLP). In this setup, the train data is used for training the model, the development set for evaluating different versions of the proposed model(s) during development, and the test set to confirm the answers to the main research question(s). However, the introduction of neural networks in NLP has led to a different use of these standard splits; the development set is now often used for model selection during the training procedure. Because of this, comparing multiple versions of the same model during development leads to overestimation on the development data. As an effect, people have started to compare an increasing number of models on the test data, leading to faster overfitting and "expiration" of our test sets. We propose to use a tune set when developing neural network methods, which can be used for model picking, so that comparing the different versions of a new model can safely be done on the development data.
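The proposal above adds a tune set alongside the usual splits; the snippet below sketches one way to carve an example corpus into train / tune / dev / test. The split ratios and function name are chosen arbitrarily for illustration.

```python
import random

def four_way_split(examples, ratios=(0.7, 0.1, 0.1, 0.1), seed=13):
    """Shuffle and split into train / tune / dev / test (ratios are illustrative)."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    rng = random.Random(seed)
    data = examples[:]
    rng.shuffle(data)
    n = len(data)
    cut1 = int(ratios[0] * n)
    cut2 = cut1 + int(ratios[1] * n)
    cut3 = cut2 + int(ratios[2] * n)
    return data[:cut1], data[cut1:cut2], data[cut2:cut3], data[cut3:]

train, tune, dev, test = four_way_split(list(range(1000)))
# Model selection during training uses `tune`; comparing model versions uses `dev`;
# `test` is touched only for the final research question.
print(len(train), len(tune), len(dev), len(test))   # 700 100 100 100
```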
