
Why delay-based congestion control?

Delay-based congestion control has been proposed since the late 1980s, starting with Jain and pursued by many others, notably Brakmo and Peterson in TCP Vegas. We believe its advantage over the loss-based approach is small at low speeds, but decisive at high speeds. It has been pointed out that delay can be a poor or untimely predictor of packet loss (e.g., see Martin, Nilsson and Rhee's paper in IEEE/ACM Transactions on Networking, 11(3):356-369, June 2003). This does not mean that it is futile to use delay as a measure of congestion. Rather, it means that using a delay-based algorithm to predict loss, in the hope of helping a loss-based algorithm adjust its window, is the wrong approach to the problems that arise at large windows. Instead, a different approach is needed: one that fully exploits delay as a congestion measure, augmented with loss information. Vegas and FAST explore such an approach. Delay as a congestion measure has two advantages. First, each measurement of loss (whether a packet is lost or not) provides only one bit of congestion information, whereas each measurement of queueing delay provides multi-bit information.
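To make the idea concrete, here is a minimal sketch of a Vegas-style window update in Python. It is an illustration of the general technique, not the actual Vegas or FAST implementation: the function name, the per-RTT update structure, and the default `alpha`/`beta` thresholds are assumptions for this example. The key point is that the controller reacts to the estimated number of packets queued in the network (a multi-bit delay signal), not to a binary loss event.

```python
def vegas_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
    """Hypothetical Vegas-style congestion window update, run once per RTT.

    cwnd     -- current congestion window (packets)
    base_rtt -- minimum RTT observed (propagation delay estimate), seconds
    rtt      -- most recent RTT sample, seconds
    alpha, beta -- target bounds on packets queued in the network
    """
    expected = cwnd / base_rtt          # throughput if no queueing
    actual = cwnd / rtt                 # measured throughput
    diff = (expected - actual) * base_rtt  # estimated packets sitting in queues

    if diff < alpha:
        return cwnd + 1                 # under-utilizing: grow linearly
    elif diff > beta:
        return cwnd - 1                 # queue building: back off before loss
    return cwnd                         # within target band: hold steady
```

For example, with `base_rtt = 100 ms`: if the measured RTT is also 100 ms, no packets are queued and the window grows; if the RTT doubles to 200 ms at a large window, `diff` is large and the window shrinks well before any packet is dropped. A loss-based algorithm in the same situation would keep growing until the queue overflows.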
