Although dropping policies are generally not applicable to TelePresence flows, they play an important part in the overall QoS policy design of TelePresence networks. Dropping tools are complementary to (and dependent on) queuing tools: queuing algorithms manage the front of a queue (that is, how a packet exits a queue), whereas congestion avoidance mechanisms manage the tail of a queue (that is, how a packet enters a queue).
Dropping tools, sometimes called congestion avoidance mechanisms, are designed to optimize TCP-based traffic. TCP has built-in flow control mechanisms that operate by increasing the transmission rates of traffic flows until packet loss occurs. At this point, TCP abruptly squelches the transmission rate and then gradually begins to ramp the transmission rate higher again. Incidentally, this behavior makes a strong case against the claim that "QoS isn't necessary; just throw more bandwidth at it": if left unchecked, lengthy TCP sessions (as are typical with bulk data and scavenger applications) will consume any and all available bandwidth, simply because of the nature of TCP windowing.
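This ramp-up/back-off cycle can be sketched as a toy additive-increase, multiplicative-decrease (AIMD) loop. The capacity and round counts below are purely illustrative values, not drawn from any real TCP stack:

```python
# Toy model of TCP congestion behavior: the sender's congestion window
# (cwnd) grows additively until loss occurs at the link's capacity,
# then is cut sharply, producing the characteristic sawtooth pattern.

def aimd_sawtooth(capacity=20, rounds=60):
    """Return the cwnd value observed in each round (illustrative only)."""
    cwnd = 1
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd >= capacity:          # buffer overflows -> packet loss
            cwnd = max(1, cwnd // 2)  # multiplicative decrease (back off)
        else:
            cwnd += 1                 # additive increase (ramp up)
    return history

trace = aimd_sawtooth()
print(max(trace))  # → 20 (bounded by the illustrative capacity)
```

Plotting `trace` reproduces the sawtooth seen in real TCP sessions: the flow repeatedly ramps up to the point of loss and backs off, which is why a single long-lived flow will eventually probe for (and consume) whatever bandwidth is available.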
When no congestion avoidance algorithms are enabled on an interface, the interface is said to tail drop. That is, after the queuing buffers have filled, all other packets are dropped as they arrive.
In a constricted channel, such as in a WAN or VPN, all the TCP connections eventually synchronize with each other as they compete for the channel. Without congestion avoidance mechanisms, they all ramp up together, lose packets together, and then back off together. This behavior is referred to as global synchronization. In effect, waves of TCP traffic flow through the network nodes, with packets overflowing the buffers at each wave peak and lulls in traffic between the waves.
Figure 1 illustrates TCP global synchronization behavior attributable to tail-dropping and the suboptimal effect this behavior has on bandwidth utilization.
Random Early Detect (RED) counters the effects of TCP global synchronization by randomly dropping packets before the queues fill to capacity.
Instead of waiting for the queuing buffers to fill before dropping packets, RED has the router monitor the buffer depth and perform early discards (drops) on random packets when a defined queue threshold has been exceeded. These drops occur within the operational bounds of TCP retry timers, which slows the transmission rates of the sessions but prevents them from slow-starting. Thus RED optimizes network throughput for TCP sessions.
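The classic RED drop decision can be sketched as follows. The threshold values, maximum drop probability, and EWMA weight here are illustrative placeholders, not Cisco defaults:

```python
import random

# Sketch of the RED algorithm: track an exponentially weighted moving
# average (EWMA) of queue depth, and early-drop arriving packets with a
# probability that rises linearly between two thresholds.

MIN_TH, MAX_TH = 20, 40  # queue-depth thresholds in packets (illustrative)
MAX_P = 0.10             # drop probability when the average reaches MAX_TH
WEIGHT = 0.002           # EWMA weight (smooths out momentary bursts)

avg_qdepth = 0.0

def red_should_drop(current_qdepth, rng=random.random):
    """Return True if the arriving packet should be early-dropped."""
    global avg_qdepth
    avg_qdepth += WEIGHT * (current_qdepth - avg_qdepth)  # update EWMA
    if avg_qdepth < MIN_TH:
        return False   # queue is shallow: never drop
    if avg_qdepth >= MAX_TH:
        return True    # queue effectively full: fall back to tail drop
    # Between thresholds: drop probability grows linearly toward MAX_P,
    # so random flows are nudged to slow down before the queue fills.
    p = MAX_P * (avg_qdepth - MIN_TH) / (MAX_TH - MIN_TH)
    return rng() < p
```

Because the average (rather than the instantaneous) queue depth is compared against the thresholds, short bursts pass through untouched while sustained congestion triggers progressively more early drops.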
Because UDP does not have any retry logic, congestion avoidance techniques such as RED (and variants) do not optimize UDP-based traffic.
Note: Cisco IOS Software does not (directly) support Random Early Detect (RED), only Weighted-RED (WRED), discussed in the next section. However, if all packets assigned to a WRED-enabled queue have the same IP Precedence or DSCP markings, the effective policy is simply RED.
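As an illustration, class-based WRED might be enabled under the Cisco IOS Modular QoS CLI along these lines. The policy-map, class, interface, and bandwidth values are hypothetical examples, not a recommended design:

```
policy-map WAN-EDGE
 class BULK-DATA
  bandwidth percent 10          ! reserve bandwidth for this queue
  random-detect dscp-based      ! enable WRED, weighted by DSCP
!
interface Serial0/0
 service-policy output WAN-EDGE
```

If every packet classified into BULK-DATA carried the same DSCP value, the weighting would have no effect and this policy would behave as plain RED, per the note above.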