If a is too large, then rtt_est does not react quickly enough to changes
in the network.
If a is too small, then rtt_est follows transient increases of
rtt_measured.
Both are undesirable.
rtt_est alone is insufficient for retransmit time-outs because its variance is not considered. -> [Jac88]
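A minimal sketch of such an estimator in Python, with the mean-deviation
term from [Jac88] added; the assumed update rule
rtt_est = a * rtt_est_old + (1 - a) * rtt_measured and the gain values
(0.875, 0.75, factor 4) are common textbook choices, not taken from these
notes:

    class RttEstimator:
        """Exponentially weighted RTT estimate plus a mean-deviation term."""

        def __init__(self, a=0.875, b=0.75):
            self.a = a            # weight of the old RTT estimate
            self.b = b            # weight of the old deviation estimate
            self.rtt_est = None   # smoothed round-trip time
            self.rtt_dev = 0.0    # smoothed mean deviation

        def update(self, rtt_measured):
            if self.rtt_est is None:
                self.rtt_est = rtt_measured
            else:
                # rtt_est = a * rtt_est_old + (1 - a) * rtt_measured
                self.rtt_est = self.a * self.rtt_est + (1 - self.a) * rtt_measured
            self.rtt_dev = (self.b * self.rtt_dev
                            + (1 - self.b) * abs(rtt_measured - self.rtt_est))

        def retransmit_timeout(self):
            # [Jac88]: base the time-out on the estimate plus a multiple of
            # the deviation, not on rtt_est alone
            return self.rtt_est + 4 * self.rtt_dev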
Problem with roundtrip time estimation:
If a packet was retransmitted, the ack for this packet does not tell
whether it acknowledges the original packet or the retransmission.
-> May lead to a wrong rtt_est.
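A tiny numeric illustration of this ambiguity (all timestamps are
hypothetical):

    # Timing of a packet that was sent, timed out, and retransmitted (seconds).
    t_sent_original   = 0.00   # first transmission
    t_sent_retransmit = 1.50   # retransmission after the time-out
    t_ack_received    = 1.70   # a single ack arrives

    # Two possible interpretations of the same ack:
    rtt_if_acks_original   = t_ack_received - t_sent_original    # 1.70 s
    rtt_if_acks_retransmit = t_ack_received - t_sent_retransmit  # 0.20 s

    # The sender cannot tell which one is correct; feeding the wrong value
    # into rtt_est = a * rtt_est_old + (1 - a) * rtt_measured corrupts the
    # estimate.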
Tries to keep the load of the bottleneck link at the knee of the curve, i.e. the point where almost the highest possible throughput is achieved while the delay is still very low.
One bit in the packet header to notify the sending user of congestion.
The devil is in the details:
* Use hysteresis or not?
* When should the user decrease or increase the load? A sufficient
number of packets is required.
Drawback:
The router has to do some work: recalculating the checksum(s) of the
packet when it sets the congestion bit.
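A sketch of that extra work, assuming an Internet-style 16-bit
one's-complement header checksum; the incremental-update formula
HC' = ~(~HC + ~m + m') is the standard one (cf. RFC 1624), while the bit
position and the example values are made up:

    def ones_complement_add(a, b):
        """16-bit one's-complement addition with end-around carry."""
        s = a + b
        return (s & 0xFFFF) + (s >> 16)

    def update_checksum(old_checksum, old_word, new_word):
        """Incrementally update the header checksum after a single 16-bit
        header word changed (e.g. the word containing the congestion bit)."""
        s = ones_complement_add(~old_checksum & 0xFFFF, ~old_word & 0xFFFF)
        s = ones_complement_add(s, new_word & 0xFFFF)
        return ~s & 0xFFFF

    # Example: the router sets the congestion bit in one header word.
    CONGESTION_BIT = 0x0040                 # hypothetical bit position
    old_word = 0x1234
    new_word = old_word | CONGESTION_BIT
    new_checksum = update_checksum(0xABCD, old_word, new_word)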
Tactics:
* Router mechanism to enforce congestion control (this paper)
* Flow isolation
* Pricing policies (hard to implement, therefore not applied so far)
Penalizes bursty connections:
Bursts may frequently but only transiently fill the queue
-> The probability that packets are dropped from a bursty flow is higher.
If the average queue size is above the maximum threshold, then drop every packet.
[FJ] drop packets instead of marking them in their experiments.
* Compliant with existing TCP implementations.
* Dropping enforces punishment of misbehaving users.
* But if the clients are cooperative, then marking will work well.
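A condensed sketch of the gateway's per-packet decision; the
minimum/maximum thresholds and the linear increase of the probability
between them follow [FJ], the concrete parameter names and the default
of dropping rather than marking are illustrative:

    import random

    def red_decision(q_avg, min_th, max_th, max_p, mark_only=False, rng=random):
        """Decide for one arriving packet whether to forward, mark or drop it."""
        if q_avg < min_th:
            return "forward"                        # queue short enough: do nothing
        if q_avg >= max_th:
            return "mark" if mark_only else "drop"  # above max threshold: every packet
        # Between the thresholds: indicate congestion with probability p,
        # growing linearly from 0 to max_p as in [FJ].
        p = max_p * (q_avg - min_th) / (max_th - min_th)
        if rng.random() < p:
            return "mark" if mark_only else "drop"
        return "forward"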
q_avg = a * q_avg_old + (1 - a) * q_measured
([RJ90]: average does not work in their method)
Choice of a:
* If a is too high: responds too slowly to changes
* If a is too low: responds to transient behavior
* Gateway can absorb bursts up to a maximum/desired rate
* The predictive aspect of the calculation of q_avg is important.
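A small numeric illustration of the burst-absorbing effect of a high a
(a = 0.9 and the queue samples are arbitrary illustrative values):

    a = 0.9        # weight of the old average
    q_avg = 0.0

    # A short burst: the measured queue jumps to 20 packets for a few
    # arrivals and then empties again.
    for q_measured in [0, 0, 20, 20, 20, 20, 20, 0, 0, 0]:
        q_avg = a * q_avg + (1 - a) * q_measured
        print(f"q_measured={q_measured:2d}  q_avg={q_avg:5.2f}")

    # q_avg peaks around 8, far below the instantaneous 20, so a transient
    # burst alone does not push the average over the thresholds; only a
    # sustained queue does.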
Bad: mark each packet with probability p
* X: interarrival time between two marked packets
* Pr[X = i] = (1 - p)^(i - 1) * p (geometric distribution)
-> Several packets will be marked close to each other, then there
will be a large gap.
-> Similar to drop-tail: unfair and possibility of synchronization
effects
Better: X is uniformly distributed on {1, ..., 2/p}
* Probability to mark the packet at t + 1: p / 2
* Probability to mark the packet at t + 2, given no mark at t + 1: 1 / (2/p - 1)
-> Marks are "smoothly" distributed
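A sketch of both marking variants, to make the two inter-mark
distributions concrete; p is the average marking probability per packet
and 2/p is assumed to be (roughly) an integer:

    import random

    def mark_geometric(p, rng=random):
        """Bad variant: every packet is marked independently with
        probability p, so the gap X between marks is geometric
        (clusters of marks followed by long gaps)."""
        return rng.random() < p

    class UniformMarker:
        """Better variant: the gap X between marks is uniform on
        {1, ..., 2/p}, implemented via the conditional probabilities
        above: the (count+1)-th packet since the last mark is marked
        with probability 1 / (2/p - count)."""

        def __init__(self, p):
            self.span = 2 / p   # 2/p
            self.count = 0      # packets seen since the last mark

        def mark(self, rng=random):
            if rng.random() < 1.0 / (self.span - self.count):
                self.count = 0
                return True
            self.count += 1
            return False

    # Example: p = 0.1 -> gaps between marks are uniform on {1, ..., 20}
    marker = UniformMarker(p=0.1)
    marks = [marker.mark() for _ in range(1000)]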
Why not mark packets at a constant rate, i.e. without randomization?
* To break possible synchronization/cycles: because the net traffic
is not completely random, there is some correlation between flows.
* This is true for their simulations, but in real networks ??? Perhaps
a constant rate would work well.