Papers: J.-C. Bolot, End-to-End
Packet Delay and Loss Behavior in the Internet.
V. Paxson, End-to-End Routing Behavior in the Internet
Lecture date: 10/5/99
Prof. John Byers
Scribe: Igor Stubailo
Setup, data analysis strategy.
What's so important about characterization?
Rather than just taking measurements, it is often preferable to model a situation. This paper is one of the first important measurement papers, and the numbers reported in it show quite interesting results. The authors based their assumptions only on their own measurements.
The main setup is the following:
In each experiment the source sends UDP probes from A to B at regular intervals.
If we denote by s_n the time at which packet n is sent by the source, by r_n the time at which its echo is received back at the source, and by rtt_n the packet round trip delay, then the interval between the send times of two successive packets is d = s_{n+1} - s_n for all n. The value of d is fixed within a trace but varies between traces. The round trip delay is rtt_n = r_n - s_n, with rtt_n = 0 if a packet is lost.
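The bookkeeping above can be sketched as follows (a hypothetical helper; the loss convention rtt_n = 0 and the time values are as defined in the setup, all numbers illustrative):

```python
# Sketch of the probe bookkeeping: send_times[n] is s_n and
# recv_times[n] is r_n, or None if probe n was lost.

def rtt_series(send_times, recv_times):
    """Return rtt_n = r_n - s_n, with rtt_n = 0 marking a lost probe."""
    return [0 if r is None else r - s for s, r in zip(send_times, recv_times)]

# Probes sent every d = 50 ms; the third probe is lost.
s = [0, 50, 100, 150]
r = [30, 85, None, 190]
print(rtt_series(s, r))  # [30, 35, 0, 40]
```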
What did they get from that setup? The following graph shows a time series plot of measured round trip delays as a function of n:
Analysis of packet delay.
To simplify the analysis and to understand the structure of the phase plot
in Fig. 2, it is convenient to introduce a simple model which captures
the essential features of the experiments:
rtt_n = D + w_n + p/m ,
where D is a constant (propagation) delay modeling the fixed component of the round trip delay of the probe packets, w_n is the waiting time of probe n at the bottleneck, and p/m is its transmission time (probe size p over bottleneck rate m).
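As a toy numerical illustration of this expression (the parameter values below are chosen for illustration, not taken from the paper):

```python
# Minimal sketch of the single-bottleneck model: rtt_n = D + w_n + p/m,
# where D is the fixed propagation delay, w_n the waiting time of
# probe n at the bottleneck, and p/m its transmission time
# (probe size p in bits over bottleneck rate m in bits/ms).

def rtt(D, w_n, p, m):
    return D + w_n + p / m

# e.g. D = 20 ms, a probe of p = 32*8 = 256 bits over a 64 bit/ms
# (64 kbit/s) bottleneck gives a 4 ms transmission time; with
# w_n = 10 ms of queueing the model predicts:
print(rtt(20.0, 10.0, 32 * 8, 64.0))  # 34.0 ms
```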
Crucial assumption: there is only one bottleneck (a strong simplification), but many people find this model quite reasonable. Note that this technique is not applicable to a system with multiple bottlenecks.
We can plot rtt_n in the following way:
From this plot: rtt_n approaches its minimum D + p/m as the waiting time w_n goes to 0.
So we distinguish two regimes: light load and heavy load.
Light load: w_{n+1} = w_n + e_n , where e_n is a random process with zero mean and low variance.
Heavy load: what happens if two consecutive probes are injected?
As we see, in the case of heavy load, packets n+2, n+3, ... will undergo probe compression.
And what about the waiting times?
w_{n+1} = w_n + B/m holds when probe n sees no queue; here B/m is the delay associated with transmitting a packet of size B bits. That is: probe n sees an empty queue, so w_n = 0, while probe n+1 sees only the B-bit packet ahead of it, so w_{n+1} = B/m.
The other consequence: probes n+2, n+3, n+4, ... are going to be compressed, and for them w_{n+1} = w_n + p/m - d, where p/m is the transmission delay of a probe.
When d is small, we observe the probe compression phenomenon and the waiting time decreases. When d is large, we observe that the points in the phase plane (Fig. 5) are scattered around the line rtt_{n+1} = rtt_n.
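The two regimes above can be sketched with a toy simulation of the waiting-time recursions (all parameter values are illustrative assumptions, not from the paper):

```python
import random

# Heavy load / compression: successive probes queue behind each other,
# so w_{n+1} = max(w_n + p/m - d, 0).
# Light load: w_{n+1} = w_n + e_n, with e_n a zero-mean,
# low-variance perturbation.

def heavy_load_walk(w0, p_over_m, d, steps):
    w = [w0]
    for _ in range(steps):
        w.append(max(w[-1] + p_over_m - d, 0.0))
    return w

def light_load_walk(w0, steps, sigma=0.5, seed=0):
    rng = random.Random(seed)
    w = [w0]
    for _ in range(steps):
        w.append(max(w[-1] + rng.gauss(0.0, sigma), 0.0))
    return w

# With d larger than the probe transmission time p/m, the queue built
# up behind a burst drains and the waiting times decrease:
print(heavy_load_walk(w0=40.0, p_over_m=4.0, d=10.0, steps=5))
# [40.0, 34.0, 28.0, 22.0, 16.0, 10.0]
```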
Which of these parameters can actually be calculated?
We know p and d, so we can compute m, i.e. the bottleneck bandwidth.
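This estimate can be sketched as follows: for compressed probes the recursion above gives rtt_{n+1} - rtt_n = p/m - d, so m can be recovered from one such pair. The numbers below are illustrative assumptions:

```python
# For compressed probes: rtt_{n+1} - rtt_n = p/m - d, hence
# m = p / (rtt_{n+1} - rtt_n + d).

def bottleneck_rate(p_bits, d_ms, rtt_n, rtt_n1):
    """Estimate m (bits/ms) from one pair of compressed probes."""
    return p_bits / (rtt_n1 - rtt_n + d_ms)

# p = 256 bits, d = 10 ms, rtt fell from 30 ms to 24 ms under compression:
print(bottleneck_rate(256, 10.0, 30.0, 24.0))  # 64.0 bits/ms = 64 kbit/s
```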
A few words about Fig. 8:
Let Q = w_{n+1} - w_n + d, and take d = 20 ms.
If Q = 20, then w_{n+1} - w_n = 0 and we experience no load;
if Q = 0, then we have probe compression: the spacing d has been absorbed by the queues;
if Q = 40, then there is a particularly frequent cross-traffic packet size, from which you can estimate the value of B (approx. 600 bytes).
The distances between the peaks are fixed, and the interarrival times of packets between probes follow an exponential distribution.
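The Fig. 8 analysis above can be sketched as follows (a toy illustration: the bin width and the bottleneck rate of 240 bits/ms are assumed values, not from the paper):

```python
from collections import Counter

# Histogram the quantity Q_n = w_{n+1} - w_n + d and look at the
# spacing between peaks; with the bottleneck rate m known, a peak
# spacing of delta ms corresponds to a typical cross-traffic packet
# of B = m * delta bits.

def q_histogram(w, d, bin_ms=1.0):
    qs = [w[i + 1] - w[i] + d for i in range(len(w) - 1)]
    return Counter(round(q / bin_ms) * bin_ms for q in qs)

def packet_size_from_peak_spacing(delta_ms, m_bits_per_ms):
    return delta_ms * m_bits_per_ms  # bits

# A peak 20 ms to the right of the "no load" peak, over an assumed
# 240 bit/ms bottleneck, suggests B = 4800 bits = 600 bytes:
print(packet_size_from_peak_spacing(20.0, 240.0) / 8)  # 600.0 bytes
```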
Fig. 9 shows the distribution of w_{n+1} - w_n + d for d = 100 ms. The correlation goes down dramatically. If d is small, then you have too many probes, and they can themselves affect the traffic.
Conclusion:
In the paper the authors have shown that a UDP echo tool is useful for analyzing the end-to-end characteristics of the Internet over different time scales.