Dynamic Window-Constrained Scheduling



What is DWCS?

Background. DWCS stands for Dynamic Window-Constrained Scheduling. Originally, DWCS was designed as a network packet scheduler that limited the number of late or lost packets over finite numbers of consecutive packets in loss-tolerant and/or delay-constrained, heterogeneous traffic streams. For example, virtual environments, tele-medicine, and real-time multimedia applications (including video-on-demand and streamed audio) can tolerate a certain fraction of late or lost information without any noticeable degradation in the quality of playback at the receiver. However, it is important that such applications do not suffer losses at certain points in time, even though they can tolerate some fraction of information being lost. For most loss-tolerant applications, there is a restriction on the number of consecutive packet losses that are acceptable. For example, losing a series of consecutive packets from an audio stream might result in the loss of a complete section of audio, rather than merely a reduction in the signal-to-noise ratio. A suitable performance measure in this case is a windowed loss-rate, i.e., a loss-rate constrained over a finite range, or window, of consecutive packets. More precisely, an application might tolerate x packet losses for every y arrivals at the various service points across a network. Any service discipline attempting to meet these requirements must ensure that the number of violations of the loss-tolerance specification is minimized (if not zero) across the whole stream. As a packet scheduler, DWCS attempts to meet windowed loss-rates for multiple packet streams, each with its own performance objectives.

More recently, we have applied DWCS to scheduling processes (and threads) for execution on available CPUs of a host machine. In its current incarnation as a CPU scheduler, DWCS can guarantee that no more than x deadlines are missed every y deadlines, for a given process (or thread).

Real-Time CPU Scheduling. Many scheduling policies have been used for real-time scheduling, including cyclic executives, static priority algorithms (such as Rate-Monotonic Scheduling), and dynamic algorithms, such as Earliest Deadline First (EDF). Static priority algorithms consider one process (or thread) more important to service than another based solely on each process's time-invariant priority. However, such algorithms do not take into account that the importance of servicing a process can vary with time. Moreover, real-time systems require processes to complete their service by specific deadlines in order to meet necessary timeliness constraints. Consequently, Earliest Deadline First scheduling considers that each process's importance (or priority) increases as the urgency of completing that process's service increases. That is, it is more important to service a process close to its deadline than another process further from its deadline. Although EDF considers that a process's importance varies with time, it does not consider that, at any point in time, the individual importance of two or more processes with the same deadline may differ. That is, if two processes have the same deadline, one process may require precedence over the other, as in static priority schemes. Furthermore, EDF assumes that the urgency of servicing each process increases at the same rate for all processes, which is not true in all systems. Hence, there is a need for an algorithm that combines both static priority and earliest-deadline-first properties. DWCS is one such algorithm. In fact, DWCS can perform static priority, earliest deadline first, and real-time proportional (or "fair") scheduling.

Real-Time Scheduling with DWCS

Background. The original DWCS packet scheduler required two service constraints to be specified for each packet stream. These were a maximum inter-packet gap (from which a per-packet deadline could be derived) and a loss-tolerance. For CPU scheduling, these terms were not so meaningful, so we changed the inter-packet gap to be a request period and the loss-tolerance to be a window-constraint.

For CPU scheduling, a process has attributes in terms of a request period and a window-constraint. A request period defines an interval (window of time) over which a process must receive some share of the CPU. The end of one request period and the start of the next, for any given process, denotes a deadline. Hence, DWCS derives deadlines for each process, such that a process's current deadline is at the end of its current request period. The window-constraint is actually a fraction, x/y, and states that a process can miss being serviced in at most x request periods out of every y request periods. That is, a process can miss x deadlines for every fixed window of y deadlines derived from its request period and the current time. Observe that a process does NOT have to be periodic to use DWCS. For periodic processes, a request period can identify a window of time over which one instance of a process must execute to completion, before the next request for the same process requires servicing. For aperiodic processes (for example, processes that only execute once), a request period identifies a window of time in which part of a process must execute. That is, for an aperiodic process, P, the current request period identifies the interval of time in which one timeslice (or service quantum) of P must execute, otherwise the current timeslice of P will miss its deadline. Consequently, the window-constraint of P specifies that up to x timeslices of P can be serviced late (outside their request periods) for each consecutive y timeslices. Clearly, a process, P, has a number of timeslices, dependent upon its overall service time. With DWCS, the size of each timeslice can be specified as some multiple of the system clock. For Intel platforms, the minimum timeslice for a process is 10 ms, with larger timeslices being some multiple of 10 ms, but this will change if we use the UTIME package from the University of Kansas.
Moreover, each timeslice for a process (be it the time to service a full instance of a process, or simply part of that process) can be preemptible in a given request period.

Example. A process requires 50 ms of service in every request period of 100 ms, tolerating 2 out of 10 missed service intervals. If timeslice preemption is enabled, that 50 ms can be divided into 10 ms units of CPU time, as long as a total of 50 ms of CPU time is granted to this process every 100 ms (with the exception of two intervals of 100 ms in which the process can miss being serviced altogether). This process is said to have a window-constraint of 2/10, which implies that over every window of 10*100 ms, the process can tolerate not being serviced for 50 ms in two of its 100 ms request periods. The minimum utilization for this process is:

U_min = (1 - x/y) * C/T, where C = 50 ms of service time, T = 100 ms for the request period, and x/y = 2/10 for the window-constraint.

That is, U_min = (1 - 2/10) * 50/100 = 0.4 (or 40%)

For servicing at the granularity of the smallest (10 ms) timeslice, the service constraints become:

C' = C / 5 = K (i.e., 10 ms), T' = T / 5 = 20 ms, and x' / y' = x / y = 2/10.

Then, U_min' = (1 - x'/y') * K/T' = 8/10 * 10 ms / 20 ms = 0.4 (or 40%)

How DWCS Works

DWCS is a relatively simple algorithm. It compares pairs of processes (or, equivalently, threads, or network packets) and selects the highest-priority process for service based on the following table of precedence rules:

Pairwise Ordering Amongst Processes
1. Earliest deadline first (EDF)
2. Equal deadlines: order lowest window-constraint first
3. Equal deadlines and zero window-constraints: order highest window-denominator first
4. Equal deadlines and equal, non-zero window-constraints: order lowest window-numerator first
5. All other cases: first-come-first-serve

Observe that the above table of precedence rules is not always optimal. That is, there are cases in which the minimum required utilization of a set of processes to meet all window-constraints is less than 100% (meaning resources are available), yet some processes will still violate their window-constraints when serviced according to the above precedence rules. To deal with this case, each process's service constraints must be converted to their canonical form, which yields equivalent minimum utilization demands, albeit over finer-grained windows of time. In most cases, this is actually preferable.

Canonical Service Constraints

To guarantee a feasible schedule, whereby all processes' window-constraints are met as long as the total minimum utilization is less than 100%, the original service constraints must be converted to their canonical form. This is because sub-optimal process schedules may result from using a pairwise ordering method that considers deadlines before window-constraints (or vice versa).

For any process, Pi, with service constraints (Ci, Ti, xi / yi), representing its service time, request period and window-constraint, respectively, we can perform an O(1) conversion to the canonical service constraints, (Ci', Ti', xi' / yi'), as follows:

Ci' = Ci
Ti' = K
xi' = yi * (qi - 1) + xi
yi' = qi * yi

This assumes Ti = qi * K and Ci <= K, for some constant K and positive integer qi. The assumption is based on the notion that the system performs scheduling at the granularity of fixed-size time-slots that are K time units in length. In effect, this also limits a timeslice to a maximum of K time units.

Observe that 1 <= i <= n, if there are n processes, P1...Pn. Moreover, if:

sum (for all i) (1 - xi / yi) * Ci / Ti <= 1.0

then the minimum total utilization of the canonical constraints satisfies:

U_min_total = sum (for all i) (1 - xi' / yi') * K / Ti' <= 1.0

The canonical service constraints ensure that no process violates its original window-constraint of xi / yi if U_min_total <= 1.0.

Dynamic Service Constraint Adjustment

(A) For each process serviced for at least one timeslice before its deadline (i.e., a process that is serviced in its current request period), the window-constraint x/y is adjusted dynamically as follows, to, in effect, reduce the urgency of servicing the same process again when another, more "critical" process requires service:

Let x be a process's original window-numerator and y its original window-denominator; these are the values first set for a DWCS process. Let x' and y' denote the current window-numerator and current window-denominator as the process executes. Then, if a process is serviced before its deadline:


if (y' > x') then y' = y' - 1;
else if (y' == x' and x' > 0) then x' = x' - 1; y' = y' - 1;
if (x' == 0 and y' == 0) or (the process is tagged with a violation, i.e., DWCS_VIOLATION is set) then
    x' = x; y' = y;
    if (the process is tagged with a violation) then
        reset DWCS_VIOLATION flag;

(B) For any process not serviced by its deadline (i.e., its next timeslice has not been serviced in its latest request period):


if (x' > 0) then
    x' = x' - 1; y' = y' - 1;
    if (x' == 0 and y' == 0) then
        x' = x; y' = y;
else if (x' == 0 and y > 0) then
    y' = y' + 1;
    tag the process with a violation by setting its DWCS_VIOLATION flag;

The pseudo-code for DWCS is as follows. The actual implementation is more complicated than this, but this will give you a basic understanding of what is happening to your processes:
Let Pi  = ith process
    di  = deadline of Pi
    Ti  = request period of Pi
    Wi' = current window-constraint of Pi

while (TRUE) {
    for (each process whose ready time <= current time)
        find the process, Pi, with the highest priority, according to the rules in the Table above;
    service the next timeslice of Pi (which may mean servicing all of Pi);
    adjust Wi' according to the rules in (A), above;
    di = di + Ti;
    for (each process, Pj, missing its deadline) {
        while (deadline missed) {
            adjust Wj' according to the rules in (B), above;
            dj = dj + Tj;
        }
    }
}
  • Observe that a process with an original window-constraint of 0/0 actually forces the scheduler to treat it as a deadline-constrained process, whose deadline is calculated from the current time and the request period.
  • Observe also (from the table) that deadlines are used as the primary means of deciding whether to service one process or another, every time the scheduler runs. In practice, you might think very few deadlines are actually equal, so why do we need all these extra decision rules? Well, in practice, it is unlikely that you would want to context switch from one process to another too frequently, because (a) the advantages of caching recently used instructions and data in the processor's cache are defeated, and (b) the context-switching overhead is too great. Therefore, most processes will probably want to run for longer (non-preemptive) periods of time. If this is the case, it makes sense to round a process's deadline to the nearest possible preemption point that is less than or equal to the real deadline. In doing this, chances are that more deadlines will be equal as the size of a non-preemptive timeslice is increased.
  • The process can still be set to execute in its entirety, or in part, depending on the size of the process's timeslice. (See sched_setscheduler, below, for details on how to adjust the size of a process's timeslice; this involves setting the service_time field of struct sched_param.)
  • Below is a simple example of DWCS scheduling three processes, P1, P2, and P3, each with a timeslice of 1 time unit (which may well be 10 ms or 10 seconds in reality). Each process has a request period of 1 time unit, and the original window-constraints are 1/2, 3/4, and 6/8, respectively. Since each process has a request period of 1 time unit, each process also has a new deadline every time unit. In this example, let P1 comprise 8 timeslices, while P2 and P3 each comprise 4 timeslices. Below the time axis is a list of current window-constraints and deadlines for each process; deadlines are shown in brackets. Over the full 16 time units, P1 receives 8 time units of service, while P2 and P3 each receive 4 time units of service. Moreover, P1 is serviced once every 2 time units, while P2 and P3 are serviced once every 4 time units. Therefore, P1 has not missed more than one deadline in every two deadlines, and P2 and P3 have not missed more than 3 deadlines in every window of 4 deadlines. In fact, P3 has weaker window-constraints, in that it requires only that no more than 6 deadlines are missed every 8 consecutive deadlines (or, equivalently, 8 consecutive request periods). As it turns out, over the smallest possible window of time (4 time units), P1 receives twice as much service as P2 and P3. Hence, real-time proportional (or "fair") service is granted to all processes in proportion to their original window-constraints and request periods.

DWCS Scheduling Modes and Features

Where can I get DWCS?

Installation Notes

1. Download a copy of the tar file and unpack its contents by typing:
   gzip -dc linux_2.2.*_dwcs_v1.*.tar.gz | tar xvf -
   (where * is replaced with the appropriate version number)
2. Read the INSTALL file in the newly-created dwcs directory.

Using DWCS

To Do

Selected Papers

• Yuting Zhang, Richard West and Xin Qi, "A Virtual Deadline Scheduler for Window-Constrained Service Guarantees", in Proceedings of the 25th IEEE Real-Time Systems Symposium (RTSS), December 2004
• Yuting Zhang and Richard West, "End-to-end Window-Constrained Scheduling for Real-Time Communication", in Proceedings of the 10th International Conference on Real-Time and Embedded Computing Systems and Applications (RTCSA'04), August 2004
• Yuting Zhang, Richard West and Xin Qi, "A Virtual Deadline Scheduler for Window-Constrained Service Guarantees", Technical Report 2004-013, Boston University, March 2004
• Richard West, Yuting Zhang, Karsten Schwan and Christian Poellabauer, "Dynamic Window-Constrained Scheduling of Real-Time Streams in Media Servers", IEEE Transactions on Computers, Volume 53, Number 6, pp. 744-759, June 2004
• Richard West, Ivan Ganev and Karsten Schwan, "Window-Constrained Process Scheduling for Linux Systems", in Proceedings of the 3rd Real-Time Linux Workshop, Milan, Italy, November 2001
• Richard West and Christian Poellabauer, "Analysis of a Window-Constrained Scheduler for Real-Time and Best-Effort Packet Streams", in Proceedings of the 21st IEEE Real-Time Systems Symposium (RTSS), 2000
• Richard West, Karsten Schwan and Christian Poellabauer, "Scalable Scheduling Support for Loss and Delay Constrained Media Streams", in Proceedings of the 5th IEEE Real-Time Technology and Applications Symposium (RTAS), 1999
• Richard West and Karsten Schwan, "Dynamic Window-Constrained Scheduling for Multimedia Applications", in Proceedings of the IEEE International Conference on Multimedia Computing and Systems (ICMCS), 1999



Related Web Sites

This page is maintained by Rich West (richwest@cs.bu.edu)