CHAPTER 3 (part 1)

Scheduling Aperiodic Tasks
Introduction

The rate monotonic and the deadline driven scheduling disciplines were designed to guarantee the deadlines of periodic tasks. A more realistic environment, though, is one where both periodic and aperiodic tasks co-exist. Normally, in such an environment, the periodic tasks have hard deadlines and are used to serve instruments and/or control loops, while the aperiodic tasks have soft deadlines and are dedicated to user interaction. Different criteria must be used in order to schedule such a tasking environment. While the rate monotonic assignment and the deadline driven scheduling succeed in purely periodic, hard deadline environments, the uncertainty regarding the arrival of requests from the aperiodic tasks makes these scheduling disciplines inapplicable in the presence of aperiodic tasks. In addition, since the aperiodic tasks are primarily used for user interaction, one would like to optimize their response time in addition to meeting the hard deadlines of the periodic tasks.

Two alternatives have been developed. The first is to treat the aperiodic tasks in the background, filling in any processing gaps left by the set of periodic tasks. This approach guarantees the deadlines of the periodic tasks and maximizes the utilization factor of the processor, but since it relegates the aperiodic tasks to the lowest possible priority, it leads to long response times for these tasks. Where shorter response times are required, the polling method is used: a server task wakes up periodically and executes any waiting aperiodic tasks. The main problem with this method is that, because of the bursty nature of the aperiodic tasks, the server often misses them by one period, and it may not have enough capacity to support a large burst.

The priority exchange and the deferrable server algorithms have been designed to overcome this time misalignment. Both algorithms utilize a periodic server, which is a task that handles the running of the aperiodic tasks.
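The one-period miss of the polling method can be made concrete with a small simulation. The sketch below is a toy, slot-based model with invented parameters (one periodic task with period 5 and run time 3, a polling server with period 5 and capacity 1, and a single aperiodic request arriving at t = 1); it only illustrates how a polling server that finds an empty queue discards its capacity and catches the request one full period later. In this particular toy case the miss even makes polling slower than plain background service.

```python
# Toy slot-based model of one periodic task plus one aperiodic request,
# served either in the background or by a polling server. All parameters
# are invented for illustration.

def response_time(policy, horizon=12):
    T, C = 5, 3            # periodic task: period 5, run time 3 (hard deadline)
    Ts, Cs = 5, 1          # polling server: period 5, capacity 1
    arrival, need = 1, 1   # aperiodic request: arrives at t=1, needs 1 slot
    periodic_left = 0      # unfinished periodic work
    server_cap = 0         # remaining polling-server capacity this period
    aper_left = need
    finish = None
    for t in range(horizon):
        if t % T == 0:
            periodic_left += C                 # periodic job released
        if policy == "polling" and t % Ts == 0:
            # The poll either finds waiting work or discards its capacity;
            # unused capacity is not banked for later arrivals.
            server_cap = Cs if (aper_left > 0 and t >= arrival) else 0
        pending = aper_left > 0 and t >= arrival
        if policy == "polling" and pending and server_cap > 0:
            aper_left -= 1                     # server runs at top priority
            server_cap -= 1
        elif periodic_left > 0:
            periodic_left -= 1                 # periodic task has the CPU
        elif policy == "background" and pending:
            aper_left -= 1                     # background: idle slots only
        if pending and aper_left == 0 and finish is None:
            finish = t + 1
    return finish - arrival

print(response_time("background"))  # served in the first idle slot
print(response_time("polling"))     # misses the t=0 poll, waits a period
```

With these made-up numbers the background policy answers in 3 slots, while the polling server, having discarded its capacity at t = 0, answers only after its next wake-up.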
They differ as to when and how the periodic server is invoked, and how it (the periodic server) interacts with the aperiodic and the periodic tasks.

3.1 The Priority Exchange Algorithm

This algorithm assigns the periodic server the highest priority, i.e. the periodic server has a period T1 that is no longer than the period of any of the periodic tasks, so that under the rate monotonic assignment it ranks first. The periodic server is assigned a certain run time C1 which it normally uses to service waiting aperiodic tasks. Of course, the total utilization factor should follow the Liu and Layland limit for the rate monotonic scheduling, i.e.

U = C1/T1 + C2/T2 + ... + Cm/Tm <= m(2^(1/m) - 1)

The scheduling of the periodic server proceeds as follows. At the start of each period T1, the server's capacity is replenished to C1. While aperiodic tasks are waiting, the server executes them at its own (highest) priority, consuming its capacity. If no aperiodic task is waiting but a periodic task is, the server exchanges its priority with that of the highest priority waiting periodic task; the periodic task then runs at the server's priority, while the server's unused capacity is preserved at the lower priority it acquired in the exchange, where it can still serve an aperiodic request that arrives later.

A caveat that we need to be aware of: the priority exchange algorithm does not "bank" time. That is, the periodic server needs at least one waiting periodic task to exchange its priority with. If no periodic tasks are waiting, no exchange takes place, and the periodic server simply idles until the end of its run time, or until a periodic or aperiodic task arrives.

3.1.1 Conservation of Schedulability

Theorem
3.1: For a set of m fixed priority periodic tasks whose highest priority member is the periodic server, with server utilization U1 = C1/T1, the least upper bound of the processor utilization of the periodic tasks is

U = m [ (2/(U1 + 1))^(1/m) - 1 ]

which tends to ln(2/(U1 + 1)) as m grows large. If one plots U against U1, one obtains Figure 3.1.

Figure 3.1: Least upper bounds of the processor utilization for the priority exchange and the rate monotonic schedules.

Thus the schedulability threshold depends on the utilization factor of the highest priority task (i.e. the periodic server), and the overall threshold (server plus periodic tasks) is higher than the 70% lower bound given for purely periodic task sets. (Qualitatively speaking, the high priority server consumes a certain share of the processor power. The remainder is distributed among the periodic tasks, for which the 70% Liu and Layland bound holds. The higher U1 is, the better the overall utilization, but the less capacity is left for the periodic tasks.)

3.2 The Deferrable Server (DS) Algorithm

The deferrable server is similar to the priority exchange server in the sense that it allocates certain time for serving aperiodic tasks. In contrast to the PE, though, it does not trade priority with any lower priority periodic task, but keeps its high priority (and hence the processor time) for the whole duration C1 of the execution time of the server. The DS algorithm achieves a lower utilization bound for the set of periodic tasks, but at the same time it is easier to implement than the PE algorithm and thus has a lower overhead. Also, the deferrable server can start its execution at any time within its period T1; thus, it responds optimally to the arrival of aperiodic tasks. In addition, if any portion of C1 is left unused at the end of the repetition period T1, it is discarded. The remainder of the periodic tasks follow Liu and Layland's [1] arrangement.

3.2.1 Worst Case Model

Since the deferrable server can start at any time within its period, the worst case occurs when two successive allotments C1 are requested back to back, and when all the periodic tasks are released at the onset of the first request. We shall prove that the worst case, fully utilized values of Ci are given as

First, the schedule is feasible. Indeed,

To prove the above, we proceed as follows. Since

From equations (3.7) and (3.8), one obtains

and

Additionally, in a similar manner to the rate monotonic
case, this schedule fully utilizes the processor, and the run times are chosen so that within one period there are exactly two invocations of each of the higher priority tasks and three invocations of the deferrable server. It is assumed that the start times are arranged so as to coincide and to allow the deferrable server to execute twice. Consider, for example, the first task.

Figure 3.2: Tasks and run times for a set of periodic tasks and the deferrable scheduler (τ1) attaining the minimum utilization factor.

We shall see that this set of tasks yields the worst processor utilization. Focus on the tasks of Figure 3.2, and let the run time of the deferrable server be perturbed. As can be verified with the help of Figure 3.4, denote by

Then

Figure 3.3: Perturbation of the run time of the periodic server.

Figure 3.4: Perturbation of the run time of the deferrable server.

Similarly, if the run time of the deferrable server is perturbed in the opposite direction, denote by

Then

Thus, as can be seen from (3.9) and (3.10), any perturbation to the length of the run time of the deferrable server away from the value given above results in a set of tasks with a higher utilization factor. Perturbations to the lengths of the run times of the other tasks are handled in the same way as in the case of the deadline driven scheduling, and they also result in sets of tasks with higher utilization factors.
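The two servers are often compared through the closed-form least upper bounds quoted for them in the limit of many periodic tasks: U = ln(2/(U1 + 1)) for the priority exchange server and U = ln((U1 + 2)/(2 U1 + 1)) for the deferrable server, where U1 is the server utilization. Taking these expressions as assumptions (they are not derived in the text above), a short computation shows that both reduce to the familiar ln 2 ≈ 0.69 when U1 = 0, and that for any U1 > 0 the deferrable server leaves less room for the periodic tasks, consistent with its lower utilization bound noted in Section 3.2.

```python
import math

def pe_bound(u1):
    # Large-m least upper bound on periodic-task utilization with a
    # priority exchange server of utilization u1 (assumed closed form).
    return math.log(2.0 / (u1 + 1.0))

def ds_bound(u1):
    # Same quantity for the deferrable server (assumed closed form).
    return math.log((u1 + 2.0) / (2.0 * u1 + 1.0))

for u1 in (0.0, 0.1, 0.2, 0.4):
    print(f"U1={u1:.1f}  PE periodic bound={pe_bound(u1):.3f}  "
          f"DS periodic bound={ds_bound(u1):.3f}")
```

Tabulating a few values of U1 this way also shows the trade-off discussed for the priority exchange server: a larger U1 raises the share spent on aperiodic service while shrinking the bound available to the periodic tasks.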