1========================
2Deadline Task Scheduling
3========================
4
5.. CONTENTS
6
7    0. WARNING
8    1. Overview
9    2. Scheduling algorithm
10      2.1 Main algorithm
      2.2 Bandwidth reclaiming
      2.3 Energy-aware scheduling
12    3. Scheduling Real-Time Tasks
13      3.1 Definitions
14      3.2 Schedulability Analysis for Uniprocessor Systems
15      3.3 Schedulability Analysis for Multiprocessor Systems
16      3.4 Relationship with SCHED_DEADLINE Parameters
17    4. Bandwidth management
18      4.1 System-wide settings
19      4.2 Task interface
20      4.3 Default behavior
21      4.4 Behavior of sched_yield()
22    5. Tasks CPU affinity
23      5.1 Using cgroup v1 cpuset controller
24      5.2 Using cgroup v2 cpuset controller
25    6. Future plans
26    A. Test suite
27    B. Minimal main()
28
29
300. WARNING
31==========
32
 Fiddling with these settings can result in unpredictable or even unstable
 system behavior. As with -rt (group) scheduling, it is assumed that root
 users know what they're doing.
36
37
381. Overview
39===========
40
 The SCHED_DEADLINE policy, contained inside the sched_dl scheduling class, is
 basically an implementation of the Earliest Deadline First (EDF) scheduling
 algorithm, augmented with a mechanism (called Constant Bandwidth Server, CBS)
 that makes it possible to isolate the behavior of tasks from each other.
45
46
472. Scheduling algorithm
48=======================
49
502.1 Main algorithm
51------------------
52
53 SCHED_DEADLINE [18] uses three parameters, named "runtime", "period", and
54 "deadline", to schedule tasks. A SCHED_DEADLINE task should receive
55 "runtime" microseconds of execution time every "period" microseconds, and
56 these "runtime" microseconds are available within "deadline" microseconds
57 from the beginning of the period.  In order to implement this behavior,
58 every time the task wakes up, the scheduler computes a "scheduling deadline"
59 consistent with the guarantee (using the CBS[2,3] algorithm). Tasks are then
60 scheduled using EDF[1] on these scheduling deadlines (the task with the
61 earliest scheduling deadline is selected for execution). Notice that the
62 task actually receives "runtime" time units within "deadline" if a proper
63 "admission control" strategy (see Section "4. Bandwidth management") is used
64 (clearly, if the system is overloaded this guarantee cannot be respected).
65
66 Summing up, the CBS[2,3] algorithm assigns scheduling deadlines to tasks so
67 that each task runs for at most its runtime every period, avoiding any
68 interference between different tasks (bandwidth isolation), while the EDF[1]
69 algorithm selects the task with the earliest scheduling deadline as the one
70 to be executed next. Thanks to this feature, tasks that do not strictly comply
71 with the "traditional" real-time task model (see Section 3) can effectively
72 use the new policy.
73
 In more detail, the CBS algorithm assigns scheduling deadlines to
 tasks in the following way:
76
77  - Each SCHED_DEADLINE task is characterized by the "runtime",
78    "deadline", and "period" parameters;
79
80  - The state of the task is described by a "scheduling deadline", and
81    a "remaining runtime". These two parameters are initially set to 0;
82
83  - When a SCHED_DEADLINE task wakes up (becomes ready for execution),
84    the scheduler checks if::
85
86                 remaining runtime                  runtime
87        ----------------------------------    >    ---------
88        scheduling deadline - current time           period
89
    then, if the scheduling deadline is smaller than the current time, or
    this condition is verified, the scheduling deadline and the
    remaining runtime are re-initialized as::

         scheduling deadline = current time + deadline
         remaining runtime = runtime

    otherwise, the scheduling deadline and the remaining runtime are
    left unchanged (a pseudo-C sketch of this rule is given right after
    this list);
99
100  - When a SCHED_DEADLINE task executes for an amount of time t, its
101    remaining runtime is decreased as::
102
103         remaining runtime = remaining runtime - t
104
105    (technically, the runtime is decreased at every tick, or when the
106    task is descheduled / preempted);
107
  - When the remaining runtime becomes less than or equal to 0, the task is
109    said to be "throttled" (also known as "depleted" in real-time literature)
110    and cannot be scheduled until its scheduling deadline. The "replenishment
111    time" for this task (see next item) is set to be equal to the current
112    value of the scheduling deadline;
113
114  - When the current time is equal to the replenishment time of a
115    throttled task, the scheduling deadline and the remaining runtime are
116    updated as::
117
118         scheduling deadline = scheduling deadline + period
119         remaining runtime = remaining runtime + runtime
120
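 The wakeup rule above can be transcribed in pseudo-C as follows (an
 illustrative sketch, not the actual kernel code; all times are in the
 same unit, e.g. nanoseconds)::

   struct dl_se {
        long long runtime, deadline, period;    /* reservation parameters */
        long long sched_deadline;               /* current scheduling deadline */
        long long remaining_runtime;            /* current remaining runtime */
   };

   void cbs_wakeup(struct dl_se *se, long long now)
   {
        /*
         * Start a new period if the old deadline is in the past, or if
         * the bandwidth check fails (the inequality above in
         * cross-multiplied form, valid because sched_deadline > now).
         */
        if (se->sched_deadline <= now ||
            se->remaining_runtime * se->period >
                        (se->sched_deadline - now) * se->runtime) {
                se->sched_deadline = now + se->deadline;
                se->remaining_runtime = se->runtime;
        }
        /* otherwise, both values are left unchanged */
   }
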
121 The SCHED_FLAG_DL_OVERRUN flag in sched_attr's sched_flags field allows a task
122 to get informed about runtime overruns through the delivery of SIGXCPU
123 signals.
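
 As a minimal sketch of how to request these notifications (reusing
 struct sched_attr and the sched_setattr() wrapper from Appendix B; the
 flag value below matches include/uapi/linux/sched.h)::

   #include <signal.h>

   #define SCHED_FLAG_DL_OVERRUN	0x04

   static void overrun_handler(int signum)
   {
        /* Called every time the task overruns its reserved runtime. */
   }

   /* Enable overrun notifications for the calling task; "attr" is
    * assumed to describe an already valid SCHED_DEADLINE reservation. */
   int enable_overrun_signal(struct sched_attr *attr)
   {
        signal(SIGXCPU, overrun_handler);
        attr->sched_flags |= SCHED_FLAG_DL_OVERRUN;
        return sched_setattr(0, attr, 0);
   }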
124
125
1262.2 Bandwidth reclaiming
127------------------------
128
129 Bandwidth reclaiming for deadline tasks is based on the GRUB (Greedy
130 Reclamation of Unused Bandwidth) algorithm [15, 16, 17] and it is enabled
131 when flag SCHED_FLAG_RECLAIM is set.
132
133 The following diagram illustrates the state names for tasks handled by GRUB::
134
135                             ------------
136                 (d)        |   Active   |
137              ------------->|            |
138              |             | Contending |
139              |              ------------
140              |                A      |
141          ----------           |      |
142         |          |          |      |
143         | Inactive |          |(b)   | (a)
144         |          |          |      |
145          ----------           |      |
146              A                |      V
147              |              ------------
148              |             |   Active   |
149              --------------|     Non    |
150                 (c)        | Contending |
151                             ------------
152
153 A task can be in one of the following states:
154
155  - ActiveContending: if it is ready for execution (or executing);
156
157  - ActiveNonContending: if it just blocked and has not yet surpassed the 0-lag
158    time;
159
160  - Inactive: if it is blocked and has surpassed the 0-lag time.
161
162 State transitions:
163
164  (a) When a task blocks, it does not become immediately inactive since its
165      bandwidth cannot be immediately reclaimed without breaking the
166      real-time guarantees. It therefore enters a transitional state called
167      ActiveNonContending. The scheduler arms the "inactive timer" to fire at
168      the 0-lag time, when the task's bandwidth can be reclaimed without
169      breaking the real-time guarantees.
170
171      The 0-lag time for a task entering the ActiveNonContending state is
172      computed as::
173
174                        (runtime * dl_period)
175             deadline - ---------------------
176                             dl_runtime
177
      where runtime is the remaining runtime, while dl_runtime and dl_period
      are the reservation parameters (a small helper computing this expression
      is sketched after this list).
180
181  (b) If the task wakes up before the inactive timer fires, the task re-enters
182      the ActiveContending state and the "inactive timer" is canceled.
183      In addition, if the task wakes up on a different runqueue, then
184      the task's utilization must be removed from the previous runqueue's active
185      utilization and must be added to the new runqueue's active utilization.
186      In order to avoid races between a task waking up on a runqueue while the
187      "inactive timer" is running on a different CPU, the "dl_non_contending"
188      flag is used to indicate that a task is not on a runqueue but is active
189      (so, the flag is set when the task blocks and is cleared when the
190      "inactive timer" fires or when the task  wakes up).
191
192  (c) When the "inactive timer" fires, the task enters the Inactive state and
193      its utilization is removed from the runqueue's active utilization.
194
195  (d) When an inactive task wakes up, it enters the ActiveContending state and
196      its utilization is added to the active utilization of the runqueue where
197      it has been enqueued.
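
 As mentioned in transition (a), the 0-lag time is a direct computation; as a
 sketch (integer times in the same unit, truncation ignored)::

   /* Time at which a blocked task's bandwidth can be safely reclaimed. */
   long long zero_lag_time(long long deadline, long long remaining_runtime,
                           long long dl_runtime, long long dl_period)
   {
        return deadline - (remaining_runtime * dl_period) / dl_runtime;
   }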
198
 For each runqueue, the GRUB algorithm keeps track of the following
 bandwidths:
200
201  - Active bandwidth (running_bw): this is the sum of the bandwidths of all
202    tasks in active state (i.e., ActiveContending or ActiveNonContending);
203
  - Total bandwidth (this_bw): this is the sum of the bandwidths of all tasks
    "belonging" to the runqueue, including the tasks in Inactive state;
206
207  - Maximum usable bandwidth (max_bw): This is the maximum bandwidth usable by
208    deadline tasks and is currently set to the RT capacity.
209
210
211 The algorithm reclaims the bandwidth of the tasks in Inactive state.
 It does so by decrementing the runtime of the executing task Ti at a pace equal
 to::

           dq = -(max{ Ui, (Umax - Uinact - Uextra) } / Umax) dt
216
217 where:
218
  - Ui is the bandwidth of task Ti;
  - Umax is the maximum reclaimable utilization (subject to RT throttling
    limits);
  - Uinact is the (per runqueue) inactive utilization, computed as
    (this_bw - running_bw);
  - Uextra is the (per runqueue) extra reclaimable utilization
    (subject to RT throttling limits).
226
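 As an illustration (in floating point for readability; the kernel uses
 fixed-point arithmetic instead), the runtime to be subtracted from a task
 that ran for "delta" time units could be computed as::

   /* GRUB accounting sketch, implementing the dq equation above. */
   double grub_charge(double u_i, double u_max, double u_inact,
                      double u_extra, double delta)
   {
        double factor = u_max - u_inact - u_extra;

        if (u_i > factor)
                factor = u_i;

        return factor / u_max * delta;
   }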
227
228 Let's now see a trivial example of two deadline tasks with runtime equal
229 to 4 and period equal to 8 (i.e., bandwidth equal to 0.5)::
230
231         A            Task T1
232         |
233         |                               |
234         |                               |
235         |--------                       |----
236         |       |                       V
237         |---|---|---|---|---|---|---|---|--------->t
238         0   1   2   3   4   5   6   7   8
239
240
241         A            Task T2
242         |
243         |                               |
244         |                               |
245         |       ------------------------|
246         |       |                       V
247         |---|---|---|---|---|---|---|---|--------->t
248         0   1   2   3   4   5   6   7   8
249
250
251         A            running_bw
252         |
253       1 -----------------               ------
254         |               |               |
255      0.5-               -----------------
256         |                               |
257         |---|---|---|---|---|---|---|---|--------->t
258         0   1   2   3   4   5   6   7   8
259
260
261  - Time t = 0:
262
263    Both tasks are ready for execution and therefore in ActiveContending state.
264    Suppose Task T1 is the first task to start execution.
265    Since there are no inactive tasks, its runtime is decreased as dq = -1 dt.
266
267  - Time t = 2:
268
    Suppose that task T1 blocks.
    Task T1 therefore enters the ActiveNonContending state. Since its remaining
    runtime is equal to 2, its 0-lag time is equal to t = 4.
    Task T2 starts executing, with runtime still decreased as dq = -1 dt since
    there are no inactive tasks.
274
275  - Time t = 4:
276
    This is the 0-lag time for Task T1. Since it has not woken up in the
    meantime, it enters the Inactive state. Its bandwidth is removed from
    running_bw.
280    Task T2 continues its execution. However, its runtime is now decreased as
281    dq = - 0.5 dt because Uinact = 0.5.
282    Task T2 therefore reclaims the bandwidth unused by Task T1.
283
284  - Time t = 8:
285
286    Task T1 wakes up. It enters the ActiveContending state again, and the
287    running_bw is incremented.
288
289
2902.3 Energy-aware scheduling
291---------------------------
292
 When cpufreq's schedutil governor is selected, SCHED_DEADLINE implements the
 GRUB-PA [19] algorithm, reducing the CPU operating frequency to the minimum
 value that still allows the deadlines to be met. This behavior is currently
 implemented only for ARM architectures.

 Particular care must be taken when the time needed to change frequency is of
 the same order of magnitude as the reservation period. In such cases, setting
 a fixed CPU frequency results in fewer deadline misses.
301
302
3033. Scheduling Real-Time Tasks
304=============================
305
306
307
308 ..  BIG FAT WARNING ******************************************************
309
310 .. warning::
311
   This section contains a (non-thorough) summary of classical deadline
   scheduling theory and of how it applies to SCHED_DEADLINE.
   The reader can "safely" skip to Section 4 if only interested in seeing
   how the scheduling policy can be used. Anyway, we strongly recommend
   coming back here and continuing to read (once the urge for testing is
   satisfied :P) to be sure of fully understanding all technical details.
318
319 .. ************************************************************************
320
 There are no limitations on what kind of task can exploit this new
 scheduling discipline, although it must be said that it is particularly
 suited for periodic or sporadic real-time tasks that need guarantees on their
 timing behavior, e.g., multimedia, streaming, control applications, etc.
325
3263.1 Definitions
327------------------------
328
 A typical real-time task is composed of a repetition of computation phases
 (task instances, or jobs) which are activated in a periodic or sporadic
 fashion.
 Each job J_j (where J_j is the j^th job of the task) is characterized by an
 arrival time r_j (the time when the job becomes ready for execution), an
 amount of computation time c_j needed to finish the job, and a job absolute
 deadline d_j, which is the time within which the job should be finished. The
 maximum execution time max{c_j} is called "Worst Case Execution Time" (WCET)
 for the task.
 A real-time task can be periodic with period P if r_{j+1} = r_j + P, or
 sporadic with minimum inter-arrival time P if r_{j+1} >= r_j + P. Finally,
 d_j = r_j + D, where D is the task's relative deadline.
340 Summing up, a real-time task can be described as
341
342	Task = (WCET, D, P)
343
344 The utilization of a real-time task is defined as the ratio between its
345 WCET and its period (or minimum inter-arrival time), and represents
346 the fraction of CPU time needed to execute the task.
347
348 If the total utilization U=sum(WCET_i/P_i) is larger than M (with M equal
349 to the number of CPUs), then the scheduler is unable to respect all the
350 deadlines.
351 Note that total utilization is defined as the sum of the utilizations
352 WCET_i/P_i over all the real-time tasks in the system. When considering
353 multiple real-time tasks, the parameters of the i-th task are indicated
354 with the "_i" suffix.
 Moreover, if the total utilization is larger than M, then we risk starving
 non-real-time tasks by real-time tasks.
357 If, instead, the total utilization is smaller than M, then non real-time
358 tasks will not be starved and the system might be able to respect all the
359 deadlines.
360 As a matter of fact, in this case it is possible to provide an upper bound
361 for tardiness (defined as the maximum between 0 and the difference
362 between the finishing time of a job and its absolute deadline).
 More precisely, it can be proven that using a global EDF scheduler the
 maximum tardiness of each task is smaller than or equal to
365
366	((M − 1) · WCET_max − WCET_min)/(M − (M − 2) · U_max) + WCET_max
367
368 where WCET_max = max{WCET_i} is the maximum WCET, WCET_min=min{WCET_i}
369 is the minimum WCET, and U_max = max{WCET_i/P_i} is the maximum
370 utilization[12].
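
 Transcribed directly (an illustrative sketch, in floating point for
 readability)::

   /* Upper bound on tardiness under global EDF [12]. */
   double tardiness_bound(int m, double wcet_max, double wcet_min,
                          double u_max)
   {
        return ((m - 1) * wcet_max - wcet_min) / (m - (m - 2) * u_max)
                + wcet_max;
   }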
371
3723.2 Schedulability Analysis for Uniprocessor Systems
373----------------------------------------------------
374
375 If M=1 (uniprocessor system), or in case of partitioned scheduling (each
376 real-time task is statically assigned to one and only one CPU), it is
377 possible to formally check if all the deadlines are respected.
 If D_i = P_i for all tasks, then EDF is able to respect all the deadlines
 of all the tasks executing on a CPU if and only if the total utilization
 of the tasks running on such a CPU is smaller than or equal to 1.
 If D_i != P_i for some task, then it is possible to define the density of
 a task as WCET_i/min{D_i,P_i}, and EDF is able to respect all the deadlines
 of all the tasks running on a CPU if the sum of the densities of the tasks
 running on such a CPU is smaller than or equal to 1:
385
386	sum(WCET_i / min{D_i, P_i}) <= 1
387
388 It is important to notice that this condition is only sufficient, and not
389 necessary: there are task sets that are schedulable, but do not respect the
 condition. For example, consider the task set {Task_1,Task_2} composed of
 Task_1=(50ms,50ms,100ms) and Task_2=(10ms,100ms,100ms).
392 EDF is clearly able to schedule the two tasks without missing any deadline
393 (Task_1 is scheduled as soon as it is released, and finishes just in time
394 to respect its deadline; Task_2 is scheduled immediately after Task_1, hence
395 its response time cannot be larger than 50ms + 10ms = 60ms) even if
396
397	50 / min{50,100} + 10 / min{100, 100} = 50 / 50 + 10 / 100 = 1.1
398
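 A direct transcription of the sufficient density test above (an illustrative
 sketch; note that it returns 0 for the task set of this example, even though
 that set is schedulable)::

   /* Sufficient (but not necessary) EDF test on one CPU:
    * the sum of the densities WCET_i / min{D_i, P_i} must not exceed 1. */
   int edf_density_test(const double *wcet, const double *d,
                        const double *p, int n)
   {
        double sum = 0.0;
        int i;

        for (i = 0; i < n; i++)
                sum += wcet[i] / (d[i] < p[i] ? d[i] : p[i]);

        return sum <= 1.0;
   }
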
399 Of course it is possible to test the exact schedulability of tasks with
400 D_i != P_i (checking a condition that is both sufficient and necessary),
401 but this cannot be done by comparing the total utilization or density with
402 a constant. Instead, the so called "processor demand" approach can be used,
403 computing the total amount of CPU time h(t) needed by all the tasks to
404 respect all of their deadlines in a time interval of size t, and comparing
405 such a time with the interval size t. If h(t) is smaller than t (that is,
406 the amount of time needed by the tasks in a time interval of size t is
407 smaller than the size of the interval) for all the possible values of t, then
408 EDF is able to schedule the tasks respecting all of their deadlines. Since
409 performing this check for all possible values of t is impossible, it has been
410 proven[4,5,6] that it is sufficient to perform the test for values of t
411 between 0 and a maximum value L. The cited papers contain all of the
412 mathematical details and explain how to compute h(t) and L.
 In any case, this kind of analysis is too complex as well as too
 time-consuming to be performed on-line. Hence, as explained in Section 4,
 Linux uses an admission test based on the tasks' utilizations.
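
 For off-line analysis, h(t) of a set of sporadic tasks can be computed as
 h(t) = sum(max(0, floor((t - D_i) / P_i) + 1) * WCET_i) [5,6]; a direct
 transcription (illustrative sketch; floor() is from <math.h>)::

   /* Processor demand in an interval of length t: total CPU time of all
    * jobs released and with deadline within such an interval. */
   double processor_demand(const double *wcet, const double *d,
                           const double *p, int n, double t)
   {
        double h = 0.0;
        int i;

        for (i = 0; i < n; i++) {
                double jobs = floor((t - d[i]) / p[i]) + 1.0;

                if (jobs > 0.0)
                        h += jobs * wcet[i];
        }

        return h;
   }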
416
4173.3 Schedulability Analysis for Multiprocessor Systems
418------------------------------------------------------
419
 On multiprocessor systems with global EDF scheduling (non partitioned
 systems), a sufficient test for schedulability cannot be based on the
 utilizations or densities: it can be shown that, even if D_i = P_i, task
 sets with utilizations slightly larger than 1 can miss deadlines regardless
 of the number of CPUs.
425
426 Consider a set {Task_1,...Task_{M+1}} of M+1 tasks on a system with M
427 CPUs, with the first task Task_1=(P,P,P) having period, relative deadline
428 and WCET equal to P. The remaining M tasks Task_i=(e,P-1,P-1) have an
429 arbitrarily small worst case execution time (indicated as "e" here) and a
430 period smaller than the one of the first task. Hence, if all the tasks
431 activate at the same time t, global EDF schedules these M tasks first
432 (because their absolute deadlines are equal to t + P - 1, hence they are
433 smaller than the absolute deadline of Task_1, which is t + P). As a
434 result, Task_1 can be scheduled only at time t + e, and will finish at
435 time t + e + P, after its absolute deadline. The total utilization of the
436 task set is U = M · e / (P - 1) + P / P = M · e / (P - 1) + 1, and for small
437 values of e this can become very close to 1. This is known as "Dhall's
438 effect"[7]. Note: the example in the original paper by Dhall has been
439 slightly simplified here (for example, Dhall more correctly computed
440 lim_{e->0}U).
441
442 More complex schedulability tests for global EDF have been developed in
443 real-time literature[8,9], but they are not based on a simple comparison
444 between total utilization (or density) and a fixed constant. If all tasks
445 have D_i = P_i, a sufficient schedulability condition can be expressed in
446 a simple way:
447
448	sum(WCET_i / P_i) <= M - (M - 1) · U_max
449
 where U_max = max{WCET_i / P_i}[10]. Notice that for U_max = 1,
 M - (M - 1) · U_max becomes M - M + 1 = 1 and this schedulability condition
 just confirms Dhall's effect. A more complete survey of the literature
453 about schedulability tests for multi-processor real-time scheduling can be
454 found in [11].
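
 Transcribed directly (illustrative sketch)::

   /* Sufficient global EDF test [10] for D_i = P_i on m CPUs:
    * sum(U_i) <= m - (m - 1) * U_max. */
   int gedf_utilization_test(const double *u, int n, int m)
   {
        double sum = 0.0, u_max = 0.0;
        int i;

        for (i = 0; i < n; i++) {
                sum += u[i];
                if (u[i] > u_max)
                        u_max = u[i];
        }

        return sum <= m - (m - 1) * u_max;
   }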
455
456 As seen, enforcing that the total utilization is smaller than M does not
457 guarantee that global EDF schedules the tasks without missing any deadline
458 (in other words, global EDF is not an optimal scheduling algorithm). However,
459 a total utilization smaller than M is enough to guarantee that non real-time
460 tasks are not starved and that the tardiness of real-time tasks has an upper
461 bound[12] (as previously noted). Different bounds on the maximum tardiness
462 experienced by real-time tasks have been developed in various papers[13,14],
 but the theoretical result that is important for SCHED_DEADLINE is that if
 the total utilization is smaller than or equal to M then the response times of
 the tasks are limited.
466
4673.4 Relationship with SCHED_DEADLINE Parameters
468-----------------------------------------------
469
470 Finally, it is important to understand the relationship between the
471 SCHED_DEADLINE scheduling parameters described in Section 2 (runtime,
472 deadline and period) and the real-time task parameters (WCET, D, P)
 described in this section. Note that a task's temporal constraints are
 represented by the absolute deadlines d_j = r_j + D described above, while
 SCHED_DEADLINE schedules the tasks according to scheduling deadlines (see
 Section 2).
477 If an admission test is used to guarantee that the scheduling deadlines
478 are respected, then SCHED_DEADLINE can be used to schedule real-time tasks
479 guaranteeing that all the jobs' deadlines of a task are respected.
480 In order to do this, a task must be scheduled by setting:
481
482  - runtime >= WCET
483  - deadline = D
484  - period <= P
485
 IOW, if runtime >= WCET and if period is <= P, then the scheduling deadlines
 and the absolute deadlines (d_j) coincide, so a proper admission control
 allows respecting the jobs' absolute deadlines for this task (this is what is
 called "hard schedulability property" and is an extension of Lemma 1 of [2]).
490 Notice that if runtime > deadline the admission control will surely reject
491 this task, as it is not possible to respect its temporal constraints.
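
 In terms of the interface of Section 4.2, this mapping can be coded directly
 (a sketch reusing struct sched_attr from Appendix B; all values in
 nanoseconds)::

   /* Fill a sched_attr with the (WCET, D, P) parameters of a real-time
    * task, following the rules above. */
   void map_task_params(struct sched_attr *attr,
                        __u64 wcet, __u64 d, __u64 p)
   {
        attr->sched_policy   = SCHED_DEADLINE;
        attr->sched_runtime  = wcet;    /* runtime >= WCET */
        attr->sched_deadline = d;       /* deadline = D */
        attr->sched_period   = p;       /* period <= P */
   }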
492
493 References:
494
  1 - C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming
      in a hard-real-time environment. Journal of the Association for
      Computing Machinery, 20(1), 1973.
  2 - L. Abeni, G. Buttazzo. Integrating Multimedia Applications in Hard
499      Real-Time Systems. Proceedings of the 19th IEEE Real-time Systems
500      Symposium, 1998. http://retis.sssup.it/~giorgio/paps/1998/rtss98-cbs.pdf
501  3 - L. Abeni. Server Mechanisms for Multimedia Applications. ReTiS Lab
502      Technical Report. http://disi.unitn.it/~abeni/tr-98-01.pdf
  4 - J. Y. Leung and M. L. Merrill. A Note on Preemptive Scheduling of
504      Periodic, Real-Time Tasks. Information Processing Letters, vol. 11,
505      no. 3, pp. 115-118, 1980.
506  5 - S. K. Baruah, A. K. Mok and L. E. Rosier. Preemptively Scheduling
507      Hard-Real-Time Sporadic Tasks on One Processor. Proceedings of the
508      11th IEEE Real-time Systems Symposium, 1990.
509  6 - S. K. Baruah, L. E. Rosier and R. R. Howell. Algorithms and Complexity
510      Concerning the Preemptive Scheduling of Periodic Real-Time tasks on
511      One Processor. Real-Time Systems Journal, vol. 4, no. 2, pp 301-324,
512      1990.
513  7 - S. J. Dhall and C. L. Liu. On a real-time scheduling problem. Operations
514      research, vol. 26, no. 1, pp 127-140, 1978.
515  8 - T. Baker. Multiprocessor EDF and Deadline Monotonic Schedulability
516      Analysis. Proceedings of the 24th IEEE Real-Time Systems Symposium, 2003.
517  9 - T. Baker. An Analysis of EDF Schedulability on a Multiprocessor.
518      IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 8,
519      pp 760-768, 2005.
520  10 - J. Goossens, S. Funk and S. Baruah, Priority-Driven Scheduling of
521       Periodic Task Systems on Multiprocessors. Real-Time Systems Journal,
522       vol. 25, no. 2–3, pp. 187–205, 2003.
523  11 - R. Davis and A. Burns. A Survey of Hard Real-Time Scheduling for
524       Multiprocessor Systems. ACM Computing Surveys, vol. 43, no. 4, 2011.
525       http://www-users.cs.york.ac.uk/~robdavis/papers/MPSurveyv5.0.pdf
526  12 - U. C. Devi and J. H. Anderson. Tardiness Bounds under Global EDF
527       Scheduling on a Multiprocessor. Real-Time Systems Journal, vol. 32,
528       no. 2, pp 133-189, 2008.
529  13 - P. Valente and G. Lipari. An Upper Bound to the Lateness of Soft
530       Real-Time Tasks Scheduled by EDF on Multiprocessors. Proceedings of
531       the 26th IEEE Real-Time Systems Symposium, 2005.
532  14 - J. Erickson, U. Devi and S. Baruah. Improved tardiness bounds for
533       Global EDF. Proceedings of the 22nd Euromicro Conference on
534       Real-Time Systems, 2010.
535  15 - G. Lipari, S. Baruah, Greedy reclamation of unused bandwidth in
536       constant-bandwidth servers, 12th IEEE Euromicro Conference on Real-Time
537       Systems, 2000.
538  16 - L. Abeni, J. Lelli, C. Scordino, L. Palopoli, Greedy CPU reclaiming for
539       SCHED DEADLINE. In Proceedings of the Real-Time Linux Workshop (RTLWS),
540       Dusseldorf, Germany, 2014.
541  17 - L. Abeni, G. Lipari, A. Parri, Y. Sun, Multicore CPU reclaiming: parallel
542       or sequential?. In Proceedings of the 31st Annual ACM Symposium on Applied
543       Computing, 2016.
544  18 - J. Lelli, C. Scordino, L. Abeni, D. Faggioli, Deadline scheduling in the
545       Linux kernel, Software: Practice and Experience, 46(6): 821-839, June
546       2016.
547  19 - C. Scordino, L. Abeni, J. Lelli, Energy-Aware Real-Time Scheduling in
548       the Linux Kernel, 33rd ACM/SIGAPP Symposium On Applied Computing (SAC
549       2018), Pau, France, April 2018.
550
551
5524. Bandwidth management
553=======================
554
555 As previously mentioned, in order for -deadline scheduling to be
556 effective and useful (that is, to be able to provide "runtime" time units
557 within "deadline"), it is important to have some method to keep the allocation
558 of the available fractions of CPU time to the various tasks under control.
559 This is usually called "admission control" and if it is not performed, then
560 no guarantee can be given on the actual scheduling of the -deadline tasks.
561
 As already stated in Section 3, a necessary condition for correctly
 scheduling a set of real-time tasks is that the total utilization is
 smaller than M. When talking about -deadline tasks, this requires that
565 the sum of the ratio between runtime and period for all tasks is smaller
566 than M. Notice that the ratio runtime/period is equivalent to the utilization
567 of a "traditional" real-time task, and is also often referred to as
568 "bandwidth".
569 The interface used to control the CPU bandwidth that can be allocated
570 to -deadline tasks is similar to the one already used for -rt
 tasks with real-time group scheduling (a.k.a. RT-throttling - see
 Documentation/scheduler/sched-rt-group.rst), and is based on
 readable/writable control files located in procfs (for system-wide settings).
574 Notice that per-group settings (controlled through cgroupfs) are still not
575 defined for -deadline tasks, because more discussion is needed in order to
576 figure out how we want to manage SCHED_DEADLINE bandwidth at the task group
577 level.
578
 A main difference between deadline bandwidth management and RT-throttling
 is that -deadline tasks have their own bandwidth (while -rt tasks don't!),
581 and thus we don't need a higher level throttling mechanism to enforce the
582 desired bandwidth. In other words, this means that interface parameters are
583 only used at admission control time (i.e., when the user calls
584 sched_setattr()). Scheduling is then performed considering actual tasks'
585 parameters, so that CPU bandwidth is allocated to SCHED_DEADLINE tasks
586 respecting their needs in terms of granularity. Therefore, using this simple
587 interface we can put a cap on total utilization of -deadline tasks (i.e.,
588 \Sum (runtime_i / period_i) < global_dl_utilization_cap).
589
4.1 System-wide settings
------------------------

 The system-wide settings are configured under the /proc virtual file system.
594
595 For now the -rt knobs are used for -deadline admission control and with
596 CONFIG_RT_GROUP_SCHED the -deadline runtime is accounted against the (root)
597 -rt runtime. With !CONFIG_RT_GROUP_SCHED the knob only serves for the -dl
598 admission control. We realize that this isn't entirely desirable; however, it
599 is better to have a small interface for now, and be able to change it easily
 later. The ideal situation (see Section 6) is to run -rt tasks from a
 -deadline server; in which case the -rt bandwidth is a direct subset of dl_bw.
602
603 This means that, for a root_domain comprising M CPUs, -deadline tasks
604 can be created while the sum of their bandwidths stays below:
605
606   M * (sched_rt_runtime_us / sched_rt_period_us)
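
 For example, with the default values reported in Section 4.3
 (sched_rt_runtime_us = 950000, sched_rt_period_us = 1000000), a
 root_domain of 4 CPUs admits -deadline tasks up to a total bandwidth of
 4 * (950000 / 1000000) = 3.8. The current values can be inspected with::

   # cat /proc/sys/kernel/sched_rt_runtime_us
   950000
   # cat /proc/sys/kernel/sched_rt_period_us
   1000000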
607
 It is also possible to disable this bandwidth management logic, and
 thus be free to oversubscribe the system up to any arbitrary level.
 This is done by writing -1 in /proc/sys/kernel/sched_rt_runtime_us.
611
612
6134.2 Task interface
614------------------
615
616 Specifying a periodic/sporadic task that executes for a given amount of
617 runtime at each instance, and that is scheduled according to the urgency of
618 its own timing constraints needs, in general, a way of declaring:
619
620  - a (maximum/typical) instance execution time,
621  - a minimum interval between consecutive instances,
622  - a time constraint by which each instance must be completed.
623
624 Therefore:
625
  * a new struct sched_attr, containing all the necessary fields, is
    provided;
628  * the new scheduling related syscalls that manipulate it, i.e.,
629    sched_setattr() and sched_getattr() are implemented.
630
631 For debugging purposes, the leftover runtime and absolute deadline of a
632 SCHED_DEADLINE task can be retrieved through /proc/<pid>/sched (entries
633 dl.runtime and dl.deadline, both values in ns). A programmatic way to
634 retrieve these values from production code is under discussion.
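
 For example (the exact formatting of these entries may vary across kernel
 versions)::

   # grep -E '^dl\.(runtime|deadline)' /proc/$PID/sched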
635
636
6374.3 Default behavior
638---------------------
639
640 The default value for SCHED_DEADLINE bandwidth is to have rt_runtime equal to
641 950000. With rt_period equal to 1000000, by default, it means that -deadline
642 tasks can use at most 95%, multiplied by the number of CPUs that compose the
643 root_domain, for each root_domain.
644 This means that non -deadline tasks will receive at least 5% of the CPU time,
 and that -deadline tasks will receive their runtime with a guaranteed
 worst-case delay with respect to the "deadline" parameter. If "deadline" = "period"
647 and the cpuset mechanism is used to implement partitioned scheduling (see
648 Section 5), then this simple setting of the bandwidth management is able to
649 deterministically guarantee that -deadline tasks will receive their runtime
650 in a period.
651
 Finally, notice that in order not to jeopardize the admission control, a
 -deadline task cannot fork.
654
655
6564.4 Behavior of sched_yield()
657-----------------------------
658
659 When a SCHED_DEADLINE task calls sched_yield(), it gives up its
660 remaining runtime and is immediately throttled, until the next
661 period, when its runtime will be replenished (a special flag
662 dl_yielded is set and used to handle correctly throttling and runtime
663 replenishment after a call to sched_yield()).
664
 This behavior of sched_yield() allows the task to wake up exactly at
 the beginning of the next period. Also, this may be useful in the
 future with bandwidth reclaiming mechanisms, where sched_yield() will
 make the leftover runtime available for reclamation by other
 SCHED_DEADLINE tasks.
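
 For example, a periodic job body can rely on this behavior instead of using
 explicit timers (a sketch; do_work() and the "done" flag are placeholders,
 sched_yield() is declared in <sched.h>, and the SCHED_DEADLINE reservation
 is assumed to be already set up, e.g. as in Appendix B)::

   void run_periodic(void)
   {
        while (!done) {
                do_work();      /* consumes (part of) the runtime */
                sched_yield();  /* throttled until the next period */
        }
   }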
670
671
6725. Tasks CPU affinity
673=====================
674
 Deadline tasks cannot have a CPU affinity mask smaller than the root domain
 they are created on. So, using ``sched_setaffinity(2)`` won't work. Instead,
 the deadline task should be created in a restricted root domain. This can be
 done using the cpuset controller of either cgroup v1 (deprecated) or cgroup v2.
679 See :ref:`Documentation/admin-guide/cgroup-v1/cpusets.rst <cpusets>` and
680 :ref:`Documentation/admin-guide/cgroup-v2.rst <cgroup-v2>` for more information.
681
6825.1 Using cgroup v1 cpuset controller
683-------------------------------------
684
685 An example of a simple configuration (pin a -deadline task to CPU0) follows::
686
687   mkdir /dev/cpuset
688   mount -t cgroup -o cpuset cpuset /dev/cpuset
689   cd /dev/cpuset
690   mkdir cpu0
691   echo 0 > cpu0/cpuset.cpus
692   echo 0 > cpu0/cpuset.mems
693   echo 1 > cpuset.cpu_exclusive
694   echo 0 > cpuset.sched_load_balance
695   echo 1 > cpu0/cpuset.cpu_exclusive
696   echo 1 > cpu0/cpuset.mem_exclusive
697   echo $$ > cpu0/tasks
698   chrt --sched-runtime 100000 --sched-period 200000 --deadline 0 yes > /dev/null
699
7005.2 Using cgroup v2 cpuset controller
701-------------------------------------
702
 Assuming the cgroup v2 root is mounted at ``/sys/fs/cgroup``::
704
705   cd /sys/fs/cgroup
706   echo '+cpuset' > cgroup.subtree_control
707   mkdir deadline_group
708   echo 0 > deadline_group/cpuset.cpus
709   echo 'root' > deadline_group/cpuset.cpus.partition
710   echo $$ > deadline_group/cgroup.procs
711   chrt --sched-runtime 100000 --sched-period 200000 --deadline 0 yes > /dev/null
712
7136. Future plans
714===============
715
716 Still missing:
717
718  - programmatic way to retrieve current runtime and absolute deadline
719  - refinements to deadline inheritance, especially regarding the possibility
720    of retaining bandwidth isolation among non-interacting tasks. This is
721    being studied from both theoretical and practical points of view, and
722    hopefully we should be able to produce some demonstrative code soon;
723  - (c)group based bandwidth management, and maybe scheduling;
  - access control for non-root users (and related security concerns to
    address), which is the best way to allow unprivileged use of the mechanisms
    and how to prevent non-root users from "cheating" the system?
727
 As already discussed, we are also planning to merge this work with the EDF
 throttling patches [https://lore.kernel.org/r/cover.1266931410.git.fabio@helm.retis]
 but we are still in the preliminary phases of the merge and we really seek
 feedback that would help us decide on the direction it should take.
732
733Appendix A. Test suite
734======================
735
736 The SCHED_DEADLINE policy can be easily tested using two applications that
737 are part of a wider Linux Scheduler validation suite. The suite is
738 available as a GitHub repository: https://github.com/scheduler-tools.
739
740 The first testing application is called rt-app and can be used to
741 start multiple threads with specific parameters. rt-app supports
742 SCHED_{OTHER,FIFO,RR,DEADLINE} scheduling policies and their related
743 parameters (e.g., niceness, priority, runtime/deadline/period). rt-app
744 is a valuable tool, as it can be used to synthetically recreate certain
745 workloads (maybe mimicking real use-cases) and evaluate how the scheduler
746 behaves under such workloads. In this way, results are easily reproducible.
747 rt-app is available at: https://github.com/scheduler-tools/rt-app.
748
 rt-app is not driven by command line options; instead, it reads a JSON
 configuration file passed as its argument. Here is an example ``config.json``:
751
752 .. code-block:: json
753
754  {
755    "tasks": {
756      "dl_task": {
757        "policy": "SCHED_DEADLINE",
758        "priority": 0,
759        "dl-runtime": 10000,
760        "dl-period": 100000,
761        "dl-deadline": 100000
762      },
763      "fifo_task": {
764        "policy": "SCHED_FIFO",
765        "priority": 10,
766        "runtime": 20000,
767        "sleep": 130000
768      }
769    },
770    "global": {
771      "duration": 5
772    }
773  }
774
 Running ``rt-app config.json`` creates two threads. The first one,
 scheduled by SCHED_DEADLINE, executes for 10ms every 100ms. The second one,
 scheduled at SCHED_FIFO priority 10, executes for 20ms every 150ms. The test
 will run for a total of 5 seconds.
779
780 Please refer to the rt-app documentation for the JSON schema and more examples.
781
 The second testing application uses chrt, which has support for
 SCHED_DEADLINE.
784
785 The usage is straightforward::
786
787  # chrt -d -T 10000000 -D 100000000 0 ./my_cpuhog_app
788
789 With this, my_cpuhog_app is put to run inside a SCHED_DEADLINE reservation
790 of 10ms every 100ms (note that parameters are expressed in nanoseconds).
791 You can also use chrt to create a reservation for an already running
792 application, given that you know its pid::
793
794  # chrt -d -T 10000000 -D 100000000 -p 0 my_app_pid
795
796Appendix B. Minimal main()
797==========================
798
799 We provide in what follows a simple (ugly) self-contained code snippet
800 showing how SCHED_DEADLINE reservations can be created by a real-time
801 application developer::
802
803   #define _GNU_SOURCE
804   #include <unistd.h>
805   #include <stdio.h>
806   #include <stdlib.h>
807   #include <string.h>
808   #include <time.h>
809   #include <linux/unistd.h>
810   #include <linux/kernel.h>
811   #include <linux/types.h>
812   #include <sys/syscall.h>
813   #include <pthread.h>
814
815   #define gettid() syscall(__NR_gettid)
816
817   #define SCHED_DEADLINE	6
818
819   /* XXX use the proper syscall numbers */
820   #ifdef __x86_64__
821   #define __NR_sched_setattr		314
822   #define __NR_sched_getattr		315
823   #endif
824
825   #ifdef __i386__
826   #define __NR_sched_setattr		351
827   #define __NR_sched_getattr		352
828   #endif
829
830   #ifdef __arm__
831   #define __NR_sched_setattr		380
832   #define __NR_sched_getattr		381
833   #endif
834
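   /* Set by the main thread to ask the deadline thread to exit. */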
835   static volatile int done;
836
837   struct sched_attr {
838	__u32 size;
839
840	__u32 sched_policy;
841	__u64 sched_flags;
842
843	/* SCHED_NORMAL, SCHED_BATCH */
844	__s32 sched_nice;
845
846	/* SCHED_FIFO, SCHED_RR */
847	__u32 sched_priority;
848
849	/* SCHED_DEADLINE (nsec) */
850	__u64 sched_runtime;
851	__u64 sched_deadline;
852	__u64 sched_period;
853   };
854
855   int sched_setattr(pid_t pid,
856		  const struct sched_attr *attr,
857		  unsigned int flags)
858   {
859	return syscall(__NR_sched_setattr, pid, attr, flags);
860   }
861
862   int sched_getattr(pid_t pid,
863		  struct sched_attr *attr,
864		  unsigned int size,
865		  unsigned int flags)
866   {
867	return syscall(__NR_sched_getattr, pid, attr, size, flags);
868   }
869
870   void *run_deadline(void *data)
871   {
872	struct sched_attr attr;
873	int x = 0;
874	int ret;
875	unsigned int flags = 0;
876
877	printf("deadline thread started [%ld]\n", gettid());
878
879	attr.size = sizeof(attr);
880	attr.sched_flags = 0;
881	attr.sched_nice = 0;
882	attr.sched_priority = 0;
883
884	/* This creates a 10ms/30ms reservation */
885	attr.sched_policy = SCHED_DEADLINE;
886	attr.sched_runtime = 10 * 1000 * 1000;
887	attr.sched_period = attr.sched_deadline = 30 * 1000 * 1000;
888
889	ret = sched_setattr(0, &attr, flags);
890	if (ret < 0) {
891		done = 0;
892		perror("sched_setattr");
893		exit(-1);
894	}
895
896	while (!done) {
897		x++;
898	}
899
900	printf("deadline thread dies [%ld]\n", gettid());
901	return NULL;
902   }
903
904   int main (int argc, char **argv)
905   {
906	pthread_t thread;
907
908	printf("main thread [%ld]\n", gettid());
909
910	pthread_create(&thread, NULL, run_deadline, NULL);
911
912	sleep(10);
913
914	done = 1;
915	pthread_join(thread, NULL);
916
917	printf("main dies [%ld]\n", gettid());
918	return 0;
919   }
920