.. _scheduling_v2:

Scheduling
##########

The kernel's priority-based scheduler allows an application's threads
to share the CPU.

Concepts
********

The scheduler determines which thread is allowed to execute
at any point in time; this thread is known as the **current thread**.

There are various points in time when the scheduler is given an
opportunity to change the identity of the current thread, meaning
when the scheduler switches the CPU's execution from one thread
to another. These points are called **reschedule points**.
Some potential reschedule points are:

- transition of a thread from running state to a suspended or waiting
  state, for example by :c:func:`k_sem_take` or :c:func:`k_sleep`
- transition of a thread to the :ref:`ready state <thread_states>`, for
  example by :c:func:`k_sem_give` or :c:func:`k_thread_start`
- return to thread context after processing an interrupt
- when a running thread invokes :c:func:`k_yield`

A thread **sleeps** when it voluntarily initiates an operation that
transitions itself to a suspended or waiting state.
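
For example (a minimal sketch; ``producer`` and ``consumer`` are hypothetical
thread entry points, and the semaphore could be any kernel object that pends
threads), each of the calls below is a reschedule point:

.. code-block:: c

    #include <zephyr/kernel.h>

    K_SEM_DEFINE(data_ready, 0, 1);

    void consumer(void *p1, void *p2, void *p3)
    {
        for (;;) {
            /* Reschedule point: this thread transitions to a
             * waiting state until the semaphore is given. */
            k_sem_take(&data_ready, K_FOREVER);

            /* ... process the data ... */
        }
    }

    void producer(void *p1, void *p2, void *p3)
    {
        for (;;) {
            /* ... produce the data ... */

            /* Reschedule point: the consumer becomes ready and, if it
             * has higher priority, immediately becomes current. */
            k_sem_give(&data_ready);

            /* Reschedule point: this thread sleeps voluntarily. */
            k_sleep(K_MSEC(10));
        }
    }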

Whenever the scheduler changes the identity of the current thread,
or when execution of the current thread is replaced by an ISR,
the kernel first saves the current thread's CPU register values.
These register values get restored when the thread later resumes execution.


Scheduling Algorithm
====================

The kernel's scheduler selects the highest priority ready thread
to be the current thread. When multiple ready threads of the same priority
exist, the scheduler chooses the one that has been waiting longest.

A thread's relative priority is primarily determined by its static priority.
However, when both earliest-deadline-first scheduling is enabled
(:kconfig:option:`CONFIG_SCHED_DEADLINE`) and a choice of threads have equal
static priority, then the thread with the earlier deadline is considered
to have the higher priority. Thus, when earliest-deadline-first scheduling is
enabled, two threads are only considered to have the same priority when both
their static priorities and deadlines are equal. The routine
:c:func:`k_thread_deadline_set` is used to set a thread's deadline.
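
For example, the sketch below (assuming :kconfig:option:`CONFIG_SCHED_DEADLINE`
is enabled; ``thread_a`` and ``thread_b`` are hypothetical thread IDs with
equal static priority) makes ``thread_b`` run first because its deadline is
earlier:

.. code-block:: c

    #include <zephyr/kernel.h>

    /* Hypothetical threads created elsewhere with the same static priority. */
    extern k_tid_t thread_a;
    extern k_tid_t thread_b;

    void assign_deadlines(void)
    {
        /* Deadlines are relative to "now" and expressed in hardware
         * cycles; the thread with the earlier deadline is scheduled
         * ahead of its equal-priority peer. */
        k_thread_deadline_set(thread_a, k_ms_to_cyc_ceil32(10));
        k_thread_deadline_set(thread_b, k_ms_to_cyc_ceil32(5));
    }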

.. note::
    Execution of ISRs takes precedence over thread execution,
    so the execution of the current thread may be replaced by an ISR
    at any time unless interrupts have been masked. This applies to both
    cooperative threads and preemptive threads.


The kernel can be built with one of several choices for the ready queue
implementation, offering different trade-offs between code size,
constant-factor runtime overhead, and performance scaling as many threads
are added.

* Simple linked-list ready queue (:kconfig:option:`CONFIG_SCHED_SIMPLE`)

  The scheduler ready queue will be implemented as a simple unordered list, with
  very fast constant time performance for single threads and very low code size.
  This implementation should be selected on systems with constrained code size
  that will never see more than a small number (3, maybe) of runnable threads in
  the queue at any given time.  On most platforms (that are not otherwise using
  the red/black tree) this results in a savings of ~2k of code size.

* Red/black tree ready queue (:kconfig:option:`CONFIG_SCHED_SCALABLE`)

  The scheduler ready queue will be implemented as a red/black tree.  This has
  rather slower constant-time insertion and removal overhead, and on most
  platforms (that are not otherwise using the red/black tree somewhere) requires
  an extra ~2kb of code. The resulting behavior will scale cleanly and
  quickly into the many thousands of threads.

  Use this for applications needing many concurrent runnable threads (> 20 or
  so).  Most applications won't need this ready queue implementation.

* Traditional multi-queue ready queue (:kconfig:option:`CONFIG_SCHED_MULTIQ`)

  When selected, the scheduler ready queue will be implemented as the
  classic/textbook array of lists, one per priority.

  This corresponds to the scheduler algorithm used in Zephyr versions prior to
  1.12.

  It incurs only a tiny code size overhead vs. the simple scheduler and runs in
  O(1) time in almost all circumstances with a very low constant factor.  But it
  requires a fairly large RAM budget to store those list heads, and it is
  incompatible with features like deadline scheduling, which needs to sort
  threads more finely, and SMP affinity, which needs to traverse the list of
  threads.

  Typical applications with small numbers of runnable threads probably want the
  simple scheduler.


The wait_q abstraction used in IPC primitives to pend threads for later wakeup
shares the same backend data structure choices as the scheduler, and can use
the same options.

* Scalable wait_q implementation (:kconfig:option:`CONFIG_WAITQ_SCALABLE`)

  When selected, the wait_q will be implemented with a balanced tree.  Choose
  this if you expect to have many threads waiting on individual primitives.
  There is a ~2kb code size increase over :kconfig:option:`CONFIG_WAITQ_SIMPLE` (which may
  be shared with :kconfig:option:`CONFIG_SCHED_SCALABLE`) if the red/black tree is not
  used elsewhere in the application, and pend/unpend operations on "small"
  queues will be somewhat slower (though this is not generally a performance
  path).

* Simple linked-list wait_q (:kconfig:option:`CONFIG_WAITQ_SIMPLE`)

  When selected, the wait_q will be implemented with a doubly-linked list.
  Choose this if you expect to have only a few threads blocked on any single
  IPC primitive.
Cooperative Time Slicing
========================

Once a cooperative thread becomes the current thread, it remains
the current thread until it performs an action that makes it unready.
Consequently, if a cooperative thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of higher priority and equal priority.

.. image:: cooperative.svg
   :align: center

To overcome such problems, a cooperative thread can voluntarily relinquish
the CPU from time to time to permit other threads to execute.
A thread can relinquish the CPU in two ways:

* Calling :c:func:`k_yield` puts the thread at the back of the scheduler's
  prioritized list of ready threads, and then invokes the scheduler
  (see the sketch following this list).
  All ready threads whose priority is higher or equal to that of the
  yielding thread are then allowed to execute before the yielding thread is
  rescheduled. If no such ready threads exist, the scheduler immediately
  reschedules the yielding thread without context switching.

* Calling :c:func:`k_sleep` makes the thread unready for a specified
  time period. Ready threads of *all* priorities are then allowed to execute;
  however, there is no guarantee that threads whose priority is lower
  than that of the sleeping thread will actually be scheduled before
  the sleeping thread becomes ready once again.
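
The sketch below shows the first technique: a cooperative thread that breaks a
lengthy computation into steps and yields between them (``do_one_step`` is a
hypothetical work function):

.. code-block:: c

    #include <zephyr/kernel.h>

    /* Hypothetical: performs one bounded chunk of work, returning
     * nonzero while more work remains. */
    extern int do_one_step(void);

    /* Entry point of a cooperative (negative-priority) thread. */
    void crunch(void *p1, void *p2, void *p3)
    {
        while (do_one_step() != 0) {
            /* Relinquish the CPU so other ready threads of equal or
             * higher priority can execute before the next step. */
            k_yield();
        }
    }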

Preemptive Time Slicing
=======================

Once a preemptive thread becomes the current thread, it remains
the current thread until a higher priority thread becomes ready,
or until the thread performs an action that makes it unready.
Consequently, if a preemptive thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of equal priority.

.. image:: preemptive.svg
   :align: center

To overcome such problems, a preemptive thread can perform cooperative
time slicing (as described above), or the scheduler's time slicing capability
can be used to allow other threads of the same priority to execute.

.. image:: timeslicing.svg
   :align: center

The scheduler divides time into a series of **time slices**, where slices
are measured in system clock ticks. The time slice size is configurable,
and it can be changed while the application is running.

At the end of every time slice, the scheduler checks to see if the current
thread is preemptible and, if so, implicitly invokes :c:func:`k_yield`
on behalf of the thread. This gives other ready threads of the same priority
the opportunity to execute before the current thread is scheduled again.
If no threads of equal priority are ready, the current thread remains
the current thread.

Threads with a priority higher than a specified limit are exempt from
preemptive time slicing, and are never preempted by a thread of equal priority.
This allows an application to use preemptive time slicing
only when dealing with lower priority threads that are less time-sensitive.
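
For example, assuming :kconfig:option:`CONFIG_TIMESLICING` is enabled, a
sketch like the following enables round-robin slicing for lower-priority
threads only (the values shown are arbitrary):

.. code-block:: c

    #include <zephyr/kernel.h>

    void configure_time_slicing(void)
    {
        /* Give preemptible threads of priority 0 or lower (numerically
         * >= 0) at most 10 ms of CPU before yielding to ready threads
         * of equal priority; higher-priority threads are exempt. */
        k_sched_time_slice_set(10, 0);
    }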

.. note::
   The kernel's time slicing algorithm does *not* ensure that a set
   of equal-priority threads receive an equitable amount of CPU time,
   since it does not measure the amount of time a thread actually gets to
   execute. However, the algorithm *does* ensure that a thread never executes
   for longer than a single time slice without being required to yield.

Scheduler Locking
=================

A preemptible thread that does not wish to be preempted while performing
a critical operation can instruct the scheduler to temporarily treat it
as a cooperative thread by calling :c:func:`k_sched_lock`. This prevents
other threads from interfering while the critical operation is being performed.

Once the critical operation is complete, the preemptible thread must call
:c:func:`k_sched_unlock` to restore its normal, preemptible status.

If a thread calls :c:func:`k_sched_lock` and subsequently performs an
action that makes it unready, the scheduler will switch the locking thread out
and allow other threads to execute. When the locking thread again
becomes the current thread, its non-preemptible status is maintained.
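
A minimal sketch (the shared counter is hypothetical; note that scheduler
locking does not block ISRs, only other threads):

.. code-block:: c

    #include <zephyr/kernel.h>

    static int shared_counter;

    void update_counter(void)
    {
        /* Temporarily behave like a cooperative thread. */
        k_sched_lock();

        /* Critical operation: no other thread can preempt us here. */
        shared_counter++;

        /* Restore normal preemptible status. */
        k_sched_unlock();
    }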

.. note::
    Locking out the scheduler is a more efficient way for a preemptible thread
    to prevent preemption than changing its priority level to a negative value.


.. _thread_sleeping:

Thread Sleeping
===============

A thread can call :c:func:`k_sleep` to delay its processing
for a specified time period. During the time the thread is sleeping,
the CPU is relinquished to allow other ready threads to execute.
Once the specified delay has elapsed, the thread becomes ready
and is eligible to be scheduled once again.

A sleeping thread can be woken up prematurely by another thread using
:c:func:`k_wakeup`. This technique can sometimes be used
to permit the secondary thread to signal the sleeping thread
that something has occurred *without* requiring the threads
to define a kernel synchronization object, such as a semaphore.
Waking up a thread that is not sleeping is allowed, but has no effect.
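
For example (a minimal sketch; ``poller`` is a hypothetical thread ID and the
work checks are elided):

.. code-block:: c

    #include <zephyr/kernel.h>

    extern k_tid_t poller;

    void poll_loop(void *p1, void *p2, void *p3)
    {
        for (;;) {
            /* ... check for and perform pending work ... */

            /* Relinquish the CPU; the sleep may be cut short. */
            k_sleep(K_SECONDS(1));
        }
    }

    void on_urgent_event(void)
    {
        /* Wake the poller early; harmless if it is not sleeping. */
        k_wakeup(poller);
    }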

.. _busy_waiting:

Busy Waiting
============

A thread can call :c:func:`k_busy_wait` to perform a **busy wait**
that delays its processing for a specified time period
*without* relinquishing the CPU to another ready thread.

A busy wait is typically used instead of thread sleeping
when the required delay is too short to warrant having the scheduler
context switch from the current thread to another thread and then back again.
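
For example, a driver-style sketch (the GPIO manipulation is hypothetical and
elided) might need a precise, very short delay like this:

.. code-block:: c

    #include <zephyr/kernel.h>

    void pulse_pin(void)
    {
        /* ... drive a hypothetical GPIO line high ... */

        /* Spin for ~5 microseconds without a context switch; far too
         * short a delay to justify sleeping and rescheduling. */
        k_busy_wait(5);

        /* ... drive the line low again ... */
    }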

Suggested Uses
**************

Use cooperative threads for device drivers and other performance-critical work.

Use cooperative threads to implement mutual exclusion without the need
for a kernel object, such as a mutex.

Use preemptive threads to give priority to time-sensitive processing
over less time-sensitive processing.


Configuration Options
*********************

* :kconfig:option:`CONFIG_TIMESLICING`
* :kconfig:option:`CONFIG_TIMESLICE_SIZE`
* :kconfig:option:`CONFIG_TIMESLICE_PRIORITY`

.. _cpu_idle:

CPU Idling
##########

Although normally reserved for the idle thread, in certain special
applications, a thread might want to make the CPU idle.

.. contents::
    :local:
    :depth: 2

Concepts
********

Making the CPU idle causes the kernel to pause all operations until an event,
normally an interrupt, wakes up the CPU. In a regular system, the idle thread
is responsible for this. However, in some constrained systems, it is possible
that another thread takes over this duty.

Implementation
**************

Making the CPU idle
===================

Making the CPU idle is simple: call the :c:func:`k_cpu_idle` API. The CPU will
stop executing instructions until an event occurs. Most likely, the function
will be called within a loop. Note that on certain architectures, upon return,
:c:func:`k_cpu_idle` unconditionally unmasks interrupts.

.. code-block:: c

    #include <zephyr/kernel.h>

    static struct k_sem my_sem;

    void my_isr(const void *unused)
    {
        k_sem_give(&my_sem);
    }

    int main(void)
    {
        k_sem_init(&my_sem, 0, 1);

        /* wait for semaphore from ISR, then do related work */

        for (;;) {

            /* wait for ISR to trigger work to perform */
            if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {

                /* ... do processing */

            }

            /* put CPU to sleep to save power */
            k_cpu_idle();
        }
    }

Making the CPU idle in an atomic fashion
========================================

It is possible that there is a need to do some work atomically before making
the CPU idle. In such a case, :c:func:`k_cpu_atomic_idle` should be used
instead.

In fact, there is a race condition in the previous example: the interrupt could
occur between the time the semaphore is taken (and found to be unavailable) and
the time the CPU is made idle again. In some systems, this can cause the CPU to
idle until *another* interrupt occurs, which might be *never*, thus hanging the
system completely. To prevent this, :c:func:`k_cpu_atomic_idle` should be used,
as in this example.

.. code-block:: c

    #include <zephyr/kernel.h>

    static struct k_sem my_sem;

    void my_isr(const void *unused)
    {
        k_sem_give(&my_sem);
    }

    int main(void)
    {
        k_sem_init(&my_sem, 0, 1);

        for (;;) {

            unsigned int key = irq_lock();

            /*
             * Wait for semaphore from ISR; if acquired, do related work, then
             * go to next loop iteration (the semaphore might have been given
             * again); else, make the CPU idle.
             */

            if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {

                irq_unlock(key);

                /* ... do processing */

            } else {
                /* put CPU to sleep to save power */
                k_cpu_atomic_idle(key);
            }
        }
    }


Suggested Uses
**************

Use :c:func:`k_cpu_atomic_idle` when a thread has to do some real work in
addition to idling the CPU to wait for an event. See the example above.

Use :c:func:`k_cpu_idle` only when a thread is exclusively responsible for
idling the CPU, i.e. not doing any real work, as in the example below.

.. code-block:: c

    int main(void)
    {
        /* ... do some system/application initialization */


        /* thread is only used for CPU idling from this point on */
        for (;;) {
            k_cpu_idle();
        }
    }

.. note::
     **Do not use these APIs unless absolutely necessary.** In a normal system,
     the idle thread takes care of power management, including CPU idling.

API Reference
*************

.. doxygengroup:: cpu_idle_apis