@page page_thread_sync Inter-thread Synchronization

In a multi-threaded real-time system, completing a task often requires the coordination of multiple threads. How, then, do these threads collaborate with each other without errors? Here is an example.

For example, there are two threads in one task: one thread receives data from a sensor and writes it to shared memory, while the other thread periodically reads the data from shared memory and sends it to a display. The following figure depicts the data transfer between the two threads:

If access to the shared memory is not exclusive, it may be accessed by both threads simultaneously, which causes data consistency issues. For example, if thread #2 (the thread that displays data) attempts to display data before thread #1 (the thread that receives data) has finished writing it, the display will contain data sampled at different times, and the displayed data will be disordered.

Thread #1, which writes the sensor data to the shared memory block, and thread #2, which reads the sensor data from the shared memory block, access the same memory block. In order to prevent data errors, the actions of the two threads must be mutually exclusive: one thread should only be allowed to operate on the shared memory block after the other thread has completed its operation. This way, thread #1 and thread #2 can work together to execute the task correctly.

Synchronization refers to running in a predetermined order. Thread synchronization means that multiple threads control the execution order among themselves through specific mechanisms (such as mutexes, event objects, and critical sections). In other words, synchronization establishes an execution order between threads; without it, the threads will run out of order.

The code region that multiple threads operate on / access is called the critical section; the shared memory block in the above example is a critical section. Thread mutual exclusion refers to the exclusiveness of access to critical section resources. When multiple threads use a critical section resource, only one thread is allowed at a time; other threads that want to use the resource must wait until the occupant releases it. Thread mutual exclusion can be seen as a special kind of thread synchronization.

There are many ways to synchronize threads. The core idea is that **only one (or one kind of) thread is allowed to run when accessing the critical section.** There are several ways to enter/exit a critical section:

1) Call rt_hw_interrupt_disable() to enter the critical section and rt_hw_interrupt_enable() to exit it; see the *Global Interrupt Switch* section in *Interrupt Management* for details.

2) Call rt_enter_critical() to enter the critical section and rt_exit_critical() to exit it.
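
The two entry/exit methods can be illustrated with a short sketch. This is not one of the manual's own samples; `shared_counter` and the two wrapper functions are hypothetical names used only for illustration:

```c
#include <rtthread.h>
#include <rthw.h>   /* for rt_hw_interrupt_disable()/rt_hw_interrupt_enable() */

static rt_uint32_t shared_counter = 0;   /* hypothetical shared resource */

/* Method 1: mask interrupts around a very short critical section. */
void counter_inc_irq_safe(void)
{
    rt_base_t level;

    level = rt_hw_interrupt_disable();   /* enter the critical section */
    shared_counter++;                    /* exclusive access */
    rt_hw_interrupt_enable(level);       /* exit the critical section */
}

/* Method 2: lock the scheduler; other threads cannot preempt,
   but interrupts are still serviced. */
void counter_inc_sched_safe(void)
{
    rt_enter_critical();                 /* enter the critical section */
    shared_counter++;                    /* exclusive access */
    rt_exit_critical();                  /* exit the critical section */
}
```

Masking interrupts excludes even ISRs and should only wrap a few instructions; locking the scheduler keeps interrupts responsive and suits slightly longer sections that are shared only between threads.
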
This chapter introduces several synchronization methods: **semaphore**, **mutex**, and **event**. After learning this chapter, you will know how to use semaphores, mutexes, and events to synchronize threads.

# Semaphores

Take a parking lot as an example to understand the concept of a semaphore:

① When the parking lot is empty, the administrator of the parking lot finds that there are a lot of empty parking spaces, so cars outside may enter the parking lot and take the parking spaces.

② When the parking spaces of the parking lot are full, the administrator finds that there are no empty parking spaces left. As a result, cars outside are prohibited from entering the parking lot, and they wait in line;

③ When cars leave the parking lot, the administrator finds that there are empty parking spaces again and cars outside may enter; after the empty parking spaces are taken, cars outside are again prohibited from entering.

In this example, the administrator is equivalent to the semaphore. The number of empty parking spaces that the administrator is in charge of is the value of the semaphore (non-negative and changing dynamically); the parking spaces are equivalent to the common resource (critical section), and the cars are equivalent to the threads. Cars access the parking spaces by obtaining permission from the administrator, which is similar to threads accessing a public resource by obtaining the semaphore.

## Semaphore Working Mechanism

A semaphore is a lightweight kernel object that can solve synchronization problems between threads. By obtaining or releasing a semaphore, a thread can achieve synchronization or mutual exclusion.

The schematic diagram of a semaphore is shown in the figure below. Each semaphore object has a semaphore value and a thread waiting queue. The semaphore value corresponds to the actual number of instances of the semaphore object, that is, the number of resources. If the semaphore value is 5, it means that there are 5 semaphore instances (resources) that can be used. If the number of semaphore instances is zero, a thread applying for the semaphore will be suspended on the semaphore's waiting queue, waiting for an available semaphore instance (resource).

## Semaphore Control Block

In RT-Thread, the semaphore control block is a data structure used by the operating system to manage semaphores, represented by struct rt_semaphore. Another C expression, rt_sem_t, represents the handle of the semaphore; in C it is implemented as a pointer to the semaphore control block. The detailed definition of the semaphore control block structure is as follows:

```c
struct rt_semaphore
{
   struct rt_ipc_object parent;  /* Inherited from the ipc_object class */
   rt_uint16_t          value;   /* Semaphore value */
};
/* rt_sem_t is the type of pointer pointing to the semaphore structure */
typedef struct rt_semaphore* rt_sem_t;
```

The rt_semaphore object is derived from rt_ipc_object and is managed by the IPC container. The maximum semaphore value is 65535.

## Semaphore Management

The semaphore control block contains important parameters related to the semaphore and acts as a link between the various states of the semaphore. The interfaces related to the semaphore are shown in the figure below. Operations on a semaphore include: creating/initializing the semaphore, obtaining the semaphore, releasing the semaphore, and deleting/detaching the semaphore.

### Create and Delete Semaphore

When creating a semaphore, the kernel first creates a semaphore control block and then performs basic initialization on the control block. The following function interface is used to create a semaphore:

```c
rt_sem_t rt_sem_create(const char *name,
                       rt_uint32_t value,
                       rt_uint8_t flag);
```

When this function is called, the system will first allocate a semaphore object from the object manager and initialize the object, and then initialize the parent IPC object and the semaphore-related parts.
Among the parameters specified when creating a semaphore, the semaphore flag determines how multiple threads queue while waiting when the semaphore is not available. When RT_IPC_FLAG_FIFO (first in, first out) is selected, the waiting threads are queued in first-in first-out order, and the thread that starts waiting first obtains the semaphore first. When RT_IPC_FLAG_PRIO (priority waiting) is selected, the waiting threads are queued in order of priority, and the waiting thread with the highest priority obtains the semaphore first. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_sem_create()

|Parameters |Description |
|--------------------|-------------------------------------------------------------------|
| name | Semaphore name |
| value | Semaphore initial value |
| flag | Semaphore flag, which can be one of the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO |
|**Return** | —— |
| RT_NULL | Creation failed |
| semaphore control block pointer | Creation successful |

For dynamically created semaphores, when the system no longer uses the semaphore, it can be deleted to release system resources. To delete a semaphore, use the following function interface:

```c
rt_err_t rt_sem_delete(rt_sem_t sem);
```

When this function is called, the system deletes this semaphore. If there are threads waiting for this semaphore when it is deleted, the delete operation first wakes up the threads waiting on the semaphore (the return value of the waiting threads is -RT_ERROR), and then releases the semaphore's memory resources. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_sem_delete()

|Parameters|Description |
|----------|----------------------------------|
| sem | Semaphore object created by rt_sem_create() |
|**Return**| —— |
| RT_EOK | Successfully deleted |

### Initialize and Detach Semaphore

For a static semaphore object, its memory space is allocated by the compiler during compilation and placed in the read-write data segment or the uninitialized data segment. In this case, the rt_sem_create interface is not needed; the semaphore only needs to be initialized before it is used. To initialize a semaphore object, use the following function interface:

```c
rt_err_t rt_sem_init(rt_sem_t    sem,
                     const char *name,
                     rt_uint32_t value,
                     rt_uint8_t  flag);
```

When this function is called, the system initializes the semaphore object, then initializes the IPC object and the parts related to the semaphore. The flags mentioned above for semaphore creation can be used here as the semaphore flag. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_sem_init()

|**Parameters**|**Description** |
|----------|-------------------------------------------------------------------|
| sem | Semaphore object handle |
| name | Semaphore name |
| value | Semaphore initial value |
| flag | Semaphore flag, which can be one of the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO |
|**Return**| —— |
| RT_EOK | Initialization successful |

For a statically initialized semaphore, detaching the semaphore means detaching the semaphore object from the kernel object manager.
To detach the semaphore, use the following function interface:

```c
rt_err_t rt_sem_detach(rt_sem_t sem);
```

After this function is called, the kernel wakes up all threads suspended in the semaphore wait queue and then detaches the semaphore from the kernel object manager. The waiting threads that were originally suspended on the semaphore get the return value -RT_ERROR. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_sem_detach()

|Parameters|**Description** |
|----------|------------------|
| sem | Semaphore object handle |
|**Return**| —— |
| RT_EOK | Successfully detached |

### Obtain Semaphore

A thread obtains a semaphore resource instance by taking the semaphore. When the semaphore value is greater than zero, the thread obtains the semaphore, and the corresponding semaphore value is reduced by 1. The semaphore is obtained using the following function interface:

```c
rt_err_t rt_sem_take(rt_sem_t sem, rt_int32_t time);
```

When this function is called, if the value of the semaphore is zero, it means the current semaphore resource instance is not available, and the thread applying for the semaphore will, according to the time parameter, either return immediately, suspend for a period of time, or wait forever. While waiting, if another thread or an ISR releases the semaphore, the thread stops waiting. If the semaphore is still unavailable when the specified time expires, the thread times out and returns, and the return value is -RT_ETIMEOUT. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_sem_take()

|Parameters |Description |
|---------------|---------------------------------------------------|
| sem | Semaphore object handle |
| time | Specified wait time, in operating system clock ticks (OS ticks) |
|**Return** | —— |
| RT_EOK | Semaphore obtained successfully |
| \-RT_ETIMEOUT | Did not receive the semaphore; timed out |
| \-RT_ERROR | Other errors |

### Obtain Semaphore without Waiting

When the user does not want the thread to suspend on the requested semaphore and wait, the semaphore can be obtained in wait-free mode, using the following function interface:

```c
rt_err_t rt_sem_trytake(rt_sem_t sem);
```

This function has the same effect as rt_sem_take(sem, 0): when the semaphore resource instance requested by the thread is not available, it does not wait on the semaphore but returns -RT_ETIMEOUT directly. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_sem_trytake()

|**Parameter** |Description |
|---------------|------------------|
| sem | Semaphore object handle |
|**Return** | —— |
| RT_EOK | Semaphore successfully obtained |
| \-RT_ETIMEOUT | Failed to obtain the semaphore |
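
The following minimal sketch contrasts the two obtaining modes described above. It is an illustration only, assuming a semaphore `work_sem` that has been initialized elsewhere with rt_sem_init(); it first polls the semaphore with rt_sem_trytake() and, if it is not available, falls back to waiting for up to 100 ticks with rt_sem_take():

```c
#include <rtthread.h>

/* hypothetical semaphore, assumed to be initialized elsewhere with rt_sem_init() */
static struct rt_semaphore work_sem;

void try_then_wait(void)
{
    /* Wait-free attempt: returns -RT_ETIMEOUT immediately if the value is 0 */
    if (rt_sem_trytake(&work_sem) == RT_EOK)
    {
        rt_kprintf("got the semaphore without waiting\n");
        return;
    }

    /* Otherwise wait for at most 100 OS ticks */
    if (rt_sem_take(&work_sem, 100) == RT_EOK)
    {
        rt_kprintf("got the semaphore within 100 ticks\n");
    }
    else
    {
        rt_kprintf("timed out, semaphore still unavailable\n");
    }
}
```
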
### Semaphore Release

Releasing a semaphore wakes up a thread suspended on that semaphore. To release the semaphore, use the following function interface:

```c
rt_err_t rt_sem_release(rt_sem_t sem);
```

When the semaphore value is zero and a thread is waiting on this semaphore, releasing the semaphore wakes up the first thread waiting in the semaphore's thread queue, and that thread obtains the semaphore; otherwise the value of the semaphore is increased by 1. The following table describes the input parameters and return values of the function:

Input parameters and return values of rt_sem_release()

|**Parameters**|Description |
|----------|------------------|
| sem | Semaphore object handle |
|**Return**| —— |
| RT_EOK | Semaphore successfully released |

## Semaphore Application Sample

This is a semaphore usage routine. The routine creates a dynamic semaphore and initializes two threads: one thread releases the semaphore, and the other thread receives the semaphore and performs the corresponding operations, as shown in the following code:

Use of semaphore

```c
#include <rtthread.h>

#define THREAD_PRIORITY         25
#define THREAD_TIMESLICE        5

/* pointer to the semaphore */
static rt_sem_t dynamic_sem = RT_NULL;

ALIGN(RT_ALIGN_SIZE)
static char thread1_stack[1024];
static struct rt_thread thread1;
static void rt_thread1_entry(void *parameter)
{
    static rt_uint8_t count = 0;

    while (1)
    {
        if (count <= 100)
        {
            count++;
        }
        else
            return;

        /* release the semaphore every 10 counts */
        if (0 == (count % 10))
        {
            rt_kprintf("t1 release a dynamic semaphore.\n");
            rt_sem_release(dynamic_sem);
        }
    }
}

ALIGN(RT_ALIGN_SIZE)
static char thread2_stack[1024];
static struct rt_thread thread2;
static void rt_thread2_entry(void *parameter)
{
    static rt_err_t result;
    static rt_uint8_t number = 0;
    while (1)
    {
        /* wait for the semaphore forever; once it is obtained, increment number */
        result = rt_sem_take(dynamic_sem, RT_WAITING_FOREVER);
        if (result != RT_EOK)
        {
            rt_kprintf("t2 take a dynamic semaphore, failed.\n");
            rt_sem_delete(dynamic_sem);
            return;
        }
        else
        {
            number++;
            rt_kprintf("t2 take a dynamic semaphore. number = %d\n", number);
        }
    }
}

/* initialization of the semaphore sample */
int semaphore_sample(void)
{
    /* create a dynamic semaphore with an initial value of 0 */
    dynamic_sem = rt_sem_create("dsem", 0, RT_IPC_FLAG_FIFO);
    if (dynamic_sem == RT_NULL)
    {
        rt_kprintf("create dynamic semaphore failed.\n");
        return -1;
    }
    else
    {
        rt_kprintf("create done. dynamic semaphore value = 0.\n");
    }

    rt_thread_init(&thread1,
                   "thread1",
                   rt_thread1_entry,
                   RT_NULL,
                   &thread1_stack[0],
                   sizeof(thread1_stack),
                   THREAD_PRIORITY, THREAD_TIMESLICE);
    rt_thread_startup(&thread1);

    rt_thread_init(&thread2,
                   "thread2",
                   rt_thread2_entry,
                   RT_NULL,
                   &thread2_stack[0],
                   sizeof(thread2_stack),
                   THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    rt_thread_startup(&thread2);

    return 0;
}
/* export to msh command list */
MSH_CMD_EXPORT(semaphore_sample, semaphore sample);
```

Simulation results:

```
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 27 2018
 2006 - 2018 Copyright by rt-thread team
msh >semaphore_sample
create done. dynamic semaphore value = 0.
msh >t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 1
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 2
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 3
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 4
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 5
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 6
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 7
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 8
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 9
t1 release a dynamic semaphore.
t2 take a dynamic semaphore. number = 10
```

As the result above shows, thread 1 releases the semaphore when the count is a multiple of 10 (the thread exits after the count reaches 100), and thread 2 adds 1 to number each time it receives the semaphore.

Another semaphore application routine is shown below. It uses two threads and three semaphores to implement a producer-consumer example.

To be more accurate, the producer-consumer model is actually a "producer-consumer-warehouse" model. We call an available spot in the warehouse an "empty seat", and once an available spot is taken, we call it a "full seat". For this model, the following points should be clarified:

1) The producer only produces when the warehouse is not full; when the warehouse is full, the producer stops producing.

2) The consumer can only consume when there are products in the warehouse; if the warehouse is empty, the consumer waits.

3) When the consumer consumes, the warehouse is no longer full, and the producer is notified that it can produce again.

4) When the producer produces a consumable product, it notifies the consumer that it can consume.

The three semaphores in the example are:

① sem_lock: this semaphore acts as a lock. Both threads operate on the same array, which means the array is a shared resource, and sem_lock is used to protect it.

② sem_empty: its value indicates the number of "empty seats" in the "warehouse". sem_empty is initialized to 5, indicating that there are 5 "empty seats".

③ sem_full: its value indicates the number of "full seats" in the "warehouse". sem_full is initialized to 0, indicating that there are 0 "full seats".

The two threads in the example are:

① Producer thread: after obtaining an "empty seat" (the value of sem_empty is decremented by 1), it generates a number, writes it into the array in a circular manner, and then releases a "full seat" (the value of sem_full is incremented by 1).

② Consumer thread: after obtaining a "full seat" (the value of sem_full is decremented by 1), it reads the content of the array and adds it up, and then releases an "empty seat" (the value of sem_empty is incremented by 1).

Producer-consumer routine

```c
#include <rtthread.h>

#define THREAD_PRIORITY       6
#define THREAD_STACK_SIZE     512
#define THREAD_TIMESLICE      5

/* Define a maximum of 5 elements to be generated */
#define MAXSEM 5

/* An array of integers used to hold the produced numbers */
rt_uint32_t array[MAXSEM];

/* Point to the producer's and consumer's read-write positions in the array */
static rt_uint32_t set, get;

/* Pointers to the thread control blocks */
static rt_thread_t producer_tid = RT_NULL;
static rt_thread_t consumer_tid = RT_NULL;

struct rt_semaphore sem_lock;
struct rt_semaphore sem_empty, sem_full;

/* Producer thread entry */
void producer_thread_entry(void *parameter)
{
    int cnt = 0;

    /* Run for 10 times */
    while (cnt < 10)
    {
        /* Obtain an "empty seat" */
        rt_sem_take(&sem_empty, RT_WAITING_FOREVER);

        /* Lock before modifying the array content */
        rt_sem_take(&sem_lock, RT_WAITING_FOREVER);
        array[set % MAXSEM] = cnt + 1;
        rt_kprintf("the producer generates a number: %d\n", array[set % MAXSEM]);
        set++;
        rt_sem_release(&sem_lock);

        /* Release a "full seat" */
        rt_sem_release(&sem_full);
        cnt++;

        /* Pause for a while */
        rt_thread_mdelay(20);
    }

    rt_kprintf("the producer exit!\n");
}

/* Consumer thread entry */
void consumer_thread_entry(void *parameter)
{
    rt_uint32_t sum = 0;

    while (1)
    {
        /* Obtain a "full seat" */
        rt_sem_take(&sem_full, RT_WAITING_FOREVER);

        /* Critical region, locked for operation */
        rt_sem_take(&sem_lock, RT_WAITING_FOREVER);
        sum += array[get % MAXSEM];
        rt_kprintf("the consumer[%d] get a number: %d\n", (get % MAXSEM), array[get % MAXSEM]);
        get++;
        rt_sem_release(&sem_lock);

        /* Release an "empty seat" */
        rt_sem_release(&sem_empty);

        /* The producer produces up to 10 numbers and stops, so the consumer thread stops accordingly */
        if (get == 10) break;

        /* Pause for a while */
        rt_thread_mdelay(50);
    }

    rt_kprintf("the consumer sum is: %d\n", sum);
    rt_kprintf("the consumer exit!\n");
}

int producer_consumer(void)
{
    set = 0;
    get = 0;

    /* Initialize the 3 semaphores */
    rt_sem_init(&sem_lock,  "lock",  1,      RT_IPC_FLAG_FIFO);
    rt_sem_init(&sem_empty, "empty", MAXSEM, RT_IPC_FLAG_FIFO);
    rt_sem_init(&sem_full,  "full",  0,      RT_IPC_FLAG_FIFO);

    /* Create the producer thread */
    producer_tid = rt_thread_create("producer",
                                    producer_thread_entry, RT_NULL,
                                    THREAD_STACK_SIZE,
                                    THREAD_PRIORITY - 1,
                                    THREAD_TIMESLICE);
    if (producer_tid != RT_NULL)
    {
        rt_thread_startup(producer_tid);
    }
    else
    {
        rt_kprintf("create thread producer failed");
        return -1;
    }

    /* Create the consumer thread */
    consumer_tid = rt_thread_create("consumer",
                                    consumer_thread_entry, RT_NULL,
                                    THREAD_STACK_SIZE,
                                    THREAD_PRIORITY + 1,
                                    THREAD_TIMESLICE);
    if (consumer_tid != RT_NULL)
    {
        rt_thread_startup(consumer_tid);
    }
    else
    {
        rt_kprintf("create thread consumer failed");
        return -1;
    }

    return 0;
}

/* Export to msh command list */
MSH_CMD_EXPORT(producer_consumer, producer_consumer sample);
```

The simulation results for this routine are as follows:

```
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 27 2018
 2006 - 2018 Copyright by rt-thread team
msh >producer_consumer
the producer generates a number: 1
the consumer[0] get a number: 1
msh >the producer generates a number: 2
the producer generates a number: 3
the consumer[1] get a number: 2
the producer generates a number: 4
the producer generates a number: 5
the producer generates a number: 6
the consumer[2] get a number: 3
the producer generates a number: 7
the producer generates a number: 8
the consumer[3] get a number: 4
the producer generates a number: 9
the consumer[4] get a number: 5
the producer generates a number: 10
the producer exit!
the consumer[0] get a number: 6
the consumer[1] get a number: 7
the consumer[2] get a number: 8
the consumer[3] get a number: 9
the consumer[4] get a number: 10
the consumer sum is: 55
the consumer exit!
```

This routine can be understood as producers producing products and putting them into the warehouse, while the consumer takes the products from the warehouse.

(1) Producer thread:

1) Obtain an "empty seat" (to put a product number in); the number of "empty seats" is decremented by 1;

2) Lock for protection; the generated number value is cnt + 1, and the value is written into the array in a circular manner; then unlock;

3) Release a "full seat" (one more product is put into the warehouse, so the warehouse has one more "full seat"); the number of "full seats" is incremented by 1.

(2) Consumer thread:

1) Obtain a "full seat" (to take a product number from); the number of "full seats" is decremented by 1;

2) Lock for protection; read the number produced by the producer from the array and add it to the previous sum; then unlock;

3) Release an "empty seat" (one product is taken from the warehouse, so the warehouse has one more "empty seat"); the number of "empty seats" is incremented by 1.

The producer generates 10 numbers in turn, and the consumer takes them away in turn and sums the values of the 10 numbers. The semaphore lock protects the array's critical region, ensuring that each number is taken by the consumer exclusively and achieving inter-thread synchronization.

## Semaphore Usage Occasion

Semaphores are a very flexible way to synchronize and can be used in a variety of situations, such as forming locks, synchronization, resource counting, and so on. They can also be conveniently used for synchronization between threads, and between interrupts and threads.

### Thread Synchronization

Thread synchronization is one of the simplest types of semaphore application. For example, to synchronize two threads with a semaphore, the value of the semaphore is initialized to 0, indicating that there are 0 semaphore resource instances; a thread attempting to obtain the semaphore will therefore wait directly on it.

When the thread holding the semaphore completes the work it is processing, it releases the semaphore. The thread waiting on the semaphore is then awakened and can perform the next part of the work. This occasion can also be seen as using the semaphore as a work-completion flag: the thread holding the semaphore finishes its own work and then notifies the thread waiting on the semaphore to continue with the next part of the work.

### Lock

A single lock is often applied to multiple threads accessing the same shared resource (in other words, a critical region).
When a semaphore is used as a lock, its value should normally be initialized to 1, indicating that the system has one resource available by default. Because the semaphore value always varies between 1 and 0, this type of lock is also called a binary semaphore. As shown in the following figure, when a thread needs to access the shared resource, it needs to obtain the resource lock first. When this thread successfully obtains the resource lock, other threads that intend to access the shared resource will be suspended, because the lock is already locked (the semaphore value is 0) when they try to obtain it. When the thread holding the semaphore has finished and is leaving the critical region, it releases the semaphore to unlock the lock, and the first waiting thread suspended on the lock is awakened to gain access to the critical region.

### Synchronization between Interrupts and Threads

Semaphores can also be conveniently applied to synchronization between an interrupt and a thread, for example when an interrupt is triggered and a thread needs to be notified in the interrupt service routine to perform the corresponding data processing. In this case, the initial value of the semaphore can be set to 0. When the thread tries to take this semaphore, it will be suspended on the semaphore (because the initial value is 0) until the semaphore is released. When the interrupt is triggered, the hardware-related actions are performed first, such as reading the corresponding data from the hardware I/O port and acknowledging the interrupt to clear the interrupt source, and then a semaphore is released to wake up the corresponding thread for subsequent data processing. For example, the processing of the FinSH thread is shown in the following figure.

The value of the semaphore is initially 0. When the FinSH thread attempts to take the semaphore, it is suspended because the semaphore value is 0. When the console device has data input, an interrupt is generated and the interrupt service routine is entered. In the interrupt service routine, the data of the console device is read, the read data is put into the UART buffer, and then the semaphore is released; releasing the semaphore wakes up the shell thread. After the interrupt service routine has finished, if there are no ready threads with a higher priority than the shell thread in the system, the shell thread takes the semaphore and runs, obtaining the input data from the UART buffer.
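
The following minimal sketch shows this interrupt-to-thread pattern in outline. It is not the actual FinSH/UART driver code; the hook `my_device_isr()` and the semaphore `rx_sem` are hypothetical names used only for illustration:

```c
#include <rtthread.h>

/* hypothetical "data ready" semaphore; assumed to be initialized with
   rt_sem_init(&rx_sem, "rx", 0, RT_IPC_FLAG_FIFO) before the interrupt is enabled */
static struct rt_semaphore rx_sem;

/* Called from the real interrupt handler (a hypothetical hook).
   An ISR may release a semaphore, but must never take one. */
void my_device_isr(void)
{
    /* ... read the data register and clear the interrupt source here ... */

    rt_sem_release(&rx_sem);   /* notify the processing thread */
}

/* Data-processing thread entry */
static void rx_thread_entry(void *parameter)
{
    while (1)
    {
        /* Suspend here until the ISR releases the semaphore */
        rt_sem_take(&rx_sem, RT_WAITING_FOREVER);

        /* ... fetch the buffered data and process it ... */
        rt_kprintf("data ready, processing...\n");
    }
}
```
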
>Mutual exclusion between an interrupt and a thread cannot be achieved by means of semaphores (locks), but by disabling/enabling interrupts.

### Resource Count

A semaphore can also be regarded as an incrementing or decrementing counter. Note that the semaphore value is non-negative. For example, if the value of a semaphore is initialized to 5, then the semaphore can be taken at most 5 consecutive times until the counter is reduced to zero. Resource counting is suitable for occasions where the processing speeds of threads do not match: the semaphore can count the number of work items completed by the faster thread, and when the other thread is dispatched it can process several of these items in one go. For example, in the producer-consumer problem, the producer can release the semaphore multiple times, and the consumer can then process multiple semaphore resources at a time when it is dispatched.

>Generally, resource counting is mostly used in a hybrid form of inter-thread synchronization, because a single resource may still be accessed by multiple threads, and accessing and processing that single resource still requires a lock-style mutual-exclusion operation.

# Mutex

A mutex, also called a mutually exclusive semaphore, is a special binary semaphore. A mutex is similar to a parking lot with only one parking space: when one car enters, the parking lot gate is locked and other vehicles wait outside. When the car inside comes out, the parking lot gate opens and the next car can enter.

## Mutex Working Mechanism

The differences between a mutex and a semaphore are: the thread that holds a mutex has ownership of it, the mutex supports recursive access and can prevent thread priority inversion, and a mutex can only be released by the thread holding it, whereas a semaphore can be released by any thread.

A mutex has only two states, unlocked and locked (two state values). When a thread holds it, the mutex is locked and the thread obtains its ownership. Conversely, when the thread releases it, the mutex is unlocked and the thread loses its ownership. While a thread is holding a mutex, other threads cannot unlock this mutex or hold it. The thread holding the mutex can, however, take the lock again without being suspended, as shown in the following figure. This is quite different from an ordinary binary semaphore: with a semaphore, since there is no instance left, a thread that recursively takes the semaphore will suspend itself, which eventually leads to deadlock.

Another potential problem with using semaphores is thread priority inversion. Priority inversion occurs when a high-priority thread attempts to access a shared resource through the semaphore mechanism while the semaphore is already held by a low-priority thread, and that low-priority thread may happen to be preempted by other medium-priority threads while running; this leads to the high-priority thread being blocked by many lower-priority threads, so real-time responsiveness is difficult to guarantee. As shown in the following figure: there are three threads A, B, and C with priority A > B > C. Threads A and B are suspended, waiting for an event to trigger; thread C is running and starts using a shared resource M. While thread C is using the resource, the event thread A is waiting for occurs and thread A switches to the ready state; because it has a higher priority than thread C, it executes immediately. But when thread A wants to use shared resource M, because it is being used by thread C, thread A is suspended and thread C runs again. If the event thread B is waiting for now occurs, thread B switches to the ready state. Since thread B has a higher priority than thread C, thread B starts running, and thread C does not run until thread B finishes. Thread A can only execute after thread C releases the shared resource M. In this case, the priority has been inverted: thread B runs before thread A, and the response time for the high-priority thread cannot be guaranteed.

In the RT-Thread operating system, the mutex can solve the priority inversion problem by implementing the priority inheritance algorithm.
Priority inheritance solves the problem caused by priority inversion by raising the priority of thread C to the priority of thread A during the period when thread A is suspended while trying to access the shared resource. This prevents C (and, indirectly, A) from being preempted by B, as shown in the following figure. Priority inheritance means raising the priority of a low-priority thread that occupies a resource to be equal to the priority of the highest-priority thread among all the threads waiting for that resource, and then letting it execute. When the low-priority thread releases the resource, its priority returns to the original setting. Priority inheritance therefore prevents the resource-holding thread from being preempted by any intermediate-priority thread.

>After the mutex is obtained, release it as soon as possible. While holding the mutex, you must not change the priority of the thread holding it.

## Mutex Control Block

In RT-Thread, the mutex control block is a data structure used by the operating system to manage mutexes, represented by struct rt_mutex. Another C expression, rt_mutex_t, represents the handle of the mutex; in C it is implemented as a pointer to the mutex control block. See the following code for a detailed definition of the mutex control block structure:

```c
struct rt_mutex
{
    struct rt_ipc_object parent;             /* inherited from the ipc_object class */

    rt_uint16_t          value;              /* mutex value */
    rt_uint8_t           original_priority;  /* original priority of the holding thread */
    rt_uint8_t           hold;               /* number of times the thread holds the mutex */
    struct rt_thread    *owner;              /* thread that currently owns the mutex */
};
/* rt_mutex_t is the type of pointer pointing to the mutex structure */
typedef struct rt_mutex* rt_mutex_t;
```

The rt_mutex object is derived from rt_ipc_object and is managed by the IPC container.

## Mutex Management

The mutex control block contains important parameters related to the mutex and plays an important role in the implementation of mutex functionality. The mutex-related interfaces are shown in the following figure. Operations on a mutex include: creating/initializing the mutex, obtaining the mutex, releasing the mutex, and deleting/detaching the mutex.

### Create and Delete Mutex

When creating a mutex, the kernel first creates a mutex control block and then completes the initialization of the control block. A mutex is created using the following function interface:

```c
rt_mutex_t rt_mutex_create(const char *name, rt_uint8_t flag);
```

The rt_mutex_create function can be called to create a mutex whose name is designated by name. When this function is called, the system first allocates a mutex object from the object manager and initializes the object, and then initializes the parent IPC object and the mutex-related parts. If the flag of the mutex is set to RT_IPC_FLAG_PRIO, then when multiple threads wait for the resource, the thread with the higher priority gets it first; if the flag is set to RT_IPC_FLAG_FIFO, then when multiple threads wait for the resource, they get it in first-come-first-served order.
The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_mutex_create()

|Parameters |**Description** |
|------------|-------------------------------------------------------------------|
| name | Mutex name |
| flag | Mutex flag, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO |
|**Return** | —— |
| Mutex handle | Created successfully |
| RT_NULL | Creation failed |

For a dynamically created mutex, when the mutex is no longer used, the system resources are released by deleting the mutex. To delete a mutex, use the following function interface:

```c
rt_err_t rt_mutex_delete(rt_mutex_t mutex);
```

When a mutex is deleted, all threads waiting for this mutex are woken up, and the return value for the waiting threads is -RT_ERROR. The system then removes the mutex from the kernel object manager's linked list and releases the memory space occupied by the mutex. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_mutex_delete()

|Parameters|Description |
|----------|------------------|
| mutex | The handle of the mutex object |
|**Return**| —— |
| RT_EOK | Deleted successfully |

### Initialize and Detach Mutex

The memory of a static mutex object is allocated by the compiler during system compilation and is usually placed in the read-write data segment or the uninitialized data segment. Before being used, such a static mutex object needs to be initialized first. To initialize the mutex, use the following function interface:

```c
rt_err_t rt_mutex_init(rt_mutex_t mutex, const char *name, rt_uint8_t flag);
```

When using this function interface, you need to specify the handle of the mutex object (that is, the pointer to the mutex control block), the mutex name, and the mutex flag. The mutex flag can be one of the flags mentioned above for mutex creation. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_mutex_init()

|Parameters|Description |
|----------|-------------------------------------------------------------------|
| mutex | The handle of the mutex object; it is provided by the user and points to the memory block of the mutex object |
| name | Mutex name |
| flag | Mutex flag, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO |
|**Return**| —— |
| RT_EOK | Initialization successful |

For a statically initialized mutex, detaching the mutex means removing the mutex object from the kernel object manager. To detach the mutex, use the following function interface:

```c
rt_err_t rt_mutex_detach(rt_mutex_t mutex);
```

After this function interface is used, the kernel wakes up all threads suspended on the mutex (the return value of these threads is -RT_ERROR), and then the system detaches the mutex from the kernel object manager. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_mutex_detach()

|Parameters|Description |
|----------|------------------|
| mutex | The handle of the mutex object |
|**Return**| —— |
| RT_EOK | Successful |
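
To make the static initialize/detach pattern concrete, here is a minimal sketch, not one of the manual's samples; the names `spi_lock`, `bus_layer_init`, `spi_bus_xfer`, and `bus_layer_deinit` are hypothetical and stand in for whatever module owns the shared resource:

```c
#include <rtthread.h>

/* statically allocated mutex protecting a hypothetical shared bus */
static struct rt_mutex spi_lock;

int bus_layer_init(void)
{
    /* initialize the static mutex before any thread uses it */
    return rt_mutex_init(&spi_lock, "spilock", RT_IPC_FLAG_PRIO);
}

void spi_bus_xfer(const void *buf, rt_size_t len)
{
    rt_mutex_take(&spi_lock, RT_WAITING_FOREVER);   /* enter the critical region */

    /* ... access the shared bus exclusively here ... */

    rt_mutex_release(&spi_lock);                    /* leave the critical region */
}

void bus_layer_deinit(void)
{
    /* detach the static mutex when the module is no longer used */
    rt_mutex_detach(&spi_lock);
}
```
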
### Obtain Mutex

Once a thread obtains a mutex, the thread has ownership of that mutex, that is, a mutex can only be held by one thread at a time. To obtain the mutex, use the following function interface:

```c
rt_err_t rt_mutex_take(rt_mutex_t mutex, rt_int32_t time);
```

If the mutex is not controlled by another thread, the thread requesting it obtains it successfully. If the mutex is already controlled by the current thread, the hold count of the mutex is increased by 1, and the current thread does not suspend to wait. If the mutex is already occupied by another thread, the current thread suspends and waits on the mutex until the other thread releases it or until the specified timeout elapses. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_mutex_take()

|**Parameters** |Description |
|---------------|------------------|
| mutex | The handle of the mutex object |
| time | Specified waiting time |
|**Return** | —— |
| RT_EOK | Successfully obtained the mutex |
| \-RT_ETIMEOUT | Timed out |
| \-RT_ERROR | Failed to obtain |

### Release Mutex

When a thread completes its access to the mutually exclusive resource, it should release the mutex it occupies as soon as possible, so that other threads can obtain the mutex in time. To release the mutex, use the following function interface:

```c
rt_err_t rt_mutex_release(rt_mutex_t mutex);
```

Only a thread that already has control of the mutex can release it. Each time the mutex is released, its hold count is reduced by 1. When the hold count reaches zero (that is, the holding thread has released all of its holds), the mutex becomes available and the threads waiting on it are awakened. If the thread's priority was raised by the mutex, the thread reverts to the priority it had before holding the mutex when the mutex is released. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_mutex_release()

|**Parameters**|**Description** |
|----------|------------------|
| mutex | The handle of the mutex object |
|**Return**| —— |
| RT_EOK | Success |
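
The hold count described above is what makes recursive access safe. The following minimal sketch is an illustration, not taken from the manual; `log_lock`, `log_write`, and `log_transaction` are hypothetical names. The same thread takes the mutex twice in a nested call chain, and the mutex is only actually unlocked when the count returns to zero:

```c
#include <rtthread.h>

/* assumed: created elsewhere with rt_mutex_create("loglock", RT_IPC_FLAG_PRIO) */
static rt_mutex_t log_lock;

static void log_write(const char *msg)
{
    rt_mutex_take(log_lock, RT_WAITING_FOREVER);   /* hold count: 1, or 2 when nested */
    rt_kprintf("%s\n", msg);
    rt_mutex_release(log_lock);                    /* decrements the hold count */
}

void log_transaction(void)
{
    rt_mutex_take(log_lock, RT_WAITING_FOREVER);   /* hold count becomes 1 */

    log_write("begin");   /* same thread takes the mutex again: no deadlock, count 2 then back to 1 */
    log_write("end");

    rt_mutex_release(log_lock);                    /* count back to 0: the mutex is unlocked */
}
```
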
## Mutex Application Sample

This is a mutex application routine. A mutex is a way to protect a shared resource: when a thread holds the mutex, it can protect the shared resource from being corrupted by other threads. The following example illustrates this with two threads, thread 1 and thread 2. Thread 1 adds 1 to each of two numbers; thread 2 also adds 1 to each of the two numbers. The mutex is used to ensure that the operation of changing the values of the two numbers is not interrupted. As shown in the following code:

Mutex routine

```c
#include <rtthread.h>

#define THREAD_PRIORITY         8
#define THREAD_TIMESLICE        5

/* Pointer to the mutex */
static rt_mutex_t dynamic_mutex = RT_NULL;
static rt_uint8_t number1, number2 = 0;

ALIGN(RT_ALIGN_SIZE)
static char thread1_stack[1024];
static struct rt_thread thread1;
static void rt_thread_entry1(void *parameter)
{
    while (1)
    {
        /* After thread 1 obtains the mutex, it adds 1 to number1 and number2, and then releases the mutex */
        rt_mutex_take(dynamic_mutex, RT_WAITING_FOREVER);
        number1++;
        rt_thread_mdelay(10);
        number2++;
        rt_mutex_release(dynamic_mutex);
    }
}

ALIGN(RT_ALIGN_SIZE)
static char thread2_stack[1024];
static struct rt_thread thread2;
static void rt_thread_entry2(void *parameter)
{
    while (1)
    {
        /* After thread 2 obtains the mutex, check whether the values of number1 and number2 are the same.
           If they are the same, the mutex successfully played the role of a lock */
        rt_mutex_take(dynamic_mutex, RT_WAITING_FOREVER);
        if (number1 != number2)
        {
            rt_kprintf("not protect.number1 = %d, number2 = %d \n", number1, number2);
        }
        else
        {
            rt_kprintf("mutex protect ,number1 = number2 is %d\n", number1);
        }

        number1++;
        number2++;
        rt_mutex_release(dynamic_mutex);

        if (number1 >= 50)
            return;
    }
}

/* Initialization of the mutex sample */
int mutex_sample(void)
{
    /* Create a dynamic mutex */
    dynamic_mutex = rt_mutex_create("dmutex", RT_IPC_FLAG_FIFO);
    if (dynamic_mutex == RT_NULL)
    {
        rt_kprintf("create dynamic mutex failed.\n");
        return -1;
    }

    rt_thread_init(&thread1,
                   "thread1",
                   rt_thread_entry1,
                   RT_NULL,
                   &thread1_stack[0],
                   sizeof(thread1_stack),
                   THREAD_PRIORITY, THREAD_TIMESLICE);
    rt_thread_startup(&thread1);

    rt_thread_init(&thread2,
                   "thread2",
                   rt_thread_entry2,
                   RT_NULL,
                   &thread2_stack[0],
                   sizeof(thread2_stack),
                   THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    rt_thread_startup(&thread2);
    return 0;
}

/* Export to the msh command list */
MSH_CMD_EXPORT(mutex_sample, mutex sample);
```

Both thread 1 and thread 2 use the mutex to protect the operation on the two numbers (if the obtain and release mutex statements in thread 1 are commented out, thread 1 no longer protects the numbers). The simulation results are as follows:

```
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 24 2018
 2006 - 2018 Copyright by rt-thread team
msh >mutex_sample
msh >mutex protect ,number1 = number2 is 1
mutex protect ,number1 = number2 is 2
mutex protect ,number1 = number2 is 3
mutex protect ,number1 = number2 is 4
…
mutex protect ,number1 = number2 is 48
mutex protect ,number1 = number2 is 49
```

The threads use the mutex to protect the operation on the two numbers and keep their values consistent.

Another example of a mutex is shown in the following code. This example creates three dynamic threads to check whether the priority of the thread holding the mutex is adjusted to the highest priority among the waiting threads.

Prevent priority inversion routine

```c
#include <rtthread.h>

/* Pointers to the thread control blocks */
static rt_thread_t tid1 = RT_NULL;
static rt_thread_t tid2 = RT_NULL;
static rt_thread_t tid3 = RT_NULL;
static rt_mutex_t mutex = RT_NULL;

#define THREAD_PRIORITY       10
#define THREAD_STACK_SIZE     512
#define THREAD_TIMESLICE      5

/* Thread 1 entry */
static void thread1_entry(void *parameter)
{
    /* Let the lower-priority threads run first */
    rt_thread_mdelay(100);

    /* At this point, thread3 holds the mutex and thread2 is waiting to hold the mutex */

    /* Check the priorities of thread2 and thread3 */
    if (tid2->current_priority != tid3->current_priority)
    {
        /* The priorities are different, the test fails */
        rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority);
        rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority);
        rt_kprintf("test failed.\n");
        return;
    }
    else
    {
        rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority);
        rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority);
        rt_kprintf("test OK.\n");
    }
}

/* Thread 2 entry */
static void thread2_entry(void *parameter)
{
    rt_err_t result;

    rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority);

    /* Let the lower-priority thread run first */
    rt_thread_mdelay(50);

    /*
     * Try to hold the mutex. At this point thread 3 holds it, so the priority of thread 3
     * should be raised to the same priority as thread 2
     */
    result = rt_mutex_take(mutex, RT_WAITING_FOREVER);

    if (result == RT_EOK)
    {
        /* Release the mutex */
        rt_mutex_release(mutex);
    }
}

/* Thread 3 entry */
static void thread3_entry(void *parameter)
{
    rt_tick_t tick;
    rt_err_t result;

    rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority);

    result = rt_mutex_take(mutex, RT_WAITING_FOREVER);
    if (result != RT_EOK)
    {
        rt_kprintf("thread3 take a mutex, failed.\n");
    }

    /* Run a long loop, 500 ms */
    tick = rt_tick_get();
    while (rt_tick_get() - tick < (RT_TICK_PER_SECOND / 2)) ;

    rt_mutex_release(mutex);
}

int pri_inversion(void)
{
    /* Create the mutex */
    mutex = rt_mutex_create("mutex", RT_IPC_FLAG_FIFO);
    if (mutex == RT_NULL)
    {
        rt_kprintf("create dynamic mutex failed.\n");
        return -1;
    }

    /* Create thread 1 */
    tid1 = rt_thread_create("thread1",
                            thread1_entry,
                            RT_NULL,
                            THREAD_STACK_SIZE,
                            THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    if (tid1 != RT_NULL)
        rt_thread_startup(tid1);

    /* Create thread 2 */
    tid2 = rt_thread_create("thread2",
                            thread2_entry,
                            RT_NULL,
                            THREAD_STACK_SIZE,
                            THREAD_PRIORITY, THREAD_TIMESLICE);
    if (tid2 != RT_NULL)
        rt_thread_startup(tid2);

    /* Create thread 3 */
    tid3 = rt_thread_create("thread3",
                            thread3_entry,
                            RT_NULL,
                            THREAD_STACK_SIZE,
                            THREAD_PRIORITY + 1, THREAD_TIMESLICE);
    if (tid3 != RT_NULL)
        rt_thread_startup(tid3);

    return 0;
}

/* Export to the msh command list */
MSH_CMD_EXPORT(pri_inversion, prio_inversion sample);
```

The simulation results are as follows:

```
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 27 2018
 2006 - 2018 Copyright by rt-thread team
msh >pri_inversion
the priority of thread2 is: 10
the priority of thread3 is: 11
the priority of thread2 is: 10
the priority of thread3 is: 10
test OK.
```

The routine demonstrates how to use the mutex. Thread 3 holds the mutex first, and then thread 2 tries to hold the mutex; at this point, thread 3's priority is raised to the same level as thread 2's priority.

>It is important to remember that a mutex cannot be used in an interrupt service routine.

## Occasions to Use Mutex

The use of a mutex is relatively simple because it is a kind of semaphore that exists in the form of a lock. At initialization time, the mutex is always unlocked, and when it is held by a thread, it immediately becomes locked. A mutex is more suitable for:

(1) Situations where a thread may hold the mutex multiple times. This avoids the deadlock problem that would be caused by the same thread taking the lock recursively.

(2) Situations where priority inversion may occur due to multi-thread synchronization.

# Event

An event set is also one of the mechanisms for synchronization between threads. An event set can contain multiple events, and it can be used to accomplish one-to-many and many-to-many thread synchronization. Let's take waiting for a bus as an example to illustrate events. There may be the following situations when waiting for a bus at a bus stop:

① P1 is taking a bus to a certain place and only one type of bus reaches the destination. P1 can leave for the destination once that bus arrives.

② P1 is taking a bus to a certain place and 3 types of buses reach the destination. P1 can leave for the destination once any one of the 3 types of buses arrives.

③ P1 is traveling to a certain place together with P2, and P1 cannot leave for the destination unless two conditions are met: "P2 arrives at the bus stop" and "the bus arrives at the bus stop".

Here, P1 going to a certain place can be regarded as a thread, and "the bus arrives at the bus stop" and "P2 arrives at the bus stop" can be regarded as the occurrence of events. Situation ① is a specific single event waking up the thread; situation ② is any one of several events waking up the thread; situation ③ is multiple events all having to occur before the thread is woken up.

## Event Set Working Mechanism

An event set is mainly used for synchronization between threads. Unlike a semaphore, it can achieve one-to-many and many-to-many synchronization. That is, the relationship between a thread and multiple events can be set so that any one of the events wakes up the thread, or the thread is woken up for subsequent processing only after several events have all arrived; likewise, an event set can also be used by multiple threads to synchronize on multiple events. A set of multiple events can be represented by a 32-bit unsigned integer variable, with each bit of the variable representing one event, and a thread associates itself with one or more events by "logical AND" or "logical OR" to form an event combination. The "logical OR" of events is also called independent synchronization, which means that the thread is synchronized with any one of the events; the "logical AND" of events is also called associative synchronization, which means that the thread is synchronized with several events together.
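
Since each bit of the 32-bit set stands for one event, event flags are usually defined as single-bit masks that are combined with the bitwise OR operator. A minimal, hypothetical definition might look like this (the names EVENT_FLAG_KEY/RX/TIMEOUT are illustrative and not part of RT-Thread; whether the combination is treated as "AND" or "OR" is selected later with the option parameter when receiving):

```c
/* each event occupies one bit of the 32-bit event set */
#define EVENT_FLAG_KEY      (1 << 0)   /* bit 0: a key was pressed        */
#define EVENT_FLAG_RX       (1 << 1)   /* bit 1: data has been received   */
#define EVENT_FLAG_TIMEOUT  (1 << 2)   /* bit 2: a periodic timer expired */

/* a set of bits the thread is interested in; with RT_EVENT_FLAG_OR the thread
   wakes on any of them, with RT_EVENT_FLAG_AND only when all of them are set */
#define EVENT_FLAGS_OF_INTEREST  (EVENT_FLAG_KEY | EVENT_FLAG_RX | EVENT_FLAG_TIMEOUT)
```
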

The event set defined by RT-Thread has the following characteristics:

1) Events are related only to threads, and events are independent of each other: each thread has 32 event flags, recorded in a 32-bit unsigned integer, with each bit representing one event;

2) Events are only used for synchronization and do not provide data transfer functionality;

3) Events do not queue: if the same event is sent to a thread multiple times before the thread has had time to read it, the effect is equivalent to sending it only once.

In RT-Thread, each thread has an event information flag with three attributes: RT_EVENT_FLAG_AND (logical AND), RT_EVENT_FLAG_OR (logical OR), and RT_EVENT_FLAG_CLEAR (clear flag). When a thread waits for event synchronization, the 32-bit event flag and this event information flag together determine whether the currently received events satisfy the synchronization condition.

As shown in the figure above, the 1st and 30th bits of the event flag of thread #1 are set. If the event information flag is set to logical AND, thread #1 will only be triggered to wake up after both event 1 and event 30 occur. If the event information flag is set to logical OR, the occurrence of either event 1 or event 30 will trigger thread #1 to wake up. If the information flag also sets the clear flag bit, event 1 and event 30 will be automatically cleared to zero when thread #1 wakes up; otherwise the event flags remain set (to 1).

## Event Set Control Block

In RT-Thread, the event set control block is a data structure used by the operating system to manage events, represented by the structure struct rt_event. Another C expression, rt_event_t, represents the handle of the event set; in C it is implemented as a pointer to the event set control block. See the following code for a detailed definition of the event set control block structure:

```c
struct rt_event
{
    struct rt_ipc_object parent;    /* Inherited from the ipc_object class */

    /* The set of events: each bit represents one event, and the value of the bit marks whether the event has occurred */
    rt_uint32_t set;
};
/* rt_event_t is the pointer type pointing to the event structure */
typedef struct rt_event* rt_event_t;
```

The rt_event object is derived from rt_ipc_object and is managed by the IPC container.

## Management of Event Sets

The event set control block contains important parameters related to the event set and plays an important role in the implementation of the event set functionality. The event set related interfaces are shown in the following figure. Operations on an event set include: creating/initializing the event set, sending events, receiving events, and deleting/detaching the event set.

### Create and Delete Event Set

When creating an event set, the kernel first creates an event set control block, and then performs basic initialization on the control block. The event set is created using the following function interface:

```c
rt_event_t rt_event_create(const char *name, rt_uint8_t flag);
```

When this function interface is called, the system allocates an event set object from the object manager, initializes the object, and then initializes the parent IPC object.
When the system no longer uses the event set object created by rt_event_create(), the system resources are released by deleting the event set object control block. To delete an event set, use the following function interface:

```c
rt_err_t rt_event_delete(rt_event_t event);
```

When calling rt_event_delete() to delete an event set object, you should ensure that the event set is no longer in use. All threads suspended on the event set are awakened before the deletion (the threads' return value is -RT_ERROR), and then the memory block occupied by the event set object is released. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_event_delete()

|**Parameters**|**Description**                     |
|--------------|------------------------------------|
| event        | The handle of the event set object |
|**Return**    | ——                                 |
| RT_EOK       | Success                            |

### Initialize and Detach Event Set

The memory of a static event set object is allocated by the compiler during system compilation and is usually placed in a read-write data segment or an uninitialized data segment. Before a static event set object can be used, it needs to be initialized first. An event set is initialized using the following function interface:

```c
rt_err_t rt_event_init(rt_event_t event, const char* name, rt_uint8_t flag);
```

When this interface is called, you need to specify the handle of the static event set object (that is, the pointer to the event set control block); the system then initializes the event set object and adds it to the system object container for management. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_event_init()

|**Parameters**|**Description**                                                                                       |
|--------------|------------------------------------------------------------------------------------------------------|
| event        | The handle of the event set object                                                                   |
| name         | The name of the event set                                                                            |
| flag         | The flag of the event set, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO |
|**Return**    | ——                                                                                                   |
| RT_EOK       | Success                                                                                              |

When the system no longer uses an event set object initialized by rt_event_init(), the system resources are released by detaching the event set object control block. Detaching an event set means detaching the event set object from the kernel object manager. To detach an event set, use the following function interface:

```c
rt_err_t rt_event_detach(rt_event_t event);
```

When this function is called, the system first wakes up all the threads suspended on the event set's wait queue (the threads' return value is -RT_ERROR), and then detaches the event set from the kernel object manager. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_event_detach()

|**Parameters**|**Description**                     |
|--------------|------------------------------------|
| event        | The handle of the event set object |
|**Return**    | ——                                 |
| RT_EOK       | Success                            |
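For completeness, here is a minimal sketch of the static counterpart: a statically allocated event set that is initialized at start-up and detached again during shutdown. The object static_event and the two wrapper functions are invented for this example.

```c
#include <rtthread.h>

/* Statically allocated event set: the memory comes from the data segment */
static struct rt_event static_event;

int module_start(void)
{
    /* Initialize the static object and register it with the kernel object manager */
    if (rt_event_init(&static_event, "sevent", RT_IPC_FLAG_PRIO) != RT_EOK)
    {
        rt_kprintf("init event failed.\n");
        return -1;
    }
    return 0;
}

void module_stop(void)
{
    /* Wake any waiting threads (they return -RT_ERROR) and remove the object
       from the kernel object manager; the static memory itself is not freed */
    rt_event_detach(&static_event);
}
```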
### Send Event

The send event function can send one or more events in the event set, as follows:

```c
rt_err_t rt_event_send(rt_event_t event, rt_uint32_t set);
```

When this function interface is called, the event flag value of the event set object is set according to the event flags specified by the parameter set; the kernel then traverses the list of threads waiting on the event set object to determine whether any thread's event activation requirement matches the current event flag value. If such a thread exists, it is woken up. The following table describes the input parameters and return values for this function:

Input parameters and return values of rt_event_send()

|**Parameters**|**Description**                            |
|--------------|-------------------------------------------|
| event        | The handle of the event set object        |
| set          | The flag value of one or more events sent |
|**Return**    | ——                                        |
| RT_EOK       | Success                                   |

### Receive Event

The kernel uses a 32-bit unsigned integer to identify the event set; each bit represents one event, so an event set object can wait to receive 32 events at the same time, and the kernel decides how to activate the thread through the "logical AND" or "logical OR" option. Using the "logical AND" option indicates that the thread is activated only when all of the waited-for events occur, while using the "logical OR" option means that the thread is activated as soon as any one of the waited-for events occurs. To receive events, use the following function interface:

```c
rt_err_t rt_event_recv(rt_event_t event,
                       rt_uint32_t set,
                       rt_uint8_t option,
                       rt_int32_t timeout,
                       rt_uint32_t* recved);
```

When this interface is called, the system first determines, according to the set parameter and the receive option, whether the events to be received have already occurred. If they have already occurred, the system decides whether to clear the corresponding event flags depending on whether RT_EVENT_FLAG_CLEAR is set in the option parameter, and then returns (the recved parameter returns the received events); if they have not occurred, the waited-for set and option parameters are filled into the thread's own structure, and the thread is suspended on this event set until the events it waits for satisfy the condition or the specified timeout elapses. If the timeout is set to zero, then when the events the thread wants to receive do not meet its requirements, it does not wait but returns -RT_ETIMEOUT directly.
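Putting the two calls together, the following sketch sends two events in a single call (their bits are OR-ed into set) and then polls for them without blocking by passing a timeout of 0. The flag names, the event object demo_event, and the surrounding function are all hypothetical; only the rt_event_send() and rt_event_recv() interfaces come from the API described here.

```c
#include <rtthread.h>

/* Hypothetical event bits */
#define EVENT_A (1 << 0)
#define EVENT_B (1 << 1)

/* Assumed to have been initialized with rt_event_init() before poll_events() runs */
static struct rt_event demo_event;

void poll_events(void)
{
    rt_uint32_t recved;

    /* One call can publish several events at once: both bits are set here */
    rt_event_send(&demo_event, EVENT_A | EVENT_B);

    /* Non-blocking receive: with a timeout of 0 the call returns -RT_ETIMEOUT
       immediately if the requested combination is not yet satisfied */
    if (rt_event_recv(&demo_event, EVENT_A | EVENT_B,
                      RT_EVENT_FLAG_AND | RT_EVENT_FLAG_CLEAR,
                      0, &recved) == RT_EOK)
    {
        rt_kprintf("both events are set: 0x%x\n", recved);
    }
    else
    {
        rt_kprintf("events not ready yet\n");
    }
}
```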
The following table describes the input parameters and return values of rt_event_recv():

Input parameters and return values of rt_event_recv()

|**Parameters** |**Description**                       |
|---------------|--------------------------------------|
| event         | The handle of the event set object   |
| set           | The events of interest to the thread |
| option        | Receive options                      |
| timeout       | Timeout                              |
| recved        | Pointer to the received events       |
|**Return**     | ——                                   |
| RT_EOK        | Successful                           |
| \-RT_ETIMEOUT | Timeout                              |
| \-RT_ERROR    | Error                                |

The value of option can be:

```c
/* Select AND or OR to receive events */
RT_EVENT_FLAG_OR
RT_EVENT_FLAG_AND

/* Choose to clear the event flag after it is received */
RT_EVENT_FLAG_CLEAR
```

## Event Set Application Sample

This is an application routine for the event set. It initializes one event set and two threads: one thread waits for the events it is interested in, and the other thread sends events, as shown in code listing 6-5:

Event set usage routine

```c
#include <rtthread.h>

#define THREAD_PRIORITY      9
#define THREAD_TIMESLICE     5

#define EVENT_FLAG3 (1 << 3)
#define EVENT_FLAG5 (1 << 5)

/* Event control block */
static struct rt_event event;

ALIGN(RT_ALIGN_SIZE)
static char thread1_stack[1024];
static struct rt_thread thread1;

/* Thread 1 entry function */
static void thread1_recv_event(void *param)
{
    rt_uint32_t e;

    /* The first time the event is received, either event 3 or event 5 can trigger thread 1; clear the event flags after receiving */
    if (rt_event_recv(&event, (EVENT_FLAG3 | EVENT_FLAG5),
                      RT_EVENT_FLAG_OR | RT_EVENT_FLAG_CLEAR,
                      RT_WAITING_FOREVER, &e) == RT_EOK)
    {
        rt_kprintf("thread1: OR recv event 0x%x\n", e);
    }

    rt_kprintf("thread1: delay 1s to prepare the second event\n");
    rt_thread_mdelay(1000);

    /* The second time the event is received, both event 3 and event 5 are required to trigger thread 1; clear the event flags after receiving */
    if (rt_event_recv(&event, (EVENT_FLAG3 | EVENT_FLAG5),
                      RT_EVENT_FLAG_AND | RT_EVENT_FLAG_CLEAR,
                      RT_WAITING_FOREVER, &e) == RT_EOK)
    {
        rt_kprintf("thread1: AND recv event 0x%x\n", e);
    }
    rt_kprintf("thread1 leave.\n");
}


ALIGN(RT_ALIGN_SIZE)
static char thread2_stack[1024];
static struct rt_thread thread2;

/* Thread 2 entry function */
static void thread2_send_event(void *param)
{
    rt_kprintf("thread2: send event3\n");
    rt_event_send(&event, EVENT_FLAG3);
    rt_thread_mdelay(200);

    rt_kprintf("thread2: send event5\n");
    rt_event_send(&event, EVENT_FLAG5);
    rt_thread_mdelay(200);

    rt_kprintf("thread2: send event3\n");
    rt_event_send(&event, EVENT_FLAG3);
    rt_kprintf("thread2 leave.\n");
}

int event_sample(void)
{
    rt_err_t result;

    /* Initialize the event object */
    result = rt_event_init(&event, "event", RT_IPC_FLAG_FIFO);
    if (result != RT_EOK)
    {
        rt_kprintf("init event failed.\n");
        return -1;
    }

    rt_thread_init(&thread1,
                   "thread1",
                   thread1_recv_event,
                   RT_NULL,
                   &thread1_stack[0],
                   sizeof(thread1_stack),
                   THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    rt_thread_startup(&thread1);

    rt_thread_init(&thread2,
                   "thread2",
                   thread2_send_event,
                   RT_NULL,
                   &thread2_stack[0],
                   sizeof(thread2_stack),
                   THREAD_PRIORITY, THREAD_TIMESLICE);
    rt_thread_startup(&thread2);

    return 0;
}

/* Export to the msh command list */
MSH_CMD_EXPORT(event_sample, event sample);
```

The simulation results are as follows:

```c
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 24 2018
 2006 - 2018 Copyright by rt-thread team
msh >event_sample
thread2: send event3
thread1: OR recv event 0x8
thread1: delay 1s to prepare the second event
msh >thread2: send event5
thread2: send event3
thread2 leave.
thread1: AND recv event 0x28
thread1 leave.
```

The routine demonstrates how to use the event set. Thread 1 receives events twice: the first time with the "logical OR" option and the second time with the "logical AND" option.

## Occasions to Use Event Set

Event sets can be used in a variety of situations and can, to some extent, replace semaphores for inter-thread synchronization. A thread or an interrupt service routine sends an event to the event set object, and the waiting thread is awakened to process the corresponding event. However, unlike the semaphore, event sending is not cumulative: sending the same event multiple times before it is cleared has the same effect as sending it once, whereas semaphore release operations are cumulative. Another feature of events is that the receiving thread can wait on multiple events, meaning that multiple events can correspond to one thread or to multiple threads. In addition, depending on the thread's waiting parameters, you can choose between a "logical OR" trigger and a "logical AND" trigger. This feature is not available with semaphores: a semaphore can only recognize a single release action and cannot wait for several types of release at the same time. The following figure shows the multi-event receiving diagram:

An event set contains 32 events, and a particular thread waits for and receives only the events it is interested in. One thread may wait for the arrival of multiple events (threads #1 and #2 both wait for multiple events, and either logical "AND" or logical "OR" can be used to trigger the thread), or multiple threads may wait for the arrival of the same event (event 25). When events they are interested in occur, the threads will be awakened and carry out the subsequent processing.
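Because an event can be sent from an interrupt service routine (unlike a mutex, which must not be used there), an event set is a convenient way to hand interrupt work over to a thread. The sketch below assumes a hypothetical UART receive interrupt handler and worker thread; the handler name, event bit, and event object are illustrative and not part of the RT-Thread API, and the event object is assumed to be initialized with rt_event_init() at startup.

```c
#include <rtthread.h>

#define EVENT_UART_RX (1 << 0)     /* hypothetical event bit for "data received" */

/* Assumed to be initialized with rt_event_init() during system startup */
static struct rt_event isr_event;

/* Hypothetical interrupt service routine: only publish the event and return */
void uart_rx_isr(void)
{
    rt_event_send(&isr_event, EVENT_UART_RX);
}

/* Worker thread entry: block until the ISR reports new data, then process it */
static void uart_worker(void *param)
{
    rt_uint32_t e;

    while (1)
    {
        if (rt_event_recv(&isr_event, EVENT_UART_RX,
                          RT_EVENT_FLAG_OR | RT_EVENT_FLAG_CLEAR,
                          RT_WAITING_FOREVER, &e) == RT_EOK)
        {
            /* Handle the received data outside of interrupt context */
        }
    }
}
```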