@page page_memory_management Memory Management

In a computing system, there are usually two types of memory space: internal memory space and external memory space. Internal memory can be accessed quickly, and its contents can be read and changed given only an address. Its contents are lost after each power-off. It is what would usually be called RAM (Random Access Memory) and is analogous to the RAM in a desktop computer. External memory, on the other hand, has relatively fixed contents and retains data even after power-off. It is usually called ROM (Read-Only Memory) and is analogous to the hard disk in a desktop computer.

In a computer system, variables and intermediate data are generally stored in RAM, and they are only transferred from RAM to the CPU for calculation when actually used. The memory size required by some data can only be determined while the program is running, which requires the system to be able to manage memory space dynamically: the user applies to the system when a block of memory is needed, the system selects a suitable piece of memory to allocate to the user, and after the user finishes using it, the memory is released back to the system so that the system can recycle it.

This chapter introduces two kinds of memory management methods in RT-Thread, namely dynamic memory heap management and static memory pool management. After studying this chapter, readers will understand the memory management principles and usage of RT-Thread.

# Memory Management Functional Features

Because timing requirements are very strict in real-time systems, memory management is often much more demanding than in general-purpose operating systems:

1) The time taken to allocate memory must be deterministic. A general memory management algorithm finds a free memory block that fits the length of the data to be stored and then stores the data there. The time it takes to find such a free block is uncertain, which is unacceptable for real-time systems: they require the allocation of a memory block to complete within a predictable, bounded time, otherwise the response of a real-time task to an external event becomes indeterminable.

2) As memory is constantly allocated and released, the memory area becomes fragmented. Instead of one large contiguous block, the memory area ends up with small used blocks scattered in between, limiting the largest contiguous block that can still be requested. For general-purpose systems this can be solved by rebooting (once every month or every few months), but such a solution is unacceptable for embedded systems that need to work continuously in the field all year round.

3) The resource environments of embedded systems also differ. Some systems have relatively tight resources with only tens of kilobytes of memory available for allocation, while others have several megabytes. This makes choosing an efficient memory allocation algorithm for these different systems more complicated.

The RT-Thread operating system provides different memory allocation management algorithms according to the upper-layer application and the system resources. Generally, they can be divided into two categories: memory heap management and memory pool management.
Memory heap management is further divided into three cases according to the specific memory device:

The first is allocation management for small memory blocks (small memory management algorithm);

The second is allocation management for large memory blocks (slab management algorithm);

The third is allocation management for multiple memory heaps (memheap management algorithm).

# Memory Heap Management

Memory heap management is used to manage a contiguous memory space. We introduced the memory distribution of RT-Thread in the chapter "Kernel Basics". As shown in the following figure, RT-Thread uses the space from "the end of the ZI segment" to the end of memory as the memory heap.

If current resources allow it, the memory heap can allocate memory blocks of any size according to the needs of the user. When the user no longer needs these memory blocks, they can be released back to the heap for other applications to allocate and use. In order to meet different needs, RT-Thread provides different memory management algorithms, namely the small memory management algorithm, the slab management algorithm, and the memheap management algorithm.

The small memory management algorithm is mainly for systems with limited resources, typically with less than 2MB of memory. The slab memory management algorithm provides a fast algorithm similar to multiple memory pool management when system resources are rich. Besides these two, RT-Thread also has a management algorithm for multiple memory heaps, namely the memheap management algorithm. The memheap management algorithm is suitable for systems with multiple memory heaps: it can "paste" multiple memory regions together to form one large memory heap, which is very convenient for users.

Any one of these memory heap management algorithms, or none of them, can be chosen when the system is running, and all of them provide the same API interface to the application.

>Because the memory heap manager needs to guarantee safe allocation under multi-threaded conditions, mutual exclusion between multiple threads must be taken into consideration. Please do not allocate or release dynamic memory blocks in interrupt service routines, as that may cause the current context to be suspended.

## Small Memory Management Algorithm

The small memory management algorithm is a simple memory allocation algorithm. Initially, the heap is one large piece of memory. When a memory block needs to be allocated, a matching block is split from this large block and handed to the applicant, and the remaining free memory is returned to the heap management structure. Each memory block contains a data head for management use, through which the used blocks and free blocks are linked into a doubly linked list.

**Implementation used before 4.1.0**

As shown in the following figure:

Each memory block (whether it is an allocated memory block or a free memory block) contains a data head, including:

**1) magic**: a variable (also called a magic number). It is initialized to 0x1ea0 (that is, the English word "heap") and marks this block as a memory data block used by memory management. The variable not only identifies the data block as one managed by the memory manager, it also serves as a memory protection word: if this area is overwritten, the memory block has been illegally overwritten (normally only the memory manager operates on this area).

**2) used**: indicates whether the current memory block has been allocated.
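
For illustration only, the data head described above can be pictured roughly as the structure below. This is a simplified sketch rather than the actual RT-Thread definition; the field types and the use of `rt_size_t` links are assumptions for a 32-bit system where the head occupies 12 bytes.

```c
#include <rtthread.h>

/* Simplified sketch of the per-block data head used by the small memory
 * algorithm before 4.1.0 (illustrative only, not the actual definition). */
struct small_block_head
{
    rt_uint16_t magic;  /* 0x1ea0, marks a managed block and guards against overwrite */
    rt_uint16_t used;   /* 1: block is allocated, 0: block is free                    */
    rt_size_t   next;   /* link to the next block in the doubly linked list           */
    rt_size_t   prev;   /* link to the previous block in the doubly linked list       */
};
```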

The performance of memory management is mainly reflected in the allocation and release of memory. The small memory management algorithm can be illustrated by the following example.

**Implementation used in 4.1.0 and later**

As shown in the following figure:

**Heap Start**: the heap head address, which stores memory usage information, the heap head address pointer, the heap end address pointer, the minimum heap free address pointer, and the memory size.

Each memory block (whether it is an allocated memory block or a free memory block) contains a data head, including:

**pool_ptr**: the small memory object address. If the last bit of this field is 1, the memory block is used; if the last bit is 0, it is not used. From this field the structure members of the small memory algorithm object can be quickly obtained by calculation.

As shown in the following figure, the free list pointer lfree initially points to a 32-byte block of memory. When the user thread wants to allocate a 64-byte memory block, since the block pointed to by lfree is only 32 bytes and does not meet the requirement, the memory manager continues to search for the next memory block. When the next memory block of 128 bytes is found, it satisfies the allocation request. Because this block is larger than needed, the allocator splits it, and the remaining memory block (52 bytes) stays in the lfree linked list, as shown in the following figure depicting the state after the 64 bytes have been allocated.

In addition, before each block of memory is allocated, a 12-byte data head is reserved for the `magic` and `used` information and the linked list node. The address returned to the application is actually the address just after these 12 bytes; the 12-byte data head is a part the user should never touch. (Note: the length of the data head differs depending on the alignment requirements of the system.)

Releasing is the reverse process, but the allocator checks whether the adjacent memory blocks are free, and if they are, merges them into one large free memory block.

## Slab Management Algorithm

RT-Thread's slab allocator is a memory allocation algorithm optimized for embedded systems based on the slab allocator implemented by DragonFly BSD founder Matthew Dillon. The original slab algorithm is Jeff Bonwick's efficient kernel memory allocation algorithm introduced for the Solaris operating system.

RT-Thread's slab allocator implementation mainly removes the object construction and destruction process and retains only the pure buffered memory pool algorithm. The slab allocator divides memory into multiple zones according to object size, which can also be seen as having one memory pool for each type of object, as shown in the following figure:

A zone is between 32KB and 128KB in size, and the allocator automatically adjusts this based on the heap size when the heap is initialized. The system defines up to 72 object sizes, so at most 16KB of memory can be allocated at a time; allocations larger than 16KB are made directly from the page allocator. The memory blocks allocated from one zone all have the same fixed size, and zones that allocate blocks of the same size are linked together in a linked list. The zone linked lists of the 72 object sizes are managed in an array (zone_array[]).

Here are the two main operations of this memory allocator:

**(1) Memory Allocation**

Suppose a 32-byte block of memory is to be allocated. The slab memory allocator first finds the corresponding zone linked list from the zone array linked list heads according to the 32-byte size. If that linked list is empty, a new zone is allocated from the page allocator and the first free memory block of the new zone is returned. If the linked list is not empty, a free block must exist in the first zone node of the list (otherwise it would not have been placed in the list), so the corresponding free block is taken from it. If all free memory blocks in the zone are used up after this allocation, the allocator removes this zone node from the linked list.

**(2) Memory Release**

The allocator needs to find the zone node where the memory block is located and then link the memory block back into the zone's free memory block linked list. If the free list of the zone then indicates that all memory blocks of the zone have been released, the zone is completely free. When the number of fully free zones in the zone linked list reaches a certain number, the system releases the fully free zone back to the page allocator.

## memheap Management Algorithm

The memheap management algorithm is suitable for systems with multiple memory heaps that are not contiguous. Using memheap memory management simplifies the use of multiple memory heaps: when there are multiple memory heaps in the system, the user only needs to initialize the required memheaps during system initialization and turn on the memheap function to attach multiple memheaps (whose addresses may be discontinuous) to the system's heap allocation.

>The original heap function is turned off after memheap is turned on. Only one of the two can be selected, by turning the macro RT_USING_MEMHEAP_AS_HEAP on or off.

The working mechanism of memheap is shown in the figure below. First, the memory blocks to be attached are added to the memheap_item linked list. Allocation of a memory block starts from the default memory heap; when it cannot be satisfied there, the memheap_item linked list is traversed and an attempt is made to allocate the block from another memory heap. This process is transparent to the user, who only sees a single memory heap.

## Memory Heap Configuration and Initialization

When using the memory heap, the heap must be initialized during system initialization, which can be done through the following function interface:

```c
void rt_system_heap_init(void* begin_addr, void* end_addr);
```

This function uses the memory space between the parameters begin_addr and end_addr as the memory heap. The following table describes the input parameters of this function:

Input parameters of rt_system_heap_init()

|**Parameters** |**Description** |
|------------|--------------------|
| begin_addr | Start address of the heap memory area |
| end_addr | End address of the heap memory area |

When using memheap heap memory, the heap memory must be initialized during system initialization, which can be done through the following function interface:

```c
rt_err_t rt_memheap_init(struct rt_memheap *memheap,
                         const char        *name,
                         void              *start_addr,
                         rt_uint32_t        size);
```

If there are multiple non-contiguous memheaps, this function can be called multiple times to initialize each of them and add them to the memheap_item linked list. The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_memheap_init()

|**Parameters** |**Description** |
|------------|--------------------|
| memheap | memheap control block |
| name | Name of the memory heap |
| start_addr | Start address of the heap memory area |
| size | Size of the heap memory |
|**Return** | —— |
| RT_EOK | Successful |
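
As a sketch of how this might be used during board initialization, the following code attaches a second, discontinuous RAM region as an additional memheap. The region name, start address, and size are hypothetical placeholders and must be replaced with the actual values of the target board:

```c
#include <rtthread.h>

#ifdef RT_USING_MEMHEAP
/* Hypothetical second SRAM region (example address and size only). */
#define SRAM2_BEGIN ((void *)0x20020000)
#define SRAM2_SIZE  (64 * 1024)

static struct rt_memheap sram2_heap;

int board_memheap_init(void)
{
    /* Initialize the second region and add it to the memheap_item list
     * so that it can also serve heap allocations. */
    return rt_memheap_init(&sram2_heap, "sram2", SRAM2_BEGIN, SRAM2_SIZE);
}
#endif
```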

## Memory Heap Management

The operations on the memory heap are shown in the following figure and include initialization, allocation of memory blocks, and release of memory. After use, all dynamic memory should be released so that it can be reused by other programs.

### Allocate and Release Memory Block

To allocate a memory block of user-specified size from the memory heap, the function interface is as follows:

```c
void *rt_malloc(rt_size_t nbytes);
```

The rt_malloc function finds a memory block of the appropriate size in the system heap space and returns the available address of the memory block to the user. The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_malloc()

|**Parameters** |**Description** |
|------------------|------------------------------------|
| nbytes | Size of the memory block to be allocated, in bytes |
|**Return** | —— |
| Allocated memory block address | Successful |
| RT_NULL | Failed |

After the application has finished using the memory allocated from the memory allocator, it must be released in time, otherwise a memory leak will occur. The function interface for releasing a memory block is as follows:

```c
void rt_free(void *ptr);
```

The rt_free function returns the memory to be released back to the heap manager. When calling this function, the user passes the pointer of the memory block to be released; if the pointer is NULL, the function returns directly. The following table describes the input parameter of this function:

Input parameter of rt_free()

|**Parameters**|**Description** |
|----------|--------------------|
| ptr | Pointer of the memory block to be released |

### Re-allocate Memory Block

Re-allocating the size of a memory block (increasing or decreasing it) based on an already allocated memory block can be done through the following function interface:

```c
void *rt_realloc(void *rmem, rt_size_t newsize);
```

When the memory block is re-allocated, the original data in the memory block remains unchanged (in the case of a reduction, the data beyond the new size is automatically truncated). The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_realloc()

|**Parameters** |**Description** |
|----------------------|--------------------|
| rmem | Pointer to the allocated memory block |
| newsize | Re-allocated memory size |
|**Return** | —— |
| Re-allocated memory block address | Successful |
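
A minimal usage sketch of rt_realloc is shown below; the sizes are arbitrary examples. Data within the original size is preserved, and the old pointer must not be used again if the block was moved:

```c
#include <rtthread.h>

void realloc_sample(void)
{
    char *buf = rt_malloc(32);            /* original 32-byte buffer */
    if (buf == RT_NULL)
        return;

    char *new_buf = rt_realloc(buf, 64);  /* try to grow the buffer to 64 bytes */
    if (new_buf != RT_NULL)
        buf = new_buf;                    /* the first 32 bytes are preserved */

    rt_free(buf);                         /* release the block when done */
}
```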

### Allocate Multiple Memory Blocks

Allocating multiple memory blocks with contiguous memory addresses from the memory heap can be done through the following function interface:

```c
void *rt_calloc(rt_size_t count, rt_size_t size);
```

The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_calloc()

|**Parameters** |**Description** |
|----------------------------|---------------------------------------------|
| count | Number of memory blocks |
| size | Size of each memory block |
|**Return** | —— |
| Pointer to the first memory block address | Successful; all allocated memory blocks are initialized to zero |
| RT_NULL | Allocation failed |

### Set Memory Hook Function

When allocating memory blocks, the user can set a hook function. The function interface called is as follows:

```c
void rt_malloc_sethook(void (*hook)(void *ptr, rt_size_t size));
```

The hook function set will be called back after the memory allocation is completed. During the callback, the allocated memory block address and size are passed as parameters. The following table describes the input parameter of this function:

Input parameter of rt_malloc_sethook()

|**Parameters**|**Description** |
|----------|--------------|
| hook | Hook function pointer |

The hook function interface is as follows:

```c
void hook(void *ptr, rt_size_t size);
```

The following table describes the input parameters of the hook function:

Allocation hook function interface parameters

|**Parameters**|**Description** |
|----------|----------------------|
| ptr | Pointer of the allocated memory block |
| size | Size of the allocated memory block |

When releasing memory, the user can set a hook function. The function interface called is as follows:

```c
void rt_free_sethook(void (*hook)(void *ptr));
```

The hook function set will be called back before the memory release is completed. During the callback, the address of the memory block to be released is passed in as the parameter (the memory block has not yet been released at this time). The following table describes the input parameter of this function:

Input parameter of rt_free_sethook()

|**Parameters**|**Description** |
|----------|--------------|
| hook | Hook function pointer |

The hook function interface is as follows:

```c
void hook(void *ptr);
```

The following table describes the input parameter of the hook function:

Input parameter of the hook function

|**Parameters**|**Description** |
|----------|--------------------|
| ptr | Pointer of the memory block to be released |
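
As a sketch, the two hooks can be combined to trace every allocation and release. The function names below are illustrative, not part of the RT-Thread API, and this assumes the kernel is built with hook support (RT_USING_HOOK) enabled:

```c
#include <rtthread.h>

/* called after each successful allocation */
static void malloc_trace_hook(void *ptr, rt_size_t size)
{
    rt_kprintf("malloc %d bytes at %p\n", size, ptr);
}

/* called just before each block is released */
static void free_trace_hook(void *ptr)
{
    rt_kprintf("free block at %p\n", ptr);
}

int heap_trace_init(void)
{
    rt_malloc_sethook(malloc_trace_hook);
    rt_free_sethook(free_trace_hook);
    return 0;
}
```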

## Memory Heap Management Application Example

This is an application example of the memory heap. The program creates a dynamic thread that dynamically requests and releases memory. Each time, the thread applies for a larger block of memory; it ends when the allocation fails, as shown in the following code:

Memory heap management

```c
#include <rtthread.h>

#define THREAD_PRIORITY      25
#define THREAD_STACK_SIZE    512
#define THREAD_TIMESLICE     5

/* thread entry */
void thread1_entry(void *parameter)
{
    int i;
    char *ptr = RT_NULL; /* memory block pointer */

    for (i = 0; ; i++)
    {
        /* allocate a memory space of (1 << i) bytes each time */
        ptr = rt_malloc(1 << i);

        /* if allocated successfully */
        if (ptr != RT_NULL)
        {
            rt_kprintf("get memory :%d byte\n", (1 << i));
            /* release the memory block */
            rt_free(ptr);
            rt_kprintf("free memory :%d byte\n", (1 << i));
            ptr = RT_NULL;
        }
        else
        {
            rt_kprintf("try to get %d byte memory failed!\n", (1 << i));
            return;
        }
    }
}

int dynmem_sample(void)
{
    rt_thread_t tid = RT_NULL;

    /* create thread 1 */
    tid = rt_thread_create("thread1",
                           thread1_entry, RT_NULL,
                           THREAD_STACK_SIZE,
                           THREAD_PRIORITY,
                           THREAD_TIMESLICE);
    if (tid != RT_NULL)
        rt_thread_startup(tid);

    return 0;
}
/* Export to the msh command list */
MSH_CMD_EXPORT(dynmem_sample, dynmem sample);
```

The simulation results are as follows:

```
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 24 2018
 2006 - 2018 Copyright by rt-thread team
msh >dynmem_sample
msh >get memory :1 byte
free memory :1 byte
get memory :2 byte
free memory :2 byte
…
get memory :16384 byte
free memory :16384 byte
get memory :32768 byte
free memory :32768 byte
try to get 65536 byte memory failed!
```

In this routine each allocation succeeds and its information is printed, until the thread tries to allocate 65536 bytes (64KB) of memory; that allocation fails because the total RAM size is only 64KB and the available RAM is less than 64KB.

# Memory Pool

The memory heap manager can allocate blocks of any size, which is very flexible and convenient. However, it also has obvious shortcomings: first, the allocation efficiency is not high, because a free memory block has to be searched for on each allocation; second, it easily generates memory fragmentation. In order to improve memory allocation efficiency and avoid memory fragmentation, RT-Thread provides another method of memory management: the memory pool.

A memory pool is a memory allocation method used for allocating a large number of small memory blocks of the same size. It can greatly speed up memory allocation and release, and can avoid memory fragmentation as much as possible. In addition, RT-Thread's memory pool supports a thread suspension function: when there is no free memory block in the memory pool, the applying thread is suspended until a new memory block becomes available, at which point the suspended thread is woken up.

The thread suspension function of the memory pool is very suitable for scenarios where threads synchronize through memory resources. For example, when playing music, the player thread decodes the music file and then sends the data to the sound card driver, which drives the hardware to play the music.

As shown in the figure above, when the player thread needs to decode data, it requests a memory block from the memory pool.
If no memory block is available, the thread is suspended; otherwise it obtains a memory block in which to place the decoded data.

The player thread then writes the memory block containing the decoded data to the sound card abstraction device (the thread returns immediately and continues to decode more data).

After the sound card device has finished with the data, the callback function set by the player thread is called to release the written memory block. If the player thread was suspended because no memory block was available in the memory pool, it is then woken up to continue decoding.

## Memory Pool Working Mechanism

### Memory Pool Control Block

The memory pool control block is a data structure used by the operating system to manage the memory pool. It stores information about the memory pool, such as the start address of the data area in the memory pool, the memory block size, and the memory block list. It also includes the linked list structure used to connect memory blocks, the collection of threads suspended because no memory block was available, and so on.

In the RT-Thread real-time operating system, the memory pool control block is represented by the structure `struct rt_mempool`. Another C type, `rt_mp_t`, represents the memory pool handle and is implemented in C as a pointer to the memory pool control block. For details, see the following code:

```c
struct rt_mempool
{
    struct rt_object parent;

    void        *start_address;  /* start address of memory pool data area */
    rt_size_t    size;           /* size of memory pool data area */

    rt_size_t    block_size;     /* size of a memory block */
    rt_uint8_t  *block_list;     /* list of memory blocks */

    /* maximum number of memory blocks that can be accommodated in the memory pool data area */
    rt_size_t    block_total_count;
    /* number of free memory blocks in the memory pool */
    rt_size_t    block_free_count;
    /* list of threads suspended because memory blocks are unavailable */
    rt_list_t    suspend_thread;
    /* number of threads suspended because memory blocks are unavailable */
    rt_size_t    suspend_thread_count;
};
typedef struct rt_mempool* rt_mp_t;
```

### Memory Block Allocation Mechanism

When the memory pool is created, it first applies for a large block of memory from the system and then divides it into multiple small memory blocks of the same size. The small memory blocks are connected directly by a linked list (also called the free list). At each allocation, the first memory block is taken from the head of the free list and handed to the applicant. As can be seen from the figure below, multiple memory pools of different block sizes may exist in physical memory. Each memory pool is composed of multiple free memory blocks and is used by the kernel for memory management. When a memory pool object is created, the object is bound to a memory pool control block whose parameters include the memory pool name, memory buffer, memory block size, number of blocks, and a queue of waiting threads.

The kernel is responsible for allocating the memory pool control block. It also receives requests from user threads for the allocation of memory blocks; with this information, the kernel can allocate memory for the memory pool from available memory.
Once the memory pool is initialized, the size of the memory blocks inside can no longer be adjusted.

Each memory pool object consists of the structure above, in which suspend_thread forms a list of threads waiting for memory blocks: when no memory block is available in the memory pool and the requesting thread allows waiting, the thread applying for a memory block is suspended on the suspend_thread linked list.

## Memory Pool Management

The memory pool control block is a structure that contains important parameters related to the memory pool and acts as a link between the various states of the memory pool. The related interfaces of the memory pool are shown in the following figure. The operations on a memory pool include creating/initializing the memory pool, applying for memory blocks, releasing memory blocks, and deleting/detaching the memory pool. Note that not every memory pool will be deleted; whether to delete it is up to the user, but all used memory blocks should be released.

### Create and Delete Memory Pool

To create a memory pool, a memory pool object is created first and then a data buffer is allocated from the memory heap. Creating a memory pool is the prerequisite for allocating and releasing memory blocks from that pool. After the memory pool is created, threads can perform operations such as allocation and release on it. To create a memory pool, use the following function interface, which returns the created memory pool object:

```c
rt_mp_t rt_mp_create(const char* name,
                     rt_size_t   block_count,
                     rt_size_t   block_size);
```

This function interface creates a memory pool matching the required size and number of memory blocks; the creation succeeds if system resources (most importantly memory heap resources) allow it. When creating a memory pool, you give the memory pool a name, and the kernel requests a memory pool object from the system. Next, a memory buffer whose size is calculated from the number and size of blocks is allocated from the memory heap, the memory pool object is initialized, and the successfully allocated buffer is organized into the free block linked list used for allocation. The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_mp_create()

|**Parameters** |**Description** |
|--------------|--------------------|
| name | Name of the memory pool |
| block_count | Number of memory blocks |
| block_size | Size of each memory block |
|**Return** | —— |
| Handle of the memory pool | Memory pool object created successfully |
| RT_NULL | Creation failed |

Deleting a memory pool deletes the memory pool object and releases the memory it applied for. Use the following function interface:

```c
rt_err_t rt_mp_delete(rt_mp_t mp);
```

When a memory pool is deleted, all threads waiting on the memory pool object are woken up first (they return -RT_ERROR), then the memory pool data storage area allocated from the memory heap is released and the memory pool object is deleted. The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_mp_delete()

|**Parameters**|**Description** |
|----------|-----------------------------------|
| mp | Memory pool object handle |
|**Return**| —— |
| RT_EOK | Deletion successful |
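
A minimal sketch of the dynamic create/delete interfaces is shown below; the pool name, block count, and block size are arbitrary examples:

```c
#include <rtthread.h>

static rt_mp_t dyn_mp = RT_NULL;

int mempool_create_sample(void)
{
    /* create a memory pool with 32 blocks of 80 bytes each */
    dyn_mp = rt_mp_create("mp_dyn", 32, 80);
    if (dyn_mp == RT_NULL)
        return -RT_ERROR;

    /* ... allocate blocks with rt_mp_alloc() and release them with rt_mp_free() ... */

    /* delete the pool when it is no longer needed; any waiting threads
     * are woken up and return -RT_ERROR */
    return rt_mp_delete(dyn_mp);
}
```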

### Initialize and Detach Memory Pool

Memory pool initialization is similar to memory pool creation, except that initialization is used for static memory management: the memory pool control block comes from a static object defined by the user in the system. Also, unlike memory pool creation, the memory space used by the memory pool object here is a buffer specified by the user; the user passes the pointer of this buffer to the memory pool control block, and the rest of the initialization is the same as memory pool creation. The function interface is as follows:

```c
rt_err_t rt_mp_init(rt_mp_t     mp,
                    const char *name,
                    void       *start,
                    rt_size_t   size,
                    rt_size_t   block_size);
```

When initializing the memory pool, the following arguments are passed: the memory pool object to be initialized, a name for the memory pool, the start address of the memory space used by the pool, the size of that data area, and the size of each memory block. With this, the kernel can initialize the memory pool and organize the memory space used by the pool into a free block linked list for allocation. The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_mp_init()

|**Parameters** |**Description** |
|-------------|--------------------|
| mp | Memory pool object |
| name | Memory pool name |
| start | Start address of the memory pool data area |
| size | Size of the memory pool data area |
| block_size | Size of each memory block |
|**Return** | —— |
| RT_EOK | Initialization successful |
| \- RT_ERROR | Failed |

The number of memory blocks in the pool = size / (block_size + 4), where the 4 bytes are the size of the free-list pointer attached to each block; the result is rounded down to an integer.

For example, if the size of the memory pool data area is set to 4096 bytes and the memory block size block_size is set to 80 bytes, then the number of memory blocks obtained is 4096 / (80 + 4) = 48.
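
For example, this calculation can be written directly in code; the macro names below are hypothetical, and the integer division performs the rounding down:

```c
#define MP_BUF_SIZE    4096                                  /* data area size in bytes   */
#define MP_BLOCK_SIZE  80                                    /* usable size of each block */
#define MP_BLOCK_NUM   (MP_BUF_SIZE / (MP_BLOCK_SIZE + 4))   /* 4096 / 84 = 48 blocks     */
```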

Detaching the memory pool detaches the memory pool object from the kernel object manager. Use the following function interface to detach the memory pool:

```c
rt_err_t rt_mp_detach(rt_mp_t mp);
```

After this function is used, the kernel first wakes up all threads waiting on the memory pool object and then detaches the memory pool object from the kernel object manager. The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_mp_detach()

|**Parameters**|**Description** |
|----------|------------|
| mp | Memory pool object |
|**Return**| —— |
| RT_EOK | Successful |

### Allocate and Release Memory Block

To allocate a memory block from the specified memory pool, use the following interface:

```c
void *rt_mp_alloc (rt_mp_t mp, rt_int32_t time);
```

The time parameter is the timeout period for the memory block allocation request. If a memory block is available in the memory pool, a block is removed from the free list of the memory pool, the free block count is decreased, and the block is returned. If there is no free memory block in the pool, the timeout setting is checked: if the timeout is zero, RT_NULL is returned immediately; if the waiting time is greater than zero, the current thread is suspended on the memory pool object until a memory block becomes free in the pool or the waiting time elapses. The following table describes the input parameters and return values of this function:

Input parameters and return values of rt_mp_alloc()

|**Parameters** |**Description** |
|------------------|------------|
| mp | Memory pool object |
| time | Timeout |
|**Return** | —— |
| Allocated memory block address | Successful |
| RT_NULL | Failed |

Any memory block must be released after use, otherwise a memory leak will occur. A memory block is released using the following interface:

```c
void rt_mp_free (void *block);
```

When this function is used, the memory pool object to which the memory block belongs is first calculated from the pointer of the memory block to be released. Then the number of available memory blocks of that memory pool object is increased, and the released block is added to the list of free memory blocks. Finally, the function checks whether any thread is suspended on the memory pool object; if so, the first thread on the suspended thread list is woken up. The following table describes the input parameter of this function:

Input parameter of rt_mp_free()

|**Parameters**|**Description** |
|----------|------------|
| block | Memory block pointer |

## Memory Pool Application Example

This is a static memory pool application routine. It initializes a static memory pool object and creates two dynamic threads: one thread tries to obtain memory blocks from the memory pool, and the other thread releases memory blocks, as shown in the following code:

Memory pool usage example

```c
#include <rtthread.h>

static rt_uint8_t *ptr[50];
static rt_uint8_t mempool[4096];
static struct rt_mempool mp;

#define THREAD_PRIORITY      25
#define THREAD_STACK_SIZE    512
#define THREAD_TIMESLICE     5

/* pointers to the thread control blocks */
static rt_thread_t tid1 = RT_NULL;
static rt_thread_t tid2 = RT_NULL;

/* thread 1 entry */
static void thread1_mp_alloc(void *parameter)
{
    int i;
    for (i = 0 ; i < 50 ; i++)
    {
        if (ptr[i] == RT_NULL)
        {
            /* try to apply for memory blocks 50 times; when no memory block is available,
               thread 1 is suspended and thread 2 runs */
            ptr[i] = rt_mp_alloc(&mp, RT_WAITING_FOREVER);
            if (ptr[i] != RT_NULL)
                rt_kprintf("allocate No.%d\n", i);
        }
    }
}

/* thread 2 entry; thread 2 has a lower priority than thread 1, so thread 1 is executed first */
static void thread2_mp_release(void *parameter)
{
    int i;

    rt_kprintf("thread2 try to release block\n");
    for (i = 0; i < 50 ; i++)
    {
        /* release all successfully allocated memory blocks */
        if (ptr[i] != RT_NULL)
        {
            rt_kprintf("release block %d\n", i);
            rt_mp_free(ptr[i]);
            ptr[i] = RT_NULL;
        }
    }
}

int mempool_sample(void)
{
    int i;
    for (i = 0; i < 50; i ++) ptr[i] = RT_NULL;

    /* initialize the memory pool object */
    rt_mp_init(&mp, "mp1", &mempool[0], sizeof(mempool), 80);

    /* create thread 1: apply for memory blocks from the pool */
    tid1 = rt_thread_create("thread1", thread1_mp_alloc, RT_NULL,
                            THREAD_STACK_SIZE,
                            THREAD_PRIORITY, THREAD_TIMESLICE);
    if (tid1 != RT_NULL)
        rt_thread_startup(tid1);

    /* create thread 2: release memory blocks back to the pool */
    tid2 = rt_thread_create("thread2", thread2_mp_release, RT_NULL,
                            THREAD_STACK_SIZE,
                            THREAD_PRIORITY + 1, THREAD_TIMESLICE);
    if (tid2 != RT_NULL)
        rt_thread_startup(tid2);

    return 0;
}

/* export to the msh command list */
MSH_CMD_EXPORT(mempool_sample, mempool sample);
```

The simulation results are as follows:

```
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 24 2018
 2006 - 2018 Copyright by rt-thread team
msh >mempool_sample
msh >allocate No.0
allocate No.1
allocate No.2
allocate No.3
allocate No.4
…
allocate No.46
allocate No.47
thread2 try to release block
release block 0
allocate No.48
release block 1
allocate No.49
release block 2
release block 3
release block 4
release block 5
…
release block 47
release block 48
release block 49
```

This routine obtains 4096 / (80 + 4) = 48 memory blocks when initializing the memory pool object.

1) After thread 1 has applied for 48 memory blocks, all blocks are used up and no more can be allocated until some are released elsewhere; at this point thread 1 applies for one more block in the same way, and since it cannot be allocated, thread 1 is suspended;

2) Thread 2 then starts to release memory blocks. Each time thread 2 releases a memory block, a block becomes free and thread 1 is woken up to apply for it; after a successful application thread 1 applies again, is suspended again, and this process repeats;

3) Thread 2 continues to release the remaining memory blocks until the release is complete.