# Memory and resource usage

This file contains information about memory and resource management in Zircon,
and describes ways to examine process and system memory usage.

*** note
**TODO**(dbort): Talk about the relationship between address spaces,
[VMARs](objects/vm_address_region.md), [mappings](syscalls/vmar_map.md), and
[VMOs](objects/vm_object.md)
***

[TOC]

## Userspace memory

Which processes are using all of the memory?

### Dump total process memory usage

Use the `ps` tool:

```
$ ps
TASK           PSS PRIVATE  SHARED NAME
j:1028       32.9M   32.8M         root
  p:1043   1386.3k   1384k     28k bin/devmgr
  j:1082     30.0M   30.0M         zircon-drivers
    p:1209  774.3k    772k     28k /boot/bin/acpisvc
    p:1565  250.3k    248k     28k devhost:root
    p:1619  654.3k    652k     28k devhost:misc
    p:1688  258.3k    256k     28k devhost:platform
    p:1867 3878.3k   3876k     28k devhost:pci#1:1234:1111
    p:1916   24.4M   24.4M     28k devhost:pci#3:8086:2922
  j:1103   1475.7k   1464k         zircon-services
    p:1104  298.3k    296k     28k crashlogger
    p:1290  242.3k    240k     28k netsvc
    p:2115  362.3k    360k     28k sh:console
    p:2334  266.3k    264k     28k sh:vc
    p:2441  306.3k    304k     28k /boot/bin/ps
TASK           PSS PRIVATE  SHARED NAME
```

**PSS** (proportional set size) is an estimate, in bytes, of how much mapped
physical memory the process consumes. Its value is `PRIVATE + (SHARED /
sharing-ratio)`, where `sharing-ratio` is based on the number of processes that
share each of the pages in this process.

The intent is that, e.g., if four processes share a single page, 1/4 of that
page's bytes is included in each of the four processes' `PSS`. If two processes
share a different page, then each gets 1/2 of that page's bytes.

**PRIVATE** is the number of bytes that are mapped only by this process. I.e.,
no other process maps this memory. Note that this does not account for private
VMOs that are not mapped.

**SHARED** is the number of bytes that are mapped by this process and at least
one other process. Note that this does not account for shared VMOs that are not
mapped. It also does not indicate how many processes share the memory: it could
be 2, it could be 50.
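
For example, with hypothetical numbers: suppose a process has 400k of private
resident memory, one resident 4k page shared with three other processes (four
sharers total), and one resident 4k page shared with one other process (two
sharers). Then:

```
PRIVATE = 400k
SHARED  = 4k + 4k            = 8k
PSS     = 400k + 4k/4 + 4k/2 = 403k
```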

### Visualize memory usage

If you have a Fuchsia build, you can use treemap to visualize memory usage by
the system.

 1. On your host machine, run the following command from the root of your
    Fuchsia checkout:

    ```./scripts/fx shell memgraph -vt | ./scripts/memory/treemap.py > mem.html```
 2. Open `mem.html` in a browser.

The `memgraph` tool generates a JSON description of system task and memory
information, which is then parsed by the `treemap.py` script. The `-vt` flags
tell `memgraph` to include VMOs and threads in the output.
### Dump a process's detailed memory maps

If you want to see why a specific process uses so much memory, you can run the
`vmaps` tool on its koid (the koid is the ID that `ps` prints) to see what it
has mapped into memory.

```
$ vmaps help
Usage: vmaps <process-koid>

Dumps a process's memory maps to stdout.

First column:
  "/A" -- Process address space
  "/R" -- Root VMAR
  "R"  -- VMAR (R for Region)
  "M"  -- Mapping

  Indentation indicates parent/child relationship.
```

Column tags:

-   `:sz`: The virtual size of the entry, in bytes. Not all pages are
    necessarily backed by physical memory.
-   `:res`: The amount of memory "resident" in the entry, in bytes; i.e., the
    amount of physical memory that backs the entry. This memory may be private
    (only accessible by this process) or shared by multiple processes.
-   `:vmo`: The `koid` of the VMO mapped into this region.

```
$ vmaps 2470
/A ________01000000-00007ffffffff000    128.0T:sz                    'proc:2470'
/R ________01000000-00007ffffffff000    128.0T:sz                    'root'
...
# This 'R' region is a dynamic library. The r-x section is .text, the r--
# section is .rodata, and the rw- section is .data + .bss.
R  00000187bc867000-00000187bc881000      104k:sz                    'useralloc'
 M 00000187bc867000-00000187bc87d000 r-x   88k:sz   0B:res  2535:vmo 'libfdio.so'
 M 00000187bc87e000-00000187bc87f000 r--    4k:sz   4k:res  2537:vmo 'libfdio.so'
 M 00000187bc87f000-00000187bc881000 rw-    8k:sz   8k:res  2537:vmo 'libfdio.so'
...
# This 2MB anonymous mapping is probably part of the heap.
M  0000246812b91000-0000246812d91000 rw-    2M:sz  76k:res  2542:vmo 'mmap-anonymous'
...
# This region looks like a stack: a big chunk of virtual space (:sz) with a
# slightly-smaller mapping inside (accounting for a 4k guard page), and only a
# small amount actually committed (:res).
R  0000358923d92000-0000358923dd3000      260k:sz                    'useralloc'
 M 0000358923d93000-0000358923dd3000 rw-  256k:sz  16k:res  2538:vmo ''
...
# The stack for the initial thread, which is allocated differently.
M  0000400cbba84000-0000400cbbac4000 rw-  256k:sz   4k:res  2513:vmo 'initial-stack'
...
# The vDSO, which only has .text and .rodata.
R  000047e1ab874000-000047e1ab87b000       28k:sz                    'useralloc'
 M 000047e1ab874000-000047e1ab87a000 r--   24k:sz  24k:res  1031:vmo 'vdso/full'
 M 000047e1ab87a000-000047e1ab87b000 r-x    4k:sz   4k:res  1031:vmo 'vdso/full'
...
# The main binary for this process.
R  000059f5c7068000-000059f5c708d000      148k:sz                    'useralloc'
 M 000059f5c7068000-000059f5c7088000 r-x  128k:sz   0B:res  2476:vmo '/boot/bin/sh'
 M 000059f5c7089000-000059f5c708b000 r--    8k:sz   8k:res  2517:vmo '/boot/bin/sh'
 M 000059f5c708b000-000059f5c708d000 rw-    8k:sz   8k:res  2517:vmo '/boot/bin/sh'
...
```
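
The gap between `:sz` and `:res` reflects on-demand commitment: pages of a
mapping are backed by physical memory only once they are touched. The sketch
below illustrates this; it is not one of the tools above, it assumes the
current `zx_vmo_create`/`zx_vmar_map` signatures (which have changed across
Zircon versions), and it omits error handling.

```c
#include <string.h>

#include <zircon/process.h>
#include <zircon/syscalls.h>

// Map a 1M VMO into this process's root VMAR, then touch only the first page.
// A `vmaps` dump of this process would show the mapping with 1M:sz, while
// :res stays at 4k until more pages are written.
void map_and_touch_one_page(void) {
  zx_handle_t vmo;
  zx_vmo_create(1u << 20, 0, &vmo);  // 1M of VMO space, no pages committed yet

  zx_vaddr_t addr;
  zx_vmar_map(zx_vmar_root_self(),
              ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
              0, vmo, 0, 1u << 20, &addr);  // contributes 1M:sz, 0B:res

  memset((void*)addr, 0xab, 4096);  // commits a single page: now 4k:res
}
```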

### Dump all VMOs associated with a process

```
vmos <pid>
```

This will also show unmapped VMOs, which neither `ps` nor `vmaps` currently
account for.

It also shows whether a given VMO is a clone, along with its parent's koid.

```
$ vmos 1118
rights  koid parent #chld #map #shr    size   alloc name
rwxmdt  1170      -     0    1    1      4k      4k stack: msg of 0x5a
r-xmdt  1031      -     2   28   14     28k     28k vdso/full
     -  1298      -     0    1    1      2M     68k jemalloc-heap
     -  1381      -     0    3    1    516k      8k self-dump-thread:0x12afe79c8b38
     -  1233   1232     1    1    1   33.6k      4k libbacktrace.so
     -  1237   1233     0    1    1      4k      4k data:libbacktrace.so
...
     -  1153   1146     1    1    1  883.2k     12k ld.so.1
     -  1158   1153     0    1    1     16k     12k data:ld.so.1
     -  1159      -     0    1    1     12k     12k bss:ld.so.1
rights  koid parent #chld #map #shr    size   alloc name
```

Columns:

-   `rights`: If the process points to the VMO via a handle, this column shows
    the rights that the handle has, zero or more of:
    -   `r`: `ZX_RIGHT_READ`
    -   `w`: `ZX_RIGHT_WRITE`
    -   `x`: `ZX_RIGHT_EXECUTE`
    -   `m`: `ZX_RIGHT_MAP`
    -   `d`: `ZX_RIGHT_DUPLICATE`
    -   `t`: `ZX_RIGHT_TRANSFER`
    -   **NOTE**: Non-handle entries will have a single '-' in this column.
-   `koid`: The koid of the VMO, if it has one. Zero otherwise. A VMO without a
    koid was created by the kernel, and has never had a userspace handle.
-   `parent`: The koid of the VMO's parent, if it's a clone.
-   `#chld`: The number of active clones (children) of the VMO.
-   `#map`: The number of times the VMO is currently mapped into VMARs.
-   `#shr`: The number of processes that map (share) the VMO.
-   `size`: The VMO's current size, in bytes.
-   `alloc`: The amount of physical memory allocated to the VMO, in bytes.
    -   **NOTE**: If this column contains the value `phys`, it means that the
        VMO points to a raw physical address range like a memory-mapped device.
        `phys` VMOs do not consume RAM.
-   `name`: The name of the VMO, or `-` if its name is empty.

To relate this back to `ps`, each VMO's mapped portions contribute to a
process's values as follows (not all of a VMO's pages are necessarily mapped):

```
PRIVATE =  #shr == 1 ? alloc : 0
SHARED  =  #shr  > 1 ? alloc : 0
PSS     =  PRIVATE + (SHARED / #shr)
```
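
For example, applying this to two rows of the `vmos 1118` output above:

```
jemalloc-heap: #shr == 1,  alloc == 68k  ->  PRIVATE += 68k, PSS += 68k
vdso/full:     #shr == 14, alloc == 28k  ->  SHARED  += 28k, PSS += 28k/14 = 2k
```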

### Dump "hidden" (unmapped and kernel) VMOs

> NOTE: This is a kernel command, and will print to the kernel console.

```
k zx vmos hidden
```

Similar to `vmos <pid>`, but dumps all VMOs in the system that are not mapped
into any process:

-   VMOs that userspace has handles to but does not map
-   VMOs that are mapped only into kernel space
-   Kernel-only, unmapped VMOs that have no handles

A `koid` value of zero means that only the kernel has a reference to that VMO.

A `#map` value of zero means that the VMO is not mapped into any address space.

**See also**: `k zx vmos all`, which dumps all VMOs in the system. **NOTE**:
It's very common for this output to be truncated because of kernel console
buffer limitations, so it's often better to combine the `k zx vmos hidden`
output with a `vmaps` dump of each user process.

### Limitations

Neither `ps` nor `vmaps` currently account for:

-   VMOs or VMO subranges that are not mapped. E.g., you could create a VMO,
    write 1G of data into it, and it won't show up here. (See the sketch
    below.)

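For instance, the pages committed by the following sketch would not be
attributed to the process by `ps` or `vmaps`, although `vmos <pid>` would list
the VMO, and the memory counts toward the system-wide `VMOs` field of
`kstats -m` described below. This is a minimal illustration assuming the
current syscall signatures (which have changed across Zircon versions); error
handling is omitted.

```c
#include <stdint.h>

#include <zircon/syscalls.h>

// Create a 1G VMO and write to it without ever mapping it. The committed
// pages consume physical memory, but because nothing is mapped, neither `ps`
// nor `vmaps` attributes that memory to this process.
void commit_unmapped_pages(void) {
  zx_handle_t vmo;
  zx_vmo_create(1ull << 30, 0, &vmo);

  char buf[4096] = {0};
  for (uint64_t off = 0; off < (1ull << 30); off += sizeof(buf)) {
    zx_vmo_write(vmo, buf, off, sizeof(buf));  // commits pages as it goes
  }
}
```
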
None of the process-dumping tools account for:

-   Multiply-mapped pages. If you create multiple mappings using the same range
    of a VMO, any committed pages of the VMO will be counted as many times as
    those pages are mapped. This could be inside the same process, or could be
    between processes if those processes share a VMO; see the sketch after this
    list.

    Note that "multiply-mapped pages" includes copy-on-write.
-   Underlying kernel memory overhead for resources allocated by a process.
    E.g., a process could have a million handles open, and those handles consume
    kernel memory.

    You can look at process handle consumption with the `k zx ps` command; run
    `k zx ps help` for a description of its columns.
-   Copy-on-write (COW) cloned VMOs. The clean (non-dirty, non-copied) pages of
    a clone will not count towards "shared" for a process that maps the clone,
    and those same pages may mistakenly count towards "private" of a process
    that maps the parent (cloned) VMO.

    TODO(dbort): Fix this; the tools were written before COW clones existed.

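As a rough illustration of the multiply-mapped case (again assuming the current
syscall signatures and omitting error handling): mapping the same VMO range
twice makes its committed pages count twice in `vmaps` and `ps`, even though
only one set of physical pages exists.

```c
#include <string.h>

#include <zircon/process.h>
#include <zircon/syscalls.h>

// Map the same 64k VMO twice in this process and commit all of its pages.
// Only 64k of physical memory is in use, but each mapping reports the pages
// as resident, so the process-dumping tools attribute 128k to this process.
void double_count_example(void) {
  zx_handle_t vmo;
  zx_vmo_create(64 * 1024, 0, &vmo);

  zx_vaddr_t addr1, addr2;
  zx_vmar_map(zx_vmar_root_self(), ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
              0, vmo, 0, 64 * 1024, &addr1);
  zx_vmar_map(zx_vmar_root_self(), ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
              0, vmo, 0, 64 * 1024, &addr2);

  memset((void*)addr1, 0xab, 64 * 1024);  // commit the pages via one mapping
}
```
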
## Kernel memory

### Dump system memory arenas and kernel heap usage

Running `kstats -m` will continuously dump information about physical memory
usage and availability.

```
$ kstats -m
--- 2017-06-07T05:51:08.021Z ---
     total       free       VMOs      kheap      kfree      wired        mmu
   2046.9M    1943.8M      20.7M       1.1M       0.9M      72.6M       7.8M

--- 2017-06-07T05:51:09.021Z ---
...
```

Fields:

-   `2017-06-07T05:51:08.021Z`: Timestamp of when the stats were collected, as
    an ISO 8601 string.
-   `total`: The total amount of physical memory available to the system.
-   `free`: The amount of unallocated memory.
-   `VMOs`: The amount of memory committed to VMOs, both kernel and user. A
    superset of all userspace memory. Does not include certain VMOs that fall
    under `wired`.
-   `kheap`: The amount of kernel heap memory marked as allocated.
-   `kfree`: The amount of kernel heap memory marked as free.
-   `wired`: The amount of memory reserved by and mapped into the kernel for
    reasons not covered by the other fields. Typically for read-only data like
    the ram disk and kernel image, and for early-boot dynamic memory.
-   `mmu`: The amount of memory used for architecture-specific MMU metadata
    like page tables.
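
In the sample above, the remaining fields account for all of `total`:

```
free + VMOs + kheap + kfree + wired + mmu
  = 1943.8M + 20.7M + 1.1M + 0.9M + 72.6M + 7.8M
  = 2046.9M
  = total
```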

### Dump the kernel address space

> NOTE: This is a kernel command, and will print to the kernel console.

```
k zx asd kernel
```

Dumps the kernel's VMAR/mapping/VMO hierarchy, similar to the `vmaps` tool for
user processes.

```
$ k zx asd kernel
as 0xffffffff80252b20 [0xffffff8000000000 0xffffffffffffffff] sz 0x8000000000 fl 0x1 ref 71 'kernel'
  vmar 0xffffffff802529a0 [0xffffff8000000000 0xffffffffffffffff] sz 0x8000000000 ref 1 'root'
    map 0xffffff80015f89a0 [0xffffff8000000000 0xffffff8fffffffff] sz 0x1000000000 mmufl 0x18 vmo 0xffffff80015f8890/k0 off 0 pages 0 ref 1 ''
      vmo 0xffffff80015f8890/k0 size 0 pages 0 ref 1 parent k0
    map 0xffffff80015f8b30 [0xffffff9000000000 0xffffff9000000fff] sz 0x1000 mmufl 0x18 vmo 0xffffff80015f8a40/k0 off 0 pages 0 ref 1 ''
      object 0xffffff80015f8a40 base 0x7ffe2000 size 0x1000 ref 1
    map 0xffffff80015f8cc0 [0xffffff9000001000 0xffffff9000001fff] sz 0x1000 mmufl 0x1a vmo 0xffffff80015f8bd0/k0 off 0 pages 0 ref 1 ''
      object 0xffffff80015f8bd0 base 0xfed00000 size 0x1000 ref 1
...
```