Lines Matching refs:cgroup

11 conventions of cgroup v2.  It describes all userland-visible aspects
12 of cgroup including core and specific controller behaviors. All
14 v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
20 1-2. What is cgroup?
70 5.9-1 Miscellaneous cgroup Interface Files
75 5-N-1. CPU controller root cgroup process behaviour
76 5-N-2. IO controller root cgroup process behaviour
100 "cgroup" stands for "control group" and is never capitalized. The
102 qualifier as in "cgroup controllers". When explicitly referring to
106 What is cgroup?
109 cgroup is a mechanism to organize processes hierarchically and
113 cgroup is largely composed of two parts - the core and controllers.
114 cgroup core is primarily responsible for hierarchically organizing
115 processes. A cgroup controller is usually responsible for
121 to one and only one cgroup. All threads of a process belong to the
122 same cgroup. On creation, all processes are put in the cgroup that
124 to another cgroup. Migration of a process doesn't affect already
128 disabled selectively on a cgroup. All controller behaviors are
129 hierarchical - if a controller is enabled on a cgroup, it affects all
131 sub-hierarchy of the cgroup. When a controller is enabled on a nested
132 cgroup, it always restricts the resource distribution further. The
143 Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
156 is no longer referenced in its current hierarchy. Because per-cgroup
173 automount the v1 cgroup filesystem and so hijack all controllers
178 cgroup v2 currently supports the following mount options.
181 Consider cgroup namespaces as delegation boundaries. This
188 Reduce the latencies of dynamic cgroup modifications such as
191 The static usage pattern of creating a cgroup, enabling
196 Only populate memory.events with data for the current cgroup,
220 Initially, only the root cgroup exists to which all processes belong.
221 A child cgroup can be created by creating a sub-directory::
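  # mkdir $CGROUP_NAME    # sketch; $CGROUP_NAME is a placeholder directory name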
225 A given cgroup may have multiple child cgroups forming a tree
226 structure. Each cgroup has a read-writable interface file
227 "cgroup.procs". When read, it lists the PIDs of all processes which
228 belong to the cgroup one-per-line. The PIDs are not ordered and the
230 another cgroup and then back or the PID got recycled while reading.
232 A process can be migrated into a cgroup by writing its PID to the
233 target cgroup's "cgroup.procs" file. Only one process can be migrated
239 cgroup that the forking process belongs to at the time of the
240 operation. After exit, a process stays associated with the cgroup
242 zombie process does not appear in "cgroup.procs" and thus can't be
243 moved to another cgroup.
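A minimal sketch of the operations above, assuming cgroup2 is mounted at /sys/fs/cgroup and $PID is the process being migrated (both placeholders)::

  # mkdir /sys/fs/cgroup/test-cgroup
  # echo $PID > /sys/fs/cgroup/test-cgroup/cgroup.procs
  # cat /proc/$PID/cgroup
  0::/test-cgroup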
245 A cgroup which doesn't have any children or live processes can be
246 destroyed by removing the directory. Note that a cgroup which doesn't
252 "/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
253 cgroup is in use in the system, this file may contain multiple lines,
254 one for each hierarchy. The entry for cgroup v2 is always in the
257 # cat /proc/842/cgroup
259 0::/test-cgroup/test-cgroup-nested
261 If the process becomes a zombie and the cgroup it was associated with
264 # cat /proc/842/cgroup
266 0::/test-cgroup/test-cgroup-nested (deleted)
272 cgroup v2 supports thread granularity for a subset of controllers to
275 process belong to the same cgroup, which also serves as the resource
283 Marking a cgroup threaded makes it join the resource domain of its
284 parent as a threaded cgroup. The parent may be another threaded
285 cgroup whose resource domain is further up in the hierarchy. The root
295 As the threaded domain cgroup hosts all the domain resource
299 root cgroup is not subject to the "no internal process" constraint, it can
302 The current operation mode or type of the cgroup is shown in the
303 "cgroup.type" file which indicates whether the cgroup is a normal
305 or a threaded cgroup.
307 On creation, a cgroup is always a domain cgroup and can be made
308 threaded by writing "threaded" to the "cgroup.type" file. The
311 # echo threaded > cgroup.type
313 Once threaded, the cgroup can't be made a domain again. To enable the
316 - As the cgroup will join the parent's resource domain. The parent
317 must either be a valid (threaded) domain or a threaded cgroup.
323 Topology-wise, a cgroup can be in an invalid state. Please consider
330 threaded cgroup. "cgroup.type" file will report "domain invalid" in
334 A domain cgroup is turned into a threaded domain when one of its child
335 cgroups becomes threaded or threaded controllers are enabled in the
336 "cgroup.subtree_control" file while there are processes in the cgroup.
340 When read, "cgroup.threads" contains the list of the thread IDs of all
341 threads in the cgroup. Except that the operations are per-thread
342 instead of per-process, "cgroup.threads" has the same format and
343 behaves the same way as "cgroup.procs". While "cgroup.threads" can be
344 written to in any cgroup, as it can only move threads inside the same
348 The threaded domain cgroup serves as the resource domain for the whole
350 all the processes are considered to be in the threaded domain cgroup.
351 "cgroup.procs" in a threaded domain cgroup contains the PIDs of all
353 However, "cgroup.procs" can be written to from anywhere in the subtree
354 to migrate all threads of the matching process to the cgroup.
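A minimal sketch of a threaded subtree, assuming a domain cgroup $DOMAIN whose processes already live in its resource domain, a child directory $DOMAIN/worker, and a thread ID $TID (all placeholders)::

  # mkdir $DOMAIN/worker
  # echo threaded > $DOMAIN/worker/cgroup.type
  # echo $TID > $DOMAIN/worker/cgroup.threads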
359 threads in the cgroup and its descendants. All consumptions which
360 aren't tied to a specific thread belong to the threaded domain cgroup.
364 between threads in a non-leaf cgroup and its child cgroups. Each
371 Each non-root cgroup has a "cgroup.events" file which contains
372 "populated" field indicating whether the cgroup's sub-hierarchy has
374 the cgroup and its descendants; otherwise, 1. poll and [id]notify
380 in each cgroup::
387 file modified events will be generated on the "cgroup.events" files of
397 Each cgroup has a "cgroup.controllers" file which lists all
398 controllers available for the cgroup to enable::
400 # cat cgroup.controllers
404 disabled by writing to the "cgroup.subtree_control" file::
406 # echo "+cpu +memory -io" > cgroup.subtree_control
408 Only controllers which are listed in "cgroup.controllers" can be
413 Enabling a controller in a cgroup indicates that the distribution of
427 the cgroup's children, enabling it creates the controller's interface
433 "cgroup." are owned by the parent rather than the cgroup itself.
439 Resources are distributed top-down and a cgroup can further distribute
441 parent. This means that all non-root "cgroup.subtree_control" files
443 "cgroup.subtree_control" file. A controller can be enabled only if
454 controllers enabled in their "cgroup.subtree_control" files.
461 The root cgroup is exempt from this restriction. Root contains
464 controllers. How resource consumption in the root cgroup is governed
470 enabled controller in the cgroup's "cgroup.subtree_control". This is
472 populated cgroup. To control resource distribution of a cgroup, the
473 cgroup must create children and transfer all its processes to the
474 children before enabling controllers in its "cgroup.subtree_control"
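A sketch of this pattern, assuming the cgroup currently contains processes and the cpu and memory controllers are listed in its "cgroup.controllers" (the "leaf" name is a placeholder)::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+cpu +memory" > cgroup.subtree_control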
484 A cgroup can be delegated in two ways. First, to a less privileged
485 user by granting write access of the directory and its "cgroup.procs",
486 "cgroup.threads" and "cgroup.subtree_control" files to the user.
488 cgroup namespace on namespace creation.
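A sketch of the first delegation method, assuming a delegatee user "u1" and a delegated sub-hierarchy at /sys/fs/cgroup/u1-slice (both hypothetical)::

  # chown u1 /sys/fs/cgroup/u1-slice
  # chown u1 /sys/fs/cgroup/u1-slice/cgroup.procs
  # chown u1 /sys/fs/cgroup/u1-slice/cgroup.threads
  # chown u1 /sys/fs/cgroup/u1-slice/cgroup.subtree_control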
494 kernel rejects writes to all files other than "cgroup.procs" and
495 "cgroup.subtree_control" on a namespace root from inside the
506 Currently, cgroup doesn't impose any restrictions on the number of
519 to migrate a target process into a cgroup by writing its PID to the
520 "cgroup.procs" file.
522 - The writer must have write access to the "cgroup.procs" file.
524 - The writer must have write access to the "cgroup.procs" file of the
536 ~ cgroup ~ \ C01
541 currently in C10 into "C00/cgroup.procs". U0 has write access to the
542 file; however, the common ancestor of the source cgroup C10 and the
543 destination cgroup C00 is above the points of delegation and U0 would
544 not have write access to its "cgroup.procs" files and thus the write
567 should be assigned to a cgroup according to the system's logical and
576 Interface files for a cgroup and its children cgroups occupy the same
580 All cgroup core interface files are prefixed with "cgroup." and each
588 cgroup doesn't do anything to prevent name collisions and it's the
595 cgroup controllers implement several resource distribution schemes
637 "io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
645 A cgroup is protected up to the configured amount of the resource
666 A cgroup is exclusively allocated a certain amount of a finite
730 - The root cgroup should be exempt from resource control and thus
768 # cat cgroup-example-interface-file
774 # echo 125 > cgroup-example-interface-file
778 # echo "default 125" > cgroup-example-interface-file
782 # echo "8:16 170" > cgroup-example-interface-file
786 # echo "8:0 default" > cgroup-example-interface-file
787 # cat cgroup-example-interface-file
800 All cgroup core files are prefixed with "cgroup."
802 cgroup.type
806 When read, it indicates the current type of the cgroup, which
809 - "domain" : A normal valid domain cgroup.
811 - "domain threaded" : A threaded domain cgroup which is
814 - "domain invalid" : A cgroup which is in an invalid state.
816 be allowed to become a threaded cgroup.
818 - "threaded" : A threaded cgroup which is a member of a
821 A cgroup can be turned into a threaded cgroup by writing
824 cgroup.procs
829 the cgroup one-per-line. The PIDs are not ordered and the
831 to another cgroup and then back or the PID got recycled while
835 the PID to the cgroup. The writer should match all of the
838 - It must have write access to the "cgroup.procs" file.
840 - It must have write access to the "cgroup.procs" file of the
846 In a threaded cgroup, reading this file fails with EOPNOTSUPP
848 supported and moves every thread of the process to the cgroup.
850 cgroup.threads
855 the cgroup one-per-line. The TIDs are not ordered and the
857 another cgroup and then back or the TID got recycled while
861 TID to the cgroup. The writer should match all of the
864 - It must have write access to the "cgroup.threads" file.
866 - The cgroup that the thread is currently in must be in the
867 same resource domain as the destination cgroup.
869 - It must have write access to the "cgroup.procs" file of the
875 cgroup.controllers
880 the cgroup. The controllers are not ordered.
882 cgroup.subtree_control
888 cgroup to its children.
897 cgroup.events
904 1 if the cgroup or its descendants contains any live
907 1 if the cgroup is frozen; otherwise, 0.
909 cgroup.max.descendants
914 an attempt to create a new cgroup in the hierarchy will fail.
916 cgroup.max.depth
919 Maximum allowed descent depth below the current cgroup.
921 an attempt to create a new child cgroup will fail.
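A sketch of capping the shape of a sub-hierarchy with these two files (the values are arbitrary examples)::

  # echo 100 > cgroup.max.descendants
  # echo 5 > cgroup.max.depth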
923 cgroup.stat
930 Total number of dying descendant cgroups. A cgroup becomes
931 dying after being deleted by a user. The cgroup will remain
935 A process can't enter a dying cgroup under any circumstances,
936 and a dying cgroup can't revive.
938 A dying cgroup can consume system resources not exceeding
939 the limits that were active at the moment of cgroup deletion.
941 cgroup.freeze
945 Writing "1" to the file causes freezing of the cgroup and all
947 be stopped and will not run until the cgroup is explicitly
948 unfrozen. Freezing of the cgroup may take some time; when this action
949 is completed, the "frozen" value in the cgroup.events control file
953 A cgroup can be frozen either by its own settings, or by settings
955 cgroup will remain frozen.
957 Processes in the frozen cgroup can be killed by a fatal signal.
958 They also can enter and leave a frozen cgroup: either by an explicit
959 move by a user, or if freezing of the cgroup races with fork().
960 If a process is moved to a frozen cgroup, it stops. If a process is
961 moved out of a frozen cgroup, it becomes running.
963 Frozen status of a cgroup doesn't affect any cgroup tree operations:
964 it's possible to delete a frozen (and empty) cgroup, as well as
967 cgroup.kill
971 Writing "1" to the file causes the cgroup and all descendant cgroups to
972 be killed. This means that all processes located in the affected cgroup
975 Killing a cgroup tree will deal with concurrent forks appropriately and
978 In a threaded cgroup, writing this file fails with EOPNOTSUPP as
982 cgroup.pressure
986 Writing "0" to the file will disable the cgroup PSI accounting.
987 Writing "1" to the file will re-enable the cgroup PSI accounting.
990 accounting in a cgroup does not affect PSI accounting in descendants
994 each cgroup separately and aggregates it at each level of the hierarchy.
1027 the root cgroup. Be aware that system management software may already
1029 process, and these processes may need to be moved to the root cgroup
1136 cgroup are tracked so that the total memory consumption can be
1160 The total amount of memory currently being used by the cgroup
1167 Hard memory protection. If the memory usage of a cgroup
1168 is within its effective min boundary, the cgroup's memory
1178 (child cgroup or cgroups are requiring more protected memory
1179 than parent will allow), then each child cgroup will get
1186 If a memory cgroup is not populated with processes,
1194 cgroup is within its effective low boundary, the cgroup's
1204 (child cgroup or cgroups are requiring more protected memory
1205 than parent will allow), then each child cgroup will get
1217 control memory usage of a cgroup. If a cgroup's usage goes
1218 over the high boundary, the processes of the cgroup are
1229 mechanism. If a cgroup's memory usage reaches this limit and
1230 can't be reduced, the OOM killer is invoked in the cgroup.
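A sketch combining the throttling boundary with the hard limit, assuming the conventional "memory.high" and "memory.max" interface files and arbitrary sizes::

  # echo 1G > memory.high
  # echo 2G > memory.max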
1249 target cgroup.
1263 the target cgroup. If fewer bytes are reclaimed than the
1268 memory cgroup. Therefore socket memory balancing triggered by
1277 The max memory usage recorded for the cgroup and its
1278 descendants since the creation of the cgroup.
1284 Determines whether the cgroup should be treated as
1286 all tasks belonging to the cgroup or to its descendants
1287 (if the memory cgroup is not a leaf cgroup) are killed
1294 If the OOM killer is invoked in a cgroup, it's not going
1295 to kill any tasks outside of this cgroup, regardless
1306 hierarchy. For the local events at the cgroup level see
1310 The number of times the cgroup is reclaimed due to
1316 The number of times processes of the cgroup are
1319 cgroup whose memory usage is capped by the high limit
1324 The number of times the cgroup's memory usage was
1326 fails to bring it down, the cgroup goes to OOM state.
1329 The number of times the cgroup's memory usage was
1337 The number of processes belonging to this cgroup
1345 to the cgroup i.e. not hierarchical. The file modified event
1351 This breaks down the cgroup's memory footprint into different
1540 This breaks down the cgroup's memory footprint into different
1566 The total amount of swap currently being used by the cgroup
1573 Swap usage throttle limit. If a cgroup's swap usage exceeds
1577 This limit marks a point of no return for the cgroup. It is NOT
1580 prohibits swapping past a set amount, but lets the cgroup
1589 Swap usage hard limit. If a cgroup's swap usage reaches this
1590 limit, anonymous memory of the cgroup will not be swapped out.
1599 The number of times the cgroup's swap usage was over
1603 The number of times the cgroup's swap usage was about
1628 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1648 throttles the offending cgroup, a management agent has ample
1652 Determining whether a cgroup has enough memory is not trivial as
1666 A memory area is charged to the cgroup which instantiated it and stays
1667 charged to the cgroup until the area is released. Migrating a process
1668 to a different cgroup doesn't move the memory usages that it
1669 instantiated while in the previous cgroup to the new cgroup.
1672 To which cgroup the area will be charged is indeterminate; however,
1673 over time, the memory area is likely to end up in a cgroup which has
1676 If a cgroup sweeps a considerable amount of memory which is expected
1717 cgroup.
1772 cgroup.
1809 If needed, tools/cgroup/iocost_coef_gen.py can be used to
1820 the cgroup can use in relation to its siblings.
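A sketch of a weight setting, assuming the "io.weight" interface file and the default/per-device syntax shown in the generic examples earlier (device numbers are arbitrary)::

  # echo "default 200" > io.weight
  # echo "8:16 300" > io.weight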
1892 per-cgroup dirty memory states are examined and the more restrictive
1895 cgroup writeback requires explicit support from the underlying
1896 filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
1898 attributed to the root cgroup.
1901 which affects how cgroup ownership is tracked. Memory is tracked per
1903 inode is assigned to a cgroup and all IO requests to write dirty pages
1904 from the inode are attributed to that cgroup.
1906 As cgroup ownership for memory is tracked per page, there can be pages
1910 cgroup becomes the majority over a certain period of time, switches
1911 the ownership of the inode to that cgroup.
1914 mostly dirtied by a single cgroup even when the main writing cgroup
1924 The sysctl knobs which affect writeback behavior are applied to cgroup
1928 These ratios apply the same to cgroup writeback with the
1933 For cgroup writeback, this is calculated into ratio against
1941 This is a cgroup v2 controller for IO workload protection. You provide a group
2020 A single attribute controls the behavior of the I/O priority cgroup policy,
2074 The process number controller is used to allow a cgroup to stop any
2078 The number of tasks in a cgroup can be exhausted in ways which other
2099 The number of processes currently in the cgroup and its
2102 Organisational operations are not blocked by cgroup policies, so it is
2105 processes to the cgroup such that pids.current is larger than
2106 pids.max. However, it is not possible to violate a cgroup PID policy
2108 of a new process would cause a cgroup policy to be violated.
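A minimal sketch of a process-number limit, assuming a cgroup with the pids controller enabled (the value is arbitrary)::

  # echo 10 > pids.max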
2116 specified in the cpuset interface files in a task's current cgroup.
2134 cgroup. The actual list of CPUs to be granted, however, is
2144 An empty value indicates that the cgroup is using the same
2145 setting as the nearest cgroup ancestor with a non-empty
2156 cgroup by its parent. These CPUs are allowed to be used by
2157 tasks within the current cgroup.
2160 all the CPUs from the parent cgroup that can be available to
2161 be used by this cgroup. Otherwise, it should be a subset of
2173 this cgroup. The actual list of memory nodes granted, however,
2183 An empty value indicates that the cgroup is using the same
2184 setting as the nearest cgroup ancestor with a non-empty
2192 tasks within the cgroup to be migrated to the designated nodes if
2207 this cgroup by its parent. These memory nodes are allowed to
2208 be used by tasks within the current cgroup.
2211 parent cgroup that will be available to be used by this cgroup.
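A sketch of constraining CPUs and memory nodes, assuming the conventional "cpuset.cpus" and "cpuset.mems" interface files and an arbitrary topology::

  # echo "0-3" > cpuset.cpus
  # echo "0" > cpuset.mems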
2220 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2231 The root cgroup is always a partition root and its state
2235 When set to "root", the current cgroup is the root of a new
2278 2) The parent cgroup is a valid partition root.
2286 Note that a task cannot be moved to a cgroup with empty
2322 on top of cgroup BPF. To control access to device files, a user may
2370 It exists for all cgroups except the root.
2388 cgroups except the root.
2392 The default value is "max". It exists for all cgroups except the root.
2402 are local to the cgroup i.e. not hierarchical. The file modified event
2407 hugetlb pages of <hugepagesize> in this cgroup. Only active in
2413 The Miscellaneous cgroup provides the resource limiting and tracking
2415 cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC config
2420 in the kernel/cgroup/misc.c file. The provider of the resource must set its
2433 A read-only flat-keyed file shown only in the root cgroup. It shows
2443 the current usage of the resources in the cgroup and its children.::
2451 maximum usage of the resources in the cgroup and its children.::
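  # cat misc.max                 # sketch; "res_a" is a hypothetical resource name
  res_a max
  # echo "res_a 1" > misc.max    # limit res_a to 1 unit in this cgroup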
2475 The number of times the cgroup's resource usage was
2481 A miscellaneous scalar resource is charged to the cgroup in which it is used
2482 first, and stays charged to that cgroup until that resource is freed. Migrating
2483 a process to a different cgroup does not move the charge to the destination
2484 cgroup where the process has moved.
2494 always be filtered by cgroup v2 path. The controller can still be
2505 CPU controller root cgroup process behaviour
2508 When distributing CPU cycles in the root cgroup each thread in this
2509 cgroup is treated as if it was hosted in a separate child cgroup of the
2510 root cgroup. This child cgroup's weight is dependent on its thread nice
2518 IO controller root cgroup process behaviour
2521 Root cgroup processes are hosted in an implicit leaf child node.
2523 account as if it was a normal child cgroup of the root cgroup with a
2533 cgroup namespace provides a mechanism to virtualize the view of the
2534 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2535 flag can be used with clone(2) and unshare(2) to create a new cgroup
2536 namespace. The process running inside the cgroup namespace will have
2537 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2538 cgroupns root is the cgroup of the process at the time of creation of
2539 the cgroup namespace.
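A minimal sketch of creating a cgroup namespace from a shell, assuming util-linux unshare with cgroup-namespace support::

  # unshare --cgroup /bin/bash
  # cat /proc/self/cgroup
  0::/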
2541 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2542 complete path of the cgroup of a process. In a container setup where
2544 "/proc/$PID/cgroup" file may leak potential system level information
2547 # cat /proc/self/cgroup
2551 and undesirable to expose to the isolated processes. cgroup namespace
2553 creating a cgroup namespace, one would see::
2555 # ls -l /proc/self/ns/cgroup
2556 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2557 # cat /proc/self/cgroup
2562 # ls -l /proc/self/ns/cgroup
2563 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2564 # cat /proc/self/cgroup
2567 When some thread from a multi-threaded process unshares its cgroup
2572 A cgroup namespace is alive as long as there are processes inside or
2573 mounts pinning it. When the last usage goes away, the cgroup
2581 The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2583 /batchjobs/container_id1 cgroup calls unshare, cgroup
2585 init_cgroup_ns, this is the real root ('/') cgroup.
2587 The cgroupns root cgroup does not change even if the namespace creator
2588 process later moves to a different cgroup::
2590 # ~/unshare -c # unshare cgroupns in some cgroup
2591 # cat /proc/self/cgroup
2594 # echo 0 > sub_cgrp_1/cgroup.procs
2595 # cat /proc/self/cgroup
2598 Each process gets its namespace-specific view of "/proc/$PID/cgroup"
2600 Processes running inside the cgroup namespace will be able to see
2601 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
2606 # echo 7353 > sub_cgrp_1/cgroup.procs
2607 # cat /proc/7353/cgroup
2610 From the initial cgroup namespace, the real cgroup path will be
2613 $ cat /proc/7353/cgroup
2616 From a sibling cgroup namespace (that is, a namespace rooted at a
2617 different cgroup), the cgroup path relative to its own cgroup
2618 namespace root will be shown. For instance, if PID 7353's cgroup
2621 # cat /proc/7353/cgroup
2625 it's relative to the cgroup namespace root of the caller.
2631 Processes inside a cgroup namespace can move into and out of the
2637 # cat /proc/7353/cgroup
2639 # echo 7353 > batchjobs/container_id2/cgroup.procs
2640 # cat /proc/7353/cgroup
2643 Note that this kind of setup is not encouraged. A task inside cgroup
2646 setns(2) to another cgroup namespace is allowed when:
2649 (b) the process has CAP_SYS_ADMIN against the target cgroup
2652 No implicit cgroup changes happen with attaching to another cgroup
2654 process under the target cgroup namespace root.
2660 A namespace-specific cgroup hierarchy can be mounted by a process
2661 running inside a non-init cgroup namespace::
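  # mount -t cgroup2 none $MOUNT_POINT   # sketch; $MOUNT_POINT is a placeholder mount point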
2665 This will mount the unified cgroup hierarchy with cgroupns root as the
2669 The virtualization of /proc/self/cgroup file combined with restricting
2670 the view of cgroup hierarchy by namespace-private cgroupfs mount
2671 provides a properly isolated cgroup view inside the container.
2678 where interacting with cgroup is necessary. cgroup core and
2685 A filesystem can support cgroup writeback by updating
2691 associates the bio with the inode's owner cgroup and the
2702 With writeback bios annotated, cgroup support can be enabled per
2704 selective disabling of cgroup writeback support which is helpful when
2708 wbc_init_bio() binds the specified bio to its cgroup. Depending on
2724 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2726 - "cgroup.clone_children" is removed.
2728 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
2738 cgroup v1 allowed an arbitrary number of hierarchies and each
2760 It greatly complicated cgroup core implementation but more importantly
2761 the support for multiple hierarchies restricted how cgroup could be
2765 that a thread's cgroup membership couldn't be described in finite
2791 cgroup v1 allowed threads of a process to belong to different cgroups.
2802 cgroup v1 had an ambiguously defined delegation model which got abused
2806 effectively raised cgroup to the status of a syscall-like API exposed
2809 First of all, cgroup has a fundamentally inadequate interface to be
2811 extract the path on the target hierarchy from /proc/self/cgroup,
2818 cgroup controllers implemented a number of knobs which would never be
2820 system-management pseudo filesystem. cgroup ended up with interface
2824 effectively abusing cgroup as a shortcut to implementing public APIs
2835 cgroup v1 allowed threads to be in any cgroups which created an
2836 interesting problem where threads belonging to a parent cgroup and its
2842 mapped nice levels to cgroup weights. This worked for some cases but
2851 cgroup to host the threads. The hidden leaf had its own copies of all
2867 made cgroup as a whole highly inconsistent.
2869 This clearly is a problem which needs to be addressed from cgroup core
2876 cgroup v1 grew without oversight and developed a large number of
2877 idiosyncrasies and inconsistencies. One issue on the cgroup core side
2878 was how an empty cgroup was notified - a userland helper binary was
2887 cgroup. Some controllers exposed a large amount of inconsistent
2890 There also was no consistency across controllers. When a new cgroup
2898 cgroup v2 establishes common conventions where appropriate and updates
2923 reserve. A cgroup enjoys reclaim protection when it's within its
2966 cgroup design was that global or parental pressure would always be
2975 that cgroup controllers should account and limit specific physical