/linux-6.3-rc2/Documentation/hwmon/ibmpowernv.rst
  18: 'hwmon' populates the 'sysfs' tree having attribute files, each for a given
  21: All the nodes in the DT appear under "/ibm,opal/sensors" and each valid node in
  45: each OCC. Using this attribute each OCC can be asked to
  58: each OCC. Using this attribute each OCC can be asked to
  69: each OCC. Using this attribute each OCC can be asked to
  80: each OCC. Using this attribute each OCC can be asked to

/linux-6.3-rc2/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.c
  In qlcnic_83xx_dcb_query_cee_param():
    570: struct qlcnic_dcb_param *each;    (local)
    597: each = &mbx_out.type[j];
    601: each->prio_pg_map[0] = cmd.rsp.arg[k++];
    603: each->pg_bw_map[0] = cmd.rsp.arg[k++];
    604: each->pg_bw_map[1] = cmd.rsp.arg[k++];
    605: each->pg_tsa_map[0] = cmd.rsp.arg[k++];
    606: each->pg_tsa_map[1] = cmd.rsp.arg[k++];
    607: val = each->hdr_prio_pfc_map[0];
    611: each->app[i] = cmd.rsp.arg[i + k];
  In qlcnic_dcb_fill_cee_tc_params():
    656: struct qlcnic_dcb_param *each,    (argument)
  [all …]

/linux-6.3-rc2/tools/testing/selftests/firmware/settings
  2: # 2 seconds). There are 3 test configs, each done with and without firmware
  3: # present, each with 2 "nowait" functions tested 5 times. Expected time for a
  5: # Additionally, fw_fallback may take 5 seconds for internal timeouts in each
  7: # 10 seconds for each testing config: 120 + 15 + 30

/linux-6.3-rc2/tools/perf/tests/shell/stat_bpf_counters_cgrp.sh
  15: if ! perf stat -a --bpf-counters --for-each-cgroup / true > /dev/null 2>&1; then
  18: perf --no-pager stat -a --bpf-counters --for-each-cgroup / true || true
  53: …output=$(perf stat -a --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, sleep 1 2…
  67: …output=$(perf stat -C 1 --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, taskset …

/linux-6.3-rc2/scripts/find-unused-docs.sh
  44: for each in "${files_included[@]}"; do
  45: FILES_INCLUDED[$each]="$each"

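The two find-unused-docs.sh lines above populate a Bash associative array keyed by filename, so later membership checks are a single hash lookup instead of a linear scan. A minimal self-contained sketch of that idiom; the sample file list is hypothetical (the real script gathers files referenced from Documentation/ before this loop):

```shell
#!/bin/bash
# Sketch of the associative-array idiom used in find-unused-docs.sh.
# The file names below are made up for illustration only.
declare -A FILES_INCLUDED

files_included=(drivers/foo.c drivers/bar.c drivers/foo.c)

# Index every included file by its own name; duplicates collapse.
for each in "${files_included[@]}"; do
	FILES_INCLUDED[$each]="$each"
done

# Membership test: set if and only if the key was inserted above.
if [[ ${FILES_INCLUDED[drivers/foo.c]+_} ]]; then
	echo "drivers/foo.c is included"
fi
echo "unique files: ${#FILES_INCLUDED[@]}"
```

Because the array is keyed by the filename itself, the duplicate `drivers/foo.c` entry is stored only once, which is exactly what the script relies on when it later tests whether a documentation file is referenced.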
/linux-6.3-rc2/Documentation/devicetree/bindings/phy/apm-xgene-phy.txt
  19: Two set of 3-tuple setting for each (up to 3)
  25: Two set of 3-tuple setting for each (up to 3)
  28: gain control. Two set of 3-tuple setting for each
  32: each (up to 3) supported link speed on the host.
  36: 3-tuple setting for each (up to 3) supported link
  40: 3-tuple setting for each (up to 3) supported link
  46: - apm,tx-speed : Tx operating speed. One set of 3-tuple for each

/linux-6.3-rc2/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.yaml
  14: Speed) each interfacing with 12 and 8 P2U instances respectively.
  16: each interfacing with 8, 8 and 8 P2U instances respectively.
  29: description: Should be the physical address space and length of respective each P2U instance.

/linux-6.3-rc2/Documentation/devicetree/bindings/gpio/gpio-max3191x.txt
  18: - maxim,modesel-gpios: GPIO pins to configure modesel of each chip.
  20: (if each chip is driven by a separate pin) or 1
  22: - maxim,fault-gpios: GPIO pins to read fault of each chip.
  25: - maxim,db0-gpios: GPIO pins to configure debounce of each chip.
  28: - maxim,db1-gpios: GPIO pins to configure debounce of each chip.

/linux-6.3-rc2/Documentation/filesystems/nfs/pnfs.rst
  6: reference multiple devices, each of which can reference multiple data servers.
  20: We reference the header for the inode pointing to it, across each
  22: LAYOUTCOMMIT), and for each lseg held within.
  34: nfs4_deviceid_cache). The cache itself is referenced across each
  36: the lifetime of each lseg referencing them.
  66: layout types: "files", "objects", "blocks", and "flexfiles". For each

/linux-6.3-rc2/Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt
  9: designated client devices. Again, each client device must be represented as a
  16: device is inactive. Hence, each client device can define a set of named
  35: For each client device individually, every pin state is assigned an integer
  36: ID. These numbers start at 0, and are contiguous. For each state ID, a unique
  47: pinctrl-0: List of phandles, each pointing at a pin configuration
  52: from multiple nodes for a single pin controller, each
  65: pinctrl-1: List of phandles, each pointing at a pin configuration
  68: pinctrl-n: List of phandles, each pointing at a pin configuration

/linux-6.3-rc2/Documentation/mm/damon/design.rst
  96: Below four sections describe each of the DAMON core mechanisms and the five
  108: access to each page per ``sampling interval`` and aggregates the results. In
  109: other words, counts the number of the accesses to each page. After each
  135: one page in the region is required to be checked. Thus, for each ``sampling
  136: interval``, DAMON randomly picks one page in each region, waits for one
  153: adaptively merges and splits each region based on their access frequency.
  155: For each ``aggregation interval``, it compares the access frequencies of
  157: after it reports and clears the aggregated access frequency of each region, it
  158: splits each region into two or three regions if the total number of regions
  175: abstracted monitoring target memory area only for each of a user-specified time

/linux-6.3-rc2/Documentation/userspace-api/media/v4l/ext-ctrls-detect.rst
  37: - The image is divided into a grid, each cell with its own motion
  41: - The image is divided into a grid, each cell with its own region
  55: Sets the motion detection thresholds for each cell in the grid. To
  61: Sets the motion detection region value for each cell in the grid. To

/linux-6.3-rc2/Documentation/bpf/map_cgroup_storage.rst
  127: per-CPU variant will have different memory regions for each CPU for each
  128: storage. The non-per-CPU will have the same memory region for each storage.
  133: multiple attach types, and each attach creates a fresh zeroed storage. The
  136: There is a one-to-one association between the map of each type (per-CPU and
  138: each map can only be used by one BPF program and each BPF program can only use
  139: one storage map of each type. Because of map can only be used by one BPF
  153: However, the BPF program can still only associate with one map of each type

/linux-6.3-rc2/Documentation/devicetree/bindings/powerpc/4xx/cpm.txt
  16: - unused-units : specifier consist of one cell. For each
  20: - idle-doze : specifier consist of one cell. For each
  24: - standby : specifier consist of one cell. For each
  28: - suspend : specifier consist of one cell. For each

/linux-6.3-rc2/Documentation/admin-guide/mm/damon/usage.rst
  65: figure, parents-children relations are represented with indentations, each
  134: for each DAMON-based operation scheme of the kdamond. For details of the
  140: each DAMON-based operation scheme of the kdamond. For details of the
  213: to ``N-1``. Each directory represents each monitoring target.
  218: In each target directory, one file (``pid_target``) and one directory
  347: To allow easy activation and deactivation of each scheme based on system
  432: exposing detailed information about each of the memory region that the
  445: In each region directory, you will find four files (``start``, ``end``,
  610: the file, each of the schemes should be represented in each line in below
  704: will show each scheme you entered in each line, and the five numbers for the
  [all …]

/linux-6.3-rc2/tools/power/cpupower/TODO
  17: -> Bind forked process to each cpu.
  19: each cpu.
  22: each cpu.

/linux-6.3-rc2/Documentation/ABI/testing/procfs-smaps_rollup
  7: except instead of an entry for each VMA in a process,
  9: for which each field is the sum of the corresponding
  13: the sum of the Pss field of each type (anon, file, shmem).

/linux-6.3-rc2/Documentation/devicetree/bindings/interconnect/qcom,rpmh-common.yaml
  17: associated with each execution environment. Provider nodes must point to at
  18: least one RPMh device child node pertaining to their RSC and each provider
  37: Names for each of the qcom,bcm-voters specified.

/linux-6.3-rc2/Documentation/devicetree/bindings/iio/adc/aspeed,ast2600-adc.yaml
  14: • The device split into two individual engine and each contains 8 voltage
  18: • Programmable upper and lower threshold for each channels.
  19: • Interrupt when larger or less than threshold for each channels.
  20: • Support hysteresis for each channels.

/linux-6.3-rc2/Documentation/devicetree/bindings/dma/st,stm32-mdma.yaml
  13: described in the dma.txt file, using a five-cell specifier for each channel:
  24: 0x2: Source address pointer is incremented after each data transfer
  25: 0x3: Source address pointer is decremented after each data transfer
  28: 0x2: Destination address pointer is incremented after each data transfer
  29: 0x3: Destination address pointer is decremented after each data transfer

/linux-6.3-rc2/Documentation/leds/leds-qcom-lpg.rst
  16: channels. The output of each PWM channel is routed to other hardware
  19: The each PWM channel can operate with a period between 27us and 384 seconds and
  37: therefore be identical for each element in the pattern (except for the pauses
  39: transitions expected by the leds-trigger-pattern format, each entry in the
  73: mode, in which case each run through the pattern is performed by first running

/linux-6.3-rc2/Documentation/cpu-freq/cpufreq-stats.rst
  22: cpufreq-stats is a driver that provides CPU frequency statistics for each CPU.
  25: in /sysfs (<sysfs root>/devices/system/cpu/cpuX/cpufreq/stats/) for each CPU.
  65: This gives the amount of time spent in each of the frequencies supported by
  66: this CPU. The cat output will have "<frequency> <time>" pair in each line, which
  68: will have one line for each of the supported frequencies. usertime units here
  100: also contains the actual freq values for each row and column for better

/linux-6.3-rc2/Documentation/virt/acrn/io-request.rst
  14: For each User VM, there is a shared 4-KByte memory region used for I/O requests
  20: used as an array of 16 I/O request slots with each I/O request slot being 256
  27: GPA falls in a certain range. Multiple I/O clients can be associated with each
  28: User VM. There is a special client associated with each User VM, called the
  30: any other clients. The ACRN userspace acts as the default client for each User

/linux-6.3-rc2/Documentation/networking/scaling.rst
  30: applying a filter to each packet that assigns it to one of a small number
  79: that can route each interrupt to a particular CPU. The active mapping
  100: interrupts (and thus work) grows with each additional queue.
  141: RPS may enqueue packets for processing. For each received packet,
  157: can be configured for each receive queue using a sysfs file entry::
  177: receive queue is mapped to each CPU, then RPS is probably redundant
  278: for each flow: rps_dev_flow_table is a table specific to each hardware
  384: configured for each receive queue by the driver, so no additional
  411: (contention can be eliminated completely if each CPU has its own
  425: threads are not pinned to CPUs and each thread handles packets
  [all …]

/linux-6.3-rc2/Documentation/devicetree/bindings/sound/nvidia,tegra30-ahub.txt
  8: - reg : Should contain the register physical address and length for each of
  13: - clocks : Must contain an entry for each entry in clock-names.
  18: - resets : Must contain an entry for each entry in reset-names.
  47: - dmas : Must contain an entry for each entry in clock-names.