/linux-6.3-rc2/Documentation/block/

stat.rst
     29  read I/Os      requests      number of read I/Os processed
     32  read ticks     milliseconds  total wait time for read requests
     33  write I/Os     requests      number of write I/Os processed
     36  write ticks    milliseconds  total wait time for write requests
     37  in_flight      requests      number of I/Os currently in flight
     39  time_in_queue  milliseconds  total wait time for all requests
     40  discard I/Os   requests      number of discard I/Os processed
     43  discard ticks  milliseconds  total wait time for discard requests
     44  flush I/Os     requests      number of flush I/Os processed
     45  flush ticks    milliseconds  total wait time for flush requests
     [all …]

blk-mq.rst
      9  through queueing and submitting IO requests to block devices simultaneously,
     53  layer or if we want to try to merge requests. In both cases, requests will be
     58  to process those requests. However, if the hardware does not have enough
     59  resources to accept more requests, blk-mq will place requests on a temporary
     65  The block IO subsystem adds requests in the software staging queues
     73  The staging queue can be used to merge requests for adjacent sectors. For
     77  number of individual requests. This technique of merging requests is called
    113  added to a linked list (``hctx->dispatch``) of requests. Then,
    116  requests that were ready to be sent first. The number of hardware queues
    120  hardware queues to send requests for.
     [all …]

writeback_cache_control.rst
     17  a forced cache flush, and the Force Unit Access (FUA) flag for requests.
     26  guarantees that previously completed write requests are on non-volatile
     58  on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
     68  support required, the block layer completes empty REQ_PREFLUSH requests before
     70  requests that have a payload. For devices with volatile write caches the
     76  and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn. Note that
     77  REQ_PREFLUSH requests with a payload are automatically turned into a sequence
     84  and the driver must handle write requests that have the REQ_FUA bit set

/linux-6.3-rc2/Documentation/devicetree/bindings/dma/

lpc1850-dmamux.txt
     11  - dma-requests: Number of DMA requests for the mux
     15  - dma-requests: Number of DMA requests the controller can handle
     28  dma-requests = <16>;
     40  dma-requests = <64>;

fsl-imx-dma.txt
     18  - dma-requests : Number of DMA requests supported.
     19  - #dma-requests : deprecated
     34  Clients have to specify the DMA requests with phandles in a list.
     40  - dma-names: List of string identifiers for the DMA requests. For the correct

ti-dma-crossbar.txt
      9  - dma-requests: Number of DMA requests the crossbar can receive
     13  - dma-requests: Number of DMA requests the controller can handle
     43  dma-requests = <127>;
     51  dma-requests = <205>;

renesas,rzn1-dmamux.yaml
     34  dma-requests:
     39  - dma-requests
     50  dma-requests = <32>;

dma-router.yaml
     18  have more peripherals integrated with DMA requests than what the DMA
     33  dma-requests:
     49  dma-requests = <205>;

owl-dma.yaml
     42  dma-requests:
     59  - dma-requests
     76  dma-requests = <46>;

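Across these bindings, `dma-requests` on a router/mux node states how many request lines the router offers to peripherals, while the same property on the controller node states how many the controller itself can take. A hypothetical fragment combining the two, modeled on the examples quoted above (node names, compatibles, and numbers are invented):

```dts
/* Hypothetical DMA controller behind a request router; only the
 * properties relevant to dma-requests are shown. */
sdma: dma-controller@40000000 {
        compatible = "vendor,example-dma";
        #dma-cells = <1>;
        dma-requests = <32>;    /* requests the controller can handle */
};

dma_router: dma-router@40001000 {
        compatible = "vendor,example-dmamux";
        dma-masters = <&sdma>;
        #dma-cells = <1>;
        dma-requests = <205>;   /* request lines offered to peripherals */
};
```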
/linux-6.3-rc2/Documentation/virt/acrn/

io-request.rst
     14  For each User VM, there is a shared 4-KByte memory region used for I/O requests
     26  An I/O client is responsible for handling User VM I/O requests whose accessed
     29  default client, that handles all I/O requests that do not fit into the range of
     33  Below illustration shows the relationship between I/O requests shared buffer,
     34  I/O requests and I/O clients.
     84  4. Processing flow of I/O requests
     91  c. The upcall handler schedules a worker to dispatch I/O requests.
     92  d. The worker looks for the PENDING I/O requests, assigns them to different
     95  e. The notified client handles the assigned I/O requests.
     96  f. The HSM updates I/O requests states to COMPLETE and notifies the hypervisor

/linux-6.3-rc2/drivers/gpu/drm/i915/gt/

intel_gt_requests.c
     21  list_for_each_entry_safe(rq, rn, &tl->requests, link)  in retire_requests()
     31  return !list_empty(&engine->kernel_context->timeline->requests);  in engine_active()
    208  container_of(work, typeof(*gt), requests.retire_work.work);  in retire_work_handler()
    210  schedule_delayed_work(&gt->requests.retire_work,  in retire_work_handler()
    217  INIT_DELAYED_WORK(&gt->requests.retire_work, retire_work_handler);  in intel_gt_init_requests()
    222  cancel_delayed_work(&gt->requests.retire_work);  in intel_gt_park_requests()
    227  schedule_delayed_work(&gt->requests.retire_work,  in intel_gt_unpark_requests()
    234  cancel_delayed_work_sync(&gt->requests.retire_work);  in intel_gt_fini_requests()

/linux-6.3-rc2/Documentation/virt/kvm/

vcpu-requests.rst
     14  /* Check if any requests are pending for VCPU @vcpu. */
     40  as possible after making the request. This means most requests
     96  VCPU requests are simply bit indices of the ``vcpu->requests`` bitmap.
    100  clear_bit(KVM_REQ_UNBLOCK & KVM_REQUEST_MASK, &vcpu->requests);
    105  dependent requests.
    152  This flag is applied to requests that only need immediate attention
    154  to be awakened for these requests. Sleeping VCPUs will handle the
    155  requests when they are awakened later for some other reason.
    165  Acknowledgements" for more information about requests with
    188  When making requests to VCPUs, we want to avoid the receiving VCPU
     [all …]

/linux-6.3-rc2/Documentation/filesystems/

virtiofs.rst
     58  Since the virtio-fs device uses the FUSE protocol for file system requests, the
     64  FUSE requests are placed into a virtqueue and processed by the host. The
     71  prioritize certain requests over others. Virtqueues have queue semantics and
     72  it is not possible to change the order of requests that have been enqueued.
     74  impossible to add high priority requests. In order to address this difference,
     75  the virtio-fs device uses a "hiprio" virtqueue specifically for requests that
     76  have priority over normal requests.

gfs2-glocks.rst
     19  The gl_holders list contains all the queued lock requests (not
    164  1. DLM lock time (non-blocking requests)
    165  2. DLM lock time (blocking requests)
    170  currently means any requests when (a) the current state of
    174  lock requests.
    177  how many lock requests have been made, and thus how much data
    181  of dlm lock requests issued.
    199  the average time between lock requests for a glock means we
    226  srtt   Smoothed round trip time for non-blocking dlm requests
    230  sirt   Smoothed inter request time (for dlm requests)
     [all …]

/linux-6.3-rc2/drivers/gpu/drm/i915/gt/uc/

intel_guc_ct.c
     93  spin_lock_init(&ct->requests.lock);  in intel_guc_ct_init_early()
     94  INIT_LIST_HEAD(&ct->requests.pending);  in intel_guc_ct_init_early()
     95  INIT_LIST_HEAD(&ct->requests.incoming);  in intel_guc_ct_init_early()
    350  return ++ct->requests.last_fence;  in ct_get_next_fence()
    682  spin_lock(&ct->requests.lock);  in ct_send()
    684  spin_unlock(&ct->requests.lock);  in ct_send()
    736  spin_lock_irqsave(&ct->requests.lock, flags);  in ct_send()
    934  spin_lock_irqsave(&ct->requests.lock, flags);  in ct_handle_response()
    957  ct->requests.last_fence);  in ct_handle_response()
   1045  spin_lock_irqsave(&ct->requests.lock, flags);  in ct_process_incoming_requests()
     [all …]

/linux-6.3-rc2/arch/powerpc/kvm/

trace.h
    106  __field( __u32, requests )
    111  __entry->requests = vcpu->requests;
    115  __entry->cpu_nr, __entry->requests)

/linux-6.3-rc2/Documentation/ABI/stable/

sysfs-bus-xen-backend
     39  Number of flush requests from the frontend.
     46  Number of requests delayed because the backend was too
     47  busy processing previous requests.
     54  Number of read requests from the frontend.
     68  Number of write requests from the frontend.

/linux-6.3-rc2/Documentation/ABI/testing/

sysfs-class-scsi_tape
     33  The number of I/O requests issued to the tape drive other
     34  than SCSI read/write requests.
     54  Shows the total number of read requests issued to the tape
     65  read I/O requests to complete.
     85  Shows the total number of write requests issued to the tape
     96  write I/O requests to complete.

/linux-6.3-rc2/Documentation/admin-guide/device-mapper/

log-writes.rst
     10  that is in the WRITE requests is copied into the log to make the replay happen
     17  cache. This means that normal WRITE requests are not actually logged until the
     22  This works by attaching all WRITE requests to a list once the write completes.
     39  Any REQ_FUA requests bypass this flushing mechanism and are logged as soon as
     40  they complete as those requests will obviously bypass the device cache.
     42  Any REQ_OP_DISCARD requests are treated like WRITE requests. Otherwise we would
     43  have all the DISCARD requests, and then the WRITE requests and then the FLUSH

/linux-6.3-rc2/Documentation/scsi/

hptiop.rst
    110  All queued requests are handled via inbound/outbound queue port.
    125  - Post the packet to IOP by writing it to inbound queue. For requests
    127  requests allocated in host memory, write (0x80000000|(bus_addr>>5))
    134  For requests allocated in IOP memory, the request offset is posted to
    137  For requests allocated in host memory, (0x80000000|(bus_addr>>5))
    144  For requests allocated in IOP memory, the host driver frees the request
    147  Non-queued requests (reset/flush etc) can be sent via inbound message
    155  All queued requests are handled via inbound/outbound list.
    169  round to 0 if the index reaches the supported count of requests.
    186  Non-queued requests (reset communication/reset/flush etc) can be sent via PCIe

/linux-6.3-rc2/Documentation/mm/

balance.rst
     14  allocation requests that have order-0 fallback options. In such cases,
     17  __GFP_IO allocation requests are made to prevent file system deadlocks.
     19  In the absence of non-sleepable allocation requests, it seems detrimental
     24  That being said, the kernel should try to fulfill requests for direct
     26  the dma pool, so as to keep the dma pool filled for dma requests (atomic
     29  regular memory requests by allocating one from the dma pool, instead
     74  probably because all allocation requests are coming from intr context
     88  watermark[WMARK_HIGH]. When low_on_memory is set, page allocation requests will
     97  1. Dynamic experience should influence balancing: number of failed requests

/linux-6.3-rc2/Documentation/driver-api/firmware/

request_firmware.rst
     12  Synchronous firmware requests
     15  Synchronous firmware requests will wait until the firmware is found or until
     43  Asynchronous firmware requests
     46  Asynchronous firmware requests allow driver code to not have to wait

/linux-6.3-rc2/Documentation/hid/

hid-transport.rst
    108  events or answers to host requests on this channel.
    112  SET_REPORT requests.
    123  to device and may include LED requests, rumble requests or more. Output
    131  Feature reports are never sent without requests. A host must explicitly set
    142  channel provides synchronous GET/SET_REPORT requests. Plain reports are only
    150  simultaneous GET_REPORT requests.
    159  GET_REPORT requests can be sent for any of the 3 report types and shall
    173  multiple synchronous SET_REPORT requests.
    175  Other ctrl-channel requests are supported by USB-HID but are not available
    310  it to wait for any pending requests to complete if only one request is
     [all …]

/linux-6.3-rc2/drivers/media/v4l2-core/

v4l2-ctrls-request.c
     21  INIT_LIST_HEAD(&hdl->requests);  in v4l2_ctrl_handler_init_request()
     39  if (hdl->req_obj.ops || list_empty(&hdl->requests))  in v4l2_ctrl_handler_free_request()
     47  list_for_each_entry_safe(req, next_req, &hdl->requests, requests) {  in v4l2_ctrl_handler_free_request()
    102  list_del_init(&hdl->requests);  in v4l2_ctrl_request_unbind()
    163  list_add_tail(&hdl->requests, &from->requests);  in v4l2_ctrl_request_bind()

/linux-6.3-rc2/drivers/gpu/drm/i915/gem/

i915_gem_execbuffer.c
   1983  if (eb->requests[i])  in eb_find_first_request_added()
   1984  return eb->requests[i];  in eb_find_first_request_added()
   2137  if (!eb->requests[j])  in eb_move_to_gpu()
   2185  if (!eb->requests[j])  in eb_move_to_gpu()
   2439  if (!eb->requests[i])  in eb_submit()
   3164  if (!eb->requests[i])  in eb_requests_get()
   3176  if (!eb->requests[i])  in eb_requests_put()
   3304  eb->requests[i] = NULL;  in eb_requests_create()
   3326  eb->requests[i]->batch_res =  in eb_requests_create()
   3331  eb->requests[i]);  in eb_requests_create()
   [all …]