/linux-5.10/Documentation/networking/device_drivers/ethernet/huawei/ |
D | hinic.rst | 55 Asynchronous Event Queues (AEQs) - The event queues for receiving messages from 69 Completion Event Queues (CEQs) - The completion Event Queues that describe IO 72 Work Queues (WQ) - Contain the memory and operations for use by CMD queues and 77 Command Queues (CMDQ) - The queues for sending commands for IO management and is 82 Queue Pairs (QPs) - The HW Receive and Send queues for Receiving and Transmitting 104 Tx Queues - Logical Tx Queues that use the HW Send Queues for transmit. 108 Rx Queues - Logical Rx Queues that use the HW Receive Queues for receive. 112 hinic_dev - de/constructs the Logical Tx and Rx Queues.
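The hinic entry above enumerates a layered queue hierarchy: hardware event and completion queues (AEQ/CEQ), command queues, HW queue pairs, and logical Tx/Rx queues built on top, constructed and torn down by hinic_dev. A minimal sketch of that layering with hypothetical names (these are not the actual hinic structures)::

    /* Hypothetical sketch of logical Tx/Rx queues layered over HW queue
     * pairs, loosely following the hierarchy the hinic doc describes.
     * None of these names are the real hinic structures. */
    #include <stdlib.h>

    struct hw_qp {            /* HW send/receive queue pair */
        unsigned int sq_depth;
        unsigned int rq_depth;
    };

    struct logical_txq {      /* logical Tx queue using the HW send queue */
        struct hw_qp *qp;
    };

    struct logical_rxq {      /* logical Rx queue using the HW receive queue */
        struct hw_qp *qp;
    };

    struct nic_dev {          /* de/constructs the logical queues */
        struct hw_qp *qps;
        struct logical_txq *txqs;
        struct logical_rxq *rxqs;
        unsigned int num_qps;
    };

    static int nic_alloc_queues(struct nic_dev *dev, unsigned int num_qps)
    {
        unsigned int i;

        dev->qps  = calloc(num_qps, sizeof(*dev->qps));
        dev->txqs = calloc(num_qps, sizeof(*dev->txqs));
        dev->rxqs = calloc(num_qps, sizeof(*dev->rxqs));
        if (!dev->qps || !dev->txqs || !dev->rxqs)
            return -1;

        for (i = 0; i < num_qps; i++) {
            dev->txqs[i].qp = &dev->qps[i];  /* Tx queue i -> QP i send side */
            dev->rxqs[i].qp = &dev->qps[i];  /* Rx queue i -> QP i recv side */
        }
        dev->num_qps = num_qps;
        return 0;
    }

    int main(void)
    {
        struct nic_dev dev = { 0 };

        return nic_alloc_queues(&dev, 4);
    }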
|
/linux-5.10/net/sched/ |
D | sch_multiq.c | 25 struct Qdisc **queues; member 54 return q->queues[0]; in multiq_classify() 56 return q->queues[band]; in multiq_classify() 105 qdisc = q->queues[q->curband]; in multiq_dequeue() 137 qdisc = q->queues[curband]; in multiq_peek() 154 qdisc_reset(q->queues[band]); in multiq_reset() 167 qdisc_put(q->queues[band]); in multiq_destroy() 169 kfree(q->queues); in multiq_destroy() 197 if (q->queues[i] != &noop_qdisc) { in multiq_tune() 198 struct Qdisc *child = q->queues[i]; in multiq_tune() [all …]
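multiq_dequeue() serves the bands round-robin: curband advances on every attempt and the first band with work wins. A simplified, self-contained sketch of that rotation (the real code also skips bands whose underlying hardware queue is stopped)::

    /* Simplified sketch of sch_multiq's round-robin dequeue: advance
     * curband, wrap at the band count, and take from the first band with
     * backlog. The real code also skips stopped hardware queues. */
    #include <stdio.h>

    #define BANDS 4

    struct multiq_sketch {
        unsigned int curband;
        unsigned int backlog[BANDS];  /* stands in for child qdisc queues */
    };

    static int multiq_dequeue_sketch(struct multiq_sketch *q)
    {
        unsigned int i;

        for (i = 0; i < BANDS; i++) {
            unsigned int band = q->curband;

            q->curband = (q->curband + 1) % BANDS;
            if (q->backlog[band]) {
                q->backlog[band]--;
                return (int)band;     /* band we dequeued from */
            }
        }
        return -1;                    /* all bands empty */
    }

    int main(void)
    {
        struct multiq_sketch q = { 0, { 2, 0, 1, 0 } };
        int band;

        while ((band = multiq_dequeue_sketch(&q)) >= 0)
            printf("dequeued from band %d\n", band);
        return 0;
    }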
|
D | sch_prio.c | 26 struct Qdisc *queues[TCQ_PRIO_BANDS]; member 57 return q->queues[q->prio2band[band & TC_PRIO_MAX]]; in prio_classify() 63 return q->queues[q->prio2band[0]]; in prio_classify() 65 return q->queues[band]; in prio_classify() 103 struct Qdisc *qdisc = q->queues[prio]; in prio_peek() 117 struct Qdisc *qdisc = q->queues[prio]; in prio_dequeue() 137 qdisc_reset(q->queues[prio]); in prio_reset() 175 qdisc_put(q->queues[prio]); in prio_destroy() 182 struct Qdisc *queues[TCQ_PRIO_BANDS]; in prio_tune() local 200 queues[i] = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, in prio_tune() [all …]
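prio_classify() masks the skb priority to TC_PRIO_MAX and maps it through prio2band to select a child qdisc. A condensed sketch of that lookup using the classic three-band default map, simplified from the code above to return a band index rather than a Qdisc pointer::

    /* Condensed sketch of sch_prio-style band selection: mask the skb
     * priority to TC_PRIO_MAX and map it through prio2band. */
    #include <stdio.h>

    #define TC_PRIO_MAX 15

    static unsigned int prio_classify_sketch(const unsigned char *prio2band,
                                             unsigned int skb_priority)
    {
        return prio2band[skb_priority & TC_PRIO_MAX];
    }

    int main(void)
    {
        /* the classic three-band default priority -> band map */
        unsigned char prio2band[TC_PRIO_MAX + 1] = {
            1, 2, 2, 2, 1, 2, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1
        };

        printf("priority 6 -> band %u\n", prio_classify_sketch(prio2band, 6));
        return 0;
    }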
|
/linux-5.10/Documentation/ABI/testing/ |
D | sysfs-class-net-queues | 1 What: /sys/class/net/<iface>/queues/rx-<queue>/rps_cpus 11 What: /sys/class/net/<iface>/queues/rx-<queue>/rps_flow_cnt 19 What: /sys/class/net/<iface>/queues/tx-<queue>/tx_timeout 27 What: /sys/class/net/<iface>/queues/tx-<queue>/tx_maxrate 35 What: /sys/class/net/<iface>/queues/tx-<queue>/xps_cpus 45 What: /sys/class/net/<iface>/queues/tx-<queue>/xps_rxqs 56 What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time 65 What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/inflight 73 What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit 82 What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max [all …]
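All of these attributes are ordinary sysfs files, so they are tuned with plain writes from user space. A user-space C sketch that steers RPS processing for rx-0 onto CPUs 0-3 by writing a hex CPU bitmap to rps_cpus (the interface name and mask are examples)::

    /* User-space sketch: steer rx-0 RPS processing to CPUs 0-3 by writing
     * a hex CPU bitmap to rps_cpus. Interface name and mask are examples. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        fprintf(f, "f\n");  /* 0xf = CPUs 0-3 */
        fclose(f);
        return 0;
    }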
|
/linux-5.10/Documentation/devicetree/bindings/soc/ti/ |
D | keystone-navigator-qmss.txt | 9 management of the packet queues. Packets are queued/de-queued by writing or 32 -- managed-queues : the actual queues managed by each queue manager 33 instance, specified as <"base queue #" "# of queues">. 51 - qpend : pool of qpend (interruptible) queues 52 - general-purpose : pool of general queues, primarily used 53 as free descriptor queues or the 54 transmit DMA queues. 55 - accumulator : pool of queues on PDSP accumulator channel 57 -- qrange : number of queues to use per queue range, specified as 58 <"base queue #" "# of queues">. [all …]
|
/linux-5.10/Documentation/block/ |
D | blk-mq.rst | 37 spawns multiple queues with individual entry points local to the CPU, removing 49 blk-mq has two groups of queues: software staging queues and hardware dispatch 50 queues. When the request arrives at the block layer, it will try the shortest 56 Then, after the requests are processed by software queues, they will be placed 62 Software staging queues 65 The block IO subsystem adds requests in the software staging queues 71 the number of queues is defined on a per-CPU or per-node basis. 93 requests from different queues, otherwise there would be cache thrashing and a 99 queue (a.k.a. run the hardware queue), the software queues mapped to that 102 Hardware dispatch queues [all …]
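The split blk-mq.rst describes, per-CPU software staging queues feeding a smaller set of hardware dispatch queues, reduces to a many-to-one CPU-to-hardware-context map. A toy sketch of such a mapping, not the kernel's actual blk_mq_map_queues()::

    /* Toy sketch of a blk-mq style CPU -> hardware-queue map: each CPU
     * gets a software staging queue, and CPUs are spread evenly over the
     * (fewer) hardware dispatch queues. Not the kernel's mapping code. */
    #include <stdio.h>

    #define NR_CPUS      8
    #define NR_HW_QUEUES 2

    int main(void)
    {
        unsigned int mq_map[NR_CPUS];
        unsigned int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
            mq_map[cpu] = cpu % NR_HW_QUEUES;  /* sw queue -> hw queue */

        for (cpu = 0; cpu < NR_CPUS; cpu++)
            printf("cpu %u: software queue %u -> hardware queue %u\n",
                   cpu, cpu, mq_map[cpu]);
        return 0;
    }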
|
/linux-5.10/Documentation/networking/ |
D | scaling.rst | 27 Contemporary NICs support multiple receive and transmit descriptor queues 29 queues to distribute processing among CPUs. The NIC distributes packets by 47 Some advanced NICs allow steering packets to queues based on 57 module parameter for specifying the number of hardware queues to 60 for each CPU if the device supports enough queues, or otherwise at least 66 default mapping is to distribute the queues evenly in the table, but the 69 indirection table could be done to give different queues different 80 of queues to IRQs can be determined from /proc/interrupts. By default, 95 is to allocate as many queues as there are CPUs in the system (or the 97 is likely the one with the smallest number of receive queues where no [all …]
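scaling.rst describes RSS: the NIC hashes each flow and uses the low-order bits of the hash to index an indirection table whose entries name receive queues, with the default table spreading the queues evenly. A self-contained sketch of that lookup (table size and hash value are placeholders)::

    /* Sketch of RSS queue selection: hash the flow, index the indirection
     * table with the low bits, and the entry names the receive queue.
     * Table size and the hash value here are placeholders. */
    #include <stdio.h>

    #define INDIR_SIZE   128   /* power of two, as in typical NICs */
    #define NR_RX_QUEUES 4

    int main(void)
    {
        unsigned char indir[INDIR_SIZE];
        unsigned int i, hash, rxq;

        /* default mapping: distribute the queues evenly in the table */
        for (i = 0; i < INDIR_SIZE; i++)
            indir[i] = i % NR_RX_QUEUES;

        hash = 0xdeadbeef;   /* stand-in for the Toeplitz hash of the flow */
        rxq = indir[hash & (INDIR_SIZE - 1)];
        printf("flow hash %#x -> rx queue %u\n", hash, rxq);
        return 0;
    }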
|
D | multiqueue.rst | 18 the subqueue memory, as well as netdev configuration of where the queues 21 The base driver will also need to manage the queues as it does the global 33 A new round-robin qdisc, sch_multiq, also supports multiple hardware queues. The 35 bands and queues based on the value in skb->queue_mapping. Use this field in 42 On qdisc load, the number of bands is based on the number of queues on the 56 The qdisc will allocate the number of bands to equal the number of queues that 58 queues, the band mapping would look like::
|
/linux-5.10/drivers/staging/wfx/ |
D | queue.c | 234 struct wfx_queue *queues[IEEE80211_NUM_ACS * ARRAY_SIZE(wdev->vif)]; in wfx_tx_queues_get_skb() local 240 // sort the queues in wfx_tx_queues_get_skb() 244 WARN_ON(num_queues >= ARRAY_SIZE(queues)); in wfx_tx_queues_get_skb() 245 queues[num_queues] = &wvif->tx_queue[i]; in wfx_tx_queues_get_skb() 247 if (wfx_tx_queue_get_weight(queues[j]) < in wfx_tx_queues_get_skb() 248 wfx_tx_queue_get_weight(queues[j - 1])) in wfx_tx_queues_get_skb() 249 swap(queues[j - 1], queues[j]); in wfx_tx_queues_get_skb() 259 skb = skb_dequeue(&queues[i]->cab); in wfx_tx_queues_get_skb() 267 WARN_ON(queues[i] != in wfx_tx_queues_get_skb() 269 atomic_inc(&queues[i]->pending_frames); in wfx_tx_queues_get_skb() [all …]
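wfx_tx_queues_get_skb() gathers the candidate queues into a local array and insertion-sorts them by ascending weight so the lightest queue is served first, as the j / j-1 swap above shows. The sort step in isolation (the weights are made up)::

    /* Standalone sketch of the sort done in wfx_tx_queues_get_skb():
     * collect queue pointers, then insertion-sort them by ascending
     * weight so the lightest queue is served first. Weights are made up. */
    #include <stdio.h>

    struct txq { int weight; };

    static void swap_ptr(struct txq **a, struct txq **b)
    {
        struct txq *t = *a; *a = *b; *b = t;
    }

    int main(void)
    {
        struct txq pool[4] = { {3}, {1}, {4}, {2} };
        struct txq *queues[4];
        int i, j, n = 4;

        for (i = 0; i < n; i++)
            queues[i] = &pool[i];

        /* insertion sort, mirroring the j / j-1 comparison in the driver */
        for (i = 1; i < n; i++)
            for (j = i; j > 0 && queues[j]->weight < queues[j - 1]->weight; j--)
                swap_ptr(&queues[j - 1], &queues[j]);

        for (i = 0; i < n; i++)
            printf("slot %d: weight %d\n", i, queues[i]->weight);
        return 0;
    }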
|
/linux-5.10/Documentation/arm/keystone/ |
D | knav-qmss.rst | 15 management of the packet queues. Packets are queued/de-queued by writing or 24 The knav qmss driver provides a set of APIs to drivers to open/close qmss queues, 25 allocate descriptor pools, map the descriptors, push/pop to queues etc. For 31 Accumulator QMSS queues using PDSP firmware 34 queue or multiple contiguous queues. drivers/soc/ti/knav_qmss_acc.c is the 37 1 or 32 queues per channel. More description on the firmware is available in 56 Use of accumulated queues requires the firmware image to be present in the 57 file system. The driver doesn't add accumulator queues to the supported queue range if
|
/linux-5.10/drivers/scsi/snic/ |
D | vnic_resource.h | 27 RES_TYPE_WQ, /* Work queues */ 28 RES_TYPE_RQ, /* Receive queues */ 29 RES_TYPE_CQ, /* Completion queues */ 45 RES_TYPE_MQ_WQ, /* MQ Work queues */ 46 RES_TYPE_MQ_RQ, /* MQ Receive queues */ 47 RES_TYPE_MQ_CQ, /* MQ Completion queues */
|
/linux-5.10/drivers/scsi/fnic/ |
D | vnic_resource.h | 27 RES_TYPE_WQ, /* Work queues */ 28 RES_TYPE_RQ, /* Receive queues */ 29 RES_TYPE_CQ, /* Completion queues */ 45 RES_TYPE_MQ_WQ, /* MQ Work queues */ 46 RES_TYPE_MQ_RQ, /* MQ Receive queues */ 47 RES_TYPE_MQ_CQ, /* MQ Completion queues */
|
/linux-5.10/drivers/scsi/aacraid/ |
D | comminit.c | 237 * Fill in addresses of the Comm Area Headers and Queues in aac_alloc_comm() 373 struct aac_entry * queues; in aac_comm_init() local 375 struct aac_queue_block * comm = dev->queues; in aac_comm_init() 394 queues = (struct aac_entry *)(((ulong)headers) + hdrsize); in aac_comm_init() 397 comm->queue[HostNormCmdQueue].base = queues; in aac_comm_init() 399 queues += HOST_NORM_CMD_ENTRIES; in aac_comm_init() 403 comm->queue[HostHighCmdQueue].base = queues; in aac_comm_init() 406 queues += HOST_HIGH_CMD_ENTRIES; in aac_comm_init() 410 comm->queue[AdapNormCmdQueue].base = queues; in aac_comm_init() 413 queues += ADAP_NORM_CMD_ENTRIES; in aac_comm_init() [all …]
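aac_comm_init() carves every queue's entry array out of one contiguous allocation, advancing a cursor past each queue's entries, as the excerpts show. A reduced sketch of that pointer-bump carving (the entry counts are illustrative, not the driver's real values)::

    /* Reduced sketch of the aacraid pattern above: one contiguous block
     * is carved into per-queue entry arrays by advancing a cursor. */
    #include <stdio.h>
    #include <stdlib.h>

    struct entry { unsigned int addr, size; };

    #define HOST_NORM_ENTRIES 8
    #define HOST_HIGH_ENTRIES 4
    #define ADAP_NORM_ENTRIES 8

    int main(void)
    {
        size_t total = HOST_NORM_ENTRIES + HOST_HIGH_ENTRIES + ADAP_NORM_ENTRIES;
        struct entry *area = calloc(total, sizeof(*area));
        struct entry *cursor, *host_norm, *host_high, *adap_norm;

        if (!area)
            return 1;

        cursor = area;
        host_norm = cursor; cursor += HOST_NORM_ENTRIES;
        host_high = cursor; cursor += HOST_HIGH_ENTRIES;
        adap_norm = cursor; cursor += ADAP_NORM_ENTRIES;

        printf("offsets: norm=%td high=%td adap=%td (entries)\n",
               host_norm - area, host_high - area, adap_norm - area);
        free(area);
        return 0;
    }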
|
/linux-5.10/drivers/nvme/target/ |
D | loop.c | 30 struct nvme_loop_queue *queues; member 71 return queue - queue->ctrl->queues; in nvme_loop_queue_idx() 176 struct nvme_loop_queue *queue = &ctrl->queues[0]; in nvme_loop_submit_async_event() 198 iod->queue = &ctrl->queues[queue_idx]; in nvme_loop_init_iod() 218 struct nvme_loop_queue *queue = &ctrl->queues[hctx_idx + 1]; in nvme_loop_init_hctx() 230 struct nvme_loop_queue *queue = &ctrl->queues[0]; in nvme_loop_init_admin_hctx() 254 clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags); in nvme_loop_destroy_admin_queue() 255 nvmet_sq_destroy(&ctrl->queues[0].nvme_sq); in nvme_loop_destroy_admin_queue() 276 kfree(ctrl->queues); in nvme_loop_free_ctrl() 287 clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[i].flags); in nvme_loop_destroy_io_queues() [all …]
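nvme_loop_queue_idx() recovers a queue's index with plain pointer arithmetic against the base of the ctrl->queues array; index 0 is the admin queue and I/O queues start at 1, which is why the hctx init paths above use hctx_idx + 1. The idiom in isolation::

    /* Sketch of the nvme_loop_queue_idx() idiom: with queues stored in
     * one array, the index is the pointer difference from the base.
     * Queue 0 is the admin queue; I/O queues start at 1. */
    #include <stdio.h>

    struct loop_queue { unsigned long flags; };

    struct loop_ctrl {
        struct loop_queue queues[4];   /* [0] = admin, [1..] = I/O */
    };

    static int queue_idx(struct loop_ctrl *ctrl, struct loop_queue *queue)
    {
        return queue - ctrl->queues;
    }

    int main(void)
    {
        struct loop_ctrl ctrl;

        printf("admin idx = %d, io idx = %d\n",
               queue_idx(&ctrl, &ctrl.queues[0]),
               queue_idx(&ctrl, &ctrl.queues[2]));
        return 0;
    }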
|
/linux-5.10/drivers/net/ethernet/intel/ixgbe/ |
D | ixgbe_lib.c | 215 /* FCoE uses a linear block of queues so just assign 1:1 */ in ixgbe_cache_ring_sriov() 236 /* FCoE uses a linear block of queues so just assign 1:1 */ in ixgbe_cache_ring_sriov() 313 * ixgbe_set_dcb_sriov_queues: Allocate queues for SR-IOV devices w/ DCB 316 * When SR-IOV (Single Root IO Virtualization) is enabled, allocate queues 317 * and VM pools where appropriate. Also assign queues based on DCB 339 /* limit VMDq instances on the PF by number of Tx queues */ in ixgbe_set_dcb_sriov_queues() 356 /* queues in the remaining pools are available for FCoE */ in ixgbe_set_dcb_sriov_queues() 394 /* alloc queues for FCoE separately */ in ixgbe_set_dcb_sriov_queues() 398 /* add queues to adapter */ in ixgbe_set_dcb_sriov_queues() 428 /* Map queue offset and counts onto allocated tx queues */ in ixgbe_set_dcb_queues() [all …]
|
/linux-5.10/drivers/net/ethernet/cisco/enic/ |
D | vnic_resource.h | 34 RES_TYPE_WQ, /* Work queues */ 35 RES_TYPE_RQ, /* Receive queues */ 36 RES_TYPE_CQ, /* Completion queues */ 52 RES_TYPE_MQ_WQ, /* MQ Work queues */ 53 RES_TYPE_MQ_RQ, /* MQ Receive queues */ 54 RES_TYPE_MQ_CQ, /* MQ Completion queues */
|
/linux-5.10/Documentation/devicetree/bindings/misc/ |
D | intel,ixp4xx-ahb-queue-manager.yaml | 14 The IXP4xx AHB Queue Manager maintains queues as circular buffers in 17 IXP4xx for accelerating queues, especially for networking. Clients pick 18 queues from the queue manager with foo-queue = <&qmgr N> where the 33 - description: Interrupt for queues 0-31 34 - description: Interrupt for queues 32-63
|
/linux-5.10/tools/perf/util/ |
D | arm-spe.c | 40 struct auxtrace_queues queues; member 141 queue = &speq->spe->queues.queue_array[speq->queue_nr]; in arm_spe_get_trace() 437 for (i = 0; i < spe->queues.nr_queues; i++) { in arm_spe__setup_queues() 438 ret = arm_spe__setup_queue(spe, &spe->queues.queue_array[i], i); in arm_spe__setup_queues() 448 if (spe->queues.new_data) { in arm_spe__update_queues() 449 spe->queues.new_data = false; in arm_spe__update_queues() 516 queue = &spe->queues.queue_array[queue_nr]; in arm_spe_process_queues() 552 struct auxtrace_queues *queues = &spe->queues; in arm_spe_process_timeless_queues() local 556 for (i = 0; i < queues->nr_queues; i++) { in arm_spe_process_timeless_queues() 557 struct auxtrace_queue *queue = &spe->queues.queue_array[i]; in arm_spe_process_timeless_queues() [all …]
|
D | intel-bts.c | 46 struct auxtrace_queues queues; member 211 for (i = 0; i < bts->queues.nr_queues; i++) { in intel_bts_setup_queues() 212 ret = intel_bts_setup_queue(bts, &bts->queues.queue_array[i], in intel_bts_setup_queues() 222 if (bts->queues.new_data) { in intel_bts_update_queues() 223 bts->queues.new_data = false; in intel_bts_update_queues() 465 queue = &btsq->bts->queues.queue_array[btsq->queue_nr]; in intel_bts_process_queue() 539 struct auxtrace_queues *queues = &bts->queues; in intel_bts_process_tid_exit() local 542 for (i = 0; i < queues->nr_queues; i++) { in intel_bts_process_tid_exit() 543 struct auxtrace_queue *queue = &bts->queues.queue_array[i]; in intel_bts_process_tid_exit() 568 queue = &bts->queues.queue_array[queue_nr]; in intel_bts_process_queues() [all …]
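arm-spe.c and intel-bts.c share the perf auxtrace pattern: a queue_array of nr_queues entries that the setup and update paths walk linearly, with new_data flagging freshly arrived AUX data. A generic sketch of that walk with simplified stand-in fields::

    /* Generic sketch of the perf auxtrace pattern shared by arm-spe.c
     * and intel-bts.c: walk queue_array[0..nr_queues) and set up each
     * entry. The struct fields are simplified stand-ins. */
    #include <stdio.h>

    struct aux_queue { int tid; void *priv; };

    struct aux_queues {
        struct aux_queue *queue_array;
        unsigned int nr_queues;
        int new_data;
    };

    static int setup_queues(struct aux_queues *queues)
    {
        unsigned int i;

        for (i = 0; i < queues->nr_queues; i++) {
            /* per-queue setup would go here */
            queues->queue_array[i].priv = NULL;
        }
        queues->new_data = 0;   /* consumed, as in *_update_queues() */
        return 0;
    }

    int main(void)
    {
        struct aux_queue arr[2] = { {1, NULL}, {2, NULL} };
        struct aux_queues q = { arr, 2, 1 };

        setup_queues(&q);
        printf("set up %u queues\n", q.nr_queues);
        return 0;
    }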
|
/linux-5.10/include/linux/avf/ |
D | virtchnl.h | 29 * have a maximum of sixteen queues for all of its VSIs. 38 * queues and interrupts. After these operations are complete, the VF 39 * driver may start its queues, optionally add MAC and VLAN filters, and 198 * When reset is complete, PF must ensure that all queues in all VSIs associated 314 * VF sends this message to set parameters for all active TX and RX queues 316 * PF configures queues and returns status. 317 * If the number of queues specified is greater than the number of queues 318 * associated with the VSI, an error is returned and no queues are configured. 321 /* NOTE: vsi_id and queue_id should be identical for both queues. */ 338 * VF sends this message to request the PF to allocate additional queues to [all …]
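The NOTE quoted above requires vsi_id and queue_id to be identical for the TX and RX halves of each configured queue pair. A hedged sketch of building one such pair with simplified stand-in structs (not the real virtchnl ABI layouts)::

    /* Hedged sketch of pairing Tx/Rx queue configs as the virtchnl
     * comment requires: vsi_id and queue_id identical for both halves.
     * These structs are simplified stand-ins, not the real ABI. */
    #include <stdio.h>

    struct txq_info { unsigned short vsi_id, queue_id; unsigned int ring_len; };
    struct rxq_info { unsigned short vsi_id, queue_id; unsigned int ring_len; };

    struct queue_pair_info {
        struct txq_info txq;
        struct rxq_info rxq;
    };

    static void fill_pair(struct queue_pair_info *qp,
                          unsigned short vsi_id, unsigned short queue_id)
    {
        qp->txq.vsi_id = vsi_id;
        qp->txq.queue_id = queue_id;   /* must match rxq per the NOTE */
        qp->txq.ring_len = 512;
        qp->rxq.vsi_id = vsi_id;
        qp->rxq.queue_id = queue_id;
        qp->rxq.ring_len = 512;
    }

    int main(void)
    {
        struct queue_pair_info qp;

        fill_pair(&qp, 1, 0);
        printf("pair: vsi %u queue %u\n", qp.txq.vsi_id, qp.txq.queue_id);
        return 0;
    }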
|
/linux-5.10/drivers/net/ethernet/chelsio/cxgb3/ |
D | firmware_exports.h | 98 /* FW_TUNNEL_NUM corresponds to the number of supported TUNNEL Queues. These 99 * queues must start at SGE Egress Context FW_TUNNEL_SGEEC_START and must 109 /* FW_CTRL_NUM corresponds to the number of supported CTRL Queues. These queues 119 /* FW_OFLD_NUM corresponds to the number of supported OFFLOAD Queues. These 120 * queues must start at SGE Egress Context FW_OFLD_SGEEC_START. 123 * OFFLOAD Queues, as the host is responsible for providing the correct TID in
|
/linux-5.10/Documentation/networking/device_drivers/ethernet/freescale/ |
D | dpaa.rst | 86 Tx FQs transmission frame queues 143 confirmation frame queues. The driver is then responsible for freeing the 164 strict priority levels. Each traffic class contains NR_CPU TX queues. By 165 default, only one traffic class is enabled and the lowest priority Tx queues 184 Traffic coming on the DPAA Rx queues or on the DPAA Tx confirmation 185 queues is seen by the CPU as ingress traffic on a certain portal. 191 hardware frame queues using a hash on IP v4/v6 source and destination 195 queues are configured to put the received traffic into a pool channel 197 The default frame queues have the HOLDACTIVE option set, ensuring that 204 128 Rx frame queues that are configured to dedicated channels, in a [all …]
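The dpaa excerpt spreads ingress traffic across 128 Rx frame queues via a hash on the IPv4/v6 source and destination addresses. A toy sketch of that hash-to-frame-queue selection (the hash below is a placeholder, not the hardware's actual keygen scheme)::

    /* Toy sketch of dpaa-style Rx spreading: a flow hash over src/dst
     * addresses selects one of 128 Rx frame queues. */
    #include <stdio.h>

    #define NR_RX_FQS 128

    static unsigned int flow_hash(unsigned int saddr, unsigned int daddr)
    {
        return (saddr ^ daddr) * 2654435761u;  /* placeholder mix */
    }

    int main(void)
    {
        unsigned int saddr = 0xc0a80001;  /* 192.168.0.1 */
        unsigned int daddr = 0x0a000002;  /* 10.0.0.2 */
        unsigned int fq = flow_hash(saddr, daddr) % NR_RX_FQS;

        printf("flow -> rx frame queue %u of %u\n", fq, NR_RX_FQS);
        return 0;
    }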
|
/linux-5.10/drivers/gpu/drm/amd/amdkfd/ |
D | kfd_device_queue_manager.h | 51 * @execute_queues: Dispatches the queues list to the H/W. 59 * @start: Initializes the resources/modules the device needs for queues 76 * @process_termination: Clears all process queues belonging to that device. 78 * @evict_process_queues: Evict all active queues of a process 80 * @restore_process_queues: Restore all evicted queues of a process 163 * This struct is a base class for the kfd queues scheduler in the 167 * concrete device. This class is the only class in the queues scheduler 180 struct list_head queues; member
|
/linux-5.10/drivers/staging/qlge/ |
D | TODO | 13 * rename "rx" queues to "completion" queues. Calling tx completion queues "rx 14 queues" is confusing. 24 frames, resets the link, device and driver buffer queues become
|
/linux-5.10/arch/mips/include/asm/octeon/ |
D | cvmx-cmd-queue.h | 30 * Support functions for managing command queues used for 50 * called "cvmx_cmd_queues". Except for the PKO queues, each 52 * contention on spin locks. The PKO queues are stored such that 54 * allows for queues being in separate cache lines when there 55 * is a low number of queues per port. With 16 queues per port, 57 * second queues for each port are in another area, etc. This 59 * 16 queues per port using a minimum of cache lines per core. 60 * All queues for a given core will be isolated in the same 65 * queues. The lock uses a "ticket / now serving" model to 93 * queues. Each hardware block has up to 65536 sub identifiers for [all …]
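The comment describes an interleaved PKO queue layout: queue 0 of every port stored together, then queue 1 of every port, and so on, so that sparse per-port queue usage stays within few cache lines. A sketch of the index computation that layout implies (my reading of the comment, not the actual cvmx macro)::

    /* Sketch of the interleaved PKO queue layout the comment describes:
     * all ports' queue 0 first, then all ports' queue 1, etc. */
    #include <stdio.h>

    #define NUM_PORTS       4
    #define QUEUES_PER_PORT 16

    static unsigned int pko_slot(unsigned int port, unsigned int queue)
    {
        return queue * NUM_PORTS + port;  /* queue-major interleave */
    }

    int main(void)
    {
        printf("port 0 queue 0 -> slot %u\n", pko_slot(0, 0));
        printf("port 1 queue 0 -> slot %u\n", pko_slot(1, 0));
        printf("port 0 queue 1 -> slot %u\n", pko_slot(0, 1));
        return 0;
    }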
|