Lines Matching full:cpus
25 queues to distribute processing among CPUs. The NIC distributes packets by
28 queue, which in turn can be processed by separate CPUs. This mechanism is
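As a rough illustration of this hash-based spreading of flows across receive queues (not the NIC's actual implementation, which typically uses a Toeplitz hash and an indirection table), the sketch below hashes a connection 4-tuple and takes it modulo an assumed queue count:

    # Sketch only: model of distributing flows over receive queues by
    # hashing the 4-tuple. NUM_RX_QUEUES and the addresses are
    # hypothetical values for illustration.
    import zlib

    NUM_RX_QUEUES = 4  # assumed number of hardware receive queues

    def queue_for_flow(src_ip, src_port, dst_ip, dst_port):
        key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
        return zlib.crc32(key) % NUM_RX_QUEUES

    print(queue_for_flow("10.0.0.1", 12345, "10.0.0.2", 80))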
55 one for each memory domain, where a memory domain is a set of CPUs that
75 to spread receive interrupts between CPUs. To manually adjust the IRQ
83 interrupt processing forms a bottleneck. Spreading load between CPUs
85 is to allocate as many queues as there are CPUs in the system (or the
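A minimal sketch of steering one receive queue's interrupt to a chosen CPU by writing a mask to /proc/irq/<N>/smp_affinity, the interface described in Documentation/IRQ-affinity.txt; the IRQ number and target CPU below are placeholders, and writing the file requires root:

    # Sketch: pin a (hypothetical) receive-queue IRQ to CPU 2 by writing
    # a hex CPU bitmap to its smp_affinity file. The IRQ number stands in
    # for the queue's actual interrupt; requires root.
    irq = 125            # hypothetical IRQ of one rx queue
    cpu = 2              # target CPU
    mask = 1 << cpu      # bitmap with only that CPU set
    with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
        f.write(f"{mask:x}")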
128 Each receive hardware queue has an associated list of CPUs to which
133 the end of the bottom half routine, IPIs are sent to any CPUs for which
142 explicitly configured. The list of CPUs to which RPS may forward traffic
147 This file implements a bitmap of CPUs. RPS is disabled when it is zero
149 CPU. Documentation/IRQ-affinity.txt explains how CPUs are assigned to
155 the rps_cpus to the CPUs in the same memory domain as the interrupting
156 CPU. If NUMA locality is not an issue, this could also be all CPUs in
162 and unnecessary. If there are fewer hardware queues than CPUs, then
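For example, a sketch of enabling RPS on one receive queue by writing a CPU bitmap to its rps_cpus file; the device name, queue index, and CPU set are assumptions, and writing the file requires root:

    # Sketch: enable RPS for rx queue 0 of a hypothetical device "eth0"
    # by writing a hex bitmap of the CPUs allowed to process its packets.
    # Writing 0 (the default) leaves RPS disabled for the queue.
    dev, rxq = "eth0", 0
    cpus = [0, 1, 2, 3]                 # assumed CPUs in the memory domain of interest
    mask = sum(1 << c for c in cpus)    # bitmap of allowed CPUs
    path = f"/sys/class/net/{dev}/queues/rx-{rxq}/rps_cpus"
    with open(path, "w") as f:
        f.write(f"{mask:x}")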
181 flows to the CPUs where those flows are being processed. The flow hash
186 same CPU. Indeed, with many flows and few CPUs, it is very likely that
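To see why distinct flows are expected to land on the same CPU, the sketch below hashes many simulated flows onto a few CPUs and counts how many each CPU receives; the flow and CPU counts are arbitrary:

    # Sketch: with many flows hashed onto few CPUs, distinct flows
    # inevitably collide onto the same CPU (pigeonhole). Numbers are
    # arbitrary illustrations.
    import zlib
    from collections import Counter

    NUM_CPUS = 8
    flows = [f"flow-{i}".encode() for i in range(1000)]
    per_cpu = Counter(zlib.crc32(f) % NUM_CPUS for f in flows)
    print(per_cpu)   # each CPU ends up handling many distinct flows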
313 exclusively to a subset of CPUs, where the transmit completions for
316 significantly reduced since fewer CPUs contend for the same queue
322 XPS is configured per transmit queue by setting a bitmap of CPUs that
323 may use that queue to transmit. The reverse mapping, from CPUs to
349 configured. To enable XPS, the bitmap of CPUs that may use a transmit
359 If there are as many queues as there are CPUs in the system, then each
361 experience no contention. If there are fewer queues than CPUs, then the
362 best CPUs to share a given queue are probably those that share the cache
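A corresponding sketch of XPS configuration: writing a CPU bitmap to a transmit queue's xps_cpus file so that only those CPUs use the queue. The device name, queue index, and CPU choice are assumptions, and writing the file requires root:

    # Sketch: allow only CPUs 0 and 1 to transmit on tx queue 0 of a
    # hypothetical device "eth0" by writing a hex CPU bitmap to its
    # xps_cpus file.
    dev, txq = "eth0", 0
    cpus = [0, 1]                       # assumed CPUs sharing a cache with this queue
    mask = sum(1 << c for c in cpus)
    path = f"/sys/class/net/{dev}/queues/tx-{txq}/xps_cpus"
    with open(path, "w") as f:
        f.write(f"{mask:x}")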