/linux-6.15/Documentation/devicetree/bindings/numa.txt
    2: NUMA binding description.
    9: Systems employing a Non Uniform Memory Access (NUMA) architecture contain
   11: that comprise what is commonly known as a NUMA node.
   12: Processor accesses to memory within the local NUMA node are generally faster
   13: than processor accesses to memory outside of the local NUMA node.
   14: DT defines interfaces that allow the platform to convey NUMA node
   18: 2 - numa-node-id
   21: For the purpose of identification, each NUMA node is associated with a unique
   25: A device node is associated with a NUMA node by the presence of a
   26: numa-node-id property which contains the node id of the device.
   [all …]
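This binding is consumed by reading a single u32 cell. A minimal sketch, assuming a valid struct device_node pointer (the helper function is illustrative, not from the tree; of_property_read_u32() and NUMA_NO_NODE are real kernel interfaces):

```c
#include <linux/of.h>
#include <linux/numa.h>

/*
 * Illustrative helper, not kernel code: resolve the "numa-node-id"
 * property described above, falling back to NUMA_NO_NODE when the
 * device has no declared NUMA affinity.
 */
static int example_of_node_nid(struct device_node *np)
{
	u32 nid;

	if (of_property_read_u32(np, "numa-node-id", &nid))
		return NUMA_NO_NODE;	/* property absent or unreadable */

	return nid;
}
```

drivers/of/of_numa.c (further down in these results) does essentially this for cpu, memory and device nodes.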
/linux-6.15/Documentation/arch/powerpc/associativity.rst
    2: NUMA resource associativity
   10: characteristic is presented in terms of NUMA node distance within the Linux kernel.
   24: Form 0 associativity supports only two NUMA distances (LOCAL and REMOTE).
   29: device tree properties are used to determine the NUMA distance between resource groups/domains.
   41: Linux kernel uses the domainID at the primary domainID index as the NUMA node id.
   42: Linux kernel computes NUMA distance between two domains by recursively comparing
   44: level of the resource group, the kernel doubles the NUMA distance between the
   49: Form 2 associativity format adds separate device tree properties representing NUMA node distance
   51: domain numbering. With numa distance computation now detached from the index value in
   59: "ibm,numa-lookup-index-table" property contains a list of one or more numbers representing
   [all …]
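The Form 1 rule excerpted at lines 42-44 (compare two resources' domain IDs level by level and double the distance while they differ) can be sketched in plain C. The array layout and the base value of 10 are assumptions for illustration, not the kernel's exact representation:

```c
#include <stddef.h>

#define LOCAL_DISTANCE 10	/* assumed base distance of a node to itself */

/*
 * Sketch of Form 1 associativity distance: walk the associativity
 * levels, doubling the distance at each level where the two domain-ID
 * arrays still differ, and stop once they converge.
 */
static int form1_distance(const unsigned int *dom_a,
			  const unsigned int *dom_b, size_t levels)
{
	int distance = LOCAL_DISTANCE;

	for (size_t i = 0; i < levels; i++) {
		if (dom_a[i] == dom_b[i])
			break;		/* resources share this domain */
		distance *= 2;		/* one more level of separation */
	}
	return distance;
}
```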
/linux-6.15/Documentation/mm/numa.rst
    4: What is NUMA?
   10: From the hardware perspective, a NUMA system is a computer platform that
   19: may not be populated on any given cell. The cells of the NUMA system are
   21: point-to-point link are common types of NUMA system interconnects. Both of
   22: these types of interconnects can be aggregated to create NUMA platforms with
   25: For Linux, the NUMA platforms of interest are primarily what is known as Cache
   26: Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
   34: bandwidths than accesses to memory on other, remote cells. NUMA platforms
   37: Platform vendors don't build NUMA systems just to make software developers'
   44: This leads to the Linux software view of a NUMA system:
   [all …]
/linux-6.15/tools/perf/pmu-events/arch/x86/amdzen5/load-store.json
  108: …": "Demand data cache fills from cache of another CCX when the address was in the same NUMA node.",
  114: "BriefDescription": "Demand data cache fills from either DRAM or MMIO in the same NUMA node.",
  120: …"Demand data cache fills from cache of another CCX when the address was in a different NUMA node.",
  126: … cache fills from cache of another CCX when the address was in the same or a different NUMA node.",
  132: …"BriefDescription": "Demand data cache fills from either DRAM or MMIO in a different NUMA node (sa…
  138: …mand data cache fills from either DRAM or MMIO in the same or a different NUMA node (same or diffe…
  144: …er cache of another CCX, DRAM or MMIO when the address was in a different NUMA node (same or diffe…
  180: …ion": "Any data cache fills from cache of another CCX when the address was in the same NUMA node.",
  186: "BriefDescription": "Any data cache fills from either DRAM or MMIO in the same NUMA node.",
  192: …": "Any data cache fills from cache of another CCX when the address was in a different NUMA node.",
  [all …]
/linux-6.15/tools/perf/pmu-events/arch/x86/amdzen5/l3-cache.json
   26: …"BriefDescription": "Average sampled latency when data is sourced from DRAM in the same NUMA node.…
   37: …"BriefDescription": "Average sampled latency when data is sourced from DRAM in a different NUMA no…
   48: …latency when data is sourced from another CCX's cache when the address was in the same NUMA node.",
   59: …ency when data is sourced from another CCX's cache when the address was in a different NUMA node.",
   70: … "Average sampled latency when data is sourced from extension memory (CXL) in the same NUMA node.",
   81: …verage sampled latency when data is sourced from extension memory (CXL) in a different NUMA node.",
  103: "BriefDescription": "L3 cache fill requests sourced from DRAM in the same NUMA node.",
  114: "BriefDescription": "L3 cache fill requests sourced from DRAM in a different NUMA node.",
  125: … cache fill requests sourced from another CCX's cache when the address was in the same NUMA node.",
  136: …che fill requests sourced from another CCX's cache when the address was in a different NUMA node.",
  [all …]
/linux-6.15/drivers/of/of_numa.c
    3:  * OF NUMA Parsing support.
    8: #define pr_fmt(fmt) "OF: NUMA: " fmt
   15: #include <asm/numa.h>
   18:  * Even though we connect cpus to numa domains later in SMP
   28: r = of_property_read_u32(np, "numa-node-id", &nid);  [in of_numa_parse_cpu_nodes()]
   48: r = of_property_read_u32(np, "numa-node-id", &nid);  [in of_numa_parse_memory_nodes()]
   53:  * "numa-node-id" property  [in of_numa_parse_memory_nodes()]
   81: pr_info("parsing numa-distance-map-v1\n");  [in of_numa_parse_distance_map_v1()]
  130: "numa-distance-map-v1");  [in of_numa_parse_distance_map()]
  147: r = of_property_read_u32(np, "numa-node-id", &nid);  [in of_node_to_nid()]
  [all …]
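The parse at line 81 walks the distance-matrix property of a numa-distance-map-v1 node, which the binding lays out as <nodeA nodeB distance> u32 triplets. A simplified sketch of that walk, assuming a hypothetical record callback and eliding error handling on the indexed reads (of_property_count_u32_elems() and of_property_read_u32_index() are real OF helpers):

```c
#include <linux/errno.h>
#include <linux/of.h>

/*
 * Simplified sketch, not the kernel's implementation: hand every
 * <nodeA nodeB distance> triplet of "distance-matrix" to @record.
 */
static int example_parse_distance_matrix(struct device_node *map,
					 void (*record)(u32 a, u32 b, u32 dist))
{
	int i, nr = of_property_count_u32_elems(map, "distance-matrix");

	if (nr < 0 || nr % 3)
		return -EINVAL;		/* property missing or malformed */

	for (i = 0; i < nr; i += 3) {
		u32 a, b, d;

		of_property_read_u32_index(map, "distance-matrix", i, &a);
		of_property_read_u32_index(map, "distance-matrix", i + 1, &b);
		of_property_read_u32_index(map, "distance-matrix", i + 2, &d);
		record(a, b, d);
	}
	return 0;
}
```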
/linux-6.15/tools/testing/selftests/sched_ext/numa.c
    9: #include "numa.bpf.skel.h"
   14: struct numa *skel;  [in setup()]
   30: struct numa *skel = ctx;  [in run()]
   47: struct numa *skel = ctx;  [in cleanup()]
   52: struct scx_test numa = {  [variable]
   53: .name = "numa",
   54: .description = "Verify NUMA-aware functionalities",
   59: REGISTER_SCX_TEST(&numa)
/linux-6.15/kernel/sched/ext_idle.c
   24: /* Enable/disable NUMA aware optimizations */
   28:  * cpumasks to track idle CPUs within each NUMA node.
   60:  * Returns the NUMA node ID associated with a @cpu, or NUMA_NO_NODE if
  107:  * Pick an idle CPU in a specific NUMA node.
  162:  * This loop is O(N^2), with N being the number of NUMA nodes,  [in pick_idle_cpu_from_online_nodes()]
  163:  * which might be quite expensive in large NUMA systems. However,  [in pick_idle_cpu_from_online_nodes()]
  166:  * without specifying a target NUMA node, so it shouldn't be a  [in pick_idle_cpu_from_online_nodes()]
  244:  * Return the number of CPUs in the same NUMA domain as @cpu (or zero if the
  245:  * NUMA domain is not defined).
  263:  * Return the cpumask representing the NUMA domain of @cpu (or NULL if the NUMA
  [all …]
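The comments at lines 162-166 describe scanning every node for an idle CPU when no target node is given. A self-contained sketch of that nearest-node-first traversal, with the topology reduced to made-up arrays in place of kernel cpumasks:

```c
#include <stdbool.h>

#define NR_NODES 4
#define NR_CPUS  16

/* Made-up topology inputs, stand-ins for the kernel's real state. */
static int  node_distance[NR_NODES][NR_NODES];
static bool cpu_idle[NR_CPUS];
static int  cpu_node[NR_CPUS];

/* Return any idle CPU in @node, or -1 if the node has none. */
static int pick_idle_cpu_in_node(int node)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_idle[cpu] && cpu_node[cpu] == node)
			return cpu;
	return -1;
}

/*
 * Visit nodes in order of increasing distance from @from and return
 * the first idle CPU found. Re-selecting the closest unvisited node on
 * every iteration is what makes a full traversal O(N^2) in the number
 * of nodes, as the quoted comment notes.
 */
static int pick_idle_cpu_from_nodes(int from)
{
	bool visited[NR_NODES] = { false };

	for (int n = 0; n < NR_NODES; n++) {
		int best = -1;

		for (int node = 0; node < NR_NODES; node++) {
			if (visited[node])
				continue;
			if (best < 0 || node_distance[from][node] <
					node_distance[from][best])
				best = node;
		}
		visited[best] = true;

		int cpu = pick_idle_cpu_in_node(best);
		if (cpu >= 0)
			return cpu;
	}
	return -1;	/* no idle CPU anywhere */
}
```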
/linux-6.15/tools/perf/pmu-events/arch/x86/amdzen4/cache.json
   35: …": "Demand data cache fills from cache of another CCX when the address was in the same NUMA node.",
   41: "BriefDescription": "Demand data cache fills from either DRAM or MMIO in the same NUMA node.",
   47: …"Demand data cache fills from cache of another CCX when the address was in a different NUMA node.",
   53: …"BriefDescription": "Demand data cache fills from either DRAM or MMIO in a different NUMA node (sa…
   89: …ion": "Any data cache fills from cache of another CCX when the address was in the same NUMA node.",
   95: "BriefDescription": "Any data cache fills from either DRAM or MMIO in the same NUMA node.",
  101: …": "Any data cache fills from cache of another CCX when the address was in a different NUMA node.",
  107: … cache fills from cache of another CCX when the address was in the same or a different NUMA node.",
  113: …"BriefDescription": "Any data cache fills from either DRAM or MMIO in a different NUMA node (same …
  119: …"BriefDescription": "Any data cache fills from either DRAM or MMIO in any NUMA node (same or diffe…
  [all …]
/linux-6.15/Documentation/arch/x86/x86_64/fake-numa-for-cpusets.rst
    4: Fake NUMA For CPUSets
    9: Using numa=fake and CPUSets for Resource Management
   11: This document describes how the numa=fake x86_64 command-line option can be used
   13: you can create fake NUMA nodes that represent contiguous chunks of memory and
   20: more information on the numa=fake command line option and its various ways of
   23: For the purposes of this introduction, we'll assume a very primitive NUMA
   24: emulation setup of "numa=fake=4*512,". This will split our system memory into
   30: A machine may be split as follows with "numa=fake=4*512," as reported by dmesg::
   65: case (i.e. running the same 'dd' command without assigning it to a fake NUMA
/linux-6.15/Documentation/admin-guide/nvme-multipath.rst
   27: one. Currently, the NVMe multipath policies include numa (default), round-robin and
   35: NUMA  [section in Policies]
   38: The NUMA policy selects the path closest to the NUMA node of the current CPU for
   39: I/O distribution. This policy maintains the nearest paths to each NUMA node
   42: When to use the NUMA policy:
   44: multi-processor systems, especially under NUMA architecture.
/linux-6.15/drivers/base/arch_numa.c
    3:  * NUMA support, based on the x86 implementation.
    9: #define pr_fmt(fmt) "NUMA: " fmt
   34: early_param("numa", numa_parse_early_param);
  135:  * We should set the numa node of cpu0 as soon as possible, because it  [in early_map_cpu_to_node()]
  243: pr_info("No NUMA configuration found\n");  [in numa_init()]
  261:  * dummy_numa_init() - Fallback dummy NUMA init
  263:  * Used if there's no underlying NUMA architecture, NUMA initialization
  264:  * fails, or NUMA is disabled on the command line.
  278: pr_info("NUMA disabled\n"); /* Forced off on command line. */  [in dummy_numa_init()]
  283: pr_err("NUMA init failed\n");  [in dummy_numa_init()]
  [all …]
/linux-6.15/Documentation/ABI/testing/sysfs-kernel-mm-numa
    1: What: /sys/kernel/mm/numa/
    4: Description: Interface for NUMA
    6: What: /sys/kernel/mm/numa/demotion_enabled
   14: characteristics instead of plain NUMA systems where
   19: is performed before swap. It may move data to a NUMA
/linux-6.15/arch/x86/mm/numa.c
    2: /* Common code for 32 and 64-bit NUMA */
   41: early_param("numa", numa_setup);
  196:  * dummy_numa_init - Fallback dummy NUMA init
  198:  * Used if there's no underlying NUMA architecture, NUMA initialization
  199:  * fails, or NUMA is disabled on the command line.
  207: numa_off ? "NUMA turned off" : "No NUMA configuration found");  [in dummy_numa_init()]
  218:  * x86_numa_init - Initialize NUMA
  220:  * Try each configured NUMA initialization method until one succeeds. The
  278:  * This means we skip cpu_to_node[] initialisation for NUMA
  280:  * for NUMA on a non NUMA box), which is OK as cpu_to_node[]
  [all …]
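Lines 218-220 describe trying each configured init method in turn. A toy, userspace-runnable rendering of that pattern; every function here is an invented placeholder (the real table wires up firmware- and parameter-driven methods, ending in the dummy fallback):

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical init methods; each returns 0 on success. */
static int firmware_numa_init(void) { return -1; }	/* e.g. no SRAT */
static int emulated_numa_init(void) { return -1; }	/* e.g. no numa=fake */

/* Fallback: one node covering all memory; must always succeed. */
static int dummy_numa_init(void)
{
	printf("No NUMA configuration found\n");
	return 0;
}

static int (*const numa_init_methods[])(void) = {
	firmware_numa_init,
	emulated_numa_init,
	dummy_numa_init,	/* last resort, never fails */
};

int main(void)
{
	for (size_t i = 0; i < sizeof(numa_init_methods) /
			       sizeof(numa_init_methods[0]); i++)
		if (numa_init_methods[i]() == 0)
			break;		/* first success wins */
	return 0;
}
```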
/linux-6.15/arch/arm64/boot/dts/hisilicon/hip07.dtsi
  273: numa-node-id = <0>;
  282: numa-node-id = <0>;
  291: numa-node-id = <0>;
  300: numa-node-id = <0>;
  309: numa-node-id = <0>;
  318: numa-node-id = <0>;
  327: numa-node-id = <0>;
  336: numa-node-id = <0>;
  345: numa-node-id = <0>;
  354: numa-node-id = <0>;
  [all …]
/linux-6.15/include/linux/sched/sd_flags.h
   56:  * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
   64:  * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
   80:  * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
  131:  * SHARED_PARENT: Set for all NUMA levels above NODE. Could be set from a
  149:  * Set up until domains start spanning NUMA nodes. Close to being a SHARED_CHILD
  159:  * SHARED_PARENT: Set for all NUMA levels above NODE.
  167:  * SHARED_PARENT: Set for all NUMA levels above NODE.
/linux-6.15/mm/numa_emulation.c
    3:  * NUMA emulation
   10: #include <asm/numa.h>
   57: pr_err("NUMA: Too many emulated memblks, failing emulation\n");  [in emu_setup_memblk()]
   99: pr_info("numa=fake=%d too large, reducing to %d\n",  [in split_nodes_interleave()]
  121: "NUMA emulation disabled.\n");  [in split_nodes_interleave()]
  348:  * numa_emulation - Emulate NUMA nodes
  349:  * @numa_meminfo: NUMA configuration to massage
  350:  * @numa_dist_cnt: The size of the physical NUMA distance table
  352:  * Emulate NUMA nodes according to the numa=fake kernel parameter.
  364:  * - NUMA distance table is rebuilt to represent distances between emulated
  [all …]
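A rough, runnable illustration of the even split behind numa=fake=N that split_nodes_interleave() performs; the flat, hole-free memory model is an assumption (the real code carves up struct numa_meminfo blocks and respects holes):

```c
#include <stdio.h>

/*
 * Toy model of numa=fake=N: divide [0, total_bytes) into N emulated
 * nodes of roughly equal size, giving any remainder to the last node.
 */
static void split_into_fake_nodes(unsigned long long total_bytes, int nr_nodes)
{
	unsigned long long chunk = total_bytes / nr_nodes;

	for (int nid = 0; nid < nr_nodes; nid++) {
		unsigned long long start = (unsigned long long)nid * chunk;
		unsigned long long end =
			nid == nr_nodes - 1 ? total_bytes : start + chunk;

		printf("faking node %d at [%#llx-%#llx)\n", nid, start, end);
	}
}

int main(void)
{
	split_into_fake_nodes(2ULL << 30, 4);	/* numa=fake=4 on a 2 GiB box */
	return 0;
}
```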
/linux-6.15/kernel/Kconfig.hz
   12: beneficial for servers and NUMA systems that do not need to have
   23: 100 Hz is a typical choice for servers, SMP and NUMA systems
   32: on SMP and NUMA systems. If you are going to be using NTSC video
   40: on SMP and NUMA systems and exactly dividing by both PAL and
/linux-6.15/arch/x86/kernel/setup_percpu.c
   50:  * pcpu_need_numa - determine whether percpu allocation needs to consider NUMA
   52:  * If NUMA is not configured or there is only one NUMA node available,
   53:  * there is no reason to consider NUMA. This function determines
   54:  * whether percpu allocation should consider NUMA or not.
   57:  * true if NUMA should be considered; otherwise, false.
  122:  * however, on NUMA configurations, it can result in very  [in setup_per_cpu_areas()]
/linux-6.15/tools/perf/pmu-events/arch/x86/icelakex/other.json
  113: …fetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. …
  123: … on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
  153: …a reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. …
  163: …ta reads that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. …
  203: … on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
  213: … on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
  243: …FETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. …
  253: …EFETCHW) that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. …
  283: … on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
  293: … on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
  [all …]
/linux-6.15/arch/powerpc/mm/numa.c
    3:  * pSeries NUMA support
    7: #define pr_fmt(fmt) "numa: " fmt
   98:  * Modify node id, iff we started creating NUMA nodes  [in fake_numa_create_new_node()]
  188:  * Returns nid in the range [0..nr_node_ids], or -1 if no useful NUMA
  261: /* Double the distance for each NUMA level */  [in __node_distance()]
  363:  * With FORM2 we expect NUMA distance of all possible NUMA  [in update_numa_distance()]
  367: "NUMA distance details for node %d not provided\n", nid);  [in update_numa_distance()]
  372:  * ibm,numa-lookup-index-table= {N, domainid1, domainid2, ..... domainidN}
  373:  * ibm,numa-distance-table = { N, 1, 2, 4, 5, 1, 6, .... N elements}
  391: numa_lookup_index = of_get_property(root, "ibm,numa-lookup-index-table", NULL);  [in initialize_form2_numa_distance_lookup_table()]
  [all …]
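Lines 372-373 show the two Form 2 properties: a lookup table mapping domain IDs to matrix indices, and a flattened N x N distance table. A sketch of the lookup they enable, with the array handling invented for illustration (the real code fetches both properties via of_get_property(), as at line 391):

```c
#include <stddef.h>

/* Find @domainid in the lookup table, returning its matrix index. */
static int domain_to_index(const unsigned int *lookup, size_t n,
			   unsigned int domainid)
{
	for (size_t i = 0; i < n; i++)
		if (lookup[i] == domainid)
			return (int)i;
	return -1;	/* unknown domain */
}

/*
 * Sketch of a Form 2 distance query: index the flattened n x n
 * distance matrix by the two domains' lookup-table positions.
 */
static int form2_distance(const unsigned int *lookup,
			  const unsigned int *dist, size_t n,
			  unsigned int dom_a, unsigned int dom_b)
{
	int ia = domain_to_index(lookup, n, dom_a);
	int ib = domain_to_index(lookup, n, dom_b);

	if (ia < 0 || ib < 0)
		return -1;
	return (int)dist[ia * n + ib];	/* row-major matrix */
}
```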
/linux-6.15/arch/riscv/kernel/acpi_numa.c
    3:  * ACPI 6.6 based NUMA setup for RISCV
   18: #define pr_fmt(fmt) "ACPI: NUMA: " fmt
   29: #include <asm/numa.h>
   71:  * so we do not need a NUMA mapping for it, skip  [in acpi_parse_rintc_pxm()]
   90:  * In ACPI, SMP and CPU NUMA information is provided in separate  [in acpi_map_cpus_to_nodes()]
/linux-6.15/arch/s390/include/asm/numa.h
    3:  * NUMA support for s390
    5:  * Declare the NUMA core code structures and functions.
   15: #include <linux/numa.h>
/linux-6.15/Documentation/networking/multi-pf-netdev.rst
   35: Passing traffic through different devices belonging to different NUMA sockets saves cross-NUMA
   47: the correct close NUMA node when working on a certain app/CPU.
   60: We distribute the channels between the different PFs to achieve local NUMA node performance
   61: on multiple NUMA nodes.
  136: NUMA node(s): 2
  137: NUMA node0 CPU(s): 0-11
  138: NUMA node1 CPU(s): 12-23
/linux-6.15/drivers/infiniband/hw/hfi1/affinity.c
    9: #include <linux/numa.h>
   30: /* Per NUMA node count of HFI devices */
  172:  * Invalid PCI NUMA node information found, note it, and populate  [in node_affinity_init()]
  175: pr_err("HFI: Invalid PCI NUMA node. Performance may be affected\n");  [in node_affinity_init()]
  507:  * local NUMA node.  [in _dev_comp_vect_cpu_mask_init()]
  606:  * If this is the first time this NUMA node's affinity is used,  [in hfi1_dev_affinity_init()]
 1036:  * 1. Same NUMA node as HFI Y and not running an IRQ  [in hfi1_get_proc_affinity()]
 1038:  * 2. Same NUMA node as HFI Y and running an IRQ handler  [in hfi1_get_proc_affinity()]
 1039:  * 3. Different NUMA node to HFI Y and not running an IRQ  [in hfi1_get_proc_affinity()]
 1041:  * 4. Different NUMA node to HFI Y and running an IRQ  [in hfi1_get_proc_affinity()]
  [all …]
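The comment block at lines 1036-1041 ranks candidate CPUs in four tiers. A compact sketch of that preference order with the inputs abstracted to booleans (the arrays and helpers are hypothetical, not hfi1 code):

```c
#include <stdbool.h>

/*
 * Rank a candidate CPU per the order quoted above; lower is better.
 * Tier 1: same NUMA node, no IRQ handler; tier 4: different node and
 * running an IRQ handler.
 */
static int cpu_preference_rank(bool same_numa_node, bool running_irq)
{
	if (same_numa_node)
		return running_irq ? 2 : 1;
	return running_irq ? 4 : 3;
}

/* Pick the best-ranked CPU out of a candidate set. */
static int pick_best_cpu(const bool *same_node, const bool *has_irq,
			 int nr_cpus)
{
	int best = -1, best_rank = 5;

	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		int rank = cpu_preference_rank(same_node[cpu], has_irq[cpu]);

		if (rank < best_rank) {
			best_rank = rank;
			best = cpu;
		}
	}
	return best;
}
```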