/linux/Documentation/admin-guide/mm/

memory-hotplug.rst:
  Memory Hot(Un)Plug. This document describes generic Linux support for memory hot(un)plug … Memory hot(un)plug allows for increasing and decreasing the size of physical memory available to a machine at runtime. In the simplest case, it consists of … Memory hot(un)plug is used for various purposes: the physical memory available to a machine can be adjusted at runtime, up- or downgrading the memory capacity. This dynamic memory resizing, sometimes … example is replacing failing memory modules. … Reducing energy consumption either by physically unplugging memory modules …
concepts.rst:
  The memory management in Linux is a complex system that evolved over the … systems from MMU-less microcontrollers to supercomputers. The memory … Virtual Memory Primer: The physical memory in a computer system is a limited resource, and even for systems that support memory hotplug there is a hard limit on the amount of memory that can be installed. The physical memory is not … All this makes dealing directly with physical memory quite complex, and to avoid this complexity the concept of virtual memory was developed. The virtual memory abstraction …
numaperf.rst:
  NUMA Memory Performance. Some platforms may have multiple types of memory attached to a compute node. These disparate memory ranges may share some characteristics, such … A system supports such heterogeneous memory by grouping each memory type … characteristics. Some memory may share the same node as a CPU, and other memory is provided as memory-only nodes. While memory-only nodes do not provide … nodes with local memory and a memory-only …
/linux/tools/testing/selftests/memory-hotplug/

mem-on-off-test.sh:
  if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then
      echo $msg memory hotplug is not supported >&2
  …
  if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then
      echo $msg no hot-pluggable memory >&2
  …
  # list all hot-pluggable memory
  …
  for memory in $SYSFS/devices/system/memory/memory*; do …
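The probing done by the selftest above can be sketched as a standalone, read-only script. This is a minimal sketch assuming only the standard sysfs memory-hotplug layout quoted in the snippet; it reports a notice rather than failing on kernels or environments without hotplug support:

```shell
#!/bin/sh
# Minimal sketch of the probing done by mem-on-off-test.sh: verify the
# sysfs memory-block directory exists, then report each block's
# removability and current state. Read-only; safe to run anywhere.
SYSFS=${SYSFS:-/sys}
BLOCKS="$SYSFS/devices/system/memory"

if ! ls "$BLOCKS"/memory* >/dev/null 2>&1; then
    echo "memory hotplug is not supported here" >&2
else
    for m in "$BLOCKS"/memory*/; do
        name=$(basename "$m")
        removable=$(cat "$m/removable" 2>/dev/null || echo '?')
        state=$(cat "$m/state" 2>/dev/null || echo '?')
        echo "$name removable=$removable state=$state"
    done
fi
```

On a machine with hotplug support this prints one line per memory block, e.g. `memory32 removable=1 state=online`; actually offlining a block (writing `offline` to its `state` file) additionally requires root and a removable block.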
/linux/Documentation/admin-guide/cgroup-v1/

memory.rst:
  Memory Resource Controller. The Memory Resource Controller has generically been referred to as the memory controller in this document. Do not confuse the memory controller used here with the memory controller that is used in hardware. … When we mention a cgroup (cgroupfs directory) with the memory controller, we call it a "memory cgroup". When you see git-log and source code, you'll … Benefits and Purpose of the memory controller: The memory controller isolates the memory behaviour …
/linux/Documentation/ABI/testing/

sysfs-devices-memory:
  What: /sys/devices/system/memory
  Description: /sys/devices/system/memory contains a snapshot of the internal state of the kernel memory blocks. Files could be …
  Users: hotplug memory add/remove tools
  What: /sys/devices/system/memory/memoryX/removable
  Description: /sys/devices/system/memory/memoryX/removable is a legacy interface used to indicate whether a memory block is … "1" if and only if the kernel supports memory offlining.
  Users: hotplug memory remove tools
  What: /sys/devices/system/memory/memory…
sysfs-edac-memory-repair:
  … pertains to control of the memory media repair features, such as PPR (Post Package Repair), memory sparing, etc., where <dev-name> … device driver for the memory repair features. Post Package Repair is a maintenance operation that requests the memory device to perform a repair operation on its media. It is a memory self-healing feature that fixes a failing memory location by … A CXL memory device with DRAM components that support PPR features may … decoders have been configured), memory devices (e.g. CXL) … physical address map. As such, the memory to repair must be … (RO) Memory repair…
/linux/Documentation/mm/

memory-model.rst:
  Physical Memory Model. Physical memory in a system may be addressed in different ways. The simplest case is when the physical memory starts at address 0 and … different memory banks are attached to different CPUs. Linux abstracts this diversity using one of the two memory models: … memory models it supports, what the default memory model is, and … All the memory models track the status of physical page frames using … Regardless of the selected memory model, there exists a one-to-one … Each memory model…
hmm.rst:
  Heterogeneous Memory Management (HMM). Provides infrastructure and helpers to integrate non-conventional memory (device memory like GPU on-board memory) into regular kernel paths, with the cornerstone of this being specialized struct pages for such memory (see sections 5 to 7 of …). HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e., … related to using device-specific memory allocators. In the second section, I … the fifth section deals with how device memory is represented inside the kernel. … Problems of using a device-specific memory allocator: Devices with a large amount of on-board memory (several…
numa.rst:
  … or more CPUs, local memory, and/or IO buses. For brevity and to … Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible … Memory access time and effective memory bandwidth vary depending on how far away the cell containing the CPU or IO bus making the memory access is from the cell containing the target memory. For example, access to memory by CPUs … bandwidths than accesses to memory on other, remote cells. NUMA platforms … memory bandwidth. However, to achieve scalable memory bandwidth…
/linux/Documentation/devicetree/bindings/memory-controllers/fsl/

fsl,ddr.yaml:
  $id: http://devicetree.org/schemas/memory-controllers/fsl/fsl,ddr.yaml#
  title: Freescale DDR memory controller
  node name pattern: "^memory-controller@[0-9a-f]+$"
  compatible: one of fsl,qoriq-memory-controller-v4.4, -v4.5, -v4.7, -v5.0 (followed by const fsl,qoriq-memory-controller); or fsl,bsc9132-memory-controller, fsl,mpc8536-memory…
/linux/Documentation/edac/

memory_repair.rst:
  EDAC Memory Repair Control. Some memory devices support repair operations to address issues in their memory media. Post Package Repair (PPR) and memory sparing are examples of … Post Package Repair is a maintenance operation which requests the memory device to perform a repair operation on its media. It is a memory self-healing feature that fixes a failing memory location by replacing it with a spare row … For example, a CXL memory device with DRAM components that support PPR … The data may not be retained and memory requests may not be correctly … For example, for CXL memory devices…
/linux/drivers/cxl/

Kconfig:
  … memory targets, the CXL.io protocol is equivalent to PCI Express. … The CXL specification defines a "CXL memory device" sub-class in the PCI "memory controller" base class of devices. Devices identified by … memory to be mapped into the system address map (Host-managed Device Memory (HDM)). Say 'y/m' to enable a driver that will attach to CXL memory expander devices enumerated by the memory device class code for configuration … bool "RAW Command Interface for Memory Devices" … potential impact to memory currently in use by the kernel. … Enable support for host-managed device memory (HDM…
/linux/Documentation/arch/arm64/

kdump.rst:
  crashkernel memory reservation on arm64. … reserved memory is needed to pre-load the kdump kernel and boot such … That reserved memory for kdump is adapted to be able to minimally … Through the kernel parameters below, memory can be reserved accordingly … a large chunk of memory can be found. The low memory reservation needs to be considered if the crashkernel is reserved from the high memory area. … Low memory and high memory: for kdump reservations, low memory is the memory area…
/linux/Documentation/core-api/

memory-hotplug.rst:
  Memory hotplug. Memory hotplug event notifier … Memory notifier: there are six types of notification defined in include/linux/memory.h: … Generated before new memory becomes available, in order to be able to prepare subsystems to handle memory. The page allocator is still unable to allocate from the new memory. … Generated when memory has been successfully brought online. The callback may allocate pages from the new memory. … Generated to begin the process of offlining memory…
/linux/tools/testing/selftests/cgroup/

test_memcontrol.c:
  … the memory controller. … /* Create two nested cgroups with the memory controller enabled */ … cg_write(parent, "cgroup.subtree_control", "+memory") … cg_read_strstr(child, "cgroup.controllers", "memory") … /* Create two nested cgroups without enabling the memory controller */ … !cg_read_strstr(child2, "cgroup.controllers", "memory") … current = cg_read_long(cgroup, "memory.current"); … anon = cg_read_key_long(cgroup, "memory.stat", "anon "); … current = cg_read_long(cgroup, "memory.current"); … file = cg_read_key_long(cgroup, "memory…
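The cg_write()/cg_read() helpers in the test above operate on plain cgroupfs files, so the relationship being tested can be sketched from the shell. This read-only sketch assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup (the conventional location, not guaranteed everywhere) and degrades gracefully where it is absent:

```shell
#!/bin/sh
# Read-only sketch of what test_memcg_subtree_control() exercises:
# when "+memory" is written to a parent's cgroup.subtree_control,
# its children list "memory" in their own cgroup.controllers file.
CG=/sys/fs/cgroup

if [ ! -f "$CG/cgroup.controllers" ]; then
    echo "cgroup v2 is not mounted here" >&2
else
    echo "root controllers:      $(cat "$CG/cgroup.controllers")"
    echo "delegated to children: $(cat "$CG/cgroup.subtree_control")"
    # memory.current exists only in non-root cgroups; show one if present.
    for f in "$CG"/*/memory.current; do
        if [ -f "$f" ]; then
            echo "$(dirname "$f") uses $(cat "$f") bytes"
            break
        fi
    done
fi
```

Creating cgroups and writing "+memory" yourself, as the C test does, requires root; the reads above do not.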
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/

memory.json:
  PublicDescription: Counts memory accesses issued by the CPU load store unit, where those accesses are issued due to load or store operations. This event counts memory accesses no matter whether the data is received from any level of the cache hierarchy or external memory. If memory accesses are broken up into smaller transactions than what was specified in the load or store instructions, then the event counts those smaller memory transactions. … Counts any detected correctable or uncorrectable physical memory errors (ECC or parity) in protected CPU RAMs. On the core, this event counts errors in the caches (including data and tag RAMs). Any detected memory error (from either a speculative and abandoned access, or an architecturally executed access) is counted. Note that errors are only detected when the actual protected memory is accessed by an operation. … Counts memory accesses issued by the CPU due to load operations. The event counts any memory load…
/linux/drivers/base/

memory.c:
  Memory subsystem support … a SPARSEMEM-memory-model system's physical memory in /sysfs. … #include <linux/memory.h> … #define MEMORY_CLASS_NAME "memory" … Memory blocks are cached in a local radix tree to avoid … Memory groups, indexed by memory group id (mgid). … memory_block_advise_max_size() - advise memory hotplug on the max suggested … /* Show the memory block … */ … __add_memory_block(struct memory_block *memory) … remove_memory_block(struct memory_block *memory) …
/linux/Documentation/driver-api/cxl/platform/

bios-and-efi.rst:
  … BIOS/EFI create the system memory map (EFI Memory Map, E820, etc.) … static memory map configuration. More detail on these tables can be found … on physical memory region size and alignment, memory holes, HDM interleave, … When this is enabled, this bit tells Linux to defer management of a memory region to a driver (in this case, the CXL driver). Otherwise, the memory is treated as "normal memory" and is exposed to the page allocator during … `Memory Attribute` field. This may be called something else on your platform. … :code:`uefisettings get "CXL Memory Attribute"` …
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/

memory.json:
  PublicDescription: Counts memory accesses issued by the CPU load store unit, where those accesses are issued due to load or store operations. This event counts memory accesses no matter whether the data is received from any level of the cache hierarchy or external memory. If memory accesses are broken up into smaller transactions than what was specified in the load or store instructions, then the event counts those smaller memory transactions. … Counts memory accesses issued by the CPU due to load operations. The event counts any memory load access, no matter whether the data is received from any level of the cache hierarchy or external memory. The event also counts atomic load operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions. … Counts memory accesses…
/linux/Documentation/driver-api/cxl/linux/

memory-hotplug.rst:
  Memory Hotplug. The final phase of surfacing CXL memory to the kernel page allocator is for the DAX driver to surface a `Driver Managed` memory region via the memory-hotplug component. … 2) Hotplug Memory Block size 3) Memory Map Resource location 4) Driver-Managed Memory Designation … The default-online behavior of hotplug memory is dictated by the following: … the /sys/devices/system/memory/auto_online_blocks value … These dictate whether hotplugged memory blocks…
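The policy knobs listed above can be inspected directly. A read-only sketch, using the standard sysfs paths quoted in the document and exiting quietly where memory hotplug is absent:

```shell
#!/bin/sh
# Read-only sketch: inspect the settings that decide whether hotplugged
# (e.g. CXL/DAX-surfaced) memory blocks come online automatically, and
# at what block granularity the kernel hotplugs memory.
MEMDIR=/sys/devices/system/memory

if [ ! -d "$MEMDIR" ]; then
    echo "no memory-hotplug sysfs directory on this system" >&2
else
    # Default online policy: offline, online, online_kernel or online_movable.
    if [ -f "$MEMDIR/auto_online_blocks" ]; then
        echo "auto_online_blocks: $(cat "$MEMDIR/auto_online_blocks")"
    fi
    # Memory block size in bytes, printed by the kernel in hex.
    if [ -f "$MEMDIR/block_size_bytes" ]; then
        echo "block size: 0x$(cat "$MEMDIR/block_size_bytes") bytes"
    fi
fi
```

Changing auto_online_blocks (e.g. `echo online > .../auto_online_blocks`) requires root and affects every subsequently hotplugged block, driver-managed or not.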
/linux/drivers/gpu/drm/nouveau/nvkm/core/

memory.c:
  #include <core/memory.h> … nvkm_memory_tags_put(struct nvkm_memory *memory, struct nvkm_device *device, …) … kfree(memory->tags); memory->tags = NULL; … nvkm_memory_tags_get(struct nvkm_memory *memory, struct nvkm_device *device, …) … if ((tags = memory->tags)) { /* If comptags exist for the memory, but a different amount … */ … /* As memory can be mapped in multiple places, we still … */ … *ptags = memory->tags = tags; … nvkm_memory_ctor(…, struct nvkm_memory *memory) … nvkm_memory_del(): struct nvkm_memory *memory = container_of(kref, typeof(*memory), kref); … nvkm_memory_unref(): struct nvkm_memory *memory = *pmemory; … nvkm_memory_ref(struct nvkm_memory *memory) … nvkm_memory_new(): struct nvkm_memory *memory; …
/linux/Documentation/admin-guide/mm/damon/

reclaim.rst:
  … can be used for proactive and lightweight reclamation under light memory pressure. … to be selectively used for different levels of memory pressure and requirements. … On general memory-overcommitted systems, proactively reclaiming cold pages helps save memory and reduce the latency spikes incurred by direct … Free Pages Reporting [3]_-based memory-overcommit virtualization systems are … memory to the host, and the host reallocates the reported memory to other guests. As a result, the memory of the systems is fully utilized. However, the guests may not be so memory-frugal, mainly because some kernel subsystems and user-space applications are designed to use as much memory as…
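In practice DAMON_RECLAIM is switched on and tuned through module parameters under /sys/module/damon_reclaim/parameters/. A read-only sketch, assuming a kernel with DAMON_RECLAIM built (the knob names here, enabled and the wmarks_* watermarks, are the documented ones on recent kernels, but verify against your kernel's own reclaim.rst):

```shell
#!/bin/sh
# Read-only sketch: dump a few DAMON_RECLAIM module parameters when
# the module is present; report a notice otherwise.
P=/sys/module/damon_reclaim/parameters

if [ ! -d "$P" ]; then
    echo "DAMON_RECLAIM is not available on this kernel" >&2
else
    for knob in enabled min_age wmarks_high wmarks_mid wmarks_low; do
        if [ -f "$P/$knob" ]; then
            echo "$knob = $(cat "$P/$knob")"
        fi
    done
fi
```

Enabling it (`echo Y > .../enabled`) requires root; the watermarks let it activate only within the free-memory band where proactive reclaim is useful.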
/linux/Documentation/arch/powerpc/

firmware-assisted-dump.rst:
  … Fadump uses the same firmware interfaces and memory reservation model … Unlike phyp dump, FADump exports the memory dump through /proc/vmcore … Unlike phyp dump, FADump allows the user to release all the memory reserved … Once the dump is copied out, the memory that held the dump … The first kernel registers the sections of memory with the … These registered sections of memory are reserved by the first … low memory regions (boot memory) from source to destination area. … The term 'boot memory' means the size of the low memory chunk…
/linux/Documentation/driver-api/cxl/devices/

device-types.rst:
  The type of CXL device (Memory, Accelerator, etc.) dictates many configuration steps. This section … other than memory (CXL.mem) or cache (CXL.cache) operations. … The mechanism by which a device may coherently access and cache host memory. … The mechanism by which the CPU may coherently access and cache device memory. … Does NOT have host-managed device memory (HDM) … directly operate on host memory (DMA) to store incoming packets. These devices largely rely on CPU-attached memory. … Optionally implements coherent cache and Host-Managed Device Memory. … Is typically an accelerator device with high-bandwidth memory. … of host-managed device memory, which…