
Searched +full:memory +full:- +full:to +full:- +full:memory (Results 1 – 25 of 1275) sorted by relevance


/linux-5.10/Documentation/admin-guide/mm/
memory-hotplug.rst  4 Memory Hotplug
10 This document is about memory hotplug, including how to use it and its current status.
11 Because Memory Hotplug is still under development, contents of this text will
18 (1) x86_64 has a special implementation for memory hotplug.
26 Purpose of memory hotplug
27 -------------------------
29 Memory Hotplug allows users to increase/decrease the amount of memory.
32 (A) For changing the amount of memory.
33 This is to allow a feature like capacity on demand.
34 (B) For installing/removing DIMMs or NUMA-nodes physically.
[all …]
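The onlining this guide describes is driven through sysfs. A minimal userspace sketch, assuming a hypothetical block ``memory32`` exists on the running system (block names vary per machine)::

  /* online_block.c - bring a hot-added memory block online via sysfs */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      /* memory32 is a placeholder; list /sys/devices/system/memory/ first */
      const char *path = "/sys/devices/system/memory/memory32/state";
      int fd = open(path, O_WRONLY);

      if (fd < 0) {
          perror("open");
          return 1;
      }
      /* Writing "online" asks the kernel to online the whole block */
      if (write(fd, "online", strlen("online")) < 0)
          perror("write");
      close(fd);
      return 0;
  }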
concepts.rst  7 The memory management in Linux is a complex system that evolved over the
8 years and included more and more functionality to support a variety of
9 systems from MMU-less microcontrollers to supercomputers. The memory
14 address to a physical address.
18 Virtual Memory Primer
21 The physical memory in a computer system is a limited resource and
22 even for systems that support memory hotplug there is a hard limit on
23 the amount of memory that can be installed. The physical memory is not
29 All this makes dealing directly with physical memory quite complex and
30 to avoid this complexity a concept of virtual memory was developed.
[all …]
numaperf.rst  7 Some platforms may have multiple types of memory attached to a compute
8 node. These disparate memory ranges may share some characteristics, such
12 A system supports such heterogeneous memory by grouping each memory type
14 characteristics. Some memory may share the same node as a CPU, and others
15 are provided as memory only nodes. While memory only nodes do not provide
16 CPUs, they may still be local to one or more compute nodes relative to
18 nodes with local memory and a memory only node for each of compute node::
20 +------------------+ +------------------+
21 | Compute Node 0 +-----+ Compute Node 1 |
23 +--------+---------+ +--------+---------+
[all …]
numa_memory_policy.rst  4 NUMA Memory Policy
7 What is NUMA Memory Policy?
10 In the Linux kernel, "memory policy" determines from which node the kernel will
11 allocate memory in a NUMA system or in an emulated NUMA system. Linux has
12 supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
13 The current memory policy support was added to Linux 2.6 around May 2004. This
14 document attempts to describe the concepts and APIs of the 2.6 memory policy
17 Memory policies should not be confused with cpusets
18 (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
20 memory may be allocated by a set of processes. Memory policies are a
[all …]
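A hedged userspace sketch of the policy API this document covers, binding the calling task's future allocations to node 0 with ``set_mempolicy(2)`` (``<numaif.h>`` comes from libnuma; link with ``-lnuma``)::

  /* bind_node0.c - restrict this task's page allocations to NUMA node 0 */
  #include <numaif.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
      unsigned long nodemask = 1UL;   /* bit 0 set: node 0 only */

      /* maxnode is how many bits of the mask the kernel reads (+1 for safety) */
      if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask) + 1)) {
          perror("set_mempolicy");
          return 1;
      }
      char *p = malloc(1 << 20);
      if (p) {
          memset(p, 0, 1 << 20);      /* fault pages in under the new policy */
          free(p);
      }
      return 0;
  }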
/linux-5.10/Documentation/admin-guide/cgroup-v1/
memory.rst  2 Memory Resource Controller
8 here but make sure to check the current code if you need a deeper
12 The Memory Resource Controller has generically been referred to as the
13 memory controller in this document. Do not confuse memory controller
14 used here with the memory controller that is used in hardware.
17 When we mention a cgroup (cgroupfs's directory) with memory controller,
18 we call it "memory cgroup". When you see git-log and source code, you'll
19 see that patch titles and function names tend to use "memcg".
22 Benefits and Purpose of the memory controller
25 The memory controller isolates the memory behaviour of a group of tasks
[all …]
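As a sketch of the isolation knob this controller exposes, the following caps a group at 256 MiB; the mount point and the ``demo`` group are assumptions, not part of the document::

  /* cap_demo.c - set a v1 memory cgroup's hard limit */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      /* assumes the cgroup-v1 memory controller is mounted here and
       * that a group named "demo" already exists */
      const char *knob = "/sys/fs/cgroup/memory/demo/memory.limit_in_bytes";
      const char *limit = "268435456";      /* 256 MiB */
      int fd = open(knob, O_WRONLY);

      if (fd < 0) {
          perror("open");
          return 1;
      }
      if (write(fd, limit, strlen(limit)) < 0)
          perror("write");
      close(fd);
      return 0;
  }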
/linux-5.10/Documentation/vm/
memory-model.rst  1 .. SPDX-License-Identifier: GPL-2.0
6 Physical Memory Model
9 Physical memory in a system may be addressed in different ways. The
10 simplest case is when the physical memory starts at address 0 and
11 spans a contiguous range up to the maximal address. It could be,
15 different memory banks are attached to different CPUs.
17 Linux abstracts this diversity using one of the three memory models:
19 memory models it supports, what the default memory model is and
20 whether it is possible to manually override that default.
26 All the memory models track the status of physical page frames using
[all …]
hmm.rst  4 Heterogeneous Memory Management (HMM)
7 Provide infrastructure and helpers to integrate non-conventional memory (device
8 memory like GPU on board memory) into regular kernel path, with the cornerstone
9 of this being specialized struct page for such memory (see sections 5 to 7 of
12 HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
13 allowing a device to transparently access program addresses coherently with
15 for the device. This is becoming mandatory to simplify the use of advanced
16 heterogeneous computing where GPU, DSP, or FPGA are used to perform various
20 related to using device specific memory allocators. In the second section, I
21 expose the hardware limitations that are inherent to many platforms. The third
[all …]
numa.rst  14 or more CPUs, local memory, and/or IO buses. For brevity and to
19 Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
20 of the system--although some components necessary for a stand-alone SMP system
22 connected together with some sort of system interconnect--e.g., a crossbar or
23 point-to-point link are common types of NUMA system interconnects. Both of
24 these types of interconnects can be aggregated to create NUMA platforms with
28 Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
29 to and accessible from any CPU attached to any cell and cache coherency
32 Memory access time and effective memory bandwidth varies depending on how far
33 away the cell containing the CPU or IO bus making the memory access is from the
[all …]
/linux-5.10/mm/
Kconfig  1 # SPDX-License-Identifier: GPL-2.0-only
3 menu "Memory Management options"
10 prompt "Memory model"
16 This option allows you to change some of the ways that
17 Linux manages its memory internally. Most users will
22 bool "Flat Memory"
25 This option is best suited for non-NUMA systems with
31 spaces and for features like NUMA and memory hotplug,
32 choose "Sparse Memory".
34 If unsure, choose this option (Flat Memory) over any other.
[all …]
/linux-5.10/Documentation/core-api/
memory-hotplug.rst  4 Memory hotplug
7 Memory hotplug event notifier
10 Hotplugging events are sent to a notification queue.
12 There are six types of notification defined in ``include/linux/memory.h``:
15 Generated before new memory becomes available in order to be able to
16 prepare subsystems to handle memory. The page allocator is still unable
17 to allocate from the new memory.
23 Generated when memory has been successfully brought online. The callback may
24 allocate pages from the new memory.
27 Generated to begin the process of offlining memory. Allocations are no
[all …]
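A minimal kernel-module sketch of hooking this notification queue, assuming CONFIG_MEMORY_HOTPLUG; ``hotplug_memory_notifier()`` and the MEM_* actions come from ``include/linux/memory.h``::

  #include <linux/init.h>
  #include <linux/memory.h>
  #include <linux/module.h>
  #include <linux/notifier.h>

  static int demo_mem_callback(struct notifier_block *self,
                               unsigned long action, void *arg)
  {
      struct memory_notify *mn = arg;

      switch (action) {
      case MEM_GOING_ONLINE:
          /* the page allocator cannot use the new range yet */
          pr_info("demo: %lu pages going online\n", mn->nr_pages);
          break;
      case MEM_ONLINE:
          pr_info("demo: memory online, allocation now possible\n");
          break;
      case MEM_OFFLINE:
          pr_info("demo: memory offlined\n");
          break;
      }
      return NOTIFY_OK;
  }

  static int __init demo_init(void)
  {
      return hotplug_memory_notifier(demo_mem_callback, 0);
  }
  module_init(demo_init);
  MODULE_LICENSE("GPL");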
memory-allocation.rst  4 Memory Allocation Guide
7 Linux provides a variety of APIs for memory allocation. You can
11 `alloc_pages`. It is also possible to use more specialized allocators,
14 Most of the memory allocation APIs use GFP flags to express how that
15 memory should be allocated. The GFP acronym stands for "get free
16 pages", the underlying memory allocation function.
19 makes the question "How should I allocate memory?" not that easy to
32 The GFP flags control the allocator's behavior. They tell what memory
33 zones can be used, how hard the allocator should try to find free
34 memory, whether the memory can be accessed by userspace, etc. The
[all …]
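A short in-kernel sketch of the most common pattern this guide describes: process context where sleeping is allowed, so plain ``GFP_KERNEL`` is the right request::

  #include <linux/errno.h>
  #include <linux/slab.h>

  static int demo_alloc(void)
  {
      /* GFP_KERNEL: may sleep, may start reclaim, ordinary kernel memory */
      char *buf = kmalloc(4096, GFP_KERNEL);

      if (!buf)
          return -ENOMEM;
      /* ... use buf ... */
      kfree(buf);
      return 0;
  }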
bus-virt-phys-mapping.rst  2 How to access I/O mapped memory from within device drivers
11 (see :doc:`/core-api/dma-api-howto`). They continue
12 to be documented below for historical purposes, but new code
13 must not use them. --davidm 00/12/12
17 [ This is a mail message in response to a query on IO mapping, thus the
20 The AHA-1542 is a bus-master device, and your patch makes the driver give the
22 (because all bus master devices see the physical memory mappings directly).
25 at memory addresses, and in this case we actually want the third, the
26 so-called "bus address".
28 Essentially, the three ways of addressing memory are (this is "real memory",
[all …]
/linux-5.10/Documentation/powerpc/
firmware-assisted-dump.rst  2 Firmware-Assisted Dump
7 The goal of firmware-assisted dump is to enable the dump of
8 a crashed system, and to do so from a fully-reset system, and
9 to minimize the total elapsed time until the system is back
12 - Firmware-Assisted Dump (FADump) infrastructure is intended to replace
14 - Fadump uses the same firmware interfaces and memory reservation model
16 - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
19 - Unlike phyp dump, the userspace tool does not need to refer to any sysfs
21 - Unlike phyp dump, FADump allows user to release all the memory reserved
23 - Once enabled through kernel boot parameter, FADump can be
[all …]
/linux-5.10/Documentation/ABI/testing/
sysfs-devices-memory  1 What: /sys/devices/system/memory
5 The /sys/devices/system/memory contains a snapshot of the
6 internal state of the kernel memory blocks. Files could be
7 added or removed dynamically to represent hot-add/remove
9 Users: hotplug memory add/remove tools
10 http://www.ibm.com/developerworks/wikis/display/LinuxP/powerpc-utils
12 What: /sys/devices/system/memory/memoryX/removable
16 The file /sys/devices/system/memory/memoryX/removable
17 indicates whether this memory block is removable or not.
18 This is useful for a user-level agent to determine
[all …]
/linux-5.10/Documentation/devicetree/bindings/reserved-memory/
reserved-memory.txt  1 *** Reserved memory regions ***
3 Reserved memory is specified as a node under the /reserved-memory node.
4 The operating system shall exclude reserved memory from normal usage
6 normal use) memory regions. Such memory regions are usually designed for
9 Parameters for each memory region can be encoded into the device tree
12 /reserved-memory node
13 ---------------------
14 #address-cells, #size-cells (required) - standard definition
15 - Should use the same values as the root node
16 ranges (required) - standard definition
[all …]
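On the consumer side, a hedged driver-probe sketch: if the device node carries a ``memory-region`` phandle to one of these regions, ``of_reserved_mem_device_init()`` attaches it (the ``demo_probe`` name is illustrative)::

  #include <linux/of_reserved_mem.h>
  #include <linux/platform_device.h>

  static int demo_probe(struct platform_device *pdev)
  {
      /* Attach the region referenced by this device's "memory-region"
       * property so coherent DMA allocations can be served from it. */
      int ret = of_reserved_mem_device_init(&pdev->dev);

      if (ret)
          dev_warn(&pdev->dev, "no reserved region attached (%d)\n", ret);
      return 0;
  }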
/linux-5.10/drivers/xen/
Kconfig  1 # SPDX-License-Identifier: GPL-2.0-only
6 bool "Xen memory balloon driver"
9 The balloon driver allows the Xen domain to request more memory from
10 the system to expand the domain's memory allocation, or alternatively
11 return unneeded memory to the system.
14 bool "Memory hotplug support for Xen balloon driver"
18 Memory hotplug support for Xen balloon driver allows expanding memory
23 It's also very useful for non-PV domains to obtain unpopulated physical
24 memory ranges to use in order to map foreign memory or grants.
26 Memory could be hotplugged in following steps:
[all …]
/linux-5.10/include/linux/
tee_drv.h  1 /* SPDX-License-Identifier: GPL-2.0-only */
3 * Copyright (c) 2015-2016, Linaro Limited
19 * The file describes the API provided by the generic TEE driver to the
23 #define TEE_SHM_MAPPED BIT(0) /* Memory mapped by the kernel */
24 #define TEE_SHM_DMA_BUF BIT(1) /* Memory with dma-buf handle */
25 #define TEE_SHM_EXT_DMA_BUF BIT(2) /* Memory with dma-buf handle */
26 #define TEE_SHM_REGISTER BIT(3) /* Memory registered in secure world */
27 #define TEE_SHM_USER_MAPPED BIT(4) /* Memory mapped in user space */
28 #define TEE_SHM_POOL BIT(5) /* Memory allocated from pool */
29 #define TEE_SHM_KERNEL_MAPPED BIT(6) /* Memory mapped in kernel space */
[all …]
/linux-5.10/Documentation/userspace-api/media/v4l/
dev-mem2mem.rst  1 .. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
6 Video Memory-To-Memory Interface
9 A V4L2 memory-to-memory device can compress, decompress, transform, or
10 otherwise convert video data from one format into another format, in memory.
11 Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
12 ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
14 converting from YUV to RGB).
16 A memory-to-memory video node acts just like a normal video node, but it
17 supports both output (sending frames from memory to the hardware)
19 memory) stream I/O. An application will have to setup the stream I/O for
[all …]
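A small userspace sketch of identifying such a device: query the capabilities and test the two M2M flags (``/dev/video0`` is a placeholder node)::

  #include <fcntl.h>
  #include <linux/videodev2.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      struct v4l2_capability cap = {0};
      int fd = open("/dev/video0", O_RDWR);   /* placeholder device node */

      if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
          perror("VIDIOC_QUERYCAP");
          return 1;
      }
      if (cap.device_caps & (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE))
          printf("%s is a memory-to-memory device\n", (char *)cap.card);
      close(fd);
      return 0;
  }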
/linux-5.10/Documentation/dev-tools/
kmemleak.rst  1 Kernel Memory Leak Detector
4 Kmemleak provides a way of detecting possible kernel memory leaks in a
5 way similar to a `tracing garbage collector
9 Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in
10 user-space applications.
13 -----
15 CONFIG_DEBUG_KMEMLEAK in "Kernel hacking" has to be enabled. A kernel
16 thread scans the memory every 10 minutes (by default) and prints the
20 # mount -t debugfs nodev /sys/kernel/debug/
22 To display the details of all the possible scanned memory leaks::
[all …]
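Kmemleak also exports annotations for code that would otherwise trigger false positives; a brief in-kernel sketch using ``kmemleak_not_leak()`` from ``include/linux/kmemleak.h``::

  #include <linux/kmemleak.h>
  #include <linux/slab.h>

  static void *demo_alloc_for_device(void)
  {
      void *buf = kmalloc(256, GFP_KERNEL);

      /* The only reference will live in device registers, so the scanner
       * would report this object; tell kmemleak it is not a leak. */
      if (buf)
          kmemleak_not_leak(buf);
      return buf;
  }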
kasan.rst  5 --------
7 KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to
8 find out-of-bound and use-after-free bugs. KASAN has two modes: generic KASAN
9 (similar to userspace ASan) and software tag-based KASAN (similar to userspace
12 KASAN uses compile-time instrumentation to insert validity checks before every
13 memory access, and therefore requires a compiler version that supports that.
17 out-of-bounds accesses for global variables is only supported since Clang 11.
19 Tag-based KASAN is only supported in Clang.
22 riscv architectures, and tag-based KASAN is supported only for arm64.
25 -----
[all …]
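A classic bug of the kind KASAN reports, as a sketch: with CONFIG_KASAN=y the store below shows up as a slab-out-of-bounds write::

  #include <linux/slab.h>

  static noinline void demo_oob(void)
  {
      char *p = kmalloc(8, GFP_KERNEL);

      if (!p)
          return;
      p[8] = 'x';   /* one byte past the allocation: KASAN fires here */
      kfree(p);
  }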
/linux-5.10/Documentation/driver-api/
ntb.rst  5 NTB (Non-Transparent Bridge) is a type of PCI-Express bridge chip that connects
6 the separate memory systems of two or more computers to the same PCI-Express
8 registers and memory translation windows, as well as non common features like
9 scratchpad and message registers. Scratchpad registers are read-and-writable
13 special status bits to make sure the information isn't rewritten by another
14 peer. Doorbell registers provide a way for peers to send interrupt events.
15 Memory windows allow translated read and write access to the peer memory.
21 clients interested in NTB features to discover the NTB devices supported by
22 hardware drivers. The term "client" is used here to mean an upper layer
24 is used here to mean a driver for a specific vendor and model of NTB hardware.
[all …]
edac.rst  5 ----------------------------------------
7 There are several things to be aware of that aren't at all obvious, like
8 *sockets*, *socket sets*, *banks*, *rows*, *chip-select rows*, *channels*,
16 * Memory devices
18 The individual DRAM chips on a memory stick. These devices commonly
20 provides the number of bits that the memory controller expects:
21 typically 72 bits, in order to provide 64 bits + 8 bits of ECC data.
23 * Memory Stick
25 A printed circuit board that aggregates multiple memory devices in
28 called DIMM (Dual Inline Memory Module).
[all …]
/linux-5.10/drivers/gpu/drm/nouveau/nvkm/core/
memory.c  4 * Permission is hereby granted, free of charge, to any person obtaining a
6 * to deal in the Software without restriction, including without limitation
7 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
8 * and/or sell copies of the Software, and to permit persons to whom the
9 * Software is furnished to do so, subject to the following conditions:
15 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
24 #include <core/memory.h>
30 nvkm_memory_tags_put(struct nvkm_memory *memory, struct nvkm_device *device, in nvkm_memory_tags_put() argument
33 struct nvkm_fb *fb = device->fb; in nvkm_memory_tags_put()
36 mutex_lock(&fb->subdev.mutex); in nvkm_memory_tags_put()
[all …]
/linux-5.10/arch/powerpc/kexec/
file_load_64.c  1 // SPDX-License-Identifier: GPL-2.0-only
3 * ppc64 code to implement the kexec_file_load syscall
12 * Based on kexec-tools' kexec-ppc64.c, kexec-elf-rel-ppc64.c, fs2dt.c.
26 #include <asm/crashdump-ppc64.h>
29 u64 *buf; /* data buffer for usable-memory property */
34 /* usable memory ranges to look up */
45 * get_exclude_memory_ranges - Get exclude memory ranges. This list includes
46 * regions like opal/rtas, tce-table, initrd,
49 * @mem_ranges: Range list to add the memory ranges to.
85 /* exclude memory ranges should be sorted for easy lookup */ in get_exclude_memory_ranges()
[all …]
/linux-5.10/include/uapi/linux/
nitro_enclaves.h  1 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
16 * NE_CREATE_VM - The command is used to create a slot that is associated with
20 * setting any resources, such as memory and vCPUs, for an
21 * enclave. Memory and vCPUs are set for the slot mapped to an enclave.
22 * A NE CPU pool has to be set before calling this function. The
25 * Its format is detailed in the cpu-lists section:
26 * https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
27 * CPU 0 and its siblings have to remain available for the
29 * CPU core(s), from the same NUMA node, need(s) to be included
34 * * Enclave file descriptor - Enclave file descriptor used with
[all …]
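A hedged userspace sketch of the first step this comment describes: creating the slot via NE_CREATE_VM on the Nitro Enclaves device and receiving the enclave file descriptor::

  #include <fcntl.h>
  #include <linux/nitro_enclaves.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      __u64 slot_uid = 0;
      int ne_fd = open("/dev/nitro_enclaves", O_RDWR | O_CLOEXEC);

      if (ne_fd < 0) {
          perror("open /dev/nitro_enclaves");
          return 1;
      }
      /* On success this returns a new enclave fd; memory and vCPUs are
       * then set on the slot through further ioctls on that fd. */
      int enclave_fd = ioctl(ne_fd, NE_CREATE_VM, &slot_uid);
      if (enclave_fd < 0)
          perror("NE_CREATE_VM");
      close(ne_fd);
      return 0;
  }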
