
==================
NUMA Memory Policy
==================

What is NUMA Memory Policy?
===========================

In the Linux kernel, "memory policy" determines from which node the kernel
will allocate memory in a NUMA system or in an emulated NUMA system. Linux
has supported platforms with Non-Uniform Memory Access architectures since
the 2.4 series. The current memory policy support was added to Linux 2.6
around May 2004. This document attempts to describe the concepts and APIs of
the 2.6 memory policy support.

Memory policies should not be confused with cpusets
(``Documentation/admin-guide/cgroup-v1/cpusets.rst``), which is an
administrative mechanism for restricting the nodes from which memory may be
allocated by a set of processes. Memory policies are a programming interface
that a NUMA-aware application can take advantage of. When both cpusets and
policies are applied to a task, the restrictions of the cpuset take
priority. See :ref:`Memory Policies and cpusets <mem_pol_and_cpusets>` below
for more details.

Memory Policy Concepts
======================

Scope of Memory Policies
------------------------

The Linux kernel supports _scopes_ of memory policy, described here from
most general to most specific:

System Default Policy
  this policy is "hard coded" into the kernel. It governs all page
  allocations that aren't controlled by one of the more specific policy
  scopes discussed below. When the system is up and running, the system
  default policy uses "local allocation", described below. During boot up,
  however, the system default policy will be set to interleave allocations
  across all nodes with "sufficient" memory, so as not to overload the
  initial boot node with boot-time allocations.

Task/Process Policy
  this is an optional, per-task policy. When defined for a specific task,
  this policy controls all page allocations made by or on behalf of the
  task that aren't controlled by a more specific scope. If a task does not
  define a task policy, then all page allocations that would have been
  controlled by the task policy "fall back" to the System Default Policy.

  The task policy applies to the entire address space of a task. Thus, it
  is inheritable, and indeed is inherited, across both fork() [clone()
  without the CLONE_VM flag] and exec*(). This allows a parent task to
  establish the task policy for a child task exec()'d from an executable
  image that has no awareness of memory policy. See the
  :ref:`Memory Policy APIs <memory_policy_apis>` section, below, for an
  overview of the system calls that a task may use to set/change its
  task/process policy.

  In a multi-threaded task, task policies apply only to the thread [Linux
  kernel task] that installs the policy and to any threads subsequently
  created by that thread. Sibling threads that already exist when a new
  task policy is installed retain their current policy.

  A task policy applies only to pages allocated after the policy is
  installed. Any pages already faulted in by the task remain where they
  were allocated, based on the policy that was in effect at the time of
  allocation.

VMA Policy
  A "VMA" or "Virtual Memory Area" refers to a range of a task's virtual
  address space. A task may define a specific policy for a range of its
  virtual address space. See the
  :ref:`Memory Policy APIs <memory_policy_apis>` section, below, for an
  overview of the mbind() system call used to set a VMA policy.

  A VMA policy governs the allocation of pages that back this region of
  the address space. Any regions of the task's address space that don't
  have an explicit VMA policy fall back to the task policy, which may
  itself fall back to the System Default Policy.

  VMA policies have a few complicating details:

  * VMA policy applies ONLY to anonymous pages. These include pages
    allocated for anonymous segments, such as the task stack and heap, and
    any regions of the address space mmap()ed with the MAP_ANONYMOUS flag.
    If a VMA policy is applied to a file mapping, it will be ignored if
    the mapping used the MAP_SHARED flag. If the file mapping used the
    MAP_PRIVATE flag, the VMA policy will only be applied when an
    anonymous page is allocated on an attempt to write to the
    mapping--i.e., at Copy-On-Write.

  * VMA policies are shared between all tasks that share a virtual address
    space--a.k.a. threads--independent of when the policy is installed,
    and they are inherited across fork(). However, because VMA policies
    refer to a specific region of a task's address space, and because that
    address space is discarded and recreated on exec*(), VMA policies are
    NOT inheritable across exec(). Thus, only NUMA-aware applications may
    use VMA policies.

  * A task may install a new VMA policy on a sub-range of a previously
    mmap()ed region. When this happens, Linux splits the existing virtual
    memory area into 2 or 3 VMAs, each with its own policy.

  * By default, VMA policy applies only to pages allocated after the
    policy is installed. Any pages already faulted into the VMA range
    remain where they were allocated, based on the policy in effect at
    allocation time. However, since 2.6.16, Linux supports page migration
    via the mbind() system call, so that page contents can be moved to
    match a newly installed policy, as sketched in the example that
    follows this list.

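  A minimal user-space sketch of the migration path mentioned in the last
  item, assuming a two-node system and calling mbind() through raw
  syscall(2) rather than through the wrapper library mentioned later (the
  node numbers and mapping size are arbitrary)::

      /* Re-policy an already-populated anonymous range and ask the
       * kernel to migrate its existing pages to match (MPOL_MF_MOVE). */
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <sys/syscall.h>
      #include <unistd.h>
      #include <linux/mempolicy.h>

      int main(void)
      {
              size_t len = 64 * 4096;
              unsigned long nodemask = (1UL << 0) | (1UL << 1); /* nodes 0-1 */
              char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

              if (p == MAP_FAILED)
                      return 1;
              memset(p, 0, len);   /* fault pages in under the old policy */

              /* Install an Interleave VMA policy and migrate the pages
               * already faulted in so that they conform to it. */
              if (syscall(__NR_mbind, p, len, MPOL_INTERLEAVE, &nodemask,
                          8 * sizeof(nodemask), MPOL_MF_MOVE) != 0)
                      perror("mbind");
              return 0;
      }
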
Shared Policy
  Conceptually, shared policies apply to "memory objects" mapped shared
  into one or more tasks' distinct address spaces. An application installs
  shared policies the same way as VMA policies--using the mbind() system
  call specifying a range of virtual addresses that map the shared object.
  However, unlike VMA policies, which can be considered to be an attribute
  of a range of a task's address space, shared policies apply directly to
  the shared object. Thus, all tasks that attach to the object share the
  policy, and all pages allocated for the object, by any task, will obey
  the shared policy.

  As of 2.6.22, only shared memory segments, created by shmget() or
  mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy. When shared
  policy support was added to Linux, the associated data structures were
  added to hugetlbfs shmem segments. At the time, hugetlbfs did not
  support allocation at fault time--a.k.a. lazy allocation--so hugetlbfs
  shmem segments were never "hooked up" to the shared policy support.

  Although the shared policy infrastructure supports different policies on
  subset ranges of a shared object, Linux still splits the installing
  task's VMA for each range of distinct policy. Thus, different tasks that
  attach to a shared memory segment can have different VMA configurations
  mapping that one shared object. This can be seen by examining the
  /proc/<pid>/numa_maps of tasks sharing a shared memory region, when one
  task has installed shared policy on one or more ranges of the region.

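  As an illustration of the mechanism described above, the hedged sketch
  below installs an Interleave shared policy on a SysV shared memory
  segment via mbind(). It assumes nodes 0-1 exist and uses raw syscall(2)
  rather than the wrapper library; any task that later attaches the
  segment shares the policy for the segment's pages::

      /* Create a shared memory segment and give it a shared policy. */
      #include <stdio.h>
      #include <sys/ipc.h>
      #include <sys/shm.h>
      #include <sys/syscall.h>
      #include <unistd.h>
      #include <linux/mempolicy.h>

      int main(void)
      {
              size_t len = 8 * 1024 * 1024;
              unsigned long nodemask = (1UL << 0) | (1UL << 1); /* nodes 0-1 */
              int id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
              void *p;

              if (id < 0)
                      return 1;
              p = shmat(id, NULL, 0);
              if (p == (void *)-1)
                      return 1;

              /* Because this mapping is a shared memory segment, the
               * policy attaches to the segment itself, not merely to
               * this task's VMA. */
              if (syscall(__NR_mbind, p, len, MPOL_INTERLEAVE, &nodemask,
                          8 * sizeof(nodemask), 0) != 0)
                      perror("mbind");
              return 0;
      }
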
Components of Memory Policies
-----------------------------

A NUMA memory policy consists of a "mode", optional mode flags, and an
optional set of nodes. The mode determines the behavior of the policy, the
optional mode flags determine the behavior of the mode, and the optional set
of nodes can be viewed as the arguments to the policy behavior.

Internally, memory policies are implemented by a reference-counted
structure, struct mempolicy. Details of this structure will be discussed in
context, below, as required to explain the behavior.

NUMA memory policy supports the following behavioral modes:

Default Mode--MPOL_DEFAULT
  This mode is only used in the memory policy APIs. Internally,
  MPOL_DEFAULT is converted to the NULL memory policy in all policy
  scopes. Any existing non-default policy will simply be removed when
  MPOL_DEFAULT is specified. As a result, MPOL_DEFAULT means "fall back to
  the next most specific policy scope."

  For example, a NULL or default task policy will fall back to the system
  default policy, and a NULL or default VMA policy will fall back to the
  task policy.

  When specified in one of the memory policy APIs, the Default mode does
  not use the optional set of nodes; it is an error for the set of nodes
  specified for this policy to be non-empty.

MPOL_BIND
  This mode specifies that memory must come from the set of nodes
  specified by the policy. Memory will be allocated from the node in the
  set with sufficient free memory that is closest to the node where the
  allocation takes place.

MPOL_PREFERRED
  This mode specifies that the allocation should first be attempted from
  the node specified in the policy; if that allocation fails, the kernel
  falls back to other nodes. Internally, the Preferred policy uses a
  single node--the preferred node supplied when the policy was installed.

  It is possible for the user to specify that local allocation is always
  preferred by passing an empty nodemask with this mode, as in the sketch
  below. If an empty nodemask is passed, the policy cannot use the
  MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES flags described below.

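  A minimal sketch of that empty-nodemask case, using raw syscall(2); the
  kernel interprets a NULL/empty nodemask with MPOL_PREFERRED as a request
  for "local allocation"::

      /* Prefer the node of the CPU that performs each allocation. */
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/mempolicy.h>

      int main(void)
      {
              if (syscall(__NR_set_mempolicy, MPOL_PREFERRED, NULL, 0) != 0)
                      perror("set_mempolicy");
              return 0;
      }
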
MPOL_INTERLEAVE
  This mode specifies that page allocations be interleaved, on a page
  granularity, across the nodes specified in the policy; the details
  differ slightly depending on the context of the allocation.

  For allocation of anonymous pages and shared memory pages, Interleave
  mode indexes the set of nodes specified by the policy using the page
  offset of the faulting address into the segment [VMA] containing the
  address, modulo the number of nodes specified by the policy. It then
  attempts to allocate a page starting at the node selected by that
  computation, as illustrated by the sketch below.

  For allocation of page cache pages, Interleave mode indexes the set of
  nodes using a node counter maintained per task. This counter wraps
  around to the lowest specified node after it reaches the highest
  specified node. This will tend to spread the pages out over the nodes
  specified by the policy in the order in which they are allocated, rather
  than based on any page offset into an address range or file.

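  A small user-space illustration of the offset computation described
  above (a model of the arithmetic only, not kernel code; it assumes
  4 KiB pages and a policy over nodes 0-2)::

      #include <stdio.h>

      #define PAGE_SHIFT 12   /* assume 4 KiB pages for the illustration */

      /* Pick a node for the page that backs fault_addr inside a VMA. */
      static int interleave_node(unsigned long vma_start,
                                 unsigned long fault_addr,
                                 const int *nodes, int nr_nodes)
      {
              unsigned long page_off = (fault_addr - vma_start) >> PAGE_SHIFT;

              return nodes[page_off % nr_nodes];
      }

      int main(void)
      {
              const int nodes[] = { 0, 1, 2 };   /* policy nodemask */
              unsigned long vma = 0x700000000000UL;

              for (int i = 0; i < 6; i++)
                      printf("page %d -> node %d\n", i,
                             interleave_node(vma,
                                             vma + ((unsigned long)i << PAGE_SHIFT),
                                             nodes, 3));
              return 0;
      }
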
MPOL_PREFERRED_MANY
  This mode specifies that the allocation should preferably be satisfied
  from the nodemask specified in the policy. If there is memory pressure
  on all nodes in the nodemask, the allocation can fall back to all
  existing NUMA nodes. This is effectively MPOL_PREFERRED allowed for a
  mask rather than a single node.

MPOL_WEIGHTED_INTERLEAVE
  This mode behaves like MPOL_INTERLEAVE, except that pages are
  distributed according to per-node weights configured under
  /sys/kernel/mm/mempolicy/weighted_interleave/.

  Weighted interleave allocates pages on nodes according to a weight. For
  example, with weights Node0=2 and Node1=3, two pages will be allocated
  on Node0 for every three pages allocated on Node1.

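  A tiny model of that ratio (an illustration of the resulting
  distribution only, not kernel code; the weights are the Node0=2/Node1=3
  example above)::

      #include <stdio.h>

      int main(void)
      {
              const int node[]   = { 0, 1 };
              const int weight[] = { 2, 3 };   /* per-node interleave weights */
              int cur = 0, left = weight[0];

              for (int page = 0; page < 10; page++) {
                      printf("page %d -> node %d\n", page, node[cur]);
                      if (--left == 0) {       /* this node's weight used up */
                              cur = (cur + 1) % 2;
                              left = weight[cur];
                      }
              }
              return 0;
      }
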
NUMA memory policy supports the following optional mode flags:

MPOL_F_STATIC_NODES
  This flag specifies that the nodemask passed by the user should not be
  remapped if the task or VMA's set of allowed nodes changes after the
  memory policy has been defined.

  Without this flag, any time a mempolicy is rebound because of a change
  in the set of allowed nodes, the policy's nodemask is remapped to the
  new set of allowed nodes. This may result in nodes being used that were
  previously undesired.

  With this flag, if the user-specified nodes overlap with the nodes
  allowed by the task's cpuset, then the memory policy is applied to
  their intersection. If the two sets of nodes do not overlap, the
  Default policy is used.

  For example, consider a task that is attached to a cpuset with mems 1-3
  that sets an Interleave policy over the same set. If the cpuset's mems
  change to 3-5, the interleave would normally now occur over nodes 3, 4,
  and 5. With this flag, however, since only node 3 remains from the
  user's original nodemask, the "interleave" occurs over that node alone.
  If no nodes from the user's nodemask are allowed any longer, the
  Default behavior is used.

MPOL_F_RELATIVE_NODES
  This flag specifies that the nodemask passed by the user will be mapped
  relative to the task's or VMA's set of allowed nodes. The kernel stores
  the user-passed nodemask, and if the set of allowed nodes changes, that
  original nodemask is remapped relative to the new set of allowed nodes.

  Without this flag (and without MPOL_F_STATIC_NODES), any time a
  mempolicy is rebound because of a change in the set of allowed nodes,
  the policy's nodemask is remapped to the new set of allowed nodes. That
  remap may not preserve the relative nature of the user's passed
  nodemask with respect to its set of allowed nodes across successive
  rebinds: a nodemask of 1,3,5 may be remapped to 7-9 and then to 1-3 if
  the set of allowed nodes is restored to its original state.

  With this flag, the remap is done so that the node numbers from the
  user's passed nodemask are relative to the set of allowed nodes: the
  nodemask passed by the user represents node positions relative to the
  task's or VMA's set of allowed nodes.

  If the user's nodemask includes nodes that lie outside the range of the
  new set of allowed nodes (for example, node 5 is set in the user's
  nodemask when the set of allowed nodes is only 0-3), then the remap
  wraps around to the beginning of the nodemask and, if not already set,
  sets the corresponding node in the mempolicy nodemask.

  For example, consider a task that is attached to a cpuset with mems 2-5
  that sets an Interleave policy over the same set with
  MPOL_F_RELATIVE_NODES. If the cpuset's mems change to 3-7, the
  interleave now occurs over nodes 3,5-7. If the cpuset's mems then
  change to 0,2-3,5, the interleave occurs over nodes 0,2-3,5.

  Thanks to the consistent remapping, applications preparing nodemasks to
  specify memory policies using this flag should disregard their current,
  actual cpuset-imposed memory placement and prepare the nodemask as if
  they were always located on memory nodes 0 to N-1, where N is the
  number of memory nodes the policy is intended to manage. The kernel
  then remaps the nodemask onto the set of memory nodes allowed by the
  task's cpuset, as that set may change over time.

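  A minimal sketch of that preparation, using raw syscall(2) and an
  arbitrary choice of N = 4 relative nodes; the kernel folds the mask onto
  whatever nodes the cpuset actually allows::

      #include <stdio.h>
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <linux/mempolicy.h>

      int main(void)
      {
              int N = 4;                               /* nodes to manage */
              unsigned long nodemask = (1UL << N) - 1; /* relative nodes 0..N-1 */

              if (syscall(__NR_set_mempolicy,
                          MPOL_INTERLEAVE | MPOL_F_RELATIVE_NODES,
                          &nodemask, 8 * sizeof(nodemask)) != 0)
                      perror("set_mempolicy");
              return 0;
      }
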
Memory Policy Reference Counting
================================

To resolve use/free races, struct mempolicy contains an atomic reference
count field. Internal interfaces, mpol_get()/mpol_put(), increment and
decrement this reference count, respectively. mpol_put() will only free the
structure back to the mempolicy kmem cache when the reference count goes to
zero.

When a new memory policy is allocated, its reference count is initialized to
'1', representing the reference held by the task that is installing the new
policy. When a pointer to a memory policy structure is stored in another
structure, another reference is added, as the task's reference will be
dropped when the policy installation completes.

During run-time "usage" of the policy, we attempt to minimize atomic operations
on the reference count, as this can lead to cache lines bouncing between cpus
and NUMA nodes. "Usage" here means one of the following:

1) querying of the policy, either by the task itself [using the
   get_mempolicy() API discussed below] or by another task via the
   /proc/<pid>/numa_maps interface.

2) examination of the policy to determine the policy mode and associated node
   or node lists, if any, for page allocation.

3) allocation of a page based on the policy, e.g. when the Interleave or
   BIND policy nodemask is used, by reference, to filter ineligible nodes.

We can avoid taking an extra reference during the usages listed above as
follows:

1) we never need to get/free the system default policy as this is never
   changed nor freed once the system is up and running.

2) for querying the policy, we do not need to take an extra reference on the
   target task's task policy nor VMA policies because the query runs while
   holding the task's mmap_lock for read, whereas the set_mempolicy() and
   mbind() APIs always take the mmap_lock for write when installing or
   replacing a task or VMA policy. Thus, a policy cannot be freed while
   another task or thread is querying it.

3) page allocation based on a task or VMA policy happens in the fault path,
   where the mmap_lock is likewise held for read, so the same reasoning
   applies: the policy cannot be freed out from under the allocation.

4) shared policies require special consideration. One task can replace a
   shared memory policy while another task, with a distinct mmap_lock, is
   querying or allocating a page based on the policy. To resolve this
   potential race, the shared policy infrastructure adds an extra reference
   to the shared policy during lookup while holding a spin lock on the shared
   policy management structure. That extra reference must then be dropped
   when the caller has finished "using" the policy, in the same
   query/allocation paths used for non-shared policies. For this reason,
   shared policies are marked as such, and the extra reference is dropped
   "conditionally"--i.e., only for shared policies.

   Because of this extra reference counting, shared policies are more
   expensive to use in the page allocation path. This is especially true for
   shared policies on shared memory regions shared by tasks running on
   different NUMA nodes. This extra overhead can be avoided by always
   falling back to task or system default policy for shared memory regions,
   or by prefaulting the entire shared memory region into memory and locking
   it down. However, this might not be appropriate for all applications.

.. _memory_policy_apis:

Memory Policy APIs
==================

Linux supports 4 system calls for controlling memory policy. These APIs
always affect only the calling task, the calling task's address space, or
some shared object mapped into the calling task's address space.

Set [Task] Memory Policy::

    long set_mempolicy(int mode, const unsigned long *nmask,
                       unsigned long maxnode);

Sets the calling task's "task/process memory policy" to the mode specified
by the 'mode' argument and the set of nodes defined by 'nmask'. 'nmask'
points to a bit mask of node ids containing at least 'maxnode' ids. Optional
mode flags may be passed by OR-ing them into the 'mode' argument (for
example, MPOL_INTERLEAVE | MPOL_F_STATIC_NODES).

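The user-space wrappers for these calls normally come from the library
packaged with numactl (see the command line interface section below); the
hedged sketch here sidesteps that dependency by invoking the raw system call
and assumes nodes 0 and 1 exist and are allowed by the task's cpuset::

    /* Interleave this task's future allocations across nodes 0-1,
     * keeping the nodemask static across cpuset rebinds. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/mempolicy.h>   /* MPOL_* mode and flag definitions */

    int main(void)
    {
            unsigned long nodemask = (1UL << 0) | (1UL << 1);  /* nodes 0-1 */

            if (syscall(__NR_set_mempolicy,
                        MPOL_INTERLEAVE | MPOL_F_STATIC_NODES,
                        &nodemask, 8 * sizeof(nodemask)) != 0) {
                    perror("set_mempolicy");
                    return 1;
            }
            /* Pages faulted in from here on are interleaved across 0-1. */
            return 0;
    }
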
Get [Task] Memory Policy or Related Information::

    long get_mempolicy(int *mode,
                       const unsigned long *nmask, unsigned long maxnode,
                       void *addr, int flags);

Queries the "task/process memory policy" of the calling task, or the policy
or location of a specified virtual address, depending on the 'flags'
argument.

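For instance, a hedged sketch of the "location of a specified virtual
address" case, again via raw syscall(2); it assumes the queried page has
already been faulted in::

    /* Ask which node backs a freshly touched page, then report the
     * calling task's current policy mode. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/mempolicy.h>

    int main(void)
    {
            int node = -1, mode = -1;
            char *addr = malloc(4096);

            if (!addr)
                    return 1;
            addr[0] = 1;    /* fault the page in so it has a home node */

            if (syscall(__NR_get_mempolicy, &node, NULL, 0, addr,
                        MPOL_F_NODE | MPOL_F_ADDR) == 0)
                    printf("page at %p is on node %d\n", (void *)addr, node);

            if (syscall(__NR_get_mempolicy, &mode, NULL, 0, NULL, 0) == 0)
                    printf("task policy mode: %d (0 == MPOL_DEFAULT)\n", mode);

            free(addr);
            return 0;
    }
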
Install VMA/Shared Policy for a Range of Task's Address Space::

    long mbind(void *start, unsigned long len, int mode,
               const unsigned long *nmask, unsigned long maxnode,
               unsigned flags);

mbind() installs the policy specified by (mode, nmask, maxnode) as a VMA
policy for the range of the calling task's address space specified by the
'start' and 'len' arguments. Additional actions, such as migration of
existing pages, may be requested via the 'flags' argument.

Set home node for a Range of Task's Address Space::

    long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
                                     unsigned long home_node,
                                     unsigned long flags);

sys_set_mempolicy_home_node() sets the home node for a VMA policy present in
the task's address range. The system call updates the home node only for
existing mempolicy ranges; other address ranges are ignored. A home node is
the NUMA node closest to which page allocations will come from. Specifying
the home node overrides the default allocation policy to allocate memory
close to the local node for an executing CPU.

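A hedged sketch combining the two calls above: it binds a mapped range to
nodes 0-1 and then marks node 1 as its home node. It assumes a kernel and
headers new enough to define __NR_set_mempolicy_home_node (Linux 5.17 or
later) and that nodes 0-1 exist::

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <linux/mempolicy.h>

    int main(void)
    {
            size_t len = 16 * 4096;
            unsigned long nodemask = (1UL << 0) | (1UL << 1);  /* nodes 0-1 */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED)
                    return 1;

            /* VMA policy: pages for [p, p+len) must come from nodes 0-1. */
            if (syscall(__NR_mbind, p, len, MPOL_BIND, &nodemask,
                        8 * sizeof(nodemask), 0) != 0)
                    perror("mbind");

            /* Prefer node 1 as the starting point for those allocations. */
            if (syscall(__NR_set_mempolicy_home_node,
                        (unsigned long)p, len, 1UL, 0UL) != 0)
                    perror("set_mempolicy_home_node");

            return 0;
    }
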
Memory Policy Command Line Interface
====================================

Although not strictly part of the Linux implementation of memory policy,
a command line tool, numactl(8), exists that allows one to:

+ set the task policy for a specified program via set_mempolicy(2), fork(2)
  and exec(2)

+ set the shared policy for a shared memory segment via mbind(2)

The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers. Some distributions
package the headers and compile-time libraries in a separate development
package.

.. _mem_pol_and_cpusets:

Memory Policies and cpusets
===========================

Memory policies work within cpusets as described above. For memory policies
that require a node or set of nodes, the nodes are restricted to the set of
nodes whose memories are allowed by the cpuset constraints. If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset
and MPOL_F_RELATIVE_NODES is not used, the intersection of the set of nodes
specified for the policy and the set of nodes with memory is used. If the
result is the empty set, the policy is considered invalid and cannot be
installed.

The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory
segments created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED
flags. If any of the tasks install shared policy on the region, only nodes
whose memories are allowed in both cpusets may be used in the policies.
Obtaining this information requires "stepping outside" the memory policy
APIs to use the cpuset information, and requires that one know in what
cpusets other tasks might be attaching to the shared region. Furthermore, if
the cpusets' allowed memory sets are disjoint, "local" allocation is the
only valid policy.