
==================
NUMA Memory Policy
==================

What is NUMA Memory Policy?
===========================

In the Linux kernel, "memory policy" determines from which node the kernel will
allocate memory in a NUMA system or in an emulated NUMA system. The current
memory policy support was added to Linux 2.6 around May 2004. This document
attempts to describe the concepts and APIs of the 2.6 memory policy support.

Memory policies should not be confused with cpusets
(Documentation/admin-guide/cgroup-v1/cpusets.rst),
which is an administrative mechanism for restricting the nodes from which
memory may be allocated by a set of processes. Memory policies are a
programming interface that a NUMA-aware application can take advantage of. When
both cpusets and policies are applied to a task, the restrictions of the cpuset
take priority.
Memory Policy Concepts
======================

Scope of Memory Policies
------------------------

The Linux kernel supports _scopes_ of memory policy, described here from most
general to most specific:

System Default Policy
    this policy is "hard coded" into the kernel. It is the policy that
    governs all page allocations that aren't controlled by one of the more
    specific policy scopes discussed below. When the system is "up and
    running", the system default policy will use "local allocation"
    described below. However, during boot up, the system default policy will
    be set to interleave allocations across all nodes with "sufficient"
    memory, so as not to overload the initial boot node with boot-time
    allocations.

Task/Process Policy
    this is an optional, per-task policy. When defined for a specific task,
    this policy controls all page allocations made by or on behalf of the
    task that aren't controlled by a more specific scope. If a task is not
    assigned a task policy, then all page allocations that would have been
    controlled by the task policy "fall back" to the System Default Policy.
    The task policy applies to the entire address space of a task. Thus, it
    is inheritable, and indeed is inherited, across both fork() [clone() w/o
    the CLONE_VM flag] and exec*(). This allows a parent task to establish
    the task policy for a child task exec()'d from an executable image that
    has no awareness of memory policy. See the Memory Policy APIs section,
    below, for an overview of the system call that a task may use to
    set/change its task/process policy.

    In a multi-threaded task, task policies apply only to the thread [Linux
    kernel task] that installs the policy and any threads subsequently
    created by that thread. Any sibling threads existing at the time a new
    task policy is installed retain their current policy.

    A task policy applies only to pages allocated after the policy is
    installed. Any pages already faulted in by the task when the task
    changes its task policy remain where they were allocated based on the
    policy at the time they were allocated.
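    For illustration, a minimal sketch of the parent-sets-policy-then-exec
    pattern described above, assuming the libnuma <numaif.h> wrapper for
    set_mempolicy() and a hypothetical program path; error handling
    elided::

        #include <numaif.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            unsigned long nodemask = 0x3;   /* interleave over nodes 0, 1 */

            if (fork() == 0) {
                /* Child: the task policy survives exec*(), so it will
                 * govern allocations of a NUMA-unaware program. */
                set_mempolicy(MPOL_INTERLEAVE, &nodemask,
                              sizeof(nodemask) * 8);
                execv("/path/to/numa-unaware-app", argv);  /* hypothetical */
            }
            return 0;
        }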
VMA Policy
    A "VMA" or "Virtual Memory Area" refers to a range of a task's virtual
    address space. A task may define a specific policy for a range of its
    virtual address space. See the Memory Policy APIs section, below, for an
    overview of the mbind() system call used to set a VMA policy.

    A VMA policy will govern the allocation of pages that back this region
    of the address space. Any regions of the task's address space that don't
    have an explicit VMA policy will fall back to the task policy, which may
    itself fall back to the System Default Policy.

    VMA policies have a few complicating details:

    * VMA policy applies ONLY to anonymous pages. These include pages
      allocated for anonymous segments, such as the task stack and heap, and
      any regions of the address space mmap()ed with the MAP_ANONYMOUS flag.
      If a VMA policy is applied to a file mapping, it will be ignored if
      the mapping used the MAP_SHARED flag. If the file mapping used the
      MAP_PRIVATE flag, the VMA policy will only be applied when an
      anonymous page is allocated on an attempt to write to the
      mapping--i.e., at Copy-On-Write.
    * VMA policies are shared between all tasks that share a virtual address
      space--a.k.a. threads--independent of when the policy is installed;
      and they are inherited across fork(). However, because VMA policies
      refer to a specific region of a task's address space, and because the
      address space is discarded and recreated on exec*(), VMA policies are
      NOT inheritable across exec(). Thus, only NUMA-aware applications may
      use VMA policies.

    * A task may install a new VMA policy on a sub-range of a previously
      mmap()ed region. When this happens, Linux splits the existing virtual
      memory area into 2 or 3 VMAs, each with its own policy.

    * By default, VMA policy applies only to pages allocated after the
      policy is installed. Any pages already faulted into the VMA range
      remain where they were allocated based on the policy at the time they
      were allocated. However, since 2.6.16, Linux supports page migration
      via the mbind() system call, so that page contents can be moved to
      match a newly installed policy.
Shared Policy
    Conceptually, shared policies apply to "memory objects" mapped shared
    into one or more tasks' distinct address spaces. An application installs
    shared policies the same way as VMA policies--using the mbind() system
    call specifying a range of virtual addresses that map the shared object.
    However, unlike VMA policies, which can be considered to be an attribute
    of a range of a task's address space, shared policies apply directly to
    the shared object. Thus, all tasks that attach to the object share the
    policy, and all pages allocated for the shared object, by any task, will
    obey the shared policy.
    As of 2.6.22, only shared memory segments, created by shmget() or
    mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy. When shared
    policy support was added to Linux, the associated data structures were
    added to hugetlbfs shmem segments. At the time, hugetlbfs did not
    support allocation at fault time--a.k.a. lazy allocation--so hugetlbfs
    shmem segments were never "hooked up" to the shared policy support.
    Although hugetlbfs segments now support lazy allocation, their support
    for shared policy has not been completed.

    As mentioned above in the VMA policies section, allocations of page
    cache pages for regular files mmap()ed with MAP_SHARED ignore any VMA
    policy installed on the virtual address range backed by the shared file
    mapping. Rather, shared page cache pages, including pages backing
    private mappings that have not yet been written by the task, follow task
    policy, if any, else System Default Policy.
    The shared policy infrastructure supports different policies on subset
    ranges of the shared object. However, Linux still splits the VMA of the
    task that installs the policy for each range of distinct policy. Thus,
    different tasks that attach to a shared memory segment can have
    different VMA configurations mapping that one shared object. This can be
    seen by examining the /proc/<pid>/numa_maps of tasks sharing a shared
    memory region, when one task has installed shared policy on one or more
    ranges of the region.
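    A minimal sketch of installing a shared policy on a SysV shared memory
    segment, assuming the libnuma <numaif.h> wrapper for mbind(); error
    handling elided::

        #include <numaif.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        #define SEG_SIZE (16UL * 1024 * 1024)

        int main(void)
        {
            int id = shmget(IPC_PRIVATE, SEG_SIZE, IPC_CREAT | 0600);
            void *seg = shmat(id, NULL, 0);
            unsigned long nodemask = 0x3;   /* nodes 0 and 1 */

            /* Because the mapping is shared, the policy attaches to the
             * segment itself: every task that attaches to it sees the
             * same policy. */
            mbind(seg, SEG_SIZE, MPOL_INTERLEAVE, &nodemask,
                  sizeof(nodemask) * 8, 0);
            return 0;
        }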
Components of Memory Policies
-----------------------------

A NUMA memory policy consists of a "mode", optional mode flags, and an
optional set of nodes. The mode determines the behavior of the policy, the
optional mode flags determine the behavior of the mode, and the optional set
of nodes can be viewed as the arguments to the policy behavior.

Internally, memory policies are implemented by a reference counted structure,
struct mempolicy. Details of this structure will be discussed in context,
below, as required to explain the behavior.
NUMA memory policy supports the following behavioral modes:
Default Mode--MPOL_DEFAULT
    This mode is only used in the memory policy APIs. Internally,
    MPOL_DEFAULT is converted to the NULL memory policy in all policy
    scopes. Any existing non-default policy will simply be removed when
    MPOL_DEFAULT is specified. This effectively means that MPOL_DEFAULT
    means "fall back to the next most specific policy scope."

    For example, a NULL or default task policy will fall back to the system
    default policy. A NULL or default vma policy will fall back to the task
    policy.

    When specified in one of the memory policy APIs, the Default mode does
    not use the optional set of nodes.

    It is an error for the set of nodes specified for this policy to be
    non-empty.
MPOL_BIND
    This mode specifies that memory must come from the set of nodes
    specified by the policy. Memory will be allocated from the node in the
    set with sufficient free memory that is closest to the node where the
    allocation takes place.
MPOL_PREFERRED
    This mode specifies that the allocation should be attempted from the
    single node specified in the policy. If that allocation fails, the
    kernel will search other nodes, in order of increasing distance from the
    preferred node based on information provided by the platform firmware.

    Internally, the Preferred policy uses a single node--the preferred_node
    member of struct mempolicy. When the internal mode flag MPOL_F_LOCAL is
    set, the preferred_node is ignored and the policy is interpreted as
    local allocation. "Local" allocation policy can be viewed as a Preferred
    policy that starts at the node containing the cpu where the allocation
    takes place.

    It is possible for the user to specify that local allocation is always
    preferred by passing an empty nodemask with this mode. If an empty
    nodemask is passed, the policy cannot use the MPOL_F_STATIC_NODES or
    MPOL_F_RELATIVE_NODES flags described below.
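    A minimal sketch of requesting local allocation this way, assuming the
    libnuma <numaif.h> wrapper; per set_mempolicy(2), a NULL nodemask with
    maxnode 0 is treated as empty::

        #include <numaif.h>

        /* Prefer the node local to the allocating CPU from now on. */
        int use_local_allocation(void)
        {
            return set_mempolicy(MPOL_PREFERRED, NULL, 0);
        }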
MPOL_INTERLEAVED
    This mode specifies that page allocations be interleaved, on a page
    granularity, across the nodes specified in the policy. This mode also
    behaves slightly differently, based on the context where it is used:

    For allocation of anonymous pages and shared memory pages, Interleave
    mode indexes the set of nodes specified by the policy using the page
    offset of the faulting address into the segment [VMA] containing the
    address modulo the number of nodes specified by the policy. It then
    attempts to allocate a page, starting at the selected node, as if the
    node had been specified by a Preferred policy or had been selected by a
    local allocation. That is, allocation will follow the per node zonelist.

    For allocation of page cache pages, Interleave mode indexes the set of
    nodes specified by the policy using a node counter maintained per task.
    This counter wraps around to the lowest specified node after it reaches
    the highest specified node. This will tend to spread the pages out over
    the nodes specified by the policy based on the order in which they are
    allocated, rather than based on any page offset into an address range or
    file. During system boot up, the temporary interleaved system default
    policy works in this mode.
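    Illustrative arithmetic only, not kernel code: which node an anonymous
    page lands on under the page-offset rule above, for a hypothetical
    helper interleave_node()::

        /* nodes[] holds the policy's node ids in ascending order. */
        unsigned int interleave_node(unsigned long fault_addr,
                                     unsigned long vma_start,
                                     const unsigned int *nodes,
                                     unsigned int nnodes)
        {
            /* page offset into the VMA, assuming 4 KiB pages */
            unsigned long page_off = (fault_addr - vma_start) >> 12;
            return nodes[page_off % nnodes];
        }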
MPOL_PREFERRED_MANY
    This mode specifies that the allocation should be preferably satisfied
    from the nodemask specified in the policy. If there is memory pressure
    on all nodes in the nodemask, the allocation can fall back to all
    existing NUMA nodes. This is effectively MPOL_PREFERRED allowed for a
    mask rather than a single node.
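    A minimal sketch of setting such a policy, assuming headers that define
    MPOL_PREFERRED_MANY (the mode requires Linux 5.15 or later)::

        #include <numaif.h>

        /* Prefer nodes 0-2, but allow fallback anywhere under pressure. */
        int prefer_nodes_0_to_2(void)
        {
            unsigned long nodemask = 0x7;   /* nodes 0, 1, 2 */
            return set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
                                 sizeof(nodemask) * 8);
        }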
NUMA memory policy supports the following optional mode flags:
MPOL_F_STATIC_NODES
    This flag specifies that the nodemask passed by the user should not be
    remapped if the task or VMA's set of allowed nodes changes after the
    memory policy has been defined.

    Without this flag, any time a mempolicy is rebound because of a change
    in the set of allowed nodes, the preferred nodemask (Preferred Many),
    preferred node (Preferred) or nodemask (Bind, Interleave) is remapped to
    the new set of allowed nodes. This may result in nodes being used that
    were previously undesired.

    With this flag, if the user-specified nodes overlap with the nodes
    allowed by the task's cpuset, then the memory policy is applied to their
    intersection. If the two sets of nodes do not overlap, the Default
    policy is used.

    For example, consider a task that is attached to a cpuset with mems 1-3
    that sets an Interleave policy over the same set. If the cpuset's mems
    change to 3-5, the Interleave will now occur over nodes 3, 4, and 5.
    With this flag, however, since only node 3 is allowed from the user's
    nodemask, the "interleave" only occurs over that node. If no nodes from
    the user's nodemask are now allowed, the Default behavior is used.

    MPOL_F_STATIC_NODES cannot be combined with the MPOL_F_RELATIVE_NODES
    flag. It also cannot be used for MPOL_PREFERRED policies that were
    created with an empty nodemask (local allocation).
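    Illustrative only: the intersection rule above on single-word
    nodemasks. effective_static_nodes() is a hypothetical helper, not a
    kernel function::

        /* Returns the mask the policy is applied to; an empty result
         * means falling back to the Default policy. */
        unsigned long effective_static_nodes(unsigned long user_mask,
                                             unsigned long allowed_mask)
        {
            return user_mask & allowed_mask;
        }

        /* Example from the text: the user asked for nodes 1-3 (0x0e);
         * the cpuset's mems changed to 3-5 (0x38); only node 3 (0x08)
         * remains for the interleave. */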
MPOL_F_RELATIVE_NODES
    This flag specifies that the nodemask passed by the user will be mapped
    relative to the set of the task or VMA's set of allowed nodes. The
    kernel stores the user-passed nodemask, and if the allowed nodes
    changes, then that original nodemask will be remapped relative to the
    new set of allowed nodes.

    Without this flag (and without MPOL_F_STATIC_NODES), anytime a
    mempolicy is rebound because of a change in the set of allowed nodes,
    the node (Preferred) or nodemask (Bind, Interleave) is remapped to the
    new set of allowed nodes. That remap may not preserve the relative
    nature of the user's passed nodemask to its new set of allowed nodes
    upon successive rebinds: a nodemask of 1,3,5 may be remapped to 7-9 and
    then to 1-3 if the set of allowed nodes is restored to its original
    state.

    With this flag, the remap is done so that the node numbers from the
    user's passed nodemask are relative to the set of allowed nodes. In
    other words, if nodes 0, 2, and 4 are set in the user's nodemask, the
    policy will be effected over the first (and in the Bind or Interleave
    case, the third and fifth) nodes in the set of allowed nodes. The
    nodemask passed by the user represents nodes relative to the task or
    VMA's set of allowed nodes.

    If the user's nodemask includes nodes that are outside the range of the
    new set of allowed nodes (for example, node 5 is set in the user's
    nodemask when the set of allowed nodes is only 0-3), then the remap
    wraps around to the beginning of the nodemask and, if not already set,
    sets the node in the mempolicy nodemask.

    For example, consider a task that is attached to a cpuset with mems 2-5
    that sets an Interleave policy over the same set with
    MPOL_F_RELATIVE_NODES. If the cpuset's mems change to 3-7, the
    interleave now occurs over nodes 3,5-7. If the cpuset's mems then
    change to 0,2-3,5, then the interleave occurs over nodes 0,2-3,5.

    Thanks to the consistent remapping, applications preparing nodemasks to
    specify memory policies using this flag should disregard their current,
    actual cpuset imposed memory placement and prepare the nodemask as if
    they were always located on memory nodes 0 to N-1, where N is the
    number of memory nodes the policy is intended to manage. Let the kernel
    then remap the nodemask to the set of memory nodes allowed by the
    task's cpuset, as that may change over time.

    MPOL_F_RELATIVE_NODES cannot be combined with the MPOL_F_STATIC_NODES
    flag. It also cannot be used for MPOL_PREFERRED policies that were
    created with an empty nodemask (local allocation).
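    Illustrative only: a user-space model of the relative remap described
    above, for single-word nodemasks. remap_relative() is a hypothetical
    helper, not the kernel's implementation::

        unsigned long remap_relative(unsigned long user,
                                     unsigned long allowed)
        {
            int weight = __builtin_popcountl(allowed);
            unsigned long positions = 0, result = 0;
            int pos, node;

            /* Fold the user's node numbers modulo the number of
             * allowed nodes (the wraparound described above). */
            for (pos = 0; pos < 64; pos++)
                if (user & (1UL << pos))
                    positions |= 1UL << (pos % weight);

            /* Map the resulting positions onto the allowed nodes,
             * in ascending node order. */
            pos = 0;
            for (node = 0; node < 64; node++)
                if (allowed & (1UL << node)) {
                    if (positions & (1UL << pos))
                        result |= 1UL << node;
                    pos++;
                }
            return result;
        }

        /* remap_relative(0x3c, 0xf8) == 0xe8: user nodes 2-5 over
         * allowed mems 3-7 yields nodes 3,5-7, as in the example. */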
Memory Policy Reference Counting
================================

To resolve use/free races, struct mempolicy contains an atomic reference
count field. Internal interfaces, mpol_get()/mpol_put(), increment and
decrement this reference count, respectively. mpol_put() will only free the
structure back to the mempolicy kmem cache when the reference count goes to
zero.

When a new memory policy is allocated, its reference count is initialized to
'1', representing the reference held by the task that is installing the new
policy. When a pointer to a memory policy structure is stored in another
structure, another reference is added, as the task's reference will be dropped
on completion of the policy installation.
During run-time "usage" of the policy, we attempt to minimize atomic operations
on the reference count, as this can lead to cache lines bouncing between cpus
and NUMA nodes. "Usage" here means one of the following:

1) querying of the policy, either by the task itself [using the get_mempolicy()
   API discussed below] or by another task using the /proc/<pid>/numa_maps
   interface.

2) examination of the policy to determine the policy mode and associated node
   or node lists, if any, for page allocation. This is considered a "hot
   path". Note that for MPOL_BIND, the "usage" extends across the entire
   allocation process, which may sleep during page reclamation, because the
   BIND policy nodemask is used, by reference, to filter ineligible nodes.
We can avoid taking an extra reference during the usages listed above as
follows:

1) we never need to get/free the system default policy as this is never
   changed nor freed, once the system is up and running.

2) for querying the policy, we do not need to take an extra reference on the
   target task's task policy nor vma policies because we always acquire the
   task's mm's mmap_lock for read during the query. The set_mempolicy() and
   mbind() APIs [see below] always acquire the mmap_lock for write when
   installing or replacing task or vma policies. Thus, there is no
   possibility of a task or thread freeing a policy while another task or
   thread is querying it.

3) Page allocation usage of task or vma policy occurs in the fault path where
   we hold the mmap_lock for read. Again, because replacing the task or vma
   policy requires that the mmap_lock be held for write, the policy can't be
   freed out from under us while we're using it for page allocation.

4) Shared policies require special consideration. One task can replace a
   shared memory policy while another task, with a distinct mmap_lock, is
   querying or allocating a page based on the policy. To resolve this
   potential race, the shared policy infrastructure adds an extra reference
   to the shared policy during lookup while holding a spin lock on the shared
   policy management structure. This requires that we drop this extra
   reference when we're finished "using" the policy. We must drop the
   extra reference on shared policies in the same query/allocation paths
   used for non-shared policies. For this reason, shared policies are marked
   as such, and the extra reference is dropped "conditionally"--i.e., only
   for shared policies.

   Because of this extra reference counting, and because shared policies
   must be looked up under a spin lock, shared policies are noticeably
   more expensive to use in the page allocation path. This is especially
   true for shared policies on regions shared by multiple tasks. The
   overhead can be mitigated, for example, by prefaulting the entire shared
   memory region into memory and locking it down; however, this might not
   be appropriate for all applications.
Memory Policy APIs
==================

Linux supports 4 system calls for controlling memory policy. These APIs
always affect only the calling task, the calling task's address space, or
some shared object mapped into the calling task's address space.

.. note::
   the headers that define these APIs and the parameter data types for
   user space applications reside in a package that is not part of the
   Linux kernel. The kernel system call interfaces, with the 'sys\_'
   prefix, are defined in <linux/syscalls.h>; the mode and flag
   definitions are defined in <linux/mempolicy.h>.
Set [Task] Memory Policy::

    long set_mempolicy(int mode, const unsigned long *nmask,
                       unsigned long maxnode);

Sets the calling task's "task/process memory policy" to the mode specified
by the 'mode' argument and the set of nodes defined by 'nmask'. 'nmask'
points to a bit mask of node ids containing at least 'maxnode' ids. Optional
mode flags may be passed by combining the 'mode' argument with the flag (for
example: MPOL_INTERLEAVE | MPOL_F_STATIC_NODES).

See the set_mempolicy(2) man page for more details.
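A minimal usage sketch, assuming the libnuma <numaif.h> wrapper; error
handling elided::

    #include <numaif.h>

    int main(void)
    {
        unsigned long nodemask = 0x3;   /* nodes 0 and 1 */

        /* Interleave this task's allocations over nodes 0-1 and pin
         * the nodemask against cpuset rebinds. */
        return set_mempolicy(MPOL_INTERLEAVE | MPOL_F_STATIC_NODES,
                             &nodemask, sizeof(nodemask) * 8);
    }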
Get [Task] Memory Policy or Related Information::

    long get_mempolicy(int *mode,
                       unsigned long *nmask, unsigned long maxnode,
                       void *addr, int flags);

Queries the "task/process memory policy" of the calling task, or the policy
or location of a specified virtual address, depending on the 'flags'
argument.

See the get_mempolicy(2) man page for more details.
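A minimal sketch of querying which node backs a given page, using the
MPOL_F_NODE | MPOL_F_ADDR flags documented in get_mempolicy(2)::

    #include <numaif.h>
    #include <stdio.h>

    void print_node_of(void *addr)
    {
        int node = -1;

        /* With MPOL_F_NODE | MPOL_F_ADDR, the "mode" out-parameter
         * receives the node id of the page backing 'addr'. */
        if (get_mempolicy(&node, NULL, 0, addr,
                          MPOL_F_NODE | MPOL_F_ADDR) == 0)
            printf("page at %p resides on node %d\n", addr, node);
    }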
Install VMA/Shared Policy for a Range of Task's Address Space::

    long mbind(void *start, unsigned long len, int mode,
               const unsigned long *nmask, unsigned long maxnode,
               unsigned flags);

mbind() installs the policy specified by (mode, nmask, maxnodes) as a VMA
policy for the range of the calling task's address space specified by the
'start' and 'len' arguments. Additional actions may be requested via the
'flags' argument.

See the mbind(2) man page for more details.
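A minimal sketch that binds an anonymous mapping to node 0 and, via the
MPOL_MF_MOVE flag, migrates any pages already faulted in; error handling
elided::

    #include <numaif.h>
    #include <sys/mman.h>

    void *alloc_on_node0(unsigned long len)
    {
        unsigned long nodemask = 0x1;   /* node 0 */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        mbind(p, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8,
              MPOL_MF_MOVE);
        return p;
    }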
Set home node for a Range of Task's Address Space::

    long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
                                     unsigned long home_node,
                                     unsigned long flags);

sys_set_mempolicy_home_node() sets the home node for a VMA policy present in
the task's address range. The system call updates the home node only for the
existing mempolicy range; other address ranges are ignored. A home node is
the NUMA node closest to which page allocation will come from. Specifying
the home node overrides the default behavior of allocating memory close to
the local node of the executing CPU.
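Glibc does not wrap this call, so a minimal sketch goes through syscall(2);
this assumes headers that define __NR_set_mempolicy_home_node (Linux 5.17 or
later)::

    #include <unistd.h>
    #include <sys/syscall.h>

    long set_home_node(void *start, unsigned long len, unsigned long node)
    {
        /* The 'flags' argument must currently be 0. */
        return syscall(__NR_set_mempolicy_home_node,
                       (unsigned long)start, len, node, 0UL);
    }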
Memory Policy Command Line Interface
====================================

Although not strictly part of the Linux implementation of memory policy, a
command line tool, numactl(8), exists that allows one to:

+ set the task policy for a specified program via set_mempolicy(2), fork(2)
  and exec(2)

+ set the shared policy for a shared memory segment via mbind(2)

The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers. Some distributions
package the headers and compile-time libraries in a separate development
package.
Memory Policies and cpusets
===========================

Memory policies work within cpusets as described above. For memory policies
that require a node or set of nodes, the nodes are restricted to the set of
nodes whose memories are allowed by the cpuset constraints. If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset
and MPOL_F_RELATIVE_NODES is not used, the intersection of the set of nodes
specified for the policy and the set of nodes with memory is used. If the
result is the empty set, the policy is considered invalid and cannot be
installed. If MPOL_F_RELATIVE_NODES is used, the policy's nodes are mapped
onto and folded into the task's set of allowed nodes as previously
described.
The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory
segments created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED
flags, and any of the tasks install shared policy on the region: only nodes
whose memories are allowed in both cpusets may be used in the policies.
Obtaining this information requires "stepping outside" the memory policy
APIs to use the cpuset information, and requires that one know in what
cpusets other tasks might be attaching to the shared region. Furthermore, if
the cpusets' allowed memory sets are disjoint, "local" allocation is the
only valid policy.
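One way to step outside the policy APIs, sketched below, is to read the
Mems_allowed_list field that the kernel exports in /proc/<pid>/status;
parsing the printed line is left to the caller::

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* Print the set of memory nodes allowed to 'pid' by its cpuset. */
    int print_mems_allowed(pid_t pid)
    {
        char path[64], line[256];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/status", (int)pid);
        f = fopen(path, "r");
        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f))
            if (strncmp(line, "Mems_allowed_list:", 18) == 0)
                fputs(line, stdout);
        fclose(f);
        return 0;
    }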