
This document describes the Linux memory manager's "Unevictable LRU"
infrastructure and the use of this to manage several types of "unevictable"
pages.  One can obtain the implementation details - the "what does it do?" -
by reading the code.  One hopes that the descriptions here provide the answer
to "why does it do that?".

The Unevictable LRU facility addresses scalability problems with page reclaim
that were observed at customer sites on large memory x86_64 systems.

To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single zone.  When a large
fraction of these pages are not evictable, vmscan will spend a lot of time
scanning the LRU lists looking for the small fraction of pages that are
evictable.
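The arithmetic behind that figure::

	128 GB / 4 KB per page = (128 * 2^30 bytes) / (2^12 bytes per page)
	                       = 2^25 pages
	                       = 33,554,432 pages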
The unevictable list addresses the following classes of unevictable pages:

 * Those owned by ramfs.

 * Those mapped into SHM_LOCK'd shared memory regions.

 * Those mapped into VM_LOCKED [mlock()ed] VMAs.
The Unevictable Page List
-------------------------

The Unevictable LRU infrastructure consists of an additional, per-zone, LRU list
called the "unevictable" list and an associated page flag, PG_unevictable, to
indicate that the page is being managed on the unevictable list.

Unevictable pages are maintained on an LRU list for two reasons:

 (1) We get to "treat unevictable pages just like we treat other pages in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep track
     of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable pages between nodes for memory
     defragmentation, workload management and memory hotplug.  The linux kernel
     can only migrate pages that it can successfully isolate from the LRU
     lists.  If we were to maintain pages elsewhere than on an LRU-like list,
     where they can be found by isolate_lru_page(), we would prevent their
     migration.

The unevictable list does not differentiate between file-backed and anonymous,
swap-backed pages.  This differentiation is only important while the pages are,
in fact, evictable.

The unevictable list benefits from the "arrayification" of the per-zone LRU
lists: the unevictable list is simply one more element of the lru_list enum
that indexes each per-zone array of LRU lists.
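For reference, here is a simplified sketch of that enum; the authoritative
definition lives in include/linux/mmzone.h and builds the values from
LRU_BASE/LRU_FILE/LRU_ACTIVE offsets::

	enum lru_list {
		LRU_INACTIVE_ANON,	/* evictable, swap-backed, inactive */
		LRU_ACTIVE_ANON,	/* evictable, swap-backed, active */
		LRU_INACTIVE_FILE,	/* evictable, file-backed, inactive */
		LRU_ACTIVE_FILE,	/* evictable, file-backed, active */
		LRU_UNEVICTABLE,	/* the additional, unevictable list */
		NR_LRU_LISTS		/* size of the per-zone list arrays */
	};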
Memory Control Group Interaction
--------------------------------

The unevictable LRU facility interacts with the memory control group [aka
memory controller; see Documentation/admin-guide/cgroup-v1/memory.rst] by
extending the lru_list enum.

The memory controller data structure automatically gets a per-zone unevictable
list as a result of the "arrayification" of the per-zone LRU lists (one per
lru_list enum element).  The memory controller tracks the movement of pages to
and from the unevictable list.

When a memory control group comes under memory pressure, the controller will
not attempt to reclaim pages on the unevictable list.  However, if too many of
the pages charged to the control group are unevictable, the evictable portion
of the working set of the tasks in
the control group may not fit into the available memory.  This can cause
the control group to thrash or to OOM-kill tasks.
Marking Address Spaces Unevictable
----------------------------------

For facilities such as ramfs, none of the pages attached to an address space
may be evicted.  To prevent the eviction of such pages, the address space is
marked unevictable via the AS_UNEVICTABLE flag on the mapping.  SYSV SHM uses
this mechanism to mark SHM_LOCK'd address spaces until SHM_UNLOCK is called.
Note that SHM_LOCK is not required to page in the locked pages if they're
swapped out; the application must touch the pages manually if it wants to
ensure they're in memory.  The i915 driver uses the same mechanism to mark its
pinned address spaces; the
amount of unevictable memory marked by the i915 driver is roughly the bounded
object size in debugfs/dri0/i915_gem_objects.
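To make the SHM_LOCK point concrete, here is a minimal userspace sketch of
that lifecycle (error handling abbreviated; the explicit memset() is what
actually brings the pages into memory)::

	#include <string.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	int lock_segment(size_t size)
	{
		int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
		if (id < 0)
			return -1;

		/* Marks the segment's address space unevictable, but does
		 * NOT allocate, fault in or swap in its pages. */
		if (shmctl(id, SHM_LOCK, NULL) < 0)
			return -1;

		/* The application must touch the pages itself if it wants
		 * them resident. */
		char *p = shmat(id, NULL, 0);
		if (p != (void *)-1)
			memset(p, 0, size);
		return id;
	}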
Detecting Unevictable Pages
---------------------------

The function page_evictable() in mm/vmscan.c determines whether a page is
evictable or not.
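In simplified form - the exact code has varied across kernel versions - the
test combines the per-address-space mark described above with the per-page
PG_mlocked flag::

	static bool page_evictable(struct page *page)
	{
		bool ret;

		/* Keep the address_space of the inode or swap cache from
		 * being freed while we look at it. */
		rcu_read_lock();
		ret = !mapping_unevictable(page_mapping(page)) &&
			!PageMlocked(page);
		rcu_read_unlock();
		return ret;
	}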
Vmscan's Handling of Unevictable Pages
--------------------------------------

There may be situations where a page is mapped into a VM_LOCKED VMA, but the
page is not marked as PG_mlocked.  Such pages will make it all the way to
shrink_page_list(), where they will be detected when vmscan walks the reverse
map in try_to_unmap().  To "cull" such a page, vmscan puts it back on the LRU
using putback_lru_page() - the inverse operation to isolate_lru_page() - after
dropping the page lock; putback_lru_page() will notice the mlocked state and
divert the page to the unevictable list.
History
-------

A page is munlocked only when the last VM_LOCKED VMA that maps the page has
unmapped the page.  More on this below.
Basic Management
----------------

mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages.  When such a page has been "noticed" by the memory management subsystem,
the page is marked with the PG_mlocked flag and placed on the unevictable list
when it is added to
the LRU.  Such pages can be "noticed" by memory management in several places:
in the mlock()/mlockall() system call handlers, in the mmap() handler for
MAP_LOCKED regions, in the fault path, and in vmscan's reverse map walks.

mlocked pages become unlocked and rescued from the unevictable list (a short
userspace illustration follows the list) when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
     unmapping at task exit;

 (3) when the page is truncated from the last VM_LOCKED VMA of an mmapped
     file; or

 (4) before a page is COW'd in a VM_LOCKED VMA.
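Here is the promised sketch: a minimal userspace view of the lock/rescue
lifecycle described above (error handling abbreviated)::

	#include <stdlib.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 16 * 4096;
		char *buf = aligned_alloc(4096, len);

		/* The backing pages become part of a VM_LOCKED VMA; once
		 * faulted in and "noticed", they are marked PG_mlocked and
		 * end up on the unevictable list. */
		if (!buf || mlock(buf, len) != 0)
			return 1;

		/* Case (1) above: munlock() rescues the pages from the
		 * unevictable list, provided no other VM_LOCKED VMA still
		 * maps them. */
		munlock(buf, len);
		free(buf);
		return 0;
	}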
mlock()/mlockall() System Call Handling
---------------------------------------

Both [do_]mlock() and [do_]mlockall() system call handlers call mlock_fixup()
for each VMA in the range specified by the call.  Note that mlock_fixup()
is used for both mlocking and munlocking a range of memory.  A call to mlock()
an already VM_LOCKED VMA, or to munlock() a VMA that is not VM_LOCKED, is
treated as a no-op, and mlock_fixup() simply returns.

Note that the VMA being mlocked might be mapped with PROT_NONE.  In this case,
get_user_pages() will be unable to fault in the pages.  That's okay; if pages
do end up getting faulted into the VM_LOCKED VMA, they will be handled in the
fault path or by vmscan.

In the worst case, this will result in a page mapped in a VM_LOCKED VMA
remaining on a normal LRU list until vmscan attempts to reclaim it.  When the
reverse map walk in try_to_unmap() detects the VM_LOCKED VMA, vmscan will put
back the page - by calling putback_lru_page() - which will notice that the page
is now mlocked and divert it to the zone's unevictable list.
Filtering Special VMAs
----------------------

mlock_fixup() filters several classes of "special" VMAs:

1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely.  The pages behind
   these mappings are not managed on the LRU lists, so there is nothing to
   mlock.

2) VMAs mapping hugetlbfs page are already effectively pinned into memory.  We
   neither need nor want to mlock() these pages.  However, to preserve the
   prior behavior of mlock() - before the unevictable/mlock changes -
   mlock_fixup() populates the huge pages in the VMA range so that they are
   resident, as mlock() traditionally guaranteed.
munlock()/munlockall() System Call Handling
-------------------------------------------

The munlock() and munlockall() system calls are handled by the same functions -
do_mlock[all]() - as the mlock() and mlockall() system calls with the unlock vs
lock operation indicated by an argument.  So, these system calls are also
handled by mlock_fixup(), and the range is unlocked via
populate_vma_page_range() - the same function used to mlock a VMA range -
with a flag indicating that munlock() is being performed.

Because the VMA access protections could have been changed to PROT_NONE after
the pages were faulted in and mlocked, the protections are ignored when
fetching the pages - all of which should be resident as a result of previous
mlocking.

munlock_vma_page() speculatively clears the page's PG_mlocked flag; the page
is re-mlocked, and stays on the unevictable list, only if a reverse map scan
finds that
the page is mapped by other VM_LOCKED VMAs.
Migrating MLOCKED Pages
-----------------------

When a page is migrated, its PG_mlocked and PG_unevictable state is carried
over to the new page, and both the old and new pages are put back on the LRU
after dropping the page lock.  The "unneeded" page - old page on success, new
page on failure - will be freed when the reference count held by the migration
process is released.
Compacting MLOCKED Pages
------------------------

The unevictable LRU can be scanned for compactable regions, and the sysctl
knob compact_unevictable_allowed can be used to disable
this behavior (see Documentation/admin-guide/sysctl/vm.rst).  Once scanning of the
unevictable list is enabled, mlocked pages are relocated with the same
mechanism used for migrating them, described above.
mlocking Transparent Huge Pages
-------------------------------

A transparent huge page is represented by a single entry on an LRU list, so
we can only make the whole compound page unevictable, not its individual
subpages.  If a user mlock()s only part of a huge page, the rest of the page
should remain reclaimable.  We cannot simply split the page on partial
mlock(), as split_huge_page() can fail, and a new intermittent failure mode
for the system call is undesirable.

We handle this by keeping PTE-mapped huge pages on normal LRU lists: the
PMD on the border of a VM_LOCKED VMA will be split into a PTE table.

This way the huge page is accessible for vmscan.  Under memory pressure the
page will be split; subpages which belong to VM_LOCKED VMAs will be moved to
the unevictable LRU, and the rest can be reclaimed.
mmap(MAP_LOCKED) System Call Handling
-------------------------------------

In addition to the mlock() and mlockall() system calls, an application can
request that a region of memory be mlocked by supplying the MAP_LOCKED flag to
the mmap() call.  There is one important and subtle difference here, though:
mmap() + mlock() will fail, returning ENOMEM, if the range cannot be faulted
in, while mmap(MAP_LOCKED) will not fail in that case.  The mmapped
area will still have properties of the locked area - aka. pages will not get
swapped out - but major page faults to fault memory in might still happen.
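The difference, seen from userspace, in a minimal sketch (flags and error
handling reduced to the essentials)::

	#include <sys/mman.h>

	/* Variant 1: mmap() + mlock() - the caller learns immediately,
	 * via mlock()'s return value, if the pages cannot be locked. */
	static void *map_then_lock(size_t len)
	{
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p != MAP_FAILED && mlock(p, len) != 0) {
			munmap(p, len);
			return MAP_FAILED;	/* e.g. ENOMEM */
		}
		return p;
	}

	/* Variant 2: mmap(MAP_LOCKED) - the mapping will not fail merely
	 * because the pages could not all be populated; major faults may
	 * still occur on first access. */
	static void *map_locked(size_t len)
	{
		return mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED,
			    -1, 0);
	}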
Furthermore, any mmap() call or brk() call that expands the heap by a task
that has previously called mlockall() with the MCL_FUTURE flag will result
in the newly mapped memory being mlocked.  Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages and
populate the page table.

To mlock a range of memory under the unevictable/mlock infrastructure, the
mmap() handler and task address space expansion functions call
populate_vma_page_range(), specifying the VMA and the address range to mlock.

The callers of populate_vma_page_range() will have already added the memory range
to be mlocked to the task's "locked_vm".  To account for filtered VMAs,
populate_vma_page_range() returns the number of pages NOT mlocked, and the
callers then subtract a non-negative return value from the task's locked_vm.  A
negative return value represents an error - for example, from get_user_pages()
attempting to fault in a VMA with PROT_NONE access.  In this case, we leave the
memory range accounted as locked_vm, as the protections could be changed later
and pages allocated into that region.
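A caller-side sketch of that accounting contract, with mm, vma, start and end
standing in for the real callers' state (hypothetical glue, not the actual
kernel call sites)::

	long ret = populate_vma_page_range(vma, start, end, NULL);

	if (ret >= 0)
		mm->locked_vm -= ret;	/* pages that were NOT mlocked */
	/* ret < 0: error - leave the range accounted as locked_vm */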
munmap()/exit()/exec() System Call Handling
-------------------------------------------

When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps them.

To munlock a range of memory under the unevictable/mlock infrastructure, the
munmap() handler and task address space teardown function call
munlock_vma_pages_all().  This function walks the page tables
for the VMA's memory range and calls munlock_vma_page() on each resident page
mapped by the VMA.
try_to_unmap()
--------------

Pages can, of course, be mapped into multiple VMAs.  Some of these VMAs may
have the VM_LOCKED flag set.  It is possible for a page mapped into one or more
VM_LOCKED VMAs not to have the PG_mlocked flag set, and therefore to reside on
one of the active or inactive LRU lists.

try_to_unmap() is always called, by either vmscan for reclaim or for page
migration, with the page locked and isolated from the LRU.  Separate
functions handle anonymous and mapped file and KSM pages, as these types of
pages have different reverse map lookup mechanisms, with different locking.
try_to_munlock() Reverse Map Scan
---------------------------------

[!] TODO/FIXME: a better name might be page_mlocked() - analogous to the
page_referenced() reverse map walker.

When munlock_vma_page() [see section "munlock()/munlockall() System Call
Handling" above] tries to munlock a
page, it needs to determine whether or not the page is mapped by any
VM_LOCKED VMA without actually attempting to unmap all PTEs from the page.
For this purpose, the unevictable/mlock infrastructure introduced a variant of
try_to_unmap() called try_to_munlock().

try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
mapped file and KSM pages with a flag argument specifying unlock versus unmap
processing.  When these functions find a VM_LOCKED VMA, they mlock the page,
which
undoes the pre-clearing of the page's PG_mlocked done by munlock_vma_page().

Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
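Putting the last two sections together, here is a deliberately simplified
sketch of the munlock flow; it uses the older SWAP_MLOCK return convention and
omits the LRU isolation and statistics handling of the real kernel code::

	static void munlock_vma_page_sketch(struct page *page)
	{
		if (!TestClearPageMlocked(page))	/* pre-clear PG_mlocked */
			return;

		/* Walk the reverse map; if any VM_LOCKED VMA still maps the
		 * page, it is re-mlocked, undoing the pre-clearing above. */
		if (try_to_munlock(page) == SWAP_MLOCK)
			return;

		/* No VM_LOCKED VMA maps the page: it is evictable again. */
		putback_lru_page(page);
	}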
Page Reclaim in shrink_*_list()
-------------------------------

shrink_active_list() culls any obviously unevictable pages - i.e.
!page_evictable(page) - diverting these to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto the
active/inactive LRU lists.  Note that these pages do not have PageUnevictable
set - otherwise they would be on the unevictable list and shrink_active_list()
would never see them.

Some examples of such pages:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages.  shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region.  This happens
     when an application accesses the page the first time after SHM_LOCK'ing
     the segment.

 (3) mlocked pages that could not be isolated from the LRU and moved to the
     unevictable list in mlock_vma_page().

shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
after shrink_active_list() had moved them to the inactive list, or pages mapped
into VM_LOCKED VMAs that munlock_vma_page() couldn't isolate from the LRU.

shrink_page_list() again culls obviously unevictable pages that it could
encounter for a similar reason to shrink_inactive_list().  Pages mapped into
VM_LOCKED VMAs but without PG_mlocked set will make it all the way to
try_to_unmap(), where they will be detected and culled as described above.
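The cull itself is essentially the following test, repeated in the shrink
functions above (simplified from mm/vmscan.c; the exact form varies by kernel
version)::

	if (unlikely(!page_evictable(page))) {
		/* Divert the page to the zone's unevictable list. */
		putback_lru_page(page);
		continue;
	}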